Binary Calculator

[OC] Punt Rank 2020: Week 5 - Brett Kern Appreciation Club, the continued painful existence of Kevin Huber, PUNTERS THROWING TDs and the birth of Air Townsend. All this and the best video highlights of the week...

Welcome back, Punt Fans, to your slightly later than usual but there's no Thursday Night Football so what else are you going to be doing edition of our weekly hunt for the King of Punt – it’s r/NFL’s own Punt Rank. If you haven’t been here with me before, the concept is both simple and fantastically over-engineered. Lemme break it down:
Each punter’s performance against five vital punting metrics is ranked against every other punter in the league.
Those rankings are combined into a weighted average ranking – the 2020 NFL Punt Rank.
Punt Heroes rise to the top; Punt Zeros sink to the bottom. Last week’s post and Week 4 standings are available here for the archivists, and all of this week’s stats analysis and highlights and lowlights in video form are just moments away.
As always I’m excited to get your perspectives on your team’s punter, and you can point me to things that I may have missed or overlooked, so please hit me with your feedback and questions in the comments!

Punt Rank Standings

Punt Rank 2020: Week 5 Overall Standings
2020 Week 5: Punt Performance Summary

Good Week for

Brett Kern (TEN, +1 to #3). Eh, what do you want to know? If you’re reading this it means you like punting. If you like punting, you know that Brett Kern is a really, really great punter. And, Q.E.D., Brett was demonstrably great against the Bills on (the other) TNF. His three punts this week for the no-longer-significantly-infectious-Titans pinned Josh Allen and his shorts at the 9, 9 and 3 yard lines – covering 86% of Average Available Field, which is GOAT tier punting. Here’s the pick of the bunch (his 41 yard precision-bomb corralled at the 3 yard line by Chris Milton) covering 93% of Available Field, and measuring in 7.6 yards better than an average punt from the opposing 44 yard line. Tidy.
In addition to his really really really great punting, the Kerninator also wrangled at least two utterly horrible snaps into decent holds for Gostkowski to continue his kicking renaissance tour, which is a majorly underrated part of the punter job description...
Logan Cooke (JAX, +12 to #13). SPEAKING OF PUNTER HOLDS AND THE EFFECT THEY HAVE ON KICKERS. Now I’m not saying that Chef had anything to do with the end of Stephen Hauschka’s NFL career on Sunday (0 for 2 within less than two minutes at the end of the first half, not called upon again, then cut PDQ after the weekend), but then I’m not not saying that either. Luckily for Logan (shoot, I think I used that joke last week as well) the punting element of his game was without such ugly question marks. 100% of his three punts ended inside the Houston 20 yard line, covering 73%, 83% and 89% of Available Field, sneaking him up to 13th overall. Now let’s see if he can hold onto it. Geddit? Hold?! Pah.

Bad Week for

Kevin Huber (CIN, -8 to #24). In a game where the Bengals only managed the paltry total of 12 first downs (an average of one, yes ONE first down on their 12 offensive drives), K-Hub’s Bad Day was at least somewhat salvaged by the first half holy trinity of Turnover on Downs, INT and Fumble on consecutive drives (2, 3 and 4 – if you’re counting). Without that magical offensive incompetence, he could have been looking at double-figure punts (I see you, Tress Way in Washington). As it was, he escaped with just the seven (!), but he takes a slide in the Punt Rank rankings as two of those (admittedly 57 and 60 yard boots) snuck in for touchbacks, taking his season touchback percentage to 26.1%, which is second last in the league, just behind Tommy Townsend (more on him later). None of the magnificent seven made it inside the 20, wiping 13% off his season long percentage. However, in Kev’s defence, the first of his two end-zone-botherers this week was another case of coulda woulda shoulda from his coverage team. Alex Erikson heroically made up all the ground to reach the ball as it took a hop into the end zone, but his flailing scoopitty-scoop only managed to floopitty-floop the ball into the wrong side of the pylon.
Bengals bungle.
Football is a game of inches, and those couple of inches cost Kev. And, after last week’s feature in Egregious Touchback of the Week where basically exactly the same thing happened, it’s entirely possible that Kevin Huber is stuck in some kind of awful Groundhog Day-style time loop. That would at least explain this Instagram account.
Ty Long (LAC, -5 to #23). Ty Long was the victim of the binary brain of Saints rookie receiver/returner/robot automaton Marquez Callaway this week. In Marquez’s awesome little computer mind, he’s going:
IF
punt_catch_loc > 15 THEN SELECT Return_Like_Craycray FROM Return.Options
ELSE Fair_Catch_That_MF
Unfortunately for Ty, six of his seven punts were outside that 15 yard threshold and the big red light on Robot Marquez's head went off like WOO WOO, and he went HAM on bringing those suckers back. 69 (nice) return yards on the day with a long of 19 wiped almost ten yards off Long's Gross Average for the day and left him at just 53% of Average Available Field covered. The Chargers have now leaked 149 return yards for the season which is second worst in the league (behind those irrepressibly awful Jets) and almost three times the league average of 56 through five weeks. Ty will be hoping that they can turn that around before… long. Sorry.

Punt of the week – Week 5

Corey Bojorquez (BUF) continues his wild oscillation between the sublime and the ridiculous. It’s an odd week so I guess this week it’s Sublime Corey, whose 71 yard scud missile from his own ten yard line in the second quarter of this week’s edition of Tuesday Night Football Brought To You By COVID-19 was an astonishing 28.3 yards longer than my Expected Net Gain model predicts for an average punt from that spot. Look at this baby fly!
Bojorquez booms one.

Punters doin’ shit – Week 5

Hey, it’s Corey Bojorquez again! Guess he can do sublime AND ridiculous in a single week now. It’s Puntception. Corey’s first punt of the day was coming alllll the way back for 6 until he decided to put his face on the line to put an end to Kalif Raymond’s 40 yard return. BLOOF. Look at him putting on his cap all swag afterwards like yeah I blew that dude up
Yeah I think tackling with your head is good form?
But that’s not all for Punters Doin’ Shit in Week 5, oh no. We have a bonus double edition! And I include this clip with great enjoyment but also great sadness. Gentlemen and Gentlemen (just being real here), this week Riley Dixon (NYG) threw a Touchdown pass! For Giants fans reading, this is when someone on your team throws the ball into the big painted area at the end of the field and a player (also on your team) catches it. I know this sounds strange and unusual, but it can happen. And it did happen for Riley on this awesome fake field goal toss to Evan Engram, brilliantly narrated by the incomparable Tony Romo in the clip below. Seriously, this call is outstanding…
Nobody look at me, doo doo do, you cant see me... Jim Nantz, don't talK to.. IM OPEN, THROW IT
Unfortunately, the play itself was called back due to a player not lined up on the line of scrimmage and the Giants had to settle for a 50 yard field goal. For Chargers and Jags fans reading, this is when your kicker kicks the ball and it goes between the two big tall standy uppy line things. I know this sounds strange and unusual, but it can happen. No TD for Riley, but we have the memories…

Egregious touchback of the week – Week 5

I might start calling this the Kevin Huber Touchback Memorial Column, after ANOTHER narrow miss by the Bengals coverage left Kev high and dry this week against the Ravens (see Bad Week).
Outside of that shambles, there were only 6 touchbacks on the other 102 punts in Week 5, and most of them were fairly ordinary so there isn’t much egregiousity (not a word but I’m going with it) to discuss. Instead today we’re going to take some time to appreciate Tommy Townsend (KC) who has apparently got some kind of nuclear powered leg and is playing a game called “look how far away I can kick a touchback from”. For those who haven’t been paying close attention, here’s how Tommy’s rookie season has gone so far in touchback terms.
Week 1 – 44 yards, modest.
Week 2 – 55 yards, expressive.
Week 3 – only punted once so gave myself a week off from this.
Week 4 – fucken LOLs this is, how about a 60 AND a 65!
Week 5 – hold my beer…
Oh my god Becky, look at this punt.
67 yards! SIXTY SEVEN! And that’s from the line of scrimmage - that sucker went almost EIGHTY YARDS in the AIR. It bounced at the two and I think the returner just never even saw it. He probably thought it went into orbit or something. Absolutely ludicrous distance and hangtime here from Tommy. And, thus, I think we have our new moniker for the lad: Air Townsend. Which is also funny because it sounds like hair and he has got long hair.
I’m wasted doing this.

Future of Punt Rank: desperate data plea

So part of my data collection for this analysis used to come from the brilliant Pro Football Reference gameplay finder. Which, as of this week, appears to have been absorbed into Stathead. And they’re now charging $8 a month for access to these individual play description tables, which is a massive punt in the balls.
Without this data, I’ve got no way to calculate Average Available Field coverage, no plus/minus performance against the Punt Expected Net Gain, and no data on punts inside the 5 and 10 yard lines – all of which come from that analysis of the individual punt plays. Whilst this data doesn’t feed the actual rankings (which come from free NFL.com data tables), these are all metrics that really help add context to the basic stats, and are things that people reading have commented on in the past and said they found interesting.
So, if anyone knows of anywhere else where I can access and download these play descriptions for each individual punt (without manually sifting the ESPN play by play reports!!), then please please let me know in the comments below. Alternatively if the eight people who read this each wanna chip in a buck a month on an ongoing basis so we can pay Stathead then that’d be cool too.
A sad day for punt stat fans to be sure. Fucken big corporate…
And on that note, all that's left is to say I will see you again next week for a likely more analytically constrained but still enthusiastically trying my bestest edition of Punt Rank.
Yours,
Eyebrows.
submitted by erictaylorseyebrows to r/nfl

No gods, no kings, only NOPE - or divining the future with options flows. [Part 3: Hedge Winding, Unwinding, and the NOPE]

Hello friends!
We're on the last post of this series ("A Gentle Introduction to NOPE"), where we get to use all the Big Boy Concepts (TM) we've discussed in the prior posts and put them all together. Some words before we begin:
  1. This post will be massively theoretical, in the sense that my own speculation and inferences will be largely peppered throughout the post. Are those speculations right? I think so, or I wouldn't be posting it, but they could also be incorrect.
  2. I will briefly touch on using the NOPE in this post, but I will make a secondary post with much more interesting data and trends I've observed. This is primarily for explaining what NOPE is and why it potentially works, and what it potentially measures.
My advice before reading this is to glance at my prior posts, and either read those fully or at least make sure you understand the tl;drs:
https://www.reddit.com/r/thecorporation/collection/27dc72ad-4e78-44cd-a788-811cd666e32a
Depending on popular demand, I will also make a last-last post called FAQ, where I'll tabulate interesting questions you guys ask me in the comments!
---
So a brief recap before we begin.
Market Maker ("Mr. MM"): An individual or firm who makes money off the exchange fees and bid-ask spread for an asset, while usually trying to stay neutral about the direction the asset moves.
Delta-gamma hedging: The process Mr. MM uses to stay neutral when selling you shitty OTM options, by buying/selling shares (usually) of the underlying as the price moves.
Law of Surprise [Lily-ism]: Effectively, the expected profit of an options trade is zero for both the seller and the buyer.
Random Walk: A special case of a deeper probability concept called a martingale, which basically models stocks or similar phenomena as randomly moving every step they take (for stocks, roughly every millisecond). This is one of the most popular views of how stock prices move, especially on short timescales.
Future Expected Payoff Function [Lily-ism]: This is some hidden function that every market participant has about an asset, which more or less models all the possible future probabilities/values of the assets to arrive at a "fair market price". This is a more generalized case of a pricing model like Black-Scholes, or DCF.
Counter-party: The opposite side of your trade (if you sell an option, they buy it; if you buy an option, they sell it).
Price decoherence [Lily-ism]: A more generalized notion of IV Crush, price decoherence happens when instead of the FEPF changing gradually over time (price formation), the FEPF rapidly changes, usually due to new information being added to the system (e.g. Vermin Supreme winning the 2020 election).
---
One of the most popular gambling events for option traders to play is earnings announcements, and I do owe the concept of NOPE to hypothesizing specifically about the behavior of stock prices at earnings. Much like a black hole in quantum mechanics, most conventional theories about how price should work rapidly break down briefly before, during, and after ER, and generally experienced traders tend to shy away from playing earnings, given their similar unpredictability.
Before we start: what is NOPE? NOPE is a funny backronym for Net Options Pricing Effect, which in its most basic sense measures the impact option delta has on the underlying price, as compared to share price. When I first started investigating NOPE, I called it OPE (options pricing effect), but NOPE sounds funnier.
The formula for it is dead simple, but I also have no idea how to do LaTeX on reddit, so this is the best I have:

https://preview.redd.it/ais37icfkwt51.png?width=826&format=png&auto=webp&s=3feb6960f15a336fa678e945d93b399a8e59bb49
Since I've already encountered this question: put delta in this formula is taken as an absolute value (e.g. a -50 delta put counts as 50 delta) to represent a put. If you represent put delta as a negative (the conventional way), do not subtract it; add it.
To keep this simple for the non-mathematically minded: the NOPE today is equal to the weighted sum (weighted by volume) of the delta of every call minus the delta of every put for all options chains extending from today to infinity. Finally, we then divide that number by the # of shares traded today in the market session (ignoring pre-market and post-market, since options cannot trade during those times).
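In plain math, per the description above (any extra scaling constant in the image is not reproduced here), that's:

\mathrm{NOPE}_t = \frac{\sum_{i \in \text{calls}} \Delta_i \, V_i \;-\; \sum_{j \in \text{puts}} \lvert \Delta_j \rvert \, V_j}{\text{total shares traded in session } t}

where V_i is today's traded volume on each contract and \Delta_i its delta.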
Effectively, NOPE is a rough and dirty way to approximate the impact of delta-gamma hedging as a function of share volume, with us hand-waving the following factors:
  1. To keep calculations simple, we assume that all counter-parties are hedged. This is obviously not true, especially for idiots who believe theta ganging is safe, but it holds largely true especially for highly liquid tickers, or tickers with designated market makers (e.g. any ticker in the NASDAQ, for instance).
  2. We assume that all hedging takes place via shares. For SPY and other products tracking the S&P, for instance, market makers can actually hedge via futures or other options. This has the benefit for large positions of not moving the underlying price, but still makes up a fairly small amount of hedges compared to shares.

Winding and Unwinding

I briefly touched on this in a past post, but two properties of NOPE seem to apply well to ER-like behavior (aka any binary catalyst event):
  1. NOPE measures sentiment - In general, the options market is seen as better informed than share traders (e.g. insiders trade via options, because of leverage + easier to mask positions). Therefore, a heavy call/put skew is usually seen as a bullish sign, while the reverse is also true.
  2. NOPE measures system stability
I'm not going to one-sentence explain #2, because why say in one sentence what I can write 1000 words on. In short, NOPE intends to measure sensitivity of the system (the ticker) to disruption. This makes sense, when you view it in the context of delta-gamma hedging. When we assume all counter-parties are hedged, this means an absolutely massive amount of shares get sold/purchased when the underlying price moves. This is because of the following:
a) Assume I, Mr. MM, sell 1000 call options for NKLA 25C 10/23 and 300 put options for NKLA 15p 10/23. I'm just going to make up deltas because it's too much effort to calculate them - 30 delta call, 20 delta put.
This implies Mr. MM needs the following to delta hedge: (1000 call options * 30 shares to buy for each) [to balance out writing calls] - (300 put options * 20 shares to sell for each) = 24,000 net shares Mr. MM needs to acquire to balance out his deltas/be fully neutral.
b) This works well when NKLA is at $20. But what about when it hits $19 (because it only can go down, just like their trucks). Thanks to gamma, now we have to recompute the deltas, because they've changed for both the calls (they went down) and for the puts (they went up).
Let's say to keep it simple that now my calls are 20 delta, and my puts are 30 delta. From the 24,000 net shares, Mr. MM has to now have:
(1000 call options * 20 shares to have for each) - (300 put options * 30 shares to sell for each) = 11,000 shares.
Therefore, with a $1 shift in price, now to hedge and be indifferent to direction, Mr. MM has to go from 24,000 shares to 11,000 shares, meaning he has to sell 13,000 shares ASAP, or take on increased risk. Now, you might be saying, "13,000 shares seems small. How would this disrupt the system?"
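If it helps, here's that same arithmetic as a few lines of Python (a toy sketch; the "deltas" are already expressed as shares per contract, i.e. delta times the 100-share contract multiplier, exactly as in the made-up numbers above):

def net_hedge_shares(calls, call_delta_shares, puts, put_delta_shares):
    # Shares Mr. MM must hold to stay delta neutral: long shares against
    # the calls he sold, minus short shares against the puts he sold.
    return calls * call_delta_shares - puts * put_delta_shares

at_20 = net_hedge_shares(1000, 30, 300, 20)  # 24,000 shares held at $20
at_19 = net_hedge_shares(1000, 20, 300, 30)  # 11,000 shares needed at $19
print(at_20 - at_19)                         # 13,000 shares to sell ASAP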
(This process, by the way, is called hedge unwinding)
It won't, in this example. But across thousands of MMs and millions of contracts, this can - especially in highly optioned tickers - make up a substantial fraction of the net flow of shares per day. And as we know from our desk example, the buying or selling of shares directly changes the price of the stock itself.
This, by the way, is why the NOPE formula takes the shape it does. Some astute readers might notice it looks similar to GEX, which is not a coincidence. GEX however replaces daily volume with open interest, and measures gamma over delta, which I did not find good statistical evidence to support, especially for earnings.
So, with our example above, why does NOPE measure system stability? We can assume for argument's sake that if someone buys a share of NKLA, they're fine with moderate price swings (+- $20 since it's NKLA, obviously), and in it for the long/medium haul. And in most cases this is fine - we can own stock and not worry about minor swings in price. But market makers can't* (they can, but it exposes them to risk), because of how delta works. In fact, for most institutional market makers, they have clearly defined delta limits by end of day, and even small price changes require them to rebalance their hedges.
This over the whole market adds up to a lot of shares moving, just to balance out your stupid Robinhood YOLOs. While there are some tricks (dark pools, block trades) to not impact the price of the underlying, the reality is that the more options contracts there are on a ticker, the more outsized influence they will have on the ticker's price. This can technically be exactly balanced, if option put delta is equal to option call delta, but that never actually ends up being the case. And unlike shares traded, the shares representing the options are more unstable, meaning they will be sold/bought in response to small price shifts. And will end up magnifying those price shifts, accordingly.

NOPE and Earnings

So we have a new shiny indicator, NOPE. What does it actually mean and do?
There's much literature going back to the 1980s showing that options markets do have some level of predictiveness towards earnings, which makes sense intuitively. Unlike shares markets, where you can continue to hold your share even if it dips 5%, in options you get access to expanded opportunity to make riches... and losses. An options trader betting on earnings is making a risky and therefore informed bet that he or she knows the outcome, versus a share trader who might be comfortable bagholding in the worst case scenario.
As I've mentioned largely in comments on my prior posts, earnings is a special case because, unlike popular misconceptions, stocks do not go up and down solely due to analyst expectations being met, beaten, or missed. In fact, stock prices move according to the consensus market expectation, which is a function of all the participants' FEPF on that ticker. This is why the price moves so dramatically - even if a stock beats, it might not beat enough to justify the high price tag (FSLY); even if a stock misses, it might have spectacular guidance or maybe the market just was assuming it would go bankrupt instead.
To look at the impact of NOPE and why it may play a role in post-earnings-announcement immediate price moves, let's review the following cases:
  1. Stock Meets/Exceeds Market Expectations (aka price goes up) - In the general case, we would anticipate post-ER market participants value the stock at a higher price, pushing it up rapidly. If there's a high absolute value of NOPE on said ticker, this should end up magnifying the positive move since:
a) If NOPE is high negative - This means a ton of put buying, which means a lot of those puts are now worthless (due to price decoherence). This means that to stay delta neutral, market makers need to close out their sold/shorted shares, buying them, and pushing the stock price up.
b) If NOPE is high positive - This means a ton of call buying, which means a lot of puts are now worthless (see a) but also a lot of calls are now worth more. This means that to stay delta neutral, market makers need to close out their sold/shorted shares AND also buy more shares to cover their calls, pushing the stock price up.
  2. Stock Meets/Misses Market Expectations (aka price goes down) - Inversely to what I mentioned above, this should push the stock price down, fairly immediately. If there's a high absolute value of NOPE on said ticker, this should end up magnifying the negative move since:
a) If NOPE is high negative - This means a ton of put buying, which means a lot of those puts are now worth more, and a lot of calls are now worth less/worthless (due to price decoherence). This means that to stay delta neutral, market makers need to sell/short more shares, pushing the stock price down.
b) If NOPE is high positive - This means a ton of call buying, which means a lot of calls are now worthless (see a) but also a lot of puts are now worth more. This means that to stay delta neutral, market makers need to sell even more shares to keep their calls and puts neutral, pushing the stock price down.
---
Based on the above two cases, it should be a bit more clear why NOPE is a measure of sensitivity to system perturbation. While we previously discussed it in the context of magnifying directional move, the truth is it also provides a directional bias to our "random" walk. This is because given a price move in the direction predicted by NOPE, we expect it to be magnified, especially in situations of price decoherence. If a stock price goes up right after an ER report drops, even based on one participant deciding to value the stock higher, this provides a runaway reaction which boosts the stock price (due to hedging factors as well as other participants' behavior) and inures it to drops.

NOPE and NOPE_MAD

I'm going to gloss over this section because this is more statistical methods than anything interesting. In general, if you have enough data, I recommend using NOPE_MAD over NOPE. While NOPE in theory represents a "real" quantity (net option delta over net share delta), NOPE_MAD (the median absolute deviation of NOPE) does not. NOPE_MAD simply helps answer/compare the following:
  1. How exceptional is today's NOPE versus historic baseline (30 days prior)?
  2. How do I compare two tickers' NOPEs effectively (since some tickers, like TSLA, have a baseline positive NOPE, because Elon memes)? In the initial stages, we used just a straight numerical threshold (let's say NOPE >= 20), but that quickly broke down. NOPE_MAD aims to detect anomalies, because anomalies in general give you tendies.
I might add the formula later in Mathenese, but simply put, to find NOPE_MAD you do the following:
  1. Calculate today's NOPE score (this can be done end of day or intraday, with the true value being EOD of course)
  2. Calculate the end of day NOPE scores on the ticker for the previous 30 trading days
  3. Compute the median of the previous 30 trading days' NOPEs
  4. From the median, find the 30 days' median absolute deviation (https://en.wikipedia.org/wiki/Median_absolute_deviation)
  5. Find today's deviation as compared to the MAD calculated by: [(today's NOPE) - (median NOPE of last 30 days)] / (median absolute deviation of last 30 days)
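Or, the same five steps as a short Python sketch (function and variable names are mine):

import statistics

def nope_mad(todays_nope, past_30_nopes):
    # Steps 2-3: median of the previous 30 trading days' EOD NOPE scores.
    med = statistics.median(past_30_nopes)
    # Step 4: median absolute deviation around that median.
    mad = statistics.median(abs(x - med) for x in past_30_nopes)
    # Step 5: today's deviation, reported in sigma.
    return (todays_nope - med) / mad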
This is usually reported as sigma (σ), and has a few interesting properties:
  1. The mean of NOPE_MAD for any ticker is almost exactly 0.
  2. [Lily's Speculation's Speculation] NOPE_MAD acts like a spring, and has a tendency to reverse direction as a function of its magnitude. No proof on this yet, but exploring it!

Using the NOPE to predict ER

So the last section was a lot of words and theory, and a lot of what I'm mentioning here is empirically derived (aka I've tested it out, versus just blabbered).
In general, the following holds true:
  1. 3 sigma NOPE_MAD tends to be "the threshold": For very low NOPE_MAD magnitudes (+- 1 sigma), it's effectively just noise, and directionality prediction is low, if not non-existent. It's not exactly like 3 sigma is a play and 2.9 sigma is not a play; NOPE_MAD accuracy increases as NOPE_MAD magnitude (either positive or negative) increases.
  2. NOPE_MAD is only useful on highly optioned tickers: In general, I introduce another parameter for sifting through "candidate" ERs to play: option volume * 100/share volume. When this ends up over let's say 0.4, NOPE_MAD provides a fairly good window into predicting earnings behavior.
  3. NOPE_MAD only predicts during the after-market/pre-market session: I also have no idea if this is true, but my hunch is that next day behavior is mostly random and driven by market movement versus earnings behavior. NOPE_MAD for now only predicts direction of price movements right between the release of the ER report (AH or PM) and the ending of that market session. This is why in general I recommend playing shares, not options for ER (since you can sell during the AH/PM).
  4. NOPE_MAD only predicts direction of price movement: This isn't exactly true, but it's all I feel comfortable stating given the data I have. On observation of ~2700 data points of ER-ticker events since Mar 2019 (SPY 500), I only so far feel comfortable predicting whether stock price goes up (>0 percent difference) or down (<0 percent difference). This is +1 for why I usually play with shares.
Some statistics:
#0) As a baseline/null hypothesis, after ER on the SPY500 since Mar 2019, 50-51% of price movements in the AH/PM are positive (>0) and ~46-47% are negative (<0).
#1) For NOPE_MAD >= +3 sigma, roughly 68% of price movements are positive after earnings.
#2) For NOPE_MAD <= -3 sigma, roughly 29% of price movements are positive after earnings.
#3) When using a logistic model of only data including NOPE_MAD >= +3 sigma or NOPE_MAD <= -3 sigma, and option/share vol >= 0.4 (around 25% of all ERs observed), I was able to achieve 78% predictive accuracy on direction.
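For the curious, the screen in #3 looks roughly like this as code (a sketch only: the file and column names are made up, and my real pipeline differs):

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical dataset: one row per ER-ticker event since Mar 2019.
df = pd.read_csv("er_events.csv")
mask = (df["nope_mad"].abs() >= 3) & (df["opt_vol_over_share_vol"] >= 0.4)
events = df[mask]  # around 25% of all observed ERs

X = events[["nope_mad"]]
y = (events["ah_pm_move_pct"] > 0).astype(int)  # up (1) or down (0) after ER
model = LogisticRegression().fit(X, y)
print(model.score(X, y))  # ~0.78 directional accuracy on my data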

Caveats/Read This

Like all models, NOPE is wrong, but perhaps useful. It's also fairly new (I started working on it around early August 2020), and in fact, my initial hypothesis was exactly incorrect (I thought the opposite would happen, actually). Similarly, as commenters have pointed out, the timeline of data I'm using is fairly compressed (since Mar 2019), and trends and models do change. In fact, I've noticed significantly lower accuracy since the coronavirus recession (when I measured it in early September), but I attribute this mostly to a smaller date range, more market volatility, and honestly, dumber option traders (~65% accuracy versus nearly 80%).
My advice so far if you do play ER with the NOPE method is to use it as following:
  1. Buy/short shares approximately right when the market closes before ER. Ideally even buying it right before the earnings report drops in the AH session is not a bad idea if you can.
  2. Sell/buy to close said shares at the first sign of major weakness (e.g. if the NOPE predicted outcome is incorrect).
  3. Sell/buy to close shares even if it is correct ideally before conference call, or by the end of the after-market/pre-market session.
  4. Only play tickers with high NOPE as well as high option/share vol.
---
In my next post, which may be in a few days, I'll talk about potential use cases for SPY and intraday trends, but I wanted to make sure this wasn't like 7000 words by itself.
Cheers.
- Lily
submitted by the_lilypad to r/thecorporation

Rank 18 Challenger Mech One Trick Guide 10.16

Edit - It's been a few days since I posted, so I won't be checking in to answer new questions here. If you have any questions feel free to reach out to me through my stream. This guide should still be relevant to patch 10.17, but I'd recommend Quicksilver > GA due to Titan's Resolve nerfs.
Hello, I'm Atornyo and I first hit challenger in NA as a mech one-trick last patch and achieved as high as rank 18 in patch 10.16. I really enjoy mech as I mostly played reroll mech to hit diamond last set and think it is the most interesting composition in the game. I will be referring to mech pilots with a focus on Viktor carry as Viktor Mech.
My lolchess: https://lolchess.gg/profile/na/atornyo
Ideal Viktor Mech Level 8: https://lolchess.gg/builder/set3.5?deck=f6e3df00de7c11ea85825783e5dd3235 (legendaries can replace units with similar traits if you find a 2 star version of them or find a legendary before 2 starring the unit they replace: Lulu>Cass; 1 star gp>ziggs if you have an extra defensive item to give gp; Ekko>shaco)
Level 9: https://lolchess.gg/builder/set3.5?deck=1c8af8c0de7d11ea8f93e91782b06499
Items that can be used in Mech Viktor:
For the Mech -
Titan’s Resolve - If your mech has one of Hand of Justice or Guardian Angel or both I recommend building this item, without either of these items you won’t see much value from Titan’s Resolve until you have a level 5 or 6 mech which means you have 2 star annie rumble and fizz. This item has the potential to be the single strongest item that your mech can use and is worth playing for every game. The downside to this item is that there is zero value in slamming the item early game as it will never hit 50 stacks until you have a mech online. The only time you are looking to potentially not have this item on your mech is if there are many people contesting (a 4+ mech lobby) the reason for this is because this item greatly increases in value the higher level your mech is. Once it hits 50 stacks your mech will 1v9 especially when coupled with a Hand of Justice or Guardian Angel.
Hand of Justice - This item is so good it's worth slamming every game, as it works well on early game carries and is really solid on mech.
Guardian Angel - Solid item but ONLY place this on your mech if you are certain there will be a Hand of Justice or a Titan’s Resolve with it. Works well with Titan’s because your mech doesn’t lose its Titan’s stacks after its first death and can slap around the enemy team after reviving. Works well with Hand of Justice as it can heal a significant amount of HP post mortem. This item also works really well with rumble as he will oftentimes cast after coming out of the mech and his spell doesn’t go away while reviving.
Quicksilver - This item is BiS for mech IF you are unable to complete the trifecta mentioned above. In lobbies with many zephyrs this item can result in insane value, however, with optimal scouting you can sacrifice Ziggs and Cass to the Zephyr gods. The reason I believe this item isn’t as godly as many others make it out to be is the fact that it does absolutely nothing in a number of matchups other than provide 20% dodge. The problem with this item is that it is NOT slammable until you have a mech online.
Bramble Vest - One of the strongest items to slam early game. If you take an armor off the starting carousel and are blessed enough to find another by 2-1 you are building this item.
TrapClaw - This item is mostly just a 20% dodge stat boost. This item isn't very slammable early; personally, I only build it if I feel I don't have any other options.
Shroud of Stillness - This item is a 20% dodge stat boost that can turn a fight with optimal positioning. If you build this item you need to scout EVERY round. Relatively slammable early but not on the same tier as bramble.
ZZ’rot - You are building this item because you want to win streak early. Neat thing with this item is that you get two voidlings over the course of the fight.
Warmog’s Armor - Probably the single strongest early game item in the game, give a protector this item and go afk until stage 4.
Ionic Spark - Another very slammable item; if you have a rod and a cloak at any point before Krugs it is worth slamming, as this item will save you infinite HP.
Thief’s Gloves - This item is a bait on mech. In the past I would play Thief’s Gloves mech as a transition unit while I pivoted to a non mech composition. Nowadays I only play mech, so I don't recommend giving the mech this item. Not a bad shaco item, and once you replace shaco with Ekko he loves it.
Itemizing Viktor - Viktor wants a morellonomicon in order to nuke the enemy team’s healing potential, along with blue buff or Spear of Shojin, as Viktor should be able to kill the backline in 2-3 spells.
If you're considering playing mech here is what you should look to do in each stage:
Stage 1: look to grab Armor>Tear>Crit Glove on the first carousel; units holding these specific items, such as armor Malphite/Illaoi or tear Ziggs, can be free tickets to winstreaking early. After carousel I try to hold brawlers, rebels, and infiltrators as I believe it is the strongest opener for mech; however, if it is clear that a stronger board is available, such as a 2 star poppy or jarvan while you only have a 1 star Illaoi/Malphite, it is worth pivoting to that. On the round that Kayn appears (1-4) I will prelevel, which means I buy experience in order to achieve a level 4 shop on 2-1. This is very important as a unit like rumble/shaco/neeko with a belt can win streak the entirety of stage 2. I try to hold on to any Annie I find as I like to hold one whenever possible, but it is worth selling her in order to pick up any brawler/rebel/infiltrator or to ensure that you can prelevel.
Stage 2: I attempt to win streak through stage 2 every single game, Viktor Mech and Mech infiltrators are not very item dependent and you can switch between the two depending on what items the game gives you. If you have any of Bramble Vest, Hand of Justice, Guardian Angel, Ionic Spark, Warmogs,blue buff, morellonomicon, or ZZ'rot Portal it is best to slam the item as the Mech can hold any of those items other than bluebuff and morellonomicon and those last 2 items are vital for viktor, Illaoi is a great holder for Mech items and Ziggs/ahri are great holders for Viktor items. On 2-1 play whatever your strongest board is as with any non-hyperroll compositions. On 2-3 before the stage 2 carousel I will prelevel in order to get a level 5 shop on 2-5 post carousel, this is extremely strong for Mech Pilot compositions as it gives you the opportunity to hit a full Mech on stage 2 or other strong early game units like rumble gnar wukong and fizz. In the case that you are on a 2 or 3 loss streak after the stage 2 carousel it is best to attempt a full loss streak in order to maximize early gold, this is the ONLY time that I would ever consider attempting to lose a round. If you are running infiltrators in your early game composition it is extra important to scout EVERY round as the difference between an infiltrator hitting a Ziggs or a 2 star frontliner is winning or losing a fight.
Stage 3: This is where a lot of decision making enters the game. If I am winstreaking with a streak of 3 or greater and I will have more than 10 gold after leveling, I will level on 3-1; otherwise I will level on 3-2. If I have fizz and rumble by 3-2 and am level 6, I am willing to roll down to 10 gold in order to hit an annie. If you roll down this early into the game it is vital that you do not tunnel only on units that go in your final composition; you are not rolling solely to hit a Mech, you are rolling to maintain win streak. This means that you will look to complete any pairs or to add unit upgrades to your current board. If you roll down and do not upgrade your board at all you will be in a very bad place, so it is important to keep a very open mind on what can be thrown in to improve your composition. If I don't roll down on 3-2 I usually do not roll at all, unless I am taking a large amount of damage every round, in which case it can be a good idea to level to 7 post stage 3 carousel (3-5) and roll some gold to stabilize. If you are rolling it is important to not roll below 10 in stage 3 unless you have a great reason to, such as winstreaking and holding 4-6 pairs while knowing there are opponents that can beat you if you don't hit those upgrades.
What to do if you hit early Mech: Mech in stage 3 can be played in many different ways. Most of the time you will sell your frontline and be looking to play Mech + whatever your strongest backliners are which are usually the level 2 units you already had. Ideally you want to have a ziggs and infiltrator or be running 4 sorcs + Mech but it is not vital in stage 3.
Stage 4: This is where the decision between Viktor Mech and Mech infiltrators is made. If you are bleeding out and approaching death (<40hp) on 4-1, it is worth leveling to 7 and rolling down to stabilize. Which means you are playing the level 8 board minus ziggs; if by some miracle you hit aurelion sol, feel free to play zed/ziggs/asol instead of the mystic units. However, in the majority of games you will level to 8 on 4-3 and roll for your board.
The 4-3 rolldown (Viktor Mech) - While rolling you are looking to hit this board: https://lolchess.gg/builder/set3.5?deck=f6e3df00de7c11ea85825783e5dd3235 (it is discussed earlier when to replace units with legendaries). Also, I value cass and Karma over Soraka as, before the mech dies, other units tend to take very little to 0 damage. If you run into a GP Mercenary upgrade in this roll down it is only worth purchasing double strike as they are so expensive. You can stop rolling once you hit the units in the composition and have a level 6 mech (2 star annie, rumble and fizz), a 2 star legendary or 2 star Viktor. If you hit any of those requirements with more than 20 gold and are somewhat healthy you can usually go to level 9 later in the game in order to increase your chances at first place. If you hit a 2 star asol and do not have blue buff, Asol can replace Viktor at levels 8 and 9.
If you hit it is very likely that you will win streak through stage 4 and into stage 5.
Stage 5: If you rolled down at level 7 on 4-1 you are leveling to 8 and rolling on 5-1 in a last ditch effort to survive. This rolldown is the same as the standard 4-3 one. If you were able to stop rolling early and have hoarded a large amount of gold, look to go level 9. Only go level 9 if you have at least 30 gold to roll or have more than 15 gold and already hold 1 or more legendary pairs. If you are about to die feel free to roll on 8 in order to complete vital 2 stars, which are any mech pilot unit + viktor and shaco. The win conditions for Mech Viktor are good mech items + perfect item 2 star Viktor, or Level 9 with 2 star legendaries. The optimal level 9 composition looks like this https://lolchess.gg/builder/set3.5?deck=1c8af8c0de7d11ea8f93e91782b06499 with the option to replace Viktor with urgot 2 and giving the blue buff to urgot and the morellos to Asol. While it is situational, it is almost always better to run a 2 star unit over a 1 star legendary. In the case that you were fortunate enough to find an infiltrator spatula, play it on either viktor or gangplank and instead of running Asol play 4 infiltrator level 9: https://lolchess.gg/builder/set3.5?deck=7aa7b960de8511ea9ce08d2f4408daad
If you hit either of these level 9 boards with 2 star units it is a 1st unless an opponent has a 3 star 4 cost unit or out positions you really badly.
General advice when playing Mech Viktor:
Differences between Galaxies
Dwarf Planet - Mech is so busted on this galaxy, I have seen Mech compositions hold hands 1-5 multiple times in challenger elo games. Look for titans resolve as if it procs your Mech will hit the backline. Infiltrators are weaker on this map so keep that in mind when building early game boards. Gangplank is also OMEGABUSTED on this galaxy.
Neekoverse - I just wanted to thank riot for removing this Galaxy
Superdense - I tend to run 4 infiltrator instead of ziggs at level 8. Also if winstreaking you might roll more in stage 3 as any round you win it is likely you're doing an extra 2 damage which puts a lot of pressure on a lobby.
Trade Sector - Greatly dislike this galaxy for Mech but never miss the chance to level if you can afford it while winstreaking. Going level 7 right after stage 3 carousel can be the difference between hitting an early legendary or hitting important mech units.
Treasure Trove - Not a great galaxy for Mech as you have 4 units in your composition that do not benefit greatly from items (Mystic units and annie/fizz). Also, Mech doesn't benefit too greatly from having perfect items, so the benefit that other compositions get is much greater.
Galactic Armory - Great for pushing early winstreaks. Always look to slam 2 full items before any pvp rounds even begin.
Binary Star - Look to take glove or tear on the first carousel. NEED to win streak as mech isn't as strong later in the game. Not as bad for mech as people make it seem but you usually need 2 dodge items (QSS, HOJ, Trapclaw, and shroud of stillness) in order to make your mech survive versus the 4 cyber players in the lobby. Need perfect Viktor items as another issue mech has in this galaxy is the fact that mystic units along with other mech units can't utilize items well.
Plunder Planet - Always push levels and try to bully other players around. Anytime you can prevent another player from killing any of your units you are denying them 2-3 gold, which is huge early game. Most of the time you will level to 8 on 4-1 and be 9 in late stage 4 or early stage 5. Can also decide to roll down on 3-5 after the stage 3 carousel at level 7 in order to get as much gold as possible off the galaxy and prevent other players from killing units. Everyone spikes really hard in stage 4 on this galaxy.
Salvage world - I'm still unsure of this galaxy, I have only played 5 games on this galaxy but in 2 of them I opened with a redbuff ludens lucian with blaster buff that felt really strong. Not as important to run an early game composition that can utilize mech items well.
I'm sure I missed some stuff within this guide and will try to answer any questions in the comments over the next few days.
submitted by TtvBananaNationss to r/CompetitiveTFT

Strategy Tester Tradingview

So basically this is my strategy (testing it for binary options)
// Version 0 - Created by UCS_Gears
// Version 1 - Modified by Chris Moody "Added B/S"
// Version 2 - Modified by UCS_Gears, "Replaced B/S with arrows", "Ability to change Overbought / Oversold Levels"

strategy(title="DMI Stochastic Extreme", shorttitle="DMI-Stochastic", overlay=false)
// Wells Wilders MA
wwma(l,p) =>
    wwma = (nz(wwma[1]) * (l - 1) + p) / l

// Inputs
DMIlength = input(10, title = "DMI Length")
Stolength = input(3, title = "Stochastic Length")
Oversold = input(10, title = "Oversold")
Overbought = input(90, title="Overbought")

// DMI Osc Calc
hiDiff = high - high[1]
loDiff = low[1] - low
plusDM = (hiDiff > loDiff) and (hiDiff > 0) ? hiDiff : 0
minusDM = (loDiff > hiDiff) and (loDiff > 0) ? loDiff : 0
ATR = wwma(DMIlength, tr)
PlusDI = 100 * wwma(DMIlength,plusDM) / ATR
MinusDI = 100 * wwma(DMIlength,minusDM) / ATR
osc = PlusDI - MinusDI

// DMI Stochastic Calc
hi = highest(osc, Stolength)
lo = lowest(osc, Stolength)
Stoch = sum((osc-lo),Stolength) / sum((hi-lo),Stolength) *100
plot(Stoch, color = gray, title = 'Stochastic', linewidth = 2, style = line)

crossUp = Stoch[1] < Oversold and Stoch > Oversold ? 1 : 0
crossDown = Stoch[1] > Overbought and Stoch < Overbought ? 1 : 0

plot (Overbought, color = red, linewidth = 1, title = 'Over Bought')
plot (Oversold, color = green, linewidth = 1, title = 'Over Sold')

plotchar(crossUp, title="Crossing Up", char='↑', location=location.bottom, color=aqua, transp=0, offset=0)
plotchar(crossDown, title="Crossing Down",char='↓', offset=0, location=location.top, color=aqua, transp=0)


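// Enter on the cross; the strategy.close calls below flatten the position
// at the start of each new candle, so each trade is held for roughly one bar.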
strategy.entry("Call", strategy.long, when = crossUp == true)
strategy.entry("Put", strategy.short, when = crossDown == true )

strategy.close("Put", when = barstate.isnew)
strategy.close("Call", when = barstate.isnew)
It's supposed to go in at a crossdown, hold for 1 candle, and then close the trade, and vice versa. But the thing is, if for example I get a 40% winrate with it, I should get a 60% winrate if I just reverse the settings, but instead I get a 45% winrate.
submitted by PresentationOk5418 to r/algotrading

Best Practices for A C Programmer

Hi all,
Long time C programmer here, primarily working in the embedded industry (particularly involving safety-critical code). I've been a lurker on this sub for a while but I'm hoping to ask some questions regarding best practices. I've been trying to start using C++ in a lot of my work - particularly taking advantage of some of the code-reuse and power of C++ (particularly constexpr, some loose template programming, stronger type checking, RAII etc).
I would consider myself maybe an 8/10 C programmer but I would conservatively rate myself as a 3/10 in C++ (with 1/10 meaning the absolute minimum ability to write, google syntax errata, diagnose, and debug a program). Perhaps I should preface the post by saying that I am more than aware that C is by no means a subset of C++ and there are many language constructs permitted in one that are not in the other.
In any case, I was hoping to get a few answers regarding best practices for C++. Keep in mind that the typical target device I work with does not have a heap of any sort, and so a lot of the features that constitute "modern" C++ (non-initialization use of dynamic memory, STL meta-programming, hash-maps, lambdas (as I currently understand them)) are a big no-no in terms of passing safety review.

When do I overload operators inside a class as opposed to outisde?

... And what are the arguments for/against each paradigm? See below:
/* Overload example 1 (overloaded inside class) */
class myclass
{
private:
    unsigned int a;
    unsigned int b;
public:
    myclass(void);
    unsigned int get_a(void) const;
    bool operator==(const myclass &rhs);
};

bool myclass::operator==(const myclass &rhs)
{
    if (this == &rhs)
    {
        return true;
    }
    else
    {
        if (this->a == rhs.a && this->b == rhs.b)
        {
            return true;
        }
    }
    return false;
}
As opposed to this:
/* Overload example 2 (overloaded outside of class) */
class CD
{
private:
    unsigned int c;
    unsigned int d;
public:
    CD(unsigned int _c, unsigned int _d) : c(_c), d(_d) {}; /* CTOR */
    unsigned int get_c(void) const; /* trivial getters */
    unsigned int get_d(void) const; /* trivial getters */
};

/* In this implementation, if I don't make the getters (get_c, get_d) constant,
 * it won't compile despite their access specifiers being public.
 *
 * It seems like the const keyword in C++ really should be interpreted as
 * "read-only AND no side effects" rather than just read-only as in C.
 * But my current understanding may just be flawed...
 *
 * My confusion is as follows: The function args are constant references,
 * so why do I have to promise that the function methods have no side effects on
 * the private object members? Is this something specific to the == operator? */
bool operator==(const CD &lhs, const CD &rhs)
{
    if (&lhs == &rhs)
        return true;
    else if ((lhs.get_c() == rhs.get_c()) && (lhs.get_d() == rhs.get_d()))
        return true;
    return false;
}
When should I use the example 1 style over the example 2 style? What are the pros and cons of 1 vs 2?

What's the deal with const member functions?

This is more of a subtle confusion, but it seems like in C++ the const keyword means different things based on the context in which it is used. I'm trying to develop a relatively nuanced understanding of what's happening under the hood, and I most certainly have misunderstood many language features, especially because C++ has likely changed greatly in the last ~6-8 years.
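To make that concrete, here's a minimal example of my current mental model (which may well be wrong):

/* As I understand it: const on a member function const-qualifies 'this' */
class ctx
{
private:
    unsigned int x;
public:
    unsigned int get_x(void) const /* inside, 'this' is 'const ctx *' */
    {
        /* x++; would fail to compile: no writes through 'this' allowed */
        return x;
    }
};

void read_only(const ctx &c)
{
    (void)c.get_x(); /* legal only because get_x() is const-qualified */
}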

When should I use enum classes versus plain old enum?

To be honest I'm not entirely certain I fully understand the implications of using enum versus enum class in C++.
This is made more confusing by the fact that there are subtle differences between the way C and C++ treat or permit various language constructs (const, enum, typedef, struct, void*, pointer aliasing, type puning, tentative declarations).
In C, enums decay to integer values at compile time. But in C++, the way I currently understand it, enums are their own type. Thus, in C, the following code would be valid, but a C++ compiler would generate a warning (or an error, haven't actually tested it)
/* Example 3: (enums: Valid in C, invalid in C++) */
enum COLOR
{
    RED,
    BLUE,
    GREY
};

enum PET
{
    CAT,
    DOG,
    FROG
};

/* This is compatible with a C-style enum conception but not C++ */
enum SHAPE
{
    BALL = RED, /* In C, these work because int = int is valid */
    CUBE = DOG,
};
If my understanding is indeed the case, do enums have an implicit namespace (language construct, not the C++ keyword) as in C? As an add-on to that, in C++, you can also declare enums as a sort of inherited type (below). What am I supposed to make of this? Should I just be using it to reduce code size when possible (similar to gcc option -fuse-packed-enums)? Since most processors are word based, would it be more performant to use the processor's word type than the syntax specified above?
/* Example 4: (Purely C++ style enums, use of enum class / enum struct) */

/* C++ permits forward enum declaration with type specified */
enum FRUIT : int;
enum VEGGIE : short;

enum FRUIT /* As I understand it, these are ints */
{
    APPLE,
    ORANGE,
};

enum VEGGIE /* As I understand it, these are shorts */
{
    CARROT,
    TURNIP,
};
Complicating things even further, I've also seen the following syntax:
/* What the heck is an enum class anyway? When should I use them? */
enum class THING
{
    THING1,
    THING2,
    THING3
};

/* And if classes and structs are interchangeable (minus assumptions
 * about default access specifiers), what does that mean for
 * the following definition? */
enum struct FOO /* Is this even valid syntax? */
{
    FOO1,
    FOO2,
    FOO3
};
Given that enumerated types greatly improve code readability, I've been trying to wrap my head around all this. When should I be using the various language constructs? Are there any pitfalls in a given method?

When to use POD structs (a-la C style) versus a class implementation?

If I had to take a stab at answering this question, my intuition would be to use POD structs for passing aggregate types (as in function arguments) and using classes for interface abstractions / object abstractions as in the example below:
struct aggregate
{
    unsigned int related_stuff1;
    unsigned int related_stuff2;
    char name_of_the_related_stuff[20];
};

class abstraction
{
private:
    unsigned int private_member1;
    unsigned int private_member2;
protected:
    unsigned int stuff_for_child_classes;
public:
    /* big 3 */
    abstraction(void);
    abstraction(const abstraction &other);
    ~abstraction(void);

    /* COPY semantic (I have a better grasp on this abstraction than MOVE) */
    abstraction &operator=(const abstraction &rhs);

    /* MOVE semantic (subtle semantics of which I don't fully grasp yet) */
    abstraction &operator=(abstraction &&rhs);

    /*
     * I've seen implementations of this that use a copy + swap design pattern
     * but that relies on std::move and I realllllly don't get what is
     * happening under the hood in std::move
     */
    abstraction &operator=(abstraction rhs);

    void do_some_stuff(void); /* member function */
};
Is there an accepted best practice for this or is it entirely preference? Are there arguments for only using classes? What about vtables (in cases where I need byte-wise alignment, such as device register overlays, and have to guarantee the placement of precise members)?

Is there a best practice for integrating C code?

Typically (and up to this point), I've just done the following:
/* Example 5: Linking a C library */

/* Disable name-mangling, and then give the C++
 * linker / toolchain the compiled binaries */
#ifdef __cplusplus
extern "C" {
#endif /* C linkage */

#include "device_driver_header_or_a_c_library.h"

#ifdef __cplusplus
}
#endif /* C linkage */

/* C++ code goes here */
As far as I know, this is the only way to prevent the C++ compiler from generating different object symbols than those in the C header file. Again, this may just be ignorance of C++ standards on my part.

What is the proper way to selectively incorporate RTTI without code size bloat?

Is there even a way? I'm relatively fluent in CMake but I guess the underlying question is whether binaries that incorporate RTTI are compatible with those that don't (and the pitfalls that may ensue when mixing the two).

What about compile time string formatting?

One of my biggest gripes about C (particularly regarding string manipulation) is that variadic arguments frequently (especially on embedded targets) get handled at runtime. This makes string manipulation via the C standard library (printf-style format strings) uncomputable at compile time in C.
This is sadly the case even when the ranges and values of parameters and formatting outputs are entirely known beforehand. C++ template programming seems to be a big thing in "modern" C++ and I've seen a few projects on this sub that use the turing-completeness of the template system to do some crazy things at compile time. Is there a way to bypass this ABI limitation using C++ features like constexpr, templates, and lambdas? My (somewhat pessimistic) suspicion is that since the generated assembly must be ABI-compliant this isn't possible. Is there a way around this? What about the std::format stuff I've been seeing on this sub periodically?
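For reference, this toy example is the flavor of compile-time evaluation I mean (this much I know works since C++14; the open question is whether full printf-style formatting can be pushed to compile time the same way):

constexpr unsigned int const_strlen(const char *s)
{
    unsigned int n = 0U;
    while (s[n] != '\0')
    {
        ++n;
    }
    return n;
}

/* Evaluated entirely at compile time: no runtime strlen call is emitted */
static_assert(const_strlen("temp=%u") == 7U, "length known at compile time");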

Is there a standard practice for namespaces and when to start incorporating them?

Is it from the start? Is it when the boundaries of a module become clearly defined? Or is it just personal preference / based on project scale and modularity?
If I had to make a guess it would be at the point that you get a "build group" for a project (group of source files that should be compiled together) as that would loosely define the boundaries of a series of abstractions APIs you may provide to other parts of a project.
--EDIT-- markdown formatting
submitted by aWildElectron to r/cpp

Cross-compiling to AVR with clang

Hi,
I am trying to cross-compile a C++ file to AVR using clang. I am running Ubuntu from a windows laptop using WSL. I am using clang built from source to enable AVR support.

I checked that AVR was indeed a registered target by running the command llc --version.
Here is the output running llc --version:
LLVM (http://llvm.org/):
  LLVM version 12.0.0git
  Optimized build.
  Default target: x86_64-unknown-linux-gnu
  Host CPU: skylake

  Registered Targets:
    aarch64    - AArch64 (little endian)
    aarch64_32 - AArch64 (little endian ILP32)
    aarch64_be - AArch64 (big endian)
    amdgcn     - AMD GCN GPUs
    arm        - ARM
    arm64      - ARM64 (little endian)
    arm64_32   - ARM64 (little endian ILP32)
    armeb      - ARM (big endian)
    avr        - Atmel AVR Microcontroller
    [...]
Here is a screenshot (imgur) of the terminal as well.

When I try to compile a simple C++ file, it fails to link. You can find the code I am trying to compile here (pastebin).
This is the command I run to compile: [path-to-custom-build-of-clang] -Os -std=c++17 -fno-exceptions -ffunction-sections -fdata-sections -fno-threadsafe-statics -flto --target=avr -mmcu=atmega328p -DF_CPU=16000000L -DARDUINO=10813 -DARDUINO_AVR_UNO -DARDUINO_ARCH_AVR blink.cpp -v
This is the error I get when running the said command:
clang-12: warning: no avr-gcc installation can be found on the system, cannot link standard libraries [-Wavr-rtlib-linking-quirks]
clang-12: warning: standard library not linked and so no interrupt vector table or compiler runtime routines will be linked [-Wavr-rtlib-linking-quirks]
clang version 12.0.0 (https://github.com/llvm/llvm-project.git e1c38dd55d9dab332ccabb7c83a80ca92c373af0)
Target: avr
Thread model: posix
InstalledDir: [path-to-custom-clang-build-directory]
clang-12: error: 'avr': unable to pass LLVM bit-code files to linker
Here is a screenshot (imgur) of the terminal as well.
I have tried specifying the path to the linker with the argument --ld-path but I got the same error.

I have read somewhere that the option -flto was not supported, so I tried the same command minus the -flto option.
Here is the output of compiling without -flto:
clang-12: warning: no avr-gcc installation can be found on the system, cannot link standard libraries [-Wavr-rtlib-linking-quirks]
clang-12: warning: standard library not linked and so no interrupt vector table or compiler runtime routines will be linked [-Wavr-rtlib-linking-quirks]
clang version 12.0.0 (https://github.com/llvm/llvm-project.git e1c38dd55d9dab332ccabb7c83a80ca92c373af0)
Target: avr
Thread model: posix
InstalledDir: [path-to-custom-clang-build-directory]
"[path-to-custom-clang-build-directory]/build/bin/clang-12" -cc1 -triple avr -emit-obj --mrelax-relocations -disable-free -disable-llvm-verifier -discard-value-names -main-file-name blink.cpp -mrelocation-model static -mframe-pointer=all -fmath-errno -fno-rounding-math -mconstructor-aliases -target-cpu atmega328p -fno-split-dwarf-inlining -debugger-tuning=gdb -v -ffunction-sections -fdata-sections -resource-dir [path-to-custom-clang-build-directory]/build/lib/clang/12.0.0 -D F_CPU=16000000L -D ARDUINO=10813 -D ARDUINO_AVR_UNO -D ARDUINO_ARCH_AVR -Os -std=c++17 -fdeprecated-macro -fdebug-compilation-dir [path-to-project]/source -ferror-limit 19 -fgnuc-version=4.2.1 -fno-threadsafe-statics -fcolor-diagnostics -vectorize-loops -vectorize-slp -faddrsig -o /tmp/blink-1f28ec.o -x c++ blink.cpp
clang -cc1 version 12.0.0 based upon LLVM 12.0.0git default target x86_64-unknown-linux-gnu
#include "..." search starts here:
#include <...> search starts here:
 /usr/local/include
 [path-to-custom-clang-build-directory]/build/lib/clang/12.0.0/include
 /usr/include
End of search list.
"/usr/bin/avr-ld" /tmp/blink-1f28ec.o -o a.out --gc-sections
Here is a screenshot (imgur) of the terminal as well.

It does produce a binary file; however, its reported size is zero.
This is the output of llvm-size run on the output binary a.out:
   text    data     bss     dec     hex filename
      0       0       0       0       0 a.out
And sure enough, when I try to upload it with AVRDude, nothing gets written and nothing happens.

I can compile the same program using gcc (built from source), upload it, and run it on an Arduino board.
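Based on the warnings above, the workaround I am planning to try next (assuming the avr-gcc toolchain and avr-libc are actually installed and on PATH) is to let clang only compile, and hand the linking to avr-gcc so it can pull in the interrupt vector table and runtime routines:
[path-to-custom-build-of-clang] -Os -std=c++17 --target=avr -mmcu=atmega328p -c blink.cpp -o blink.o
avr-gcc -mmcu=atmega328p blink.o -o blink.elf
I have not confirmed that this works, though.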
Finally, the question: has anyone run into similar issues when trying to cross-compile to AVR and, more importantly, how did you solve them?
If you are missing pieces of information to understand the situation, I am happy to provide more details.
Cheers.
submitted by onipart22 to cpp_questions [link] [comments]

The motion to delay and revise the re-entry plan is not a binary issue. Don't buy the politics.

TLDR:
People are being goaded into taking sides on whether or not we should open schools. This is not the issue at hand when considering the motions made by Mr. Shurr at the most recent Special Session of the school board. The issue at hand is whether the current plan (Published June 30) is the most inventive solution we can offer that minimizes the risk of lifelong disability and/or death for the students and, more immediately, the many high-risk adults who work in the public schools around the country. If you’ve ever been in an American workplace, you know that leaders (especially exhausted ones) can find running out the clock on a decision period more desirable than engaging in critical discussion. With stakes as high as they are, the motions are meant to ensure this does not happen with our public schools.
Here is a link to the most critical 20 minutes of the Special Session of the school board meeting from Tuesday, July 21st.
The Details:
This is a throwaway account, and an attempt at a complete statement of my opinion. This does not reflect anyone’s opinion but my own based on public information. Feel free to share any and all of this if you’d like. I don’t plan to respond to comments or DMs.
A considerable number of parents, students, and teachers (many of whom are at high risk of contracting COVID-19 or live with an elder who is) feel the traditional school model poses too much risk and that we need to pause and revise the plan. Fairly, many people who need the child care/specialized services provided by the schools have voiced their frustration and unwillingness to support such a measure because they believe this must mean that schools will be closed for an extended period of time. This fabricated binary allows an outdated plan to look preferable to pausing and revising because:
All this said, the parents who need their students in school are justified in their attitudes and arguments.
At one of the large high schools
At the other,
Even if you ignore the certain occurrence of some crossover within these categories, this is still less than half of the total student population. A number of these students may still choose to stay home with the online option. Similarly, there are probably students who do not fall into these categories but still need to come to school sometimes for some reason or another. Either way, this suggests that there is an opportunity to serve a MUCH smaller number of students in the building and reduce risk to everyone involved.
I will admit this would be harder to organize at the elementary level, where districting decisions have left some schools in a more difficult situation than others in terms of student needs because some schools have:
Perhaps the lesser risk in general at the elementary level doesn’t demand an alternative-to-traditional model. Perhaps identifying students who need to be in a learning center and finding a way to get them to a less crowded school should be part of the conversation. Regardless, I imagine that some schools will already be operating at a much lower capacity than normal due to the online option while others will be close to full.
All of this should show that this is:
This is what pause and revise is really about. While I’m sure people are exhausted, I refuse to believe that Bloomington has exhausted its creativity, resources, and inventiveness on the current plan especially in light of the totally changed context. If you agree that we can do better, please reach out to the Board of School Trustees and the Monroe County Health Department in time for the last meeting before the school year. The meeting is on Tuesday, July 28.
I have edited for the sake of clarity and organization. I wish I could edit the title; u/Smease1 made a great point below.
submitted by Dry-Consequence-539 to bloomington [link] [comments]

Unable to run custom scripts via dmenu when it is started with i3's mod+d key

I have encountered strange behaviour regarding dmenu_run and dmenu_recency. When I run dmenu_run or dmenu_recency from a terminal and then execute a simple script like echo "test", the value test is printed in the terminal. However, when I run dmenu_recency or dmenu_run with an i3 keybinding like:
bindsym $mod+d exec --no-startup-id dmenu_recency
and then execute the same simple script, nothing happens. Dmenu launches other installed programs just fine; it just doesn't work for executing my custom scripts.
What am I missing here? I suspect I have to add something else to my scripts, but I don't know what. For now it is just plain this:
echo "test"

EDIT: OK, maybe the script echo "test" is not the best example, since it is true that there is no open terminal to write to.
But the same thing happens if I try to execute a script that looks like this:
code ~/.i3/config
This just opens the i3 config file with Visual Studio Code. Again, this works when I execute it via a dmenu_run called from an existing terminal, but it doesn't work when executed via a dmenu_run called via the i3 keybinding mod+d.
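My current guess at a fix (untested) is to give the script a shebang and an absolute path, since the dmenu spawned by i3 inherits i3's environment rather than the PATH set up in ~/.bashrc (assuming code actually lives in /usr/bin; check with which code):
#!/bin/bash
# absolute path, because i3's PATH may differ from my interactive shell's
/usr/bin/code ~/.i3/config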
EDIT 2:
.i3/config
# i3 config file (v4) # Please see http://i3wm.org/docs/userguide.html for a complete reference! # Set mod key (Mod1=, Mod4=) set $mod Mod4 # My testing shortcuts bindsym $mod+c exec code bindsym $mod+Shift+x exec terminal; exec terminal bindsym $mod+F4 exec /home/erik/Programs/pycharm-community-2020.2.1/bin/pycharm.sh bindsym $mod+Shift+F2 exec /home/erik/CustomScripts/google_calendar # CONFIGURABLE PRINTSCREENS OPTIONS # take a screenshot of a screen region and copy it to a clipboard #bindsym --release Shift+Print exec "ScreenCapture.sh -s /home/erik/Pictures/Screenshots/" # take a screenshot of a whole window and copy it to a clipboard #bindsym --release Print exec "ScreenCapture.sh /home/erik/Pictures/Screenshots/" # set default desktop layout (default is tiling) # workspace_layout tabbed  # Configure border style  default_border pixel 2 default_floating_border normal # Hide borders hide_edge_borders none # change borders bindsym $mod+u border none bindsym $mod+y border pixel 1 bindsym $mod+n border normal # You can also use any non-zero value if you'd like to have a border (this is to prevent issues with gaps) # for_window [class=".*"] border pixel 1 # Font for window titles. Will also be used by the bar unless a different font # is used in the bar {} block below. font xft:URWGothic-Book 11 # Use Mouse+$mod to drag floating windows floating_modifier $mod # start a terminal bindsym $mod+Return exec terminal # kill focused window bindsym $mod+Shift+q kill # start program launcher # bindsym $mod+d exec --no-startup-id dmenu_recency bindsym $mod+d exec --no-startup-id home/erik/CustomScripts/redit_solution dmenu_recency # launch categorized menu bindsym $mod+z exec --no-startup-id morc_menu ################################################################################################ ## sound-section - DO NOT EDIT if you wish to automatically upgrade Alsa -> Pulseaudio later! 
## ################################################################################################ #exec --no-startup-id volumeicon #bindsym $mod+Ctrl+m exec terminal -e 'alsamixer' exec --no-startup-id start-pulseaudio-x11 exec --no-startup-id pa-applet bindsym $mod+Ctrl+m exec pavucontrol ################################################################################################ # Screen brightness controls # bindsym XF86MonBrightnessUp exec "xbacklight -inc 10; notify-send 'brightness up'" # bindsym XF86MonBrightnessDown exec "xbacklight -dec 10; notify-send 'brightness down'" # Start Applications bindsym $mod+Ctrl+b exec terminal -e 'bmenu' bindsym $mod+F2 exec chromium bindsym $mod+F3 exec pcmanfm # bindsym $mod+F3 exec ranger bindsym $mod+Shift+F3 exec pcmanfm_pkexec bindsym $mod+F5 exec terminal -e 'mocp' bindsym $mod+t exec --no-startup-id pkill compton bindsym $mod+Ctrl+t exec --no-startup-id compton -b bindsym $mod+Shift+d --release exec "killall dunst; exec notify-send 'restart dunst'" bindsym Print exec --no-startup-id i3-scrot bindsym $mod+Print --release exec --no-startup-id i3-scrot -w bindsym $mod+Shift+Print --release exec --no-startup-id i3-scrot -s bindsym $mod+Shift+h exec xdg-open /usshare/doc/manjaro/i3_help.pdf bindsym $mod+Ctrl+x --release exec --no-startup-id xkill focus_follows_mouse no # change focus bindsym $mod+j focus left bindsym $mod+k focus down bindsym $mod+l focus up bindsym $mod+semicolon focus right # alternatively, you can use the cursor keys: bindsym $mod+Left focus left bindsym $mod+Down focus down bindsym $mod+Up focus up bindsym $mod+Right focus right # move focused window bindsym $mod+Shift+j move left bindsym $mod+Shift+k move down bindsym $mod+Shift+l move up bindsym $mod+Shift+semicolon move right # alternatively, you can use the cursor keys: bindsym $mod+Shift+Left move left bindsym $mod+Shift+Down move down bindsym $mod+Shift+Up move up bindsym $mod+Shift+Right move right # workspace back and forth (with/without active container) workspace_auto_back_and_forth yes bindsym $mod+b workspace back_and_forth bindsym $mod+Shift+b move container to workspace back_and_forth; workspace back_and_forth # split orientation bindsym $mod+h split h;exec notify-send 'tile horizontally' bindsym $mod+v split v;exec notify-send 'tile vertically' bindsym $mod+q split toggle # toggle fullscreen mode for the focused container bindsym $mod+f fullscreen toggle # change container layout (stacked, tabbed, toggle split) bindsym $mod+s layout stacking bindsym $mod+w layout tabbed bindsym $mod+e layout toggle split # toggle tiling / floating bindsym $mod+Shift+space floating toggle # change focus between tiling / floating windows bindsym $mod+space focus mode_toggle # toggle sticky bindsym $mod+Shift+s sticky toggle # focus the parent container bindsym $mod+a focus parent # move the currently focused window to the scratchpad bindsym $mod+Shift+minus move scratchpad # Show the next scratchpad window or hide the focused scratchpad window. # If there are multiple scratchpad windows, this command cycles through them. 
bindsym $mod+minus scratchpad show #navigate workspaces next / previous bindsym $mod+Ctrl+Right workspace next bindsym $mod+Ctrl+Left workspace prev # Workspace names # to display names or symbols instead of plain workspace numbers you can use # something like: set $ws1 1:mail # set $ws2 2: set $ws1 1 set $ws2 2 set $ws3 3 set $ws4 4 set $ws5 5 set $ws6 6 set $ws7 7 set $ws8 8 # switch to workspace bindsym $mod+1 workspace $ws1 bindsym $mod+2 workspace $ws2 bindsym $mod+3 workspace $ws3 bindsym $mod+4 workspace $ws4 bindsym $mod+5 workspace $ws5 bindsym $mod+6 workspace $ws6 bindsym $mod+7 workspace $ws7 bindsym $mod+8 workspace $ws8 # Move focused container to workspace bindsym $mod+Ctrl+1 move container to workspace $ws1 bindsym $mod+Ctrl+2 move container to workspace $ws2 bindsym $mod+Ctrl+3 move container to workspace $ws3 bindsym $mod+Ctrl+4 move container to workspace $ws4 bindsym $mod+Ctrl+5 move container to workspace $ws5 bindsym $mod+Ctrl+6 move container to workspace $ws6 bindsym $mod+Ctrl+7 move container to workspace $ws7 bindsym $mod+Ctrl+8 move container to workspace $ws8 # Move to workspace with focused container bindsym $mod+Shift+1 move container to workspace $ws1; workspace $ws1 bindsym $mod+Shift+2 move container to workspace $ws2; workspace $ws2 bindsym $mod+Shift+3 move container to workspace $ws3; workspace $ws3 bindsym $mod+Shift+4 move container to workspace $ws4; workspace $ws4 bindsym $mod+Shift+5 move container to workspace $ws5; workspace $ws5 bindsym $mod+Shift+6 move container to workspace $ws6; workspace $ws6 bindsym $mod+Shift+7 move container to workspace $ws7; workspace $ws7 bindsym $mod+Shift+8 move container to workspace $ws8; workspace $ws8 # Open applications on specific workspaces # assign [class="Thunderbird"] $ws1 # assign [class="Pale moon"] $ws2 # assign [class="Pcmanfm"] $ws3 # assign [class="Skype"] $ws5 # Open specific applications in floating mode for_window [title="alsamixer"] floating enable border pixel 1 for_window [class="calamares"] floating enable border normal for_window [class="Clipgrab"] floating enable for_window [title="File Transfer*"] floating enable for_window [class="fpakman"] floating enable for_window [class="Galculator"] floating enable border pixel 1 for_window [class="GParted"] floating enable border normal for_window [title="i3_help"] floating enable sticky enable border normal for_window [class="Lightdm-settings"] floating enable for_window [class="Lxappearance"] floating enable sticky enable border normal for_window [class="Manjaro-hello"] floating enable for_window [class="Manjaro Settings Manager"] floating enable border normal for_window [title="MuseScore: Play Panel"] floating enable for_window [class="Nitrogen"] floating enable sticky enable border normal for_window [class="Oblogout"] fullscreen enable for_window [class="octopi"] floating enable for_window [title="About Pale Moon"] floating enable for_window [class="Pamac-manager"] floating enable for_window [class="Pavucontrol"] floating enable for_window [class="qt5ct"] floating enable sticky enable border normal for_window [class="Qtconfig-qt4"] floating enable sticky enable border normal for_window [class="Simple-scan"] floating enable border normal for_window [class="(?i)System-config-printer.py"] floating enable border normal for_window [class="Skype"] floating enable border normal for_window [class="Timeset-gui"] floating enable border normal for_window [class="(?i)virtualbox"] floating enable border normal for_window [class="Xfburn"] floating enable # 
switch to workspace with urgent window automatically for_window [urgent=latest] focus # reload the configuration file bindsym $mod+Shift+c reload # restart i3 inplace (preserves your layout/session, can be used to upgrade i3) bindsym $mod+Shift+r restart # exit i3 (logs you out of your X session) bindsym $mod+Shift+e exec "i3-nagbar -t warning -m 'You pressed the exit shortcut. Do you really want to exit i3? This will end your X session.' -b 'Yes, exit i3' 'i3-msg exit'" # Set shut down, restart and locking features bindsym $mod+0 mode "$mode_system" set $mode_system (l)ock, (e)xit, switch_(u)ser, (s)uspend, (h)ibernate, (r)eboot, (Shift+s)hutdown mode "$mode_system" { bindsym l exec --no-startup-id i3exit lock, mode "default" bindsym s exec --no-startup-id i3exit suspend, mode "default" bindsym u exec --no-startup-id i3exit switch_user, mode "default" bindsym e exec --no-startup-id i3exit logout, mode "default" bindsym h exec --no-startup-id i3exit hibernate, mode "default" bindsym r exec --no-startup-id i3exit reboot, mode "default" bindsym Shift+s exec --no-startup-id i3exit shutdown, mode "default" # exit system mode: "Enter" or "Escape" bindsym Return mode "default" bindsym Escape mode "default" } # Resize window (you can also use the mouse for that) bindsym $mod+r mode "resize" mode "resize" { # These bindings trigger as soon as you enter the resize mode # Pressing left will shrink the window’s width. # Pressing right will grow the window’s width. # Pressing up will shrink the window’s height. # Pressing down will grow the window’s height. bindsym j resize shrink width 5 px or 5 ppt bindsym k resize grow height 5 px or 5 ppt bindsym l resize shrink height 5 px or 5 ppt bindsym semicolon resize grow width 5 px or 5 ppt # same bindings, but for the arrow keys bindsym Left resize shrink width 5 px or 5 ppt bindsym Down resize grow height 5 px or 5 ppt bindsym Up resize shrink height 5 px or 5 ppt bindsym Right resize grow width 5 px or 5 ppt # exit resize mode: Enter or Escape bindsym Return mode "default" bindsym Escape mode "default" } # Lock screen bindsym $mod+9 exec --no-startup-id blurlock # Autostart applications exec --no-startup-id /uslib/polkit-gnome/polkit-gnome-authentication-agent-1 exec --no-startup-id nitrogen --restore; sleep 1; compton -b # exec --no-startup-id manjaro-hello exec --no-startup-id nm-applet exec --no-startup-id xfce4-power-manager exec --no-startup-id pamac-tray exec --no-startup-id clipit exec --no-startup-id picom # exec --no-startup-id blueman-applet # exec_always --no-startup-id sbxkb exec --no-startup-id start_conky_maia # exec --no-startup-id start_conky_green exec --no-startup-id xautolock -time 10 -locker blurlock exec_always --no-startup-id ff-theme-util exec_always --no-startup-id fix_xcursor # Color palette used for the terminal ( ~/.Xresources file ) # Colors are gathered based on the documentation: # https://i3wm.org/docs/userguide.html#xresources # Change the variable name at the place you want to match the color # of your terminal like this: # [example] # If you want your bar to have the same background color as your # terminal background change the line 362 from: # background #14191D # to: # background $term_background # Same logic applied to everything else. 
set_from_resource $term_background background set_from_resource $term_foreground foreground set_from_resource $term_color0 color0 set_from_resource $term_color1 color1 set_from_resource $term_color2 color2 set_from_resource $term_color3 color3 set_from_resource $term_color4 color4 set_from_resource $term_color5 color5 set_from_resource $term_color6 color6 set_from_resource $term_color7 color7 set_from_resource $term_color8 color8 set_from_resource $term_color9 color9 set_from_resource $term_color10 color10 set_from_resource $term_color11 color11 set_from_resource $term_color12 color12 set_from_resource $term_color13 color13 set_from_resource $term_color14 color14 set_from_resource $term_color15 color15 # Start i3bar to display a workspace bar (plus the system information i3status if available) bar { i3bar_command i3bar status_command i3status position bottom ## please set your primary output first. Example: 'xrandr --output eDP1 --primary' # tray_output primary # tray_output eDP1 bindsym button4 nop bindsym button5 nop # font xft:URWGothic-Book 11 strip_workspace_numbers yes colors { background #222D31 statusline #F9FAF9 separator #ff9a1f # border backgr. text focused_workspace #ff9a1f #ff9a1f #292F34 active_workspace #595B5B #353836 #FDF6E3 inactive_workspace #595B5B #222D31 #EEE8D5 binding_mode #16a085 #2C2C2C #F9FAF9 urgent_workspace #16a085 #FDF6E3 #E5201D } } # hide/unhide i3status bar bindsym $mod+m bar mode toggle # Theme colors # class border backgr. text indic. child_border client.focused #ff9a1f #ff9a1f #000000 #ff9a1f client.focused_inactive #2F3D44 #2F3D44 #1ABC9C #454948 client.unfocused #2F3D44 #2F3D44 #1ABC9C #454948 client.urgent #CB4B16 #FDF6E3 #1ABC9C #268BD2 client.placeholder #000000 #0c0c0c #ffffff #000000 client.background #2B2C2B ############################# ### settings for i3-gaps: ### ############################# # Set inneouter gaps gaps inner 0 gaps outer 0 # Additionally, you can issue commands with the following syntax. This is useful to bind keys to changing the gap size. # gaps inner|outer current|all set|plus|minus  # gaps inner all set 10 # gaps outer all plus 5 # Smart gaps (gaps used if only more than one container on the workspace) smart_gaps on # Smart borders (draw borders around container only if it is not the only container on this workspace) # on|no_gaps (on=always activate and no_gaps=only activate if the gap size to the edge of the screen is 0) smart_borders on # Press $mod+Shift+g to enter the gap mode. Choose o or i for modifying outeinner gaps. Press one of + / - (in-/decrement for current workspace) or 0 (remove gaps for current workspace). If you also press Shift with these keys, the change will be global for all workspaces. 
set $mode_gaps Gaps: (o) outer, (i) inner set $mode_gaps_outer Outer Gaps: +|-|0 (local), Shift + +|-|0 (global) set $mode_gaps_inner Inner Gaps: +|-|0 (local), Shift + +|-|0 (global) bindsym $mod+Shift+g mode "$mode_gaps" mode "$mode_gaps" { bindsym o mode "$mode_gaps_outer" bindsym i mode "$mode_gaps_inner" bindsym Return mode "default" bindsym Escape mode "default" } mode "$mode_gaps_inner" { bindsym plus gaps inner current plus 5 bindsym minus gaps inner current minus 5 bindsym 0 gaps inner current set 0 bindsym Shift+plus gaps inner all plus 5 bindsym Shift+minus gaps inner all minus 5 bindsym Shift+0 gaps inner all set 0 bindsym Return mode "default" bindsym Escape mode "default" } mode "$mode_gaps_outer" { bindsym plus gaps outer current plus 5 bindsym minus gaps outer current minus 5 bindsym 0 gaps outer current set 0 bindsym Shift+plus gaps outer all plus 5 bindsym Shift+minus gaps outer all minus 5 bindsym Shift+0 gaps outer all set 0 bindsym Return mode "default" bindsym Escape mode "default" } 
.bashrc
# # ~/.bashrc # [[ $- != *i* ]] && return colors() { local fgc bgc vals seq0 printf "Color escapes are %s\n" '\e[${value};...;${value}m' printf "Values 30..37 are \e[33mforeground colors\e[m\n" printf "Values 40..47 are \e[43mbackground colors\e[m\n" printf "Value 1 gives a \e[1mbold-faced look\e[m\n\n" # foreground colors for fgc in {30..37}; do # background colors for bgc in {40..47}; do fgc=${fgc#37} # white bgc=${bgc#40} # black vals="${fgc:+$fgc;}${bgc}" vals=${vals%%;} seq0="${vals:+\e[${vals}m}" printf " %-9s" "${seq0:-(default)}" printf " ${seq0}TEXT\e[m" printf " \e[${vals:+${vals+$vals;}}1mBOLD\e[m" done echo; echo done } [ -r /usshare/bash-completion/bash_completion ] && . /usshare/bash-completion/bash_completion # Change the window title of X terminals case ${TERM} in xterm*|rxvt*|Eterm*|aterm|kterm|gnome*|interix|konsole*) PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\007"' ;; screen*) PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\033\\"' ;; esac use_color=true # Set colorful PS1 only on colorful terminals. # dircolors --print-database uses its own built-in database # instead of using /etc/DIR_COLORS. Try to use the external file # first to take advantage of user additions. Use internal bash # globbing instead of external grep binary. safe_term=${TERM//[^[:alnum:]]/?} # sanitize TERM match_lhs="" [[ -f ~/.dir_colors ]] && match_lhs="${match_lhs}$(<~/.dir_colors)" [[ -f /etc/DIR_COLORS ]] && match_lhs="${match_lhs}$(/dev/null \ && match_lhs=$(dircolors --print-database) [[ $'\n'${match_lhs} == *$'\n'"TERM "${safe_term}* ]] && use_color=true if ${use_color} ; then # Enable colors for ls, etc. Prefer ~/.dir_colors #64489 if type -P dircolors >/dev/null ; then if [[ -f ~/.dir_colors ]] ; then eval $(dircolors -b ~/.dir_colors) elif [[ -f /etc/DIR_COLORS ]] ; then eval $(dircolors -b /etc/DIR_COLORS) fi fi if [[ ${EUID} == 0 ]] ; then PS1='\[\033[01;31m\][\h\[\033[01;36m\] \W\[\033[01;31m\]]\$\[\033[00m\] ' else PS1='\[\033[01;32m\][\[email protected]\h\[\033[01;37m\] \W\[\033[01;32m\]]\$\[\033[00m\] ' fi alias ls='ls --color=auto' alias grep='grep --colour=auto' alias egrep='egrep --colour=auto' alias fgrep='fgrep --colour=auto' else if [[ ${EUID} == 0 ]] ; then # show [email protected] when we don't have colors PS1='\[email protected]\h \W \$ ' else PS1='\[email protected]\h \w \$ ' fi fi unset use_color safe_term match_lhs sh alias cp="cp -i" # confirm before overwriting something alias df='df -h' # human-readable sizes alias free='free -m' # show sizes in MB alias np='nano -w PKGBUILD' alias more=less xhost +local:root > /dev/null 2>&1 complete -cf sudo # Bash won't get SIGWINCH if another process is in the foreground. # Enable checkwinsize so that bash will check the terminal size when # it regains control. #65623 # http://cnswww.cns.cwru.edu/~chet/bash/FAQ (E11) shopt -s checkwinsize shopt -s expand_aliases # export QT_SELECT=4 # Enable history appending instead of overwriting. 
#139609 shopt -s histappend # # # ex - archive extractor # # usage: ex  ex () { if [ -f $1 ] ; then case $1 in *.tar.bz2) tar xjf $1 ;; *.tar.gz) tar xzf $1 ;; *.bz2) bunzip2 $1 ;; *.rar) unrar x $1 ;; *.gz) gunzip $1 ;; *.tar) tar xf $1 ;; *.tbz2) tar xjf $1 ;; *.tgz) tar xzf $1 ;; *.zip) unzip $1 ;; *.Z) uncompress $1;; *.7z) 7z x $1 ;; *) echo "'$1' cannot be extracted via ex()" ;; esac else echo "'$1' is not a valid file" fi } #Custom programs export PATH="/home/uusePrograms/pycharm-community-2020.2.1/bin:$PATH" # Custom scritps export PATH="/home/useCustomScripts:$PATH" 

submitted by Amuoeba8 to i3wm [link] [comments]

10.16 Rank 18 Challenger Mech One Trick Guide

Hey all I posted this on CompetitiveTFT yesterday and someone recommended that I post it here as well so here you go!
My stream is https://www.twitch.tv/banananationss and I’ll be streaming for a bit after posting if you would like to come and ask questions.
Hello, I'm Atornyo and I first hit challenger in NA as a mech one-trick last patch and achieved as high as rank 18 in patch 10.16. I really enjoy mech as I mostly played reroll mech to hit diamond last set and think it is the most interesting composition in the game. I will be referring to mech pilots with a focus on Viktor carry as Viktor Mech.
My lolchess: https://lolchess.gg/profile/na/atornyo
Ideal Viktor Mech Level 8: https://lolchess.gg/builder/set3.5?deck=f6e3df00de7c11ea85825783e5dd3235 (legendaries can replace units with similar traits if you find a 2 star version of them, or if you find a legendary before 2 starring the unit they replace: Lulu > Cass; 1 star GP > Ziggs if you have an extra defensive item to give GP; Ekko > Shaco)
Level 9: https://lolchess.gg/builder/set3.5?deck=1c8af8c0de7d11ea8f93e91782b06499
Items that can be used in Mech Viktor:
For the Mech -
Titan’s Resolve - If your mech has one of Hand of Justice or Guardian Angel (or both) I recommend building this item; without either of those items you won’t see much value from Titan’s Resolve until you have a level 5 or 6 mech, which means 2 star Annie, Rumble and Fizz. This item has the potential to be the single strongest item your mech can use and is worth playing for every game. The downside is that there is zero value in slamming the item early game, as it will never hit 50 stacks until you have a mech online. The only time you might not put this item on your mech is if there are many people contesting (a 4+ mech lobby); the reason is that this item greatly increases in value the higher level your mech is. Once it hits 50 stacks your mech will 1v9, especially when coupled with a Hand of Justice or Guardian Angel.
Hand of Justice - This item is so good it is worth slamming every game, as it works well on early game carries and is really solid on mech.
Guardian Angel - Solid item but ONLY place this on your mech if you are certain there will be a Hand of Justice or a Titan’s Resolve with it. Works well with Titan’s because your mech doesn’t lose its Titan’s stacks after its first death and can slap around the enemy team after reviving. Works well with Hand of Justice as it can heal a significant amount of HP post mortem. This item also works really well with rumble as he will oftentimes cast after coming out of the mech and his spell doesn’t go away while reviving.
Quicksilver - This item is BiS for mech IF you are unable to complete the trifecta mentioned above. In lobbies with many Zephyrs this item can result in insane value; however, with optimal scouting you can sacrifice Ziggs and Cass to the Zephyr gods. The reason I believe this item isn’t as godly as many others make it out to be is that it does absolutely nothing in a number of matchups other than provide 20% dodge. The other problem with this item is that it is NOT slammable until you have a mech online.
Bramble Vest - One of the strongest items to slam early game. If you take an armor off the starting carousel and are blessed enough to find another by 2-1 you are building this item.
TrapClaw - This item is mostly just a 20% dodge stat boost. It isn’t very slammable early; personally, I only build it if I feel I don’t have any other options.
Shroud of Stillness - This item is a 20% dodge stat boost that can turn a fight with optimal positioning. If you build this item you need to scout EVERY round. Relatively slammable early but not on the same tier as bramble.
ZZ’rot - You are building this item because you want to win streak early. A neat thing about this item is that you get two voidlings over the course of the fight.
Warmog’s Armor - Probably the single strongest early game item in the game; give a protector this item and go afk until stage 4.
Ionic Spark - Another very slammable item; if you have a rod and a cloak at any point before Krugs it is worth slamming, as this item will save you infinite HP.
Thief’s Gloves - This item is a bait on mech. In the past I would play Thief’s Gloves mech as a transition unit while I pivoted to a non mech composition. Nowadays I only play mech, so I don’t recommend giving the mech this item. It’s not a bad Shaco item, though, and once you replace Shaco with Ekko he loves it.
Itemizing Viktor - Viktor wants a Morellonomicon in order to nuke the enemy team’s healing potential, along with Blue Buff or Spear of Shojin, as Viktor should be able to kill the backline in 2-3 spells.
If you're considering playing mech here is what you should look to do in each stage:
Stage 1: Look to grab Armor > Tear > Crit Glove on the first carousel; units holding these specific items, such as an armor Malphite/Illaoi or a tear Ziggs, can be free tickets to win streaking early. After the carousel I try to hold brawlers, rebels and infiltrators, as I believe it is the strongest opener for mech; however, if it is clear that a stronger board is available, such as a 2 star Poppy or Jarvan while you only have a 1 star Illaoi/Malphite, it is worth pivoting to that. On the round that Kayn appears (1-4) I will pre-level, which means I buy experience in order to get a level 4 shop on 2-1. This is very important, as a unit like Rumble/Shaco/Neeko with a belt can win streak the entirety of stage 2. I try to hold on to any Annie I find whenever possible, but it is worth selling her in order to pick up any brawler/rebel/infiltrator or to ensure that you can pre-level.
Stage 2: I attempt to win streak through stage 2 every single game. Viktor Mech and Mech infiltrators are not very item dependent, and you can switch between the two depending on what items the game gives you. If you have any of Bramble Vest, Hand of Justice, Guardian Angel, Ionic Spark, Warmog’s, Blue Buff, Morellonomicon, or ZZ'rot Portal it is best to slam the item, as the Mech can hold any of those items other than Blue Buff and Morellonomicon, and those last 2 items are vital for Viktor. Illaoi is a great holder for Mech items, and Ziggs/Ahri are great holders for Viktor items. On 2-1 play whatever your strongest board is, as with any non-hyperroll composition. On 2-3, before the stage 2 carousel, I will pre-level in order to get a level 5 shop on 2-5 post carousel; this is extremely strong for Mech Pilot compositions, as it gives you the opportunity to hit a full Mech in stage 2 or other strong early game units like Rumble, Gnar, Wukong and Fizz. In the case that you are on a 2 or 3 loss streak after the stage 2 carousel, it is best to attempt a full loss streak in order to maximize early gold; this is the ONLY time I would ever consider attempting to lose a round. If you are running infiltrators in your early game composition it is extra important to scout EVERY round, as the difference between an infiltrator hitting a Ziggs or a 2 star frontliner is winning or losing a fight.
Stage 3: This is where a lot of decision making enters the game. If I am win streaking with a streak of 3 or greater and will have more than 10 gold after leveling, I will level on 3-1; otherwise I will level on 3-2. If I have Fizz and Rumble by 3-2 and am level 6, I am willing to roll down to 10 gold in order to hit an Annie. If you roll down this early in the game it is vital that you do not tunnel only on units that go in your final composition; you are not rolling solely to hit a Mech, you are rolling to maintain win streak, which means you will look to complete any pairs or to add unit upgrades to your current board. If you roll down and do not upgrade your board at all you will be in a very bad place, so it is important to keep a very open mind about what can be thrown in to improve your composition. If I don't roll down on 3-2 I usually do not roll at all, unless I am taking a large amount of damage every round, in which case it can be a good idea to level to 7 after the stage 3 carousel (3-5) and roll some gold to stabilize. If you are rolling, it is important not to roll below 10 in stage 3 unless you have a great reason to, such as win streaking and holding 4-6 pairs while knowing there are opponents that can beat you if you don't hit those upgrades.
What to do if you hit early Mech: Mech in stage 3 can be played in many different ways. Most of the time you will sell your frontline and look to play Mech + whatever your strongest backliners are, which are usually the level 2 units you already had. Ideally you want to have a Ziggs and an infiltrator, or be running 4 sorcs + Mech, but it is not vital in stage 3.
Stage 4: This is where the decision between Viktor Mech and Mech infiltrators is made. If you are bleeding out and approaching death (<40 HP) on 4-1, it is worth leveling to 7 and rolling down to stabilize, which means playing the level 8 board minus Ziggs. If by some miracle you hit Aurelion Sol, feel free to play Zed/Ziggs/Asol instead of the mystic units. However, in the majority of games you will level to 8 on 4-3 and roll for your board.
The 4-3 rolldown (Viktor Mech) - While rolling you are looking to hit this board: https://lolchess.gg/builder/set3.5?deck=f6e3df00de7c11ea85825783e5dd3235 (when to replace units with legendaries is discussed earlier). Also, I value Cass and Karma over Soraka, as before the mech dies the other units tend to take very little to zero damage. If you run into a GP Mercenary upgrade in this rolldown, it is only worth purchasing Double Strike, as they are so expensive. You can stop rolling once you hit the units in the composition and have a level 6 mech (2 star Annie, Rumble and Fizz), a 2 star legendary, or a 2 star Viktor. If you hit any of those requirements with more than 20 gold and are somewhat healthy, you can usually go to level 9 later in the game in order to increase your chances at first place. If you hit a 2 star Asol and do not have Blue Buff, Asol can replace Viktor at levels 8 and 9.
If you hit, it is very likely that you will win streak through stage 4 and into stage 5.
Stage 5: If you rolled down at level 7 on 4-1, you are leveling to 8 and rolling on 5-1 in a last ditch effort to survive; this rolldown is the same as the standard 4-3 one. If you were able to stop rolling early and have hoarded a large amount of gold, look to go level 9. Only go level 9 if you have at least 30 gold to roll, or have more than 15 gold and already hold 1 or more legendary pairs. If you are about to die, feel free to roll on 8 in order to complete vital 2 stars, which are any mech pilot unit, Viktor, and Shaco. The win conditions for Mech Viktor are good mech items + a perfect item 2 star Viktor, or level 9 with 2 star legendaries. The optimal level 9 composition looks like this: https://lolchess.gg/builder/set3.5?deck=1c8af8c0de7d11ea8f93e91782b06499 with the option to replace Viktor with Urgot 2, giving the Blue Buff to Urgot and the Morellos to Asol. While it is situational, it is almost always better to run a 2 star unit over a 1 star legendary. In the case that you were fortunate enough to find an infiltrator spatula, play it on either Viktor or Gangplank, and instead of running Asol play 4 infiltrators at level 9: https://lolchess.gg/builder/set3.5?deck=7aa7b960de8511ea9ce08d2f4408daad
If you hit either of these level 9 boards with 2 star units it is a 1st unless an opponent has a 3 star 4 cost unit or out positions you really badly.
General advice when playing Mech Viktor:
Differences between Galaxies
Dwarf Planet - Mech is so busted on this galaxy; I have seen Mech compositions hold hands 1-5 multiple times in challenger elo games. Look for Titan's Resolve, as if it procs your Mech will hit the backline. Infiltrators are weaker on this map, so keep that in mind when building early game boards. Gangplank is also OMEGABUSTED on this galaxy.
Neekoverse - I just wanted to thank riot for removing this Galaxy
Superdense - I tend to run 4 infiltrator instead of ziggs at level 8. Also if winstreaking you might roll more in stage 3 as any round you win it is likely you're doing an extra 2 damage which puts a lot of pressure on a lobby.
Trade Sector - Greatly dislike this galaxy for Mech but never miss the chance to level if you can afford it while winstreaking. Going level 7 right after stage 3 carousel can be the difference between hitting an early legendary or hitting important mech units.
Treasure Trove - Not a great galaxy for Mech, as you have 4 units in your composition that do not benefit greatly from items (the mystic units and Annie/Fizz). Also, Mech doesn't need perfect items to function, so the benefit that other compositions get from this galaxy is much greater.
Galactic Armory - Great for pushing early winstreaks. Always look to slam 2 full items before any pvp rounds even begin.
Binary Star - Look to take a glove or tear on the first carousel. You NEED to win streak, as mech isn't as strong later in the game. It's not as bad for mech as people make it seem, but you usually need 2 dodge items (QSS, HoJ, TrapClaw, Shroud of Stillness) in order to make your mech survive versus the 4 cyber players in the lobby. You also need perfect Viktor items, as another issue mech has in this galaxy is that the mystic units along with the other mech units can't utilize items well.
Plunder Planet - Always push levels and try to bully the other players around. Any time you can prevent another player from killing any of your units you are denying them 2-3 gold, which is huge in the early game. Most of the time you will level to 8 on 4-1 and be 9 in late stage 4 or early stage 5. You can also decide to roll down on 3-5, after the stage 3 carousel, at level 7 in order to get as much gold as possible off the galaxy and prevent other players from killing units. Everyone spikes really hard in stage 4 on this galaxy.
Salvage World - I'm still unsure of this galaxy; I have only played 5 games on it, but in 2 of them I opened with a Red Buff + Luden's Lucian with the Blaster buff and it felt really strong. It is not as important to run an early game composition that can utilize mech items well.
I'm sure I missed some stuff within this guide and will try to answer any questions in the comments over the next few days.
submitted by TtvBananaNationss to TeamfightTactics [link] [comments]

C++ Best Practices For a C Programmer

Hi all,
Long time C programmer here, primarily working in the embedded industry (particularly involving safety-critical code). I've been a lurker on this sub for a while, but I'm hoping to ask some questions regarding best practices. I've been trying to start using C++ in a lot of my work - in particular taking advantage of some of the code reuse and power of C++ (constexpr, some loose template programming, stronger type checking, RAII, etc).
I would consider myself maybe an 8/10 C programmer, but I would conservatively rate myself as 3/10 in C++ (with 1/10 meaning the absolute minimum ability to write, google syntax errata, diagnose, and debug a program). Perhaps I should preface the post by saying that I am more than aware that C is by no means a subset of C++, and there are many language constructs permitted in one that are not in the other.
In any case, I was hoping to get a few answers regarding best practices for C++. Keep in mind that the typical target device I work with does not have a heap of any sort, and so a lot of the features that constitute "modern" C++ (non-initialization use of dynamic memory, STL meta-programming, hash maps, lambdas (as I currently understand them)) are a big no-no in terms of passing safety review.

When do I overload operators inside a class as opposed to outside?


... And what are the arguments for/against each paradigm? See below:
/* Overload example 1 (overloaded inside class) */
class myclass
{
private:
    unsigned int a;
    unsigned int b;
public:
    myclass(void);
    unsigned int get_a(void) const;
    bool operator==(const myclass &rhs);
};

bool myclass::operator==(const myclass &rhs)
{
    if (this == &rhs)
    {
        return true;
    }
    else
    {
        if (this->a == rhs.a && this->b == rhs.b)
        {
            return true;
        }
    }
    return false;
}
As opposed to this:

/* Overload example 2 (overloaded outside of class) */
class CD
{
private:
    unsigned int c;
    unsigned int d;
public:
    CD(unsigned int _c, unsigned int _d) : d(_d), c(_c) {}; /* CTOR */
    unsigned int get_c(void) const; /* trivial getters */
    unsigned int get_d(void) const; /* trivial getters */
};

/* In this implementation, if I don't make the getters (get_c, get_d) const,
 * it won't compile despite their access specifiers being public.
 *
 * It seems like the const keyword in C++ really should be interpreted as
 * "read-only AND no side effects" rather than just read-only as in C.
 * But my current understanding may just be flawed...
 *
 * My confusion is as follows: the function args are constant references,
 * so why do I have to promise that the function methods have no side effects on
 * the private object members? Is this something specific to the == operator?
 */
bool operator==(const CD &lhs, const CD &rhs)
{
    if (&lhs == &rhs)
        return true;
    else if ((lhs.get_c() == rhs.get_c()) && (lhs.get_d() == rhs.get_d()))
        return true;
    return false;
}
When should I use the example 1 style over the example 2 style? What are the pros and cons of 1 vs 2?

What's the deal with const member functions?

This is more of a subtle confusion, but it seems like in C++ the const keyword means different things based on the context in which it is used. I'm trying to develop a relatively nuanced understanding of what's happening under the hood, and I have most certainly misunderstood many language features, especially because C++ has likely changed greatly in the last ~6-8 years.
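To make my confusion concrete, here is a minimal sketch of my current mental model (a made-up class, so possibly wrong):
class sensor
{
private:
    unsigned int raw;
public:
    explicit sensor(unsigned int r) : raw(r) {}
    unsigned int value(void) const { return raw; } /* callable on a const sensor */
    void update(unsigned int r) { raw = r; }       /* NOT callable on a const sensor */
};
Is "const after the parameter list" really just a promise that the function won't modify the object's members (and so may be called through a const reference)?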

When should I use enum classes versus plain old enum?


To be honest I'm not entirely certain I fully understand the implications of using enum versus enum class in C++.
This is made more confusing by the fact that there are subtle differences between the way C and C++ treat or permit various language constructs (const, enum, typedef, struct, void*, pointer aliasing, type punning, tentative declarations).
In C, enums decay to integer values at compile time. But in C++, the way I currently understand it, enums are their own type. Thus, in C, the following code would be valid, but a C++ compiler would generate a warning (or an error; I haven't actually tested it):
/* Example 3: (enums: valid in C, invalid in C++) */
enum COLOR
{
    RED,
    BLUE,
    GREY
};

enum PET
{
    CAT,
    DOG,
    FROG
};

/* This is compatible with a C-style enum conception but not C++ */
enum SHAPE
{
    BALL = RED, /* In C, these work because int = int is valid */
    CUBE = DOG,
};
If my understanding is indeed the case, do enums have an implicit namespace (the language construct, not the C++ keyword) as in C? As an add-on to that, in C++ you can also declare enums with a sort of inherited type (below). What am I supposed to make of this? Should I just be using it to reduce code size when possible (similar to the gcc option -fshort-enums)? Since most processors are word based, would it be more performant to use the processor's word type than the syntax specified above?
/* Example 4: (Purely C++ style enums, use of enum class / enum struct) */

/* C++ permits forward enum declaration with the type specified */
enum FRUIT : int;
enum VEGGIE : short;

enum FRUIT /* As I understand it, these are ints */
{
    APPLE,
    ORANGE,
};

enum VEGGIE /* As I understand it, these are shorts */
{
    CARROT,
    TURNIP,
};
Complicating things even further, I've also seen the following syntax:
/* What the heck is an enum class anyway? When should I use them? */
enum class THING
{
    THING1,
    THING2,
    THING3
};

/* And if classes and structs are interchangeable (minus assumptions
 * about default access specifiers), what does that mean for
 * the following definition? */
enum struct FOO /* Is this even valid syntax? */
{
    FOO1,
    FOO2,
    FOO3
};
Given that enumerated types greatly improve code readability, I've been trying to wrap my head around all this. When should I be using the various language constructs? Are there any pitfalls in a given method?
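For concreteness, this is the scoped-versus-unscoped behavior I believe I have observed (toy names, so treat it as a sketch rather than gospel):
enum COLOR2 { RED2, BLUE2 };  /* unscoped: enumerators leak into the enclosing scope */
enum class THING2 { T1 };     /* scoped: must be qualified as THING2::T1 */

int a = RED2;                          /* OK: unscoped enums convert to int */
/* int b = THING2::T1; */              /* error: no implicit conversion to int */
int c = static_cast<int>(THING2::T1);  /* an explicit cast is required */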

When to use POD structs (a-la C style) versus a class implementation?


If I had to take a stab at answering this question, my intuition would be to use POD structs for passing aggregate types (as in function arguments) and to use classes for interface abstractions / object abstractions, as in the example below:
struct aggregate
{
    unsigned int related_stuff1;
    unsigned int related_stuff2;
    char name_of_the_related_stuff[20];
};

class abstraction
{
private:
    unsigned int private_member1;
    unsigned int private_member2;
protected:
    unsigned int stuff_for_child_classes;
public:
    /* big 3 */
    abstraction(void);
    abstraction(const abstraction &other);
    ~abstraction(void);

    /* COPY semantic (I have a better grasp on this abstraction than MOVE) */
    abstraction &operator=(const abstraction &rhs);

    /* MOVE semantic (subtle semantics of which I don't fully grasp yet) */
    abstraction &operator=(abstraction &&rhs);

    /*
     * I've seen implementations of this that use a copy + swap design pattern
     * but that relies on std::move and I realllllly don't get what is
     * happening under the hood in std::move
     */
    abstraction &operator=(abstraction rhs);

    void do_some_stuff(void); /* member function */
};
Is there an accepted best practice for this, or is it entirely preference? Are there arguments for only using classes? What about vtables (e.g. where I need byte-wise alignment, such as device register overlays, and have to guarantee precise placement of particular members)?

Is there a best practice for integrating C code?


Typically (and up to this point), I've just done the following:
/* Example 5: Linking a C library */
/* Disable name-mangling, and then give the C++ linker /
 * toolchain the compiled binaries */
#ifdef __cplusplus
extern "C" {
#endif /* C linkage */

#include "device_driver_header_or_a_c_library.h"

#ifdef __cplusplus
}
#endif /* C linkage */

/* C++ code goes here */
As far as I know, this is the only way to prevent the C++ compiler from generating different object symbols than those in the C header file. Again, this may just be ignorance of C++ standards on my part.

What is the proper way to selectively incorporate RTTI without code size bloat?

Is there even a way? I'm relatively fluent in CMake, but I guess the underlying question is whether binaries that incorporate RTTI are compatible with those that don't (and the pitfalls that may ensue when mixing the two).
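To illustrate what I mean by "selectively", the setup I have in mind is something like the following (hypothetical file, compiled with -fno-rtti while the rest of the project keeps RTTI):
/* tiny_driver.cpp - built with -fno-rtti to save space */
struct base { virtual ~base() {} };
struct derived : base {};

bool is_derived(base &b)
{
    /* return dynamic_cast<derived *>(&b) != nullptr; */ /* won't compile under -fno-rtti */
    (void)b;
    return false;
}
Can the RTTI-enabled parts of the program safely link against and use such a translation unit?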

What about compile time string formatting?


One of my biggest gripes about C (particularly regarding string manipulation) is that variadic arguments are frequently (especially on embedded targets) handled at runtime. This makes string manipulation via the C standard library (printf-style format strings) uncomputable at compile time in C.
This is sadly the case even when the ranges and values of parameters and formatting outputs are entirely known beforehand. C++ template programming seems to be a big thing in "modern" C++, and I've seen a few projects on this sub that use the turing-completeness of the template system to do some crazy things at compile time. Is there a way to bypass this ABI limitation using C++ features like constexpr, templates, and lambdas? My (somewhat pessimistic) suspicion is that since the generated assembly must be ABI-compliant this isn't possible. Is there a way around this? What about the std::format stuff I've been seeing on this sub periodically?
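As an example of the kind of thing I'm hoping is possible, here is a minimal C++17 sketch (made-up names, not any standard API) that renders an integer to characters entirely at compile time:
#include <array>

struct ct_string
{
    std::array<char, 12> data{};
    unsigned int len = 0;
};

constexpr ct_string to_chars_ct(unsigned int v)
{
    ct_string s{};
    char tmp[12] = {};
    unsigned int n = 0;
    do { tmp[n++] = static_cast<char>('0' + v % 10); v /= 10; } while (v > 0);
    for (unsigned int i = 0; i < n; ++i) /* reverse the digits into place */
    {
        s.data[i] = tmp[n - 1 - i];
    }
    s.len = n;
    return s;
}

/* evaluated entirely by the compiler - no runtime printf machinery */
static_assert(to_chars_ct(42).len == 2 && to_chars_ct(42).data[0] == '4', "test");
If this kind of thing scales up to real format strings, that would answer my question.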

Is there a standard practice for namespaces and when to start incorporating them?

Is it from the start? Is it when the boundaries of a module become clearly defined? Or is it just personal preference / based on project scale and modularity?
If I had to make a guess, it would be at the point that you get a "build group" for a project (a group of source files that should be compiled together), as that would loosely define the boundaries of a series of abstractions/APIs you may provide to other parts of a project.
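To illustrate that guess with a hypothetical UART build group:
namespace uart
{
    void init(unsigned int baud);
    void write(const char *msg);
}
/* callers elsewhere in the project: uart::init(115200); */
i.e. one namespace per group of source files that forms an API boundary, rather than one per file.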
--EDIT-- markdown formatting
submitted by aWildElectron to cpp_questions [link] [comments]

[Eustacchio Raulli] Simmons is the main reason Philly has a real shot this year

Very long thread from an NBA scout discussing Simmons' value and the way defense in general is played in high leverage games, worth the read imo.
Simmons is the X-factor that could put PHI over the top this year, and it has nothing to do with whether he starts taking 3s.
Let's start by discussing rim protection, which typically is the primary battleground between offense and defense.
Most of the time when discussing rim protection we talk about degree of impact at the rim (lower opp FG% in the paint) or degree of deterrence (lower frequency of FGA in the paint). Behemoths like Gobert and Embiid shine in these areas. 6 of the Top 7 in 3 yr RA-DeFG% are bigs.
From a macro perspective, these are the factors that matters most. Whatever plus-minus variant you prefer, or whichever angle you tend to watch film from, these are the players that will consistently make the highest impact defensive plays (outside the occasional pick-6).
There is a 3rd factor, however, that often goes overlooked. Moreover, it has much greater relative importance in the playoffs than the regular season -- under what circumstances can a defense maintain a measure of rim protection? This is at the core of why versatility matters.
Rudy Gobert makes the greatest degree of impact when protecting the rim of any NBA player. He also provides no rim protection when forced to defend 26 feet from the basket. The goal of the offense, then, is to create situations where he cannot protect the rim.
This isn't easy, and most offenses can't do so consistently within a 24 second shot clock. However, if you remove the subset of bad teams things change. Only the 8 best teams remain in R2 of the playoffs. Within this context sustainability of rim protection grows more important.
As far as I can tell, two key factors influence sustainability of rim protection for a defensive unit:
1) Point of attack defense
2) Rim protection redundancies
Let's discuss each in more detail.
1) Point of attack defense
Questions that are tested for each defensive unit in each matchup:
  • How frequently will the on-ball defender require help?
  • What degree of help is needed?
  • How predictable is the ensuing defensive rotation?
There are many layers to this subject.
First, how many worthwhile angles of attack does the offense have at their disposal? It's not always possible to match up the best POA defender with the ball-handler, so redundancies are needed in this area as well.
Moreover, much of the offensive strategy for each possession involves manipulating the point of attack. Pick-and-rolls, DHOs, etc are all methods of creating an advantage at the point of attack, with the value measured in X time spent to create advantage Y.
The reason teams spend so much time manipulating the point of attack is that most high value shot attempts stem from winning that battle and driving into the paint: driving layups, dump offs to bigs, kick outs to spot-up shooters.
A brief aside about how we think about what constitutes a good shot:
We need to think less about individual shots, and more about the network of shots produced by an action. A pull-up jumper may not be ideal, but if each one opens up two drives it's a good network of shots. So, the battle at the point of attack is very important to the eventual outcome of the possession. The difficulty is in determining how much value to ascribe to individual POA defenders in this regard.
One point that needs to be made:
Unless there is a significant talent gap, most POA defenders will 'lose' on most possessions. What we're really looking for is whether they lose slowly enough for the help rotation to arrive, or if they get burned and give up an easy shot.
Moreover, the results in any individual matchup will be... not quite binary, but certainly polarized. A player either holds up against his assignment, or he doesn't. I'm not certain of this, but my inclination is that it's less of a spectrum than many other facets of basketball.
A good example of this is the 2015 NBA Finals. Despite the injuries to 2/3 of the CLE Big Three, GSW had major problems at the POA early in the series. Barnes, Klay, Liv just weren't strong enough to check LBJ. Dray wasn't fast enough. CLE managed to go up 2-1. What changed?
In short, Andre Iguodala happened. He certainly didn't 'win' at the point of attack. But he did consistently lose more slowly than his teammates. This allowed Dray and Bogut to time their help defense more effectively. This illustrates why POA defense can vary greatly in value. The right defender for LBJ or KD is unlikely to be the right defender for Dame or Kyrie. This can dilute its value over the course of 82 games in +/- metrics. But in a playoff series, having the right guy matters a lot.
This is why versatility is a key characteristic for good POA defenders. Avery Bradley can defend the POA... if that POA is under 6-4, and not too strong (AKA not a playoff initiator). Teams can't rely on narrow players like that; they need guys that can match up with various sizes & speeds.
Bringing this back around to the original subject: Ben Simmons, the single most versatile defender in the league. He can guard almost anyone, which makes it very difficult (or simply sub-optimal) for the offense to shift the POA away from him.
This allows Philly to dictate the terms of the engagement far more than most defenses when they choose to do so. They have the personnel to pit their No. 1 POA defender against the opp No. 1 option, No. 2 vs No. 2, etc. That's rare, valuable, and could swing a postseason matchup.
The specific type(s) of POA defenders that carry the most value in a given year are dictated by the most dangerous offensive threats on contenders that season. In 2020, that's Giannis, LeBron, and Kawhi primarily, then to a lesser extent Luka, Harden, Kemba, Jimmy, Siakam, and... whomever Philly decides to run their offense through when the playoffs start.
The supporting casts matter here, too, of course. But in general how your defense matches up with MIL, LAL, and LAC is what matters most this year. Any other team will have to go through at least two of them to win. For 4 years, this was largely about Curry and LeBron. And for 4 years, there was never a defense that was equipped to handle both Curry and LeBron.
POA defense as a unit has significant value. Typically, that value is divided among many players due to varied angles of attack, skewing toward guard-sized players. Versatility can concentrate that value somewhat. In the playoffs, wing & forward POA defense matters most.
Re: versatility, the key trait is strength for smaller players (e.g. Marcus Smart, Kyle Lowry), and lateral agility for larger players (e.g. Ben Simmons, Paul George).
Ultimately, what matters most in a team vs team matchup is how quickly the offense forces a help rotation. Also, 'losing slowly' at the POA produces little value without good help defense around the POA defender. This makes POA defense a secondary trait for good team defense, but one that has magnified importance when only good defenses are left.
The 2nd key for sustainable rim protection is having rim protection redundancies as a team.
How much of a gap is there between the primary (5) and secondary (4) rim protector in a lineup? Is there any tertiary rim protection provided by 1-3?
The answers to these questions impact how appealing it is for an opposing offense to try to draw the 5 out to the perimeter to defend primary actions. How much value is there in drawing Dwight Howard out to the 3 point line knowing that AD will still be lurking in help defense?
On one hand, if you can force the switch, a pull-up jumper against a big does raise the baseline value of a half-court possession. But that's the catch-22: pull-ups are typically a baseline rather than a desired endpoint, and if they're the entirety of the plan, they rule out higher EV looks. The higher baseline is nice, but it's most valuable as a tool to create those higher EV looks: if the big is afraid of the pull-up, it will open driving lanes.
However, with redundant rim protection, the big can 'sit' on the jumper without worrying much about getting blown by. Think Kevin Love defending Steph Curry in the closing minutes of G7. He never holds up in that situation without knowing that Curry is chasing a 3PA.
That's an extreme example, but it illustrates how redundant rim protection can change the dynamic of that situation and alter the network of shots it can produce. In turn, that alters the amount of effort the offense will put into creating that situation in the first place.
So, then, what types of players create the most value in this regard? Players that provide a measure of rim protection while also being capable of holding up in perimeter defense. Draymond is the ultimate example, but also Giannis, AD, Siakam, Tucker, Isaac, Millsap, etc.
Notice a theme here? Pretty much every elite defense has one of these connecting pieces, a player that overlaps between rim protection and perimeter defense.
Moreover, these players are at the root of every successful form of small-ball. The key isn't going smaller just for the sake of more speed & skill. It's adding that w/o sacrificing rim protection. GSW was so successful because they had Dray, KD, Iggy, and Klay to defend the rim.
Bringing this back around to Ben Simmons & Philly, in addition to being the most versatile POA defender in the league he also provides a (small) measure of secondary rim protection when away from the POA. So do Horford, Tobi, and J-Rich. Also Matisse, if he gets any PS burn.
From a tactical perspective, what this means is that Philly has rim protection that is impactful, deterring, and sustainable. This will make them a tough out for any postseason opponent, regardless of their RS struggles. Joel Embiid will likely make the highest impact defensive plays for Philly in the postseason. Just realize that the multi-faceted skill set of Ben Simmons (and the rest of the supporting cast) plays a key role in keeping him in a position to make those plays.
Also, generally speaking, this is part of why I value versatile POA defenders like PG or Klay and connecting pieces like Siakam and Giannis more highly than +/- metrics. They help their defenses run at peak efficiency in varied circumstances.
I don't care how much you shut down bad teams in the RS. I care if you can hold up against good teams in the PS. For example, I thought Paul George deserved DPOY last year, with Giannis 2nd, and Gobert 3rd. Maybe this POV is too slanted toward versatility, but it is what it is.
Legend:
  • RS = Regular season
  • PS = Post season
  • POA = Point of Attack
  • EV = Expected Value
  • R2 = Round 2
  • DeFG% = Defensive FG%
  • DHO = Dribble Hand Off
Tweet thread
submitted by kobmug_v2 to nba [link] [comments]

Differences between LISP 1.5 and Common Lisp, Part 1:

[Edit: I didn't mean to put a colon in the title.]
In this post we'll be looking at some of the things that make LISP 1.5 and Common Lisp different. There isn't too much surviving LISP 1.5 code, but some of the code that is still around is interesting and worthy of study.
Here are some conventions used in this post of which you might take notice:
Sources are linked sometimes below, but here is a list of links that were helpful while writing this:
The differences between LISP 1.5 and Common Lisp can be classified into the following groups:
  1. Superficial differences—matters of syntax
  2. Conventional differences—matters of code style and form
  3. Fundamental differences—matters of semantics
  4. Library differences—matters of available functions
This post will go through the first three of these groups in that order. A future post will discuss library differences, except for some functions dealing with character-based input and output, since they are a little world unto their own.
[Originally the library differences were part of this post, but it exceeded the length limit on posts (40000 characters)].

Superficial differences.

LISP 1.5 was used initially on computers that had very limited character sets. The machine on which it ran at MIT, the IBM 7090, used a six-bit, binary-coded decimal encoding for characters, which could theoretically represent up to sixty-four characters. In practice, only forty-six were widely used. The repertoire of this character set consisted of the twenty-six uppercase letters, the nine digits, the blank character ' ', and the ten special characters '-', '/', '=', '.', '$', ',', '(', ')', '*', and '+'. You might note the absence of the apostrophe/single quote—there was no shorthand for the quote operator in LISP 1.5 because no suitable character was available.
When the LISP 1.5 system read input from cards, it treated the end of a card not like a blank character (as is done in C, TeX, etc.), but as nothing. Therefore the first character of a symbol's name could be the last character of a card, the remaining characters appearing at the beginning of the next card. Lisp's syntax allowed for the omission of almost all whitespace besides that which was used as delimiters to separate tokens.
List syntax. Lists were contained within parentheses, as is the case in Common Lisp. From the beginning Lisp had the consing dot, which was written as a period in LISP 1.5; the interaction between the period when used as the consing dot and the period when used as the decimal point will be described shortly.
In LISP 1.5, the comma was equivalent to a blank character; both could be used to delimit items within a list. The LISP I Programmer's Manual, p. 24, tells us that
The commas in writing S-expressions may be omitted. This is an accident.
Number syntax. Numbers took one of three forms: fixed-point integers, floating-point numbers, and octal numbers. (Of course octal numbers were just an alternative notation for the fixed-point integers.)
Fixed-point integers were written simply as the decimal representation of the integers, with an optional sign. It isn't explicitly mentioned whether a plus sign is allowed in this case or if only a minus sign is, but floating-point syntax does allow an initial plus sign, so it makes sense that the fixed-point number syntax would as well.
Floating-point numbers had the syntax described by the following context-free grammar, where a term in square brackets indicates that the term is optional:
float:    [sign] integer '.' [integer] exponent
          [sign] integer '.' integer [exponent]
exponent: 'E' [sign] digit [digit]
integer:  digit
          integer digit
digit:    one of '0' '1' '2' '3' '4' '5' '6' '7' '8' '9'
sign:     one of '+' '-'
This grammar generates things like '100.3' and '1.E5' but not things like '.01' or '14E2' or '100.'. The manual seems to imply that if you wrote, say, (100. 200), the period would be treated as a consing dot [the result being (cons 100 200)].
Floating-point numbers are limited in absolute value to the interval (2^-128, 2^128), and eight digits are significant.
Octal numbers are defined by the following grammar:
octal:        [sign] octal-digits 'Q' [integer]
octal-digits: octal-digit [octal-digit] [octal-digit] [octal-digit]
              [octal-digit] [octal-digit] [octal-digit] [octal-digit]
              [octal-digit] [octal-digit] [octal-digit] [octal-digit]
octal-digit:  one of '0' '1' '2' '3' '4' '5' '6' '7'
The optional integer following 'Q' is a scale factor: a decimal integer representing an exponent with a base of 8. Positive octal numbers behave as one would expect: The value is shifted to the left 3×s bits, where s is the scale factor. Octal was useful on the IBM 7090, since it used thirty-six-bit words; twelve octal digits (the maximum allowed in an octal number in LISP 1.5) thus represent a single word in a convenient way that is more compact than binary (but still easily convertible to and from binary). If the number has a negative sign, then the thirty-sixth bit is logically ORed with 1.
The syntax of Common Lisp's numbers is a superset of that of LISP 1.5. The only major difference is in the notation of octal numbers; Common Lisp uses the sharpsign reader macro for that purpose. Because of the somewhat odd semantics of the minus sign in octal numbers in LISP 1.5, it is not necessarily trivial to convert a LISP 1.5 octal number into a Common Lisp expression resulting in the same value.
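To make those semantics concrete, here is a small sketch of how one might compute the value denoted by a LISP 1.5 octal token in Common Lisp. The function name and the decomposition into digits, scale, and sign are mine; the 36-bit word size is the 7090's.

(defun lisp15-octal-value (digits scale &optional negativep)
  ;; DIGITS is the string of octal digits, SCALE the decimal scale
  ;; factor written after the Q. The magnitude is shifted left by
  ;; 3 * SCALE bits; a minus sign ORs bit 35 (the 7090 sign bit)
  ;; into the word rather than negating the value.
  (let ((magnitude (ash (parse-integer digits :radix 8) (* 3 scale))))
    (if negativep
        (logior magnitude (ash 1 35))
        magnitude)))

;; (lisp15-octal-value "7" 1)    => 56, i.e. 7Q1 = 7 * 8^1
;; (lisp15-octal-value "1" 0 t)  => 34359738369, i.e. -1Q with bit 35 set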
Symbol syntax. Symbol names can be up to thirty characters in length. While the actual name of a symbol was kept on its property list under the pname indicator and could be any sequence of thirty characters, the syntax accepted by the read program for symbols was limited in a few ways. First, a name must not begin with a digit or with either of the characters '+' or '-', and its first two characters cannot both be '$'. Otherwise, it may contain any of the alphanumeric characters, along with the special characters '+', '-', '=', '*', '/', and '$'. The fact that a symbol can't begin with a sign character or a digit has to do with the number syntax; the fact that a symbol can't begin with '$$' has to do with the mechanism by which the LISP 1.5 reader allowed you to write characters that are usually not allowed in symbols, which is described next.
Two dollar signs initiated the reading of what we today might call an "escape sequence". An escape sequence had the form "$$xSx", where x was any character and S was a sequence of up to thirty characters not including x. For example, $$x()x would get the symbol whose name is '()' and would print as '()'. Thus it is similar in purpose to Common Lisp's | syntax. There is a significant difference: It could not be embedded within a symbol, unlike Common Lisp's |. In this respect it is closer to Maclisp's | reader macro (which created a single token) than it is to Common Lisp's multiple escape character. In LISP 1.5, "A$$X()X$" would be read as (1) the symbol A$$X, (2) the empty list, (3) the symbol X.
The following code sets up a $ reader macro so that symbols using the $$ notation will be read in properly, while leaving things like $eof$ alone.
(defun dollar-sign-reader (stream character)
  (declare (ignore character))
  (let ((next (read-char stream t nil t)))
    (cond ((char= next #\$)
           ;; "$$xSx": read the terminator character, then collect the
           ;; name until that terminator appears again.
           (let ((terminator (read-char stream t nil t)))
             (values (intern (with-output-to-string (name)
                               (loop for c := (read-char stream t nil t)
                                     until (char= c terminator)
                                     do (write-char c name)))))))
          (t
           ;; Not "$$": put the character back and read an ordinary
           ;; token that happens to start with a single dollar sign.
           (unread-char next stream)
           (with-standard-io-syntax
             (read (make-concatenated-stream
                    (make-string-input-stream "$")
                    stream)
                   t nil t))))))

(set-macro-character #\$ #'dollar-sign-reader t)
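Assuming I've read the manual correctly, with that macro installed in the current readtable you should see something like:

(read-from-string "$$x()x") ; => |()| (a symbol whose name is "()")
(read-from-string "$eof$")  ; => $EOF$ (left alone, as promised)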

Conventional differences.

LISP 1.5 is an old programming language. Generally, compared to its contemporaries (such as FORTRANs I–IV), it holds up well to modern standards, but sometimes its age does show. And there were some aspects of LISP 1.5 that might be surprising to programmers familiar only with Common Lisp or a Scheme.
M-expressions. John McCarthy's original concept of Lisp was a language with a syntax like this (from the LISP 1.5 Programmer's Manual, p. 11):
equal[x;y]=[atom[x]→[atom[y]→eq[x;y]; T→F];
            equal[car[x];car[y]]→equal[cdr[x];cdr[y]];
            T→F]
There are several things to note. First is the entirely different phrase structure. It is an infix language, looking much closer to mathematics than the Lisp we know and love. Square brackets are used instead of parentheses, and semicolons are used instead of commas (or blanks). When square brackets do not enclose function arguments (or parameters when to the left of the equals sign), they set up a conditional expression; the arrows separate predicate expressions and consequent expressions.
If that was Lisp, then where do s-expressions come in? Answer: quoting. In the m-expression notation, uppercase strings of characters represent quoted symbols, and parenthesized lists represent quoted lists. Here is an example from page 13 of the manual:
λ[[x;y];cons[car[x];y]][(A B);(C D)] 
As an s-expression, this would be
((lambda (x y) (cons (car x) y)) '(A B) '(C D)) 
The majority of the code in the manual is presented in m-expression form.
So why did s-expressions stick? There are a number of reasons. The earliest Lisp interpreter was a translation of the program for eval in McCarthy's paper introducing Lisp, which interpreted quoted data; therefore it read code in the form of s-expressions. S-expressions are much easier for a computer to parse than m-expressions, and also more consistent. (Also, the character set mentioned above includes neither square brackets nor a semicolon, let alone a lambda character.) But in publications m-expressions were seen frequently; perhaps the syntax was seen as a kind of "Lisp pseudocode".
Comments. LISP 1.5 had no built-in commenting mechanism. It's easy enough to define a comment operator in the language, but it seems nobody felt a need for one.
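For what it's worth, here is one way such an operator could look in Common Lisp (my illustration, not anything from LISP 1.5):

(defmacro comment (&rest forms)
  "Ignore FORMS entirely; expand to NIL."
  (declare (ignore forms))
  nil)

(comment this text is never evaluated) ; => NIL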
Interestingly, FORTRAN I had comments. Assembly languages of the time sort of had comments, in that they had a portion of each line/card that was ignored in which you could put any text. FORTRAN was ahead of its time.
(Historical note: The semicolon comment used in Common Lisp comes from Maclisp. Maclisp likely got it from PDP-10 assembly language, which let a semicolon and/or a line break terminate a statement; thus anything following a semicolon is ignored. The convention of octal numbers by default, decimal numbers being indicated by a trailing decimal point, of Maclisp too comes from the assembly language.)
Code formatting. The code in the manual that isn't written using m-expression syntax is generally lacking in meaningful indentation and spacing. Here is an example (p. 49):
(TH1 (LAMBDA (A1 A2 A C) (COND ((NULL A) (TH2 A1 A2 NIL NIL C)) (T (OR (MEMBER (CAR A) C) (COND ((ATOM (CAR A)) (TH1 (COND ((MEMBER (CAR A) A1) A1) (T (CONS (CAR A) A1))) A2 (CDR A) C)) (T (TH1 A1 (COND ((MEMBER (CAR A) A2) A2) (T (CONS (CAR A) A2))) (CDR A) C)))))))) 
Nowadays we might indent it like so:
(TH1 (LAMBDA (A1 A2 A C)
       (COND ((NULL A) (TH2 A1 A2 NIL NIL C))
             (T (OR (MEMBER (CAR A) C)
                    (COND ((ATOM (CAR A))
                           (TH1 (COND ((MEMBER (CAR A) A1) A1)
                                      (T (CONS (CAR A) A1)))
                                A2
                                (CDR A)
                                C))
                          (T (TH1 A1
                                  (COND ((MEMBER (CAR A) A2) A2)
                                        (T (CONS (CAR A) A2)))
                                  (CDR A)
                                  C))))))))
Part of the lack of formatting stems probably from the primarily punched-card-based programming world of the time; you would see the indented structure only by printing a listing of your code, so there is no need to format the punched cards carefully. LISP 1.5 allowed a very free format, especially when compared to FORTRAN; the consequence is that early LISP 1.5 programs are very difficult to read because of the lack of spacing, while old FORTRAN programs are limited at least to one statement per line.
The close relationship of Lisp and pretty-printing originates in programs developed to produce nicely formatted listings of Lisp code.
Lisp code from the mid-sixties used some peculiar formatting conventions that seem odd today. Here is a quote from Steele and Gabriel's Evolution of Lisp:
This intermediate example is derived from a 1966 coding style:
DEFINE((
(MEMBER (LAMBDA (A X) (COND ((NULL X) F)
                            ((EQ A (CAR X) ) T)
                            (T (MEMBER A (CDR X))) )))
))
The design of this style appears to take the name of the function, the arguments, and the very beginning of the COND as an idiom, and hence they are on the same line together. The branches of the COND clause line up, which shows the structure of the cases considered.
This kind of indentation is somewhat reminiscent of the formatting of Algol programs in publications.
Programming style. Old LISP 1.5 programs can seem somewhat primitive. There is heavy use of the prog feature, which is related partially to the programming style that was common at the time and partially to the lack of control structures in LISP 1.5. You could express iteration only by using recursion or by using prog+go; there wasn't a built-in looping facility. There is a library function called for that is something like the early form of Maclisp's do (the later form would be inherited in Common Lisp), but no surviving LISP 1.5 code uses it. [I'm thinking of making another post about converting programs using prog to the more structured forms that Common Lisp supports, if doing so would make the logic of the program clearer. Naturally there is a lot of literature on so called "goto elimination" and doing it automatically, so it would not present any new knowledge, but it would have lots of Lisp examples.]
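To illustrate (with a made-up function, not code from the manual), here is iteration by prog and go next to the structured equivalent using Common Lisp's do:

;; PROG+GO style, as early code tends to read:
(defun count-atoms-prog (l)
  (prog (n)
        (setq n 0)
   loop (cond ((null l) (return n)))
        (cond ((atom (car l)) (setq n (+ n 1))))
        (setq l (cdr l))
        (go loop)))

;; The same loop expressed with DO:
(defun count-atoms-do (l)
  (do ((rest l (cdr rest))
       (n 0 (if (atom (car rest)) (+ n 1) n)))
      ((null rest) n)))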
LISP 1.5 did not have a let construct. You would use either a prog and setq or a lambda:
(let ((x y)) ...) 
is equivalent to
((lambda (x) ...) y) 
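The prog-and-setq version of the same binding pattern (my sketch; note that unlike the lambda form, a prog returns nil unless return is used) would be:

(prog (x)
      (setq x y)
      ...)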
Something that stands out immediately when reading LISP 1.5 code is the heavy, heavy use of combinations of car and cdr. This might help (though car and cdr should be left alone when they are used with dotted pairs):
(car x)   = (first x)
(cdr x)   = (rest x)
(caar x)  = (first (first x))
(cadr x)  = (second x)
(cdar x)  = (rest (first x))
(cddr x)  = (rest (rest x))
(caaar x) = (first (first (first x)))
(caadr x) = (first (second x))
(cadar x) = (second (first x))
(caddr x) = (third x)
(cdaar x) = (rest (first (first x)))
(cdadr x) = (rest (second x))
(cddar x) = (rest (rest (first x)))
(cdddr x) = (rest (rest (rest x)))
Here are some higher compositions, even though LISP 1.5 doesn't have them.
(caaaar x) = (first (first (first (first x))))
(caaadr x) = (first (first (second x)))
(caadar x) = (first (second (first x)))
(caaddr x) = (first (third x))
(cadaar x) = (second (first (first x)))
(cadadr x) = (second (second x))
(caddar x) = (third (first x))
(cadddr x) = (fourth x)
(cdaaar x) = (rest (first (first (first x))))
(cdaadr x) = (rest (first (second x)))
(cdadar x) = (rest (second (first x)))
(cdaddr x) = (rest (third x))
(cddaar x) = (rest (rest (first (first x))))
(cddadr x) = (rest (rest (second x)))
(cdddar x) = (rest (rest (rest (first x))))
(cddddr x) = (rest (rest (rest (rest x))))
Things like defstruct and Flavors were many years away. For a long time, Lisp dialects had lists as the only kind of structured data, and programmers rarely defined functions with meaningful names to access components of data structures that are represented as lists. Part of understanding old Lisp code is figuring out how data structures are built up and what their components signify.
In LISP 1.5, it's fairly common to see nil used where today we'd use (). For example:
(LAMBDA NIL ...) 
instead of
(LAMBDA () ...) 
or
(PROG NIL ...)
instead of
(PROG () ...) 
Actually this practice was used in other Lisp dialects as well, although it isn't really seen in newer code.
Identifiers. If you examine the list of all the symbols described in the LISP 1.5 Programmer's Manual, you will notice that none of them differ only in the characters after the sixth character. In other words, it is as if symbol names have only six significant characters, so that abcdef1 and abcdef2 would be considered equal. But it doesn't seem like that was actually the case, since there is no mention of such a limitation in the manual. Another thing of note is that many symbols are six characters or fewer in length.
(A sequence of six characters is nice to store on the hardware on which LISP 1.5 was running. The processor used thirty-six-bit words, and characters were six-bit; therefore six characters fit in a single word. It is conceivable that it might be more efficient to search for names that take only a single word to store than for names that take more than one word to store, but I don't know enough about the computer or implementation of LISP 1.5 to know if that's true.)
Even though the limit on names was thirty characters (the longest symbol names in standard Common Lisp are update-instance-for-different-class and update-instance-for-redefined-class, both thirty-five characters in length), only a few of the LISP 1.5 names are not abbreviated. Things like terpri ("terminate print") and even car and cdr ("contents of address part of register" and "contents of decrement part of register"), which have stuck around until today, are pretty inscrutable if you don't know what they mean.
Thankfully the modern style is to limit abbreviations. Comparing the names that were introduced in Common Lisp versus those that have survived from LISP 1.5 (see the "Library" section below) shows a clear preference for good naming in Common Lisp, even at the risk of lengthy names. The multiple-value-bind operator could easily have been named mv-bind, but it wasn't.

Fundamental differences.

Truth values. Common Lisp has a single value considered to be false, which happens to be the same as the empty list. It can be represented either by the symbol nil or by (); either of these may be quoted with no difference in meaning. Anything else, when considered as a boolean, is true; however, there is a self-evaluating symbol, t, that traditionally is used as the truth value whenever there is no other more appropriate one to use.
In LISP 1.5, the situation was similar: Just like Common Lisp, nil or the empty list are false and everything else is true. But the symbol nil was used by programmers only as the empty list; another symbol, f, was used as the boolean false. It turns out that f is actually a constant whose value is nil. LISP 1.5 had a truth symbol t, like Common Lisp, but it wasn't self-evaluating. Instead, it was a constant whose permanent value was *t*, which was self-evaluating. The following code will set things up so that the LISP 1.5 constants work properly:
(defconstant *t* t) ; (eq *t* t) is true
(defconstant f nil)
Recall the practice in older Lisp code that was mentioned above of using nil in forms like (lambda nil ...) and (prog nil ...), where today we would probably use (). Perhaps this usage is related to the fact that nil represented an empty list more than it did a false value; or perhaps the fact that it seems so odd to us now is related to the fact that there is even less of a distinction between nil the empty list and nil the false value in Common Lisp (there is no separate f constant).
Function storage. In Common Lisp, when you define a function with defun, that definition gets stored somehow in the global environment. LISP 1.5 stores functions in a much simpler way: A function definition goes on the property list of the symbol naming it. The indicator under which the definition is stored is either expr or fexpr or subr or fsubr. The expr/fexpr indicators were used when the function was interpreted (written in Lisp); the subr/fsubr indicators were used when the function was compiled (or written in machine code). Functions can be referred to based on the property under which their definitions are stored; for example, if a function named f has a definition written in Lisp, we might say that "f is an expr."
When a function is interpreted, its lambda expression is what is stored. When a function is compiled or machine coded, a pointer to its address in memory is what is stored.
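You can mimic that storage scheme in Common Lisp with symbol property lists. A toy illustration (the function name is mine):

;; Store an interpreted definition under the EXPR indicator:
(setf (get 'double 'expr) '(lambda (n) (* 2 n)))

;; To call the function, fetch the lambda expression and apply it:
(apply (coerce (get 'double 'expr) 'function) '(21)) ; => 42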
The choice between expr and fexpr and between subr and fsubr is based on evaluation. Functions that are exprs and subrs are evaluated normally; for example, an expr is effectively replaced by its lambda expression. But when an fexpr or an fsubr is to be processed, the arguments are not evaluated. Instead they are put in a list. The fexpr or fsubr definition is then passed that list and the current environment. The reason for the latter is so that the arguments can be selectively evaluated using eval (which took a second argument containing the environment in which evaluation is to occur). Here is an example of what the definition of an fexpr might look like, LISP 1.5 style. This function takes any number of arguments and prints them all, returning nil.
(LAMBDA (A E)
  (PROG ()
   LOOP (PRINT (EVAL (CAR A) E))
        (COND ((NULL (CDR A)) (RETURN NIL)))
        (SETQ A (CDR A))
        (GO LOOP)))
The "f" in "fexpr" and "fsubr" seems to stand for "form", since fexpr and fsubr functions got passed a whole form.
The top level: evalquote. In Common Lisp, the interpreter is usually available interactively in the form of a "Read-Evaluate-Print-Loop", for which a common abbreviation is "REPL". Its structure is exactly as you would expect from that name: Repeatedly read a form, evaluate it (using eval), and print the results. Note that this model is the same as top level file processing, except that the results of only the last form are printed, when it's done.
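Stripped of prompts and error handling, that loop is literally:

(loop (print (eval (read))))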
In LISP 1.5, the top level is not eval, but evalquote. Here is how you could implement evalquote in Common Lisp:
(defun evalquote (operator arguments)
  (eval (cons operator arguments)))
LISP 1.5 programs commonly look like this (define takes a list of function definitions):
DEFINE ((
    (FUNCTION1 (LAMBDA () ...))
    (FUNCTION2 (LAMBDA () ...))
    ...
))
which evalquote would process as though it had been written
(DEFINE (
    (FUNCTION1 (LAMBDA () ...))
    (FUNCTION2 (LAMBDA () ...))
    ...
))
Evaluation, scope, extent. Before further discussion, here is the evaluator for LISP 1.5 as presented in Appendix B, translated from m-expressions to approximate Common Lisp syntax. This code won't run as it is, but it should give you an idea of how the LISP 1.5 interpreter worked.
(defun evalquote (function arguments)
  (if (atom function)
      (if (or (get function 'fexpr)
              (get function 'fsubr))
          (eval (cons function arguments) nil))
      (apply function arguments nil)))

(defun apply (function arguments environment)
  (cond ((null function) nil)
        ((atom function)
         (let ((expr (get function 'expr))
               (subr (get function 'subr)))
           (cond (expr (apply expr arguments environment))
                 (subr ; see below
                  )
                 (t (apply (cdr (sassoc function environment
                                        (lambda () (error "A2"))))
                           arguments environment)))))
        ((eq (car function) 'label)
         (apply (caddr function)
                arguments
                (cons (cons (cadr function) (caddr function))
                      environment)))
        ((eq (car function) 'funarg)
         (apply (cadr function) arguments (caddr function)))
        ((eq (car function) 'lambda)
         (eval (caddr function)
               (nconc (pair (cadr function) arguments)
                      environment)))
        (t (apply (eval function environment) arguments environment))))

(defun eval (form environment)
  (cond ((null form) nil)
        ((numberp form) form)
        ((atom form)
         (let ((apval (get form 'apval)))
           (if apval
               (car apval)
               (cdr (sassoc form environment
                            (lambda () (error "A8")))))))
        ((eq (car form) 'quote) (cadr form))
        ((eq (car form) 'function)
         (list 'funarg (cadr form) environment))
        ((eq (car form) 'cond)
         (evcon (cdr form) environment))
        ((atom (car form))
         (let ((expr (get (car form) 'expr))
               (fexpr (get (car form) 'fexpr))
               (subr (get (car form) 'subr))
               (fsubr (get (car form) 'fsubr)))
           (cond (expr (apply expr
                              (evlis (cdr form) environment)
                              environment))
                 (fexpr (apply fexpr
                               (list (cdr form) environment)
                               environment))
                 (subr ; see below
                  )
                 (fsubr ; see below
                  )
                 (t (eval (cons (cdr (sassoc (car form) environment
                                             (lambda () (error "A9"))))
                                (cdr form))
                          environment)))))
        (t (apply (car form)
                  (evlis (cdr form) environment)
                  environment))))

(defun evcon (cond environment)
  (cond ((null cond) (error "A3"))
        ((eval (caar cond) environment)
         (eval (cadar cond) environment))
        (t (evcon (cdr cond) environment))))

(defun evlis (list environment)
  (maplist (lambda (j) (eval (car j) environment))
           list))
(The definition of evalquote earlier was a simplification to avoid the special case of special operators in it. LISP 1.5's apply can't handle special operators (which is also true of Common Lisp's apply). Hopefully the little white lie can be forgiven.)
There are several things to note about these definitions. First, it should be reiterated that they will not run in Common Lisp, for many reasons. Second, in evcon an error has been corrected; the original says in the consequent of the second branch (effectively)
(eval (cadar environment) environment) 
Now to address the "see below" comments. The manual describes the actions of the interpreter in terms of a function called spread, which takes the arguments given in a Lisp function call, puts them into the machine registers expected by LISP 1.5's calling convention, and then executes an unconditional branch instruction after updating the value of a variable called $ALIST to the environment passed to eval or to apply. In the case of an fsubr, the interpreter skips spread: since an fsubr always gets exactly two arguments, it places them directly in the registers.
You will note that apply is considered to be a part of the evaluator, while in Common Lisp apply and eval are quite different. Here it takes an environment as its final argument, just like eval. This fact highlights an incredibly important difference between LISP 1.5 and Common Lisp: When a function is executed in LISP 1.5, it is run in the environment of the function calling it. In contrast, Common Lisp creates a new lexical environment whenever a function is called. To exemplify the differences, the following code, if Common Lisp were evaluated like LISP 1.5, would be valid:
(defun weird (a b)
  (other-weird 5))

(defun other-weird (n)
  (+ a b n))
In Common Lisp, the function weird creates a lexical environment with two variables (the parameters a and b), which have lexical scope and indefinite extent. Since the body of other-weird is not lexically within the form that binds a and b, trying to make reference to those variables is incorrect. You can thwart Common Lisp's lexical scoping by declaring those variables to have indefinite scope:
(defun weird (a b)
  (declare (special a b))
  (other-weird 5))

(defun other-weird (n)
  (declare (special a b))
  (+ a b n))
The special declaration tells the implementation that the variables a and b are to have indefinite scope and dynamic extent.
Let's talk now about the funarg branch of apply. The function/funarg device was introduced some time in the sixties in an attempt to solve the scoping problem exemplified by the following problematic definition (using Common Lisp syntax):
(defun testr (x p f u)
  (cond ((funcall p x) (funcall f x))
        ((atom x) (funcall u))
        (t (testr (cdr x)
                  p
                  f
                  (lambda () (testr (car x) p f u))))))
This function is taken from page 11 of John McCarthy's History of Lisp.
The only problematic part is the (car x) in the lambda in the final branch. The LISP 1.5 evaluator does little more than textual substitution when applying functions; therefore (car x) will refer to whatever x is bound to whenever the function (lambda expression) is applied, not when it is written.
How do you fix this issue? The solution employed in LISP 1.5 was to capture the environment present when the function expression is written, using the function operator. When the evaluator encounters a form that looks like (function f), it converts it into (funarg f environment), where environment is the current environment during that call to eval. Then when apply gets a funarg form, it applies the function in the environment stored in the funarg form instead of the environment passed to apply.
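A quick way to feel the problem from the Common Lisp side is to reproduce call-time binding with a special variable (this demonstration is mine, not from the manual):

(defvar *x*) ; special, so the lambda sees the binding current at call time

(defun make-getter ()
  (lambda () *x*))

(let ((getter (let ((*x* 1)) (make-getter))))
  (let ((*x* 2))
    (funcall getter))) ; => 2, the call-time binding, not the 1 at creation

With a lexical variable the result would be 1, which is exactly the behavior the function/funarg device was invented to achieve.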
Something interesting arises as a consequence of how the evaluator works. Common Lisp, as is well known, has two separate name spaces for functions and for variables. If a Common Lisp implementation encounters
(lambda (f x) (f x)) 
the result is not a function applying one of its arguments to its other argument, but rather a function applying a function named f to its second argument. You have to use an operator like funcall or apply to use the functional value of the f parameter. If there is no function named f, then you will get an error. In contrast, LISP 1.5 will eventually find the parameter f and apply its functional value, if there isn't a function named f—but it will check for a function definition first. If a Lisp dialect that has a single name space is called a "Lisp-1", and one that has two name spaces is called a "Lisp-2", then I guess you could call LISP 1.5 a "Lisp-1.5"!
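To make the contrast concrete (a toy example of mine):

;; In a Lisp-2, the F in (F X) is looked up in the function namespace,
;; so the parameter F must be called through FUNCALL:
((lambda (f x) (funcall f x)) #'sqrt 16) ; => 4.0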
How can we deal with indefinite scope when trying to get LISP 1.5 programs to run in Common Lisp? Well, with any luck it won't matter; ideally the program does not have any references to variables that would be out of scope in Common Lisp. However, if there are such references, there is a fairly simple fix: Add special declarations everywhere. For example, say that we have the following (contrived) program, in which define has been translated into defun forms to make it simpler to deal with:
(defun f (x)
  (prog (m)
        (setq m a)
        (setq a 7)
        (return (+ m b x))))

(defun g (l)
  (h (* b a)))

(defun h (i)
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
        (setq a 4)
        (setq b 6)
        (setq i 3)
        (return (g (f 10)))))
The result of calling p should be 10/63. To make it work, add special declarations wherever necessary:
(defun f (x)
  (declare (special a b))
  (prog (m)
        (setq m a)
        (setq a 7)
        (return (+ m b x))))

(defun g (l)
  (declare (special a b l))
  (h (* b a)))

(defun h (i)
  (declare (special a b l i))
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
        (declare (special a b i))
        (setq a 4)
        (setq b 6)
        (setq i 3)
        (return (g (f 10)))))
Be careful about the placement of the declarations. It is required that the one in p be inside the prog, since that is where the variables are bound; putting it at the beginning (i.e., before the prog) would do nothing because the prog would create new lexical bindings.
This method is not optimal, since it really doesn't help too much with understanding how the code works (although being able to see which variables are free and which are bound, by looking at the declarations, is very helpful). A better way would be to factor out the variables used among several functions (as long as you are sure that it is used in only those functions) and put them in a let. Doing that is more difficult than using global variables, but it leads to code that is easier to reason about. Of course, if a variable is used in a large number of functions, it might well be a better choice to create a global variable with defvar or defparameter.
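As a sketch of that factoring (hypothetical functions and variables):

;; X and Y are shared by exactly these two functions, so close over
;; them instead of declaring them special everywhere:
(let ((x 0) (y 0))
  (defun bump-x () (incf x))
  (defun x-plus-y () (+ x y)))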
Not all LISP 1.5 code is as bad as that example!
Join us next time as we look at the LISP 1.5 library. In the future, I think I'll make some posts talking about getting specific programs running. If you see any errors, please let me know.
submitted by kushcomabemybedtime to lisp [link] [comments]


Likewise, above $53.10, the option's breakeven level, if the stock moved $1, then the option contract would move $1, thus making $100 ($1 x $100) as well. In practice, however, the net difference is settled, and the investor earns a $60 profit on the option contract, which equates to $6,000 minus the premium of $300 and any broker commissions.

Trading binary options may not be suitable for everyone. Trading CFDs carries a high level of risk, since leverage can work both to your advantage and disadvantage. As a result, the products offered on this website may not be suitable for all investors because of the risk of losing all of your invested capital. You should never invest money that you cannot afford to lose, and never trade with ...

Binary options generally have terms shorter than traditional options, such as 60 seconds, 15 minutes, 30 minutes, 45 minutes, one hour, and one week. At the expiration date, the binary option pays either the contract value ($100 on the Nadex) or nothing, depending upon whether the trader has correctly projected the direction of the price for the specific underlying asset by the time of expiration.

The binary number system works similarly to the base-10 decimal system we are used to, except that it is a base-2 system consisting of only two digits, 1 and 0. Although the decimal system uses the digits 0 through 9, the binary system uses only 0 and 1, and each digit is referred to as a bit. Apart from these differences, operations such as addition, subtraction, multiplication, and division are all computed following the same rules as the decimal system. Almost all modern technology and computers use the binary system due to its ease of ...

Top 15 Binary Options Brokers. 1. IQ Option. IQ Option was established in 2012 and has since received favorable reviews on the internet. It uses in-house software for trading. The maximum returns are 95%. However, traders in the USA, Australia, Canada, Russia, Belgium, Japan, Turkey, Israel, Iran, Sudan, and Syria are not accepted. IQOption Europe Ltd. is well-known as a reliable broker ...

The 15 Minute expiry strategy is very popular among binary options traders. Considered a medium-term expiry, it is recommended that a 15 Minute binary options trader choose forex options or stocks as an asset class preference. This is because, when trading a medium-term expiry, a high probability trade most often presents itself when both trend and volatility can be predicted with confidence.


