The optimal stopping problem starts with the secretary problem, in which you want to hire a secretary under a few conditions: applicants can't decline your offer, you can't go back to someone you've rejected, you want the very best one, and time is not the main factor. The question is whether you stop too early or too late. The solution: look at the first 37% of applicants without committing, then hire the first one who is better than everyone you've seen so far. Clearly we know nothing about the applicants other than how they compare to one another. This assumes you can't go back to them; in real life they may decline the offer, but it gets you thinking.
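The 37% rule is easy to check by simulation. A minimal sketch (the names and the 0.37 cutoff are illustrative; it assumes applicants arrive in random order and a higher rank means a better applicant):

```python
import random

def secretary(ranks, look_fraction=0.37):
    """Look at the first 37% without hiring, then take the first
    applicant better than everyone seen so far (or the last one)."""
    cutoff = int(len(ranks) * look_fraction)
    best_seen = max(ranks[:cutoff]) if cutoff else float("-inf")
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r
    return ranks[-1]

random.seed(0)
n, trials = 100, 10_000
wins = sum(secretary(random.sample(range(n), n)) == n - 1
           for _ in range(trials))
print(f"Hired the very best applicant in {wins / trials:.0%} of runs")
```

With 100 applicants this lands near 37% of runs hiring the single best person, which is the best any strategy can guarantee.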
The threshold rule: immediately accept an applicant if they are above a certain percentile. No matter what, never hire someone who's below average unless you're totally out of options.
In life we go from looking to leaping, i.e. actually making the decision. You can apply a similar principle to dating (although I wouldn't recommend it), selling a house, or finding a parking space. For parking, the occupancy rate is key, and remember to factor in time and the cost of fossil fuels.
Mentions the principle of explore and exploit. Exploration is gathering information; exploitation is using the information you have to get a known good result. Factor in time once again: if you have just moved somewhere, try many new things; if you are about to leave somewhere, exploit by sticking with the good things you know. Suggests that age is a factor here: in later life we exploit, focusing on the known relationships that provide maximum benefit, while as children we explore new ones.
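A common way to sketch the explore/exploit trade-off in code (not the book's Gittins machinery, just a standard illustration) is the epsilon-greedy strategy: explore a random machine a small fraction of the time, otherwise exploit the best one seen so far. The payout probabilities here are made up:

```python
import random

def epsilon_greedy(true_probs, pulls=5000, epsilon=0.1, seed=1):
    """Explore a random machine with probability epsilon,
    otherwise exploit the one with the best observed payout rate."""
    rng = random.Random(seed)
    counts = [0] * len(true_probs)   # times each arm was pulled
    wins = [0] * len(true_probs)     # payouts from each arm
    total = 0
    for _ in range(pulls):
        if rng.random() < epsilon or not all(counts):
            arm = rng.randrange(len(true_probs))          # explore
        else:
            arm = max(range(len(true_probs)),
                      key=lambda a: wins[a] / counts[a])  # exploit
        reward = 1 if rng.random() < true_probs[arm] else 0
        counts[arm] += 1
        wins[arm] += reward
        total += reward
    return total, counts

total, counts = epsilon_greedy([0.2, 0.4, 0.9])
print(total, counts)   # the best arm ends up with most of the pulls
```

Notice the time factor from the notes above: a fixed epsilon keeps exploring forever, which only makes sense when you expect to stay a while.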
When looking at slot machines we know little or nothing about, there is some guaranteed payout rate which, if offered instead, would make it optimal not to play the machine again. This number is the "dynamic allocation index", known as the Gittins index. Gittins conceived the goal as maximizing payoffs not for a fixed interval of time, but for a future that is endless yet discounted: something in the present is valued more than the same thing in the future. A record of 0–0 – an arm that's a complete unknown – has an expected value of 0.5000 but a Gittins index of 0.7029. In other words, something you have no experience with whatsoever is more attractive than a machine that you know pays out seven times out of ten! Computer science can't offer you a life with no regret. But it can, potentially, provide you with a life with minimal regret.
Regret is the result of comparing what we actually did with what would have been best in hindsight. In a multi-armed bandit, what Barnard called an "inestimable loss" can in fact be measured precisely, and regret assigned a number: it's the difference between the total payoff obtained by following a particular strategy and the total payoff that theoretically could have been obtained by just pulling the best arm every single time (had we only known from the start which one it was). We can calculate this number for different strategies and search for those that minimize it. Computer science has developed a shorthand specifically for measuring algorithmic worst-case scenarios: it's called "Big-O" notation.
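Regret can be made concrete with a toy simulation (the payout probabilities and the two deliberately crude strategies below are invented for illustration):

```python
import random

def pull(rng, p):
    """One pull of an arm that pays out 1 with probability p."""
    return 1 if rng.random() < p else 0

def regret_of(strategy, probs, pulls=2000, seed=2):
    """Regret = expected payoff of always pulling the best arm
    minus the payoff this strategy actually collected."""
    rng = random.Random(seed)
    payoff = sum(pull(rng, probs[strategy(t)]) for t in range(pulls))
    return max(probs) * pulls - payoff

probs = [0.4, 0.8]                  # arm 1 is secretly the best
always_worst = lambda t: 0          # never touches the good arm
alternate = lambda t: t % 2         # splits pulls between both
print(regret_of(always_worst, probs), regret_of(alternate, probs))
```

Better bandit strategies are exactly those whose regret grows as slowly as possible with the number of pulls.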
The book mentions the power of computers to put numbers into order, and relates this to sorting a pack of playing cards.
Merge sort splits a list into halves, sorts each half, then merges them back together in steps (divide and conquer).
Insertion sort builds a sorted sequence by inserting one element at a time into its correct position.
Bubble sort repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order.
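The three sorts above can each be sketched in a few lines (a minimal illustration, not production code; Python's built-in `sorted` uses a far more refined hybrid):

```python
def merge_sort(xs):
    """Divide and conquer: split in half, sort each half, merge."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

def insertion_sort(xs):
    """Build a sorted list by inserting one element at a time."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def bubble_sort(xs):
    """Repeatedly swap adjacent out-of-order pairs until none remain."""
    xs = list(xs)
    for n in range(len(xs) - 1, 0, -1):
        for i in range(n):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

cards = [7, 2, 9, 4, 1]
print(merge_sort(cards), insertion_sort(cards), bubble_sort(cards))
```

Merge sort is the one that scales: it does O(n log n) comparisons, while insertion and bubble sort are O(n²) and bog down quickly on a big deck.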
The basic principle is this: the effort expended on sorting materials is just a pre-emptive strike against the effort it'll take to search through them later. It is hard to work out what your future self will want to search for; we search with our quick eyes and sort with slow hands.
The verdict is clear: ordering your bookshelf will take more time and energy than scanning through it ever will. Filing electronic messages by hand into folders takes about the same amount of time as filing physical papers in the real world, but emails can be searched much more efficiently than their physical counterparts. Sorting becomes less valuable if your computer can search for you.
"Things which matter most must never be at the mercy of things which matter least," Goethe allegedly proclaimed. Sometimes it's just not true: if the most important thing cannot be done until that which matters least is finished, then there's no choice but to treat that unimportant thing as being every bit as important as whatever it's blocking. When a certain task can't be started until another one is finished, scheduling theorists call that a "precedence constraint."
Pre-emption is being able to stop one task partway through and switch to another. With pre-emption allowed, earliest due date and shortest processing time are still optimal strategies, able to guarantee you (on average) the best possible performance in the face of uncertainty.
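Both optimal single-machine strategies are really just sorts, each optimal for a different goal (the job names, durations, and due dates below are invented):

```python
def earliest_due_date(jobs):
    """Minimizes the maximum lateness: do jobs in due-date order."""
    return sorted(jobs, key=lambda j: j["due"])

def shortest_processing_time(jobs):
    """Minimizes the sum of completion times: shortest job first."""
    return sorted(jobs, key=lambda j: j["time"])

jobs = [
    {"name": "report", "time": 4, "due": 5},
    {"name": "email",  "time": 1, "due": 3},
    {"name": "slides", "time": 2, "due": 6},
]
print([j["name"] for j in earliest_due_date(jobs)])         # due-date order
print([j["name"] for j in shortest_processing_time(jobs)])  # quickest first
```

The two orders differ on the same job list, which is the point: you have to decide what "on time" means before the algorithm can help you.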
The Copernican Principle: if we assume that we're arriving precisely halfway into something's duration, the best guess we can make for how long it will last into the future becomes obvious: exactly as long as it's lasted already.
There are three very different patterns of optimal prediction – the Multiplicative, Average, and Additive Rules – which arise from applying Bayes' Rule to the power-law, normal, and Erlang distributions, respectively. And given the way those predictions come out, the three distributions offer us different guidance, too, on how surprised we should be by certain events.
In a power-law distribution, the longer something has gone on, the longer we expect it to continue going on. So a power-law event is more surprising the longer we’ve been waiting for it—and maximally surprising right before it happens. A nation, corporation, or institution only grows more venerable with each passing year, so it’s always stunning when it collapses.
In a normal distribution, events are surprising when they’re early—since we expected them to reach the average—but not when they’re late. Indeed, by that point they seem overdue to happen, so the longer we wait, the more we expect them.
In an Erlang distribution, events are by definition never any more or less surprising no matter when they occur: any state of affairs is always equally likely to end regardless of how long it's lasted. No wonder politicians are always thinking about their next election.
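The three rules can be sketched as simple functions. The factor of 2 is the Copernican "probably halfway through" guess; the mean of 77 echoes the book's life-expectancy example; the other constants are purely illustrative:

```python
def multiplicative_rule(elapsed, factor=2.0):
    """Power-law worlds: predict a constant multiple of the elapsed
    time. A factor of 2 is the Copernican halfway assumption."""
    return elapsed * factor

def average_rule(elapsed, mean=77, margin=8):
    """Normal worlds: predict the distribution's average; once past
    it, predict just a little longer (margin is illustrative)."""
    return max(mean, elapsed + margin)

def additive_rule(elapsed, constant=30):
    """Erlang worlds: always predict a constant amount more,
    regardless of how long things have already lasted."""
    return elapsed + constant

print(multiplicative_rule(10))  # a 10-year-old firm: guess ~20 years total
print(average_rule(30))         # a 30-year-old: guess the average lifespan
print(additive_rule(50))        # memoryless wait: always "a bit more"
```

Which function you should reach for depends entirely on which distribution the phenomenon actually follows – that's the prior knowledge Bayes' Rule feeds on.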
Failing the marshmallow test – and being less successful in later life – may not be about lacking willpower. It could be a result of believing that adults are not dependable: that they can't be trusted to keep their word, that they disappear for intervals of arbitrary length. Bayes' theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event.
Overfitting is making a predictive model more complicated than it needs to be. It can be overcome by regularization (e.g. the Lasso algorithm, which penalizes the model's weights, pulling each factor toward zero and keeping it only if it earns its place) and by keeping Occam's razor in mind.
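A toy version of the Lasso idea in pure Python (a minimal proximal-gradient sketch with made-up data; real work would use a library like scikit-learn): two candidate factors are offered, the data depends only on the first, and the L1 penalty pins the useless weight at exactly zero:

```python
import random

def lasso_fit(xs, ys, lam=0.1, lr=0.01, steps=2000):
    """Tiny proximal-gradient Lasso for y ≈ w1*x1 + w2*x2.
    After each gradient step, the L1 penalty shrinks every weight
    toward zero and clips small ones to exactly zero."""
    w = [0.0, 0.0]
    n = len(ys)
    for _ in range(steps):
        grads = [0.0, 0.0]
        for (x1, x2), y in zip(xs, ys):
            err = w[0] * x1 + w[1] * x2 - y
            grads[0] += err * x1 / n
            grads[1] += err * x2 / n
        for j in range(2):
            w[j] -= lr * grads[j]    # ordinary gradient step
            shrink = lr * lam        # then the soft-threshold step
            w[j] = max(w[j] - shrink, 0.0) if w[j] > 0 else min(w[j] + shrink, 0.0)
    return w

rng = random.Random(3)
xs = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(200)]
ys = [2.0 * x1 + rng.gauss(0, 0.1) for x1, _ in xs]   # only x1 matters
w = lasso_fit(xs, ys)
print(w)   # w[1] is driven to zero; w[0] lands a bit under 2
```

The small bias in `w[0]` (it lands under the true 2.0) is the price of the penalty: Lasso deliberately trades a little accuracy for a simpler model.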
When applying the idea of overfitting to real life, how early to stop and decide depends on the gap between what you can measure and what really matters. If you have all the facts, they're free of all error and uncertainty, and you can directly assess whatever is important to you, then don't stop early. Think long and hard: the complexity and effort are appropriate. But that's almost never the case. If you have high uncertainty and limited data, then do stop early by all means. If you don't have a clear read on how your work will be evaluated, and by whom, then it's not worth the extra time to make it perfect with respect to your own (or anyone else's) idiosyncratic guess at what perfection might be. The greater the uncertainty, the bigger the gap between what you can measure and what matters, the more you should watch out for overfitting – that is, the more you should prefer simplicity, and the earlier you should stop. And even with the most powerful computer in the world, some things cannot be solved by finding an optimum solution.
Mentions the travelling salesman problem – finding the shortest route between an ever-growing number of towns – as an example. To overcome this, use relaxation, a process in which you relax the requirements and accept good enough. Perfect is the enemy of good enough.
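The blow-up is easy to see in code: brute force checks (n−1)! routes, while a "good enough" shortcut such as the nearest-neighbour heuristic (one common stand-in for the book's relaxation theme; the town coordinates are made up) checks almost nothing and usually lands close to the optimum:

```python
import itertools, math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order, towns):
    """Total length of the round trip visiting towns in this order."""
    return sum(dist(towns[order[i]], towns[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force(towns):
    """Exact: tries all (n-1)! routes – hopeless beyond a dozen towns."""
    rest = range(1, len(towns))
    return min(([0] + list(p) for p in itertools.permutations(rest)),
               key=lambda order: tour_length(order, towns))

def nearest_neighbour(towns):
    """Good enough: always drive to the closest unvisited town."""
    unvisited, order = set(range(1, len(towns))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda t: dist(towns[order[-1]], towns[t]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

towns = [(0, 0), (1, 5), (4, 1), (6, 4), (2, 2), (5, 5)]
exact = tour_length(brute_force(towns), towns)
greedy = tour_length(nearest_neighbour(towns), towns)
print(exact, greedy)   # greedy is slightly longer, but computed instantly
```

At 6 towns brute force checks 120 routes; at 20 towns it would need over 10^17 – which is exactly why settling for good enough is the rational move.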
Think about famous stories of serendipity and people making discoveries. When you're deep in the middle of something, you forget the most obvious things. Introducing randomness can therefore help by throwing you out of the frame, breaking the context a little, and uncovering insights. We are people who are alive and in the world and want to be aware of a lot of other things as well.
With communication and non-responsiveness, what should we as humans do? Firstly, it depends on the medium: we start to worry in a matter of seconds over the phone, days over email, and weeks over postal mail. The longer the round-trip time between sender and receiver, the longer it takes a silence to be significant – and the more information can be potentially "in flight" before the sender realizes there's a problem. In networking, having the parties properly tune their expectations for the timeliness of acknowledgments is crucial to the system functioning correctly. In human society, we tend to adopt a policy of giving people some finite number of chances in a row, then giving up entirely: three strikes, you're out. Computers use a similar approach to password challenges. A better solution is Exponential Backoff on the invitation rate: try to reschedule in a week, then two, then four, then eight. The rate of "retransmission" goes toward zero – yet you never have to completely give up.
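Exponential Backoff is short to sketch. A minimal simulation (the flaky `send` function and all parameters are invented; a real client would actually sleep for each wait instead of just recording it):

```python
import random

def exponential_backoff(send, max_tries=10, base_delay=1.0, seed=4):
    """Retry with doubling waits instead of a fixed number of strikes.
    Returns (attempt that succeeded, list of simulated waits)."""
    rng = random.Random(seed)
    delay, waits = base_delay, []
    for attempt in range(max_tries):
        if send():
            return attempt, waits
        waits.append(rng.uniform(0, delay))  # jitter spreads retriers out
        delay *= 2                           # the exponential part
    return None, waits

flaky = iter([False, False, False, True])    # fails three times, then works
attempt, waits = exponential_backoff(lambda: next(flaky))
print(attempt, waits)
```

Each wait is drawn from a window that doubles every failure, so persistent failures cost exponentially less attention over time while the door never fully closes.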
Circuit switching and packet switching are different ways for networks to communicate. Circuit switching is connection-oriented, like phone lines used to be: in busy times you might hear a message saying the circuits are busy – the request is either approved or denied. Packet switching, which is connectionless, deals with congestion differently: if the network is full, the system just gets slow, and there's nothing to inform the sender of how congested the network is at any given moment.
Therefore, with packet switching, the sender and receiver must not only communicate but metacommunicate: they need to figure out how fast the data should be sent. At the heart of Transmission Control Protocol congestion control is an algorithm called Additive Increase, Multiplicative Decrease, or AIMD.
A TCP connection starts in "slow start": if the first packet is received successfully, the sender sends out two more; if both of those get through, a batch of four, and so on. But as soon as any packet's ACK does not come back to the sender, the AIMD algorithm takes over. Under AIMD, any fully received batch of packets causes the number of packets in flight not to double but merely to increase by 1, and dropped packets cause the transmission rate to be cut back by half (hence the name Additive Increase, Multiplicative Decrease). Essentially, AIMD takes the form of someone saying, "A little more, a little more, a little more, whoa, too much, cut way back, okay a little more, a little more…" This leads to a characteristic bandwidth shape known as the "TCP sawtooth" – steady upward climbs punctuated by steep drops. A merely additive increase helps stabilize things for everyone, preventing rapid overload-and-recovery cycles.
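The sawtooth is easy to reproduce with a toy simulation (the capacity and round counts are invented; real TCP reacts to actual ACK timing, not a known capacity):

```python
def tcp_window(capacity=20, rounds=40):
    """Toy congestion window: doubling ('slow start') until the first
    drop, then AIMD – add 1 on success, halve on a drop."""
    window, history, slow_start = 1.0, [], True
    for _ in range(rounds):
        if window > capacity:       # network overloaded: packet dropped
            window /= 2             # multiplicative decrease
            slow_start = False
        elif slow_start:
            window *= 2             # slow start: double each round
        else:
            window += 1             # additive increase
        history.append(window)
    return history

history = tcp_window()
print(history)   # a fast climb, a crash, then the sawtooth: +1, +1, ..., halve
```

Printing the history shows exactly the "little more, little more, whoa, cut way back" rhythm the book describes.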
Remember, it is always important to have benchmarks to help compare things. Our judgments betray our expectations, and our expectations betray our experience. What we project about the future reveals a lot – about the world we live in, and about our own past.