On Local Minima

I was thinking about doing this for THUNK, but I always try to keep THUNK upbeat, & I don’t know if I can do that here. So spoiler warning: serious downers.

One of the interesting concepts associated with evolution is the evolutionary local maximum. Imagine two hills right next to each other: a tall one & a shorter one. Imagine a torrential downpour begins to flood the area, & animals of various types are forced to flee to these hills to keep from drowning. Some are lucky enough to have started off next to the big hill, but some head for “high ground” on the short one.

Clever creatures might figure out that their chances are better on the other hill, & if the water hasn’t gotten too deep, can brave the rising waterline trying to swim over. But after a certain point, the water has become so deep (& the distance between the hills so great) that trying to switch hills is tantamount to suicide – they’d simply drown before they got there. So they climb as quickly as they can to the top of the short hill & hope that the rain stops soon.

This is why we often get suboptimal evolutionary “designs” – chlorophyll can’t capture energy from green light, DNA is prone to kinking up & causing cancer, that sort of stuff. There are obviously better solutions, but the rising tide of survival waits for no creature, & if you spend too much time tinkering to find the best hill to climb, you’ve already drowned.

Of course, you can also drown on the shorter hill; we call that “extinction.”

This phenomenon isn’t restricted to evolution; as game theorists know intimately, any scenario with some sort of pressure to find competitive advantage is subject to “terminal” local maxima. Businesses are an obvious example here, but so are cultures, ideas, nations, & politics. Occasionally, we can navigate from one hill to another, frequently at great cost (e.g. the American revolutionary war transitioning from a precarious colonialist monarchy to a more stable local democracy), but even with a known superior solution, transitioning can sometimes be logistically impossible.

Some examples:

1. The American medical education system is very clearly & unapologetically exploitative, requiring incredible sacrifices of time & quality of life from people who want to be doctors (so hospitals can bleed free labor from them). Those sacrifices deter people from entering medicine, causing a shortage of doctors, which in turn forces hospitals to exploit med students to stay profitable & open.

2. The method we use for federal elections (a plurality vote) is demonstrably inferior to other methods (e.g. ranked choice voting). But everyone who is elected by that method has at least some incentive to preserve it, & advocating for something new is politically dangerous.

3. App developers largely succeed/fail based on how much attention they can demand from users. Ideally, they’d build apps to maximize user wellbeing (which would probably include shutting off our phones), & given a large enough user base, that strategy could be very successful, but any developer who “defects” to a less attention-grabbing app is at a disadvantage.

4. As businesses grow, the infrastructure laid in their startup days becomes embedded & calcified under necessity – software, processes, etc. become increasingly essential for daily operation as the company gets bigger, & replacing/updating them with something better gets more costly over time.

We can see the higher ground from here, but I don’t know if there’s actually any practical way to get there, & I don’t know when or if the water will swallow us up here on our little hills. That scares the hell out of me.

2 thoughts on “On Local Minima”

  1. Great post! Though it’s a shame these ideas didn’t qualify to become a video.

    As to the matter at hand, I’m not an expert in the subject, but I’d still like to give my two cents.

    From a computational perspective, there is no magic bullet for the problem of local extrema. However, there are methods which have been shown to be very effective (although not perfect). In essence, most of them rely on adding a stochastic element to the search. I think above a certain number of dimensions, stochastic methods have even been shown to be more effective than deterministic ones, even when there is only one extremum. The basic idea is that even if an algorithm converges on some extremum, the random element will still make it “wander around” and scout for a better one.
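    As a toy illustration of this idea (a minimal sketch in Python — the function names, landscape, and parameters here are all mine, not from any particular library), simulated annealing mostly climbs uphill, but early on it also accepts downhill moves with a temperature-dependent probability, which is exactly what lets a search escape a short hill and wander over to a taller one:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=2.0, cooling=0.999,
                        iters=20000, seed=0):
    """Maximize f by hill-climbing with occasional downhill moves."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with
        # probability exp(delta / t), which shrinks as t cools.
        if delta > 0 or rng.random() < math.exp(delta / t):
            x = candidate
        if f(x) > f(best):
            best = x
        t *= cooling
    return best

# Two "hills": a short one near x = -1 (height 1) and a
# taller one near x = 2 (height 3), separated by a valley.
def hills(x):
    return math.exp(-(x + 1) ** 2) + 3.0 * math.exp(-(x - 2) ** 2)

# Start on the short hill; the random element lets the search
# cross the valley instead of getting stuck at the local peak.
peak = simulated_annealing(hills, x0=-1.0)
```

    A plain greedy hill-climber started at x = -1 would stop at the short hill’s summit; here the early high temperature makes crossing the valley likely.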

    Back to the real world, there are some cases where random circumstances facilitated the shift from one extremum to another. One example is the great fire in Seattle in 1889. In short, they built the city at sea level, so that every time the tide came in the sewers overflowed. They figured out that the way to fix it was to elevate the streets, but the cost was prohibitive. After the fire, they had to build everything from scratch, so this time they did it the right way.

    There is even a school of thought that suggests that, in the long run, these sorts of disasters help societies evolve. The idea here is that without destruction, old industries and institutions will eventually take over all niches and resources, and stamp out innovation. Destruction in this case creates opportunities for innovations to get a foothold.
