On Local Minima

I was thinking about doing this for THUNK, but I always try to keep THUNK upbeat, & I don’t know if I can do that here. So spoiler warning: serious downers.

One of the interesting concepts associated with evolution is the evolutionary local maximum. Imagine two hills right next to each other: a tall one & a shorter one. Imagine a torrential downpour begins to flood the area, & animals of various types are forced to flee to these hills to keep from drowning. Some are lucky enough to have started off next to the big hill, but some head for “high ground” on the short one.

Clever creatures might figure out that their chances are better on the other hill, & if the water hasn’t gotten too deep, can brave the rising waterline trying to swim over. But after a certain point, the water has become so deep (& the distance between the hills so great) that trying to switch hills is tantamount to suicide – they’d simply drown before they got there. So they climb as quickly as they can to the top of the short hill & hope that the rain stops soon.

This is why we often get suboptimal evolutionary “designs” – chlorophyll can’t capture energy from green light, DNA is prone to kinking up & causing cancer, that sort of stuff. There are obviously better solutions, but the rising tide of survival waits for no creature, & if you spend too much time tinkering to find the best hill to climb, you’ve already drowned.

Of course, you can also drown on the shorter hill; we call that “extinction.”

This phenomenon isn’t restricted to evolution; as game theorists know intimately, any scenario with some sort of pressure to find competitive advantage is subject to “terminal” local maxima. Businesses are an obvious example here, but so are cultures, ideas, nations, & politics. Occasionally, we can navigate from one hill to another, frequently at great cost (e.g. the American Revolutionary War, transitioning from a precarious colonial monarchy to a more stable local democracy), but even with a known superior solution, transitioning can sometimes be logistically impossible.
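The hill metaphor maps directly onto a standard idea from optimization: a greedy “hill climber” that only ever moves uphill will top out on whichever peak it started near, even when a taller one exists. Here’s a minimal sketch – the landscape, peak positions, & step size are all invented for illustration:

```python
# A toy "terminal local maximum": a greedy climber on a landscape with
# two hills. Starting near the short hill, uphill-only moves can never
# reach the tall hill, because every path between them goes down first.

def height(x):
    """Two triangular hills: a short one at x=2, a tall one at x=8."""
    short = 3 * max(0.0, 1 - abs(x - 2) / 2)    # peak height 3
    tall = 10 * max(0.0, 1 - abs(x - 8) / 2)    # peak height 10
    return max(short, tall)

def hill_climb(x, step=0.5):
    """Greedily step toward higher ground; stop when no neighbor is higher."""
    while True:
        best = max((x - step, x, x + step), key=height)
        if height(best) <= height(x):
            return x
        x = best

# A climber starting at x=1.5 tops out on the short hill (x=2, height 3),
# while one starting at x=7.0 reaches the tall hill (x=8, height 10).
```

Nothing about the climber is broken – it’s doing the locally rational thing at every step. The trap is built into the landscape.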

Some examples:

1. The American medical education system is very clearly & unapologetically exploitative, requiring incredible sacrifices of time & quality of life from people who want to be doctors (so hospitals can bleed free labor from them). This causes a shortage of doctors, which forces hospitals to exploit med students to stay profitable & open.

2. The method we use for federal elections (a plurality vote) is demonstrably inferior to other methods (e.g. ranked choice voting). But everyone who is elected by that method has at least some incentive to preserve it, & advocating for something new is politically dangerous.

3. App developers largely succeed/fail based on how much attention they can demand from users. Ideally, they’d build apps to maximize user wellbeing (which would probably include shutting off our phones), & given a large enough user base, that strategy could be very successful, but any developer who “defects” to a less attention-grabbing app is at a disadvantage.

4. As businesses grow, the infrastructure laid in their startup days becomes embedded & calcified under necessity – software, processes, etc. become increasingly essential for daily operation as the company gets bigger, & replacing/updating them with something better gets more costly over time.
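The voting-method claim in example 2 can be made concrete with a toy election. The ballots below are invented for illustration (not real polling data): two similar candidates split a majority, so plurality voting elects the candidate most voters rank last, while instant-runoff (one form of ranked-choice voting) does not.

```python
from collections import Counter

# Hypothetical ballots: each voter ranks candidates from most to least
# preferred. A & B split the anti-C majority, so C wins a plurality
# vote despite 60% of voters ranking C dead last.
ballots = (
    [["A", "B", "C"]] * 35 +   # 35 voters: A > B > C
    [["B", "A", "C"]] * 25 +   # 25 voters: B > A > C
    [["C", "B", "A"]] * 40     # 40 voters: C > B > A
)

def plurality_winner(ballots):
    """Count only first choices; most votes wins."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the last-place candidate,
    transferring those ballots to each voter's next surviving choice,
    until someone holds a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return top
        remaining.discard(tally.most_common()[-1][0])

# plurality_winner(ballots) -> "C", but irv_winner(ballots) -> "A":
# after B is eliminated, B's voters transfer to A, who wins 60-40.
```

This is exactly the local-maximum shape: everyone elected under plurality climbed that particular hill, so nobody on top of it has much incentive to move.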

We can see the higher ground from here, but I don’t know if there’s actually any practical way to get there, & I don’t know when or if the water will swallow us up here on our little hills. That scares the hell out of me.

Josh’s Gemsbok Interview!

I recently did an interview with Daniel Podgorski over on his philosophy-oriented blog The Gemsbok, which included some questions about my philosophical leanings & what I absolutely hate about my show.

For internet logicians, there are plenty of possible outlets, from article sites (like The Gemsbok) to online journals (academic and otherwise) to conventional blogging platforms (like free WordPress and Tumblr blogs). What is it about video as a medium, and in particular YouTube, that feels like the right fit for you and your style?

JP: My hair. I mean, look at it. You can’t convey this magnitude of hair in text. @:) See? It just doesn’t work.

Check it out here!

Some Unfortunate Objectivisms

1. “I am/am not an Objectivist.”

Objectivism contains an assertion that it is the only valid philosophy for a rational person to hold. This means that, for Objectivists, the world is divided into two groups: Objectivists & irrational people. Annoyingly, this distinction prompts everyone who wants to discuss Rand’s ideas to preface their thoughts by identifying themselves one way or the other – either you’re with her or you’re against her. (Apparently.)

2. “Show me exactly where Rand’s wrong.”

Arguments can be unconvincing in many ways. Sometimes, if an author has been very diligent in creating rigorous stepwise clarity in their work, it’s possible to find a singular step in their reasoning which can be refuted directly, but it’s far more common that the conclusion simply doesn’t seem plausible – the final leap the author makes from their premises to that conclusion doesn’t intuitively follow.

“The United States has a population of 318.9 million people. They don’t need to eat meat. Meat is costly. We should ban meat in the US.” There’s no factual or structural error in that argument, it just isn’t particularly convincing.

It’s not necessary to say “THIS BIT HERE, THIS is where Rand goes awry” to justify not accepting her work wholesale; it’s sufficient to say, “I don’t find her arguments convincing.” Considering the sprawling & highly interconnected nature of her writing, it’s hard to believe that anyone would think a point-by-point refutation appropriate – she presents a unified ideology, & if it’s not compelling to some people after the first couple chapters, well, that’s just fine.

3. “I was hoping for someone to engage with Rand’s ideas, instead it’s just these ad hominem attacks.”

The tone & style of Rand’s writing are saturated with an implicit assertion of her philosophical & intellectual superiority – egoism is central to her ideology, & it shows. It’s also a critical premise for some of her arguments, as the polemic she uses to dismiss contrary ideas is often justified only by her say-so:

As reporters, linguistic analysts were accurate: Wittgenstein’s theory that a concept refers to a conglomeration of things vaguely tied together by a ‘family resemblance’ is a perfect description of the state of a mind out of focus.

As such, Rand’s character (& the events of her life indicative of that character) are absolutely relevant to the evaluation of many of her ideas – if I’m supposed to discount Wittgenstein’s intensely rigorous analysis of language simply because it’s “a perfect description of the state of a mind out of focus,” I can certainly demand some qualifications from the mind that claims to be focused.

It’s curious that these same assertions appear, more or less verbatim, in so many discussions of Objectivism. I also notice that they’re all, in some fashion, setting a particular tone for the discussion:

1. You’re either with or against Rand.
2. If you’re against her, you have to prove she’s wrong.
3. Her ideas are to be questioned, not her qualifications or character.

To which I’d reply:

1. People generally don’t believe in Rand’s absolutism.
2. It’s totally valid to simply find her arguments unconvincing.
3. Her qualifications & character are an essential part of her ideas.

If I were a little more paranoid, I’d call this a deliberate rhetorical tactic, but I think it’s really just part of Objectivist culture to engage on these terms. I think those individuals would be much more pleasant to talk to if we didn’t have to.

Taboo & Combating False Beliefs

tl;dr – Should we discuss incorrect ideologies publicly?


In the little private utopia I have set up in my head (which is mostly modeled on Star Trek: The Next Generation), critical thinking & skepticism are universal values. While they don’t always lead to the same conclusions, they do tend to attenuate extremist attitudes – you can believe whatever you want to, but you bear in mind how far that idea is from what can be readily proven.

Ex: I believe that the US should be more socialist in how it taxes & reallocates funds, but I recognize that people who know way more about economics than I do are heavily divided on the issue, so I temper that belief. I couch my opinions in language that makes it clear that other positions are also valid & maybe I’m wrong.

In this utopia, everyone’s mental “immune system” is healthy & active, so we can really talk about *anything.* Even if some meme is particularly dangerous or insidious, a culture of doubt would tend to cripple dogma before it can get up to cruising speed, so you don’t get anyone who’s immune to a good argument.


But there are (at least) two problems:

1. People aren’t like that.

2. Culture isn’t like that.

Seductive ideologies (& ideas in general) are subject to greater reinforcement over time – a person who gets it into their head that something is the case will see evidence for it everywhere & believe it ever more firmly. The perceived distance of such an idea from provability shrinks: “Of COURSE the US should be more socialist! Look at all this compelling (to me) evidence! I’m more certain every day that everyone who disagrees with me is an idiot!”

This leads to a feedback loop of the people farthest from that ideal of dispassionate skepticism being the most outspoken. This goes double for cultural biases, where consensus (which is often used as a heuristic for truth) colors everyone’s evaluation of the evidence from the outset.

Also, there’s “the backfire effect,” whereby incorrect beliefs are sometimes *strengthened* by conflicting data: www.dartmouth.edu/~nyhan/nyhan-reifler.pdf 

So there are practical issues with my utopian ideal. That doesn’t necessarily mean it’s the wrong approach, but it does mean we can’t simply take it at face value as superior – if brains fail more often than the system can tolerate, it would be a mistake to use it.


There are people who think we ought not discuss racist ideologies publicly because this normalizes them, that we should use shame & taboo to shape what sorts of discourse occur in public, to stigmatize racism & the “alt-right” white supremacy movement.

On the one hand, disallowing topics is antithetical to my ideal, & in my perfect world it’d be counterproductive – racism isn’t supported by the evidence, & arguing publicly why it’s both factually inaccurate & undesirable would result in more people being convinced of the correct conclusion. Result: less racism & a better cultural sensitivity to incorrect racist arguments. Even in our imperfect world, there are people who believe racist things who can be convinced of their incorrectness with evidence, & shushing all public discourse just means they won’t be granted that evidence.

On the other hand, brains suck, & there’s a demonstrable cultural bias regarding race. People are prone to believe wrong things about race, & people wouldn’t buy into racist ideologies if those ideologies were totally unconvincing. Their flaws are subtle & often require a fair amount of numeric analysis to understand fully, which many people simply won’t do.

(Also, people suffer & die because of widespread incorrect ideologies, and that’s bad.)


There’s an (unproven!) sentiment that this is exactly what went wrong with climate change & vaccines – by allowing crazy people to speak publicly about their incorrect (but convincing) memes, even in the service of disproving those memes, we caused them to spread.

There’s an (unproven!) sentiment that one reason Donald Trump was elected president was because he demolished the PC culture the DNC had used as a weapon (in precisely the manner described above), that there were a group of Americans who no longer felt included in the national dialogue because many of their beliefs had been disallowed from public speech.

The backfire effect (www.dartmouth.edu/~nyhan/nyhan-reifler.pdf) is a proven thing.

It’s possible that anti-science/anti-evidence/anti-critical-thinking memes can spread this way.


I want to get to a point where my ideal of a critical-thinking society is realized; is rendering certain ideologies “off-limits” for public discourse better or worse for getting there?

Corollary: What is an acceptable moral cost for getting to a point where racism isn’t a thing anymore?