Sunday SciKu | Addition by Subtraction

Cognitive biases are always interesting because understanding them is such an important aspect of critical thinking. We can only see the world through layers of filters, and it’s impossible to understand anything without adjusting for them.

This week, researchers at the University of Virginia published work on a bias I’d never heard of before—I don’t think there’s even a name for it yet, but maybe it will be called “additive bias”? When problem solving, it turns out we’re much more drawn to solutions that add something than to solutions that take something away.

For example, training wheels have long been added to bicycles as a way to teach kids how to balance. It took decades for us to realize that a better solution is just to take the pedals off and let them practice on a simple “balance bike.”

In the study, participants were given LEGO problems that could be solved either by adding more blocks or by removing some. Almost invariably, people defaulted to adding blocks, even in scenarios where the blocks cost money. It seems our minds generate additive solutions more readily than subtractive ones—and because we tend to stop at the first working solution we find, that’s the one we go with.

The consequences of additive bias are wide-ranging in engineering and ecology, but they apply to daily life too. Just ask Marie Kondo.

 

priceless dust
on an empty shelf
yard sale

 

Sunday SciKu | Great Legs

Trilobites were among the most successful animals in the history of the planet, thriving for 250 million years before finally succumbing to the end-Permian extinction, along with 80% of marine life. It’s the greatest extinction event in the fossil record, and after all this time we’re still not sure what caused it—most likely it was climate change driven by the intense volcanism that created the Siberian Traps at the same time.

We do know now, though, that trilobites had gills on their legs for drawing oxygen out of the water. A team at UCR took CT scans of trilobite fossils preserved in pyrite (fool’s gold), which captured tiny impressions of the soft tissues, allowing them to examine filaments thinner than a human hair that filtered and transported trilobite blood.

At their peak, there were more than 22,000 species of trilobites ruling the ocean floor. Their reign spans about 30% of the time since the first animals appeared. Hominids, by comparison, have been here for 0.5% of that time. Humans, 0.025%. It kind of puts things into perspective, doesn’t it?
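Those numbers are easy to sanity-check. Here’s a quick back-of-the-envelope in Python, using my own rough dates (first animals ~800 million years ago, trilobites ~520 to 252 Mya, hominids ~4 Mya, modern humans ~0.2 Mya) rather than anything from the study:

# Rough dates in millions of years; my assumptions, not the paper's.
since_first_animals = 800.0
spans = {
    "trilobites": 520.0 - 252.0,  # a reign of roughly 268 million years
    "hominids": 4.0,
    "humans": 0.2,
}
for name, span in spans.items():
    print(f"{name}: {span / since_first_animals:.3%} of animal history")
# trilobites: 33.500%, hominids: 0.500%, humans: 0.025%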

 

all your success
a trilobite found
in fool’s gold

 

Sunday SciKu | Humming Along

I almost forgot to share this week’s sciku! It was inspired by researchers at Stanford and TU/e who have been studying how hummingbirds hum.

If you’ve spent any time watching hummingbirds, you might already know the answer: they don’t fly like regular birds, which apply aerodynamic force to the undersides of their wings with every down-stroke. Their wings move back and forth horizontally, letting them hover like a helicopter. This motion creates pressure fields on both the up and the down strokes, and those fields of compacted air oscillate back and forth at 40 hertz, creating their iconic and soothing hum rather than an annoying buzz.
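To get a feel for why the same amount of acoustic energy can read as a soothing hum or an annoying buzz, here’s a toy synthesis in Python. The harmonic weights are invented for illustration, not taken from the researchers’ acoustic model: a “hum” with its energy packed into the first few harmonics of the 40 Hz wingbeat, and a “buzz” with the same loudness pushed into higher harmonics:

import math, struct, wave

RATE = 44100
N = RATE * 2  # two seconds of audio

def write_tone(name, harmonics):
    """Sum sine harmonics of a 40 Hz wingbeat and save as a 16-bit mono WAV."""
    samples = []
    for i in range(N):
        t = i / RATE
        s = sum(amp * math.sin(2 * math.pi * 40 * k * t) for k, amp in harmonics)
        samples.append(int(16000 * s))
    with wave.open(name, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(struct.pack(f"<{N}h", *samples))

# Hum: energy concentrated in the lowest harmonics (force on both strokes).
write_tone("hum.wav", [(1, 0.5), (2, 0.3), (3, 0.12), (4, 0.08)])
# Buzz: the same total loudness shifted into higher harmonics.
write_tone("buzz.wav", [(k, 0.125) for k in range(8, 16)])

Play the two files back to back and the same loudness lands completely differently on the ear.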

What was interesting about this story, though, was the great pains the researchers took to record the hummingbirds and then process the data. Over the course of four days, they filmed with 15 high-speed cameras, recorded with over two thousand microphones in a “sound camera” array, and tracked the movement of air with a series of pressure plates. Then it took THREE YEARS of machine learning to process and synchronize the massive amount of data that had been collected.
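The paper doesn’t spell out the signal processing, but acoustic cameras like this typically locate sounds by delay-and-sum beamforming: every microphone hears the source at a slightly different time, and if you undo those delays for a guessed position and sum the signals, the sum is loudest when the guess is right. Here’s a minimal sketch with made-up geometry, a 16-microphone line array standing in for the real two-thousand-mic rig:

import numpy as np

C = 343.0      # speed of sound in air, m/s
RATE = 48000   # sample rate, Hz
rng = np.random.default_rng(0)

# Made-up geometry: a 1-meter, 16-mic line array; the source sits ~1 m away.
mics = np.stack([np.linspace(-0.5, 0.5, 16), np.zeros(16)], axis=1)
true_src = np.array([0.2, 1.0])

# Simulate the recordings: each mic hears the same broadband source,
# delayed by its distance to the source.
source = rng.standard_normal(RATE)
shifts = np.round(np.linalg.norm(mics - true_src, axis=1) / C * RATE).astype(int)
recordings = np.stack([np.roll(source, s) for s in shifts])

def beam_power(point):
    """Undo each mic's delay for a candidate position and sum coherently."""
    d = np.round(np.linalg.norm(mics - point, axis=1) / C * RATE).astype(int)
    aligned = np.stack([np.roll(r, -s) for r, s in zip(recordings, d)])
    return float(np.mean(aligned.sum(axis=0) ** 2))

# Scan a grid of candidate positions; the loudest beam marks the source.
grid = [np.array([x, y]) for x in np.linspace(-1, 1, 21)
                         for y in np.linspace(0.5, 2.0, 16)]
best = max(grid, key=beam_power)
print("estimated source position:", best)  # should land on (0.2, 1.0)

Multiply that by two thousand microphones, 15 cameras, and pressure plates all on their own clocks, and three years of processing starts to sound reasonable.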

There’s a clichéd joke about scientists wasting time and grant money studying whale burps or the way cheese melts on toast. But it’s often the challenges that come with solving any problem—even a mundane one—that lead to new advances. In this case, the sound-camera technology that was developed will be used to reduce the background noise created by drones and fans, and ultimately could make all of our appliances quieter. Wouldn’t that be nice?

 

hummingbird
heavy in the field
of sound

 

Sunday SciKu | Show Don’t Tell

One of the main points that always comes up in our live Critique of the Week sessions is the importance of images in writing. “Show don’t tell” is the mantra of every creative writing workshop, and all it really means is that illustration is more emotionally powerful than explanation. This concept seems counterintuitive because it is—shouldn’t a precise description of a feeling convey it better than a visual representation that has to be interpreted by the reader?

The thing is, we’re not computers. The information we receive from the world isn’t digitized into discrete packets of meaning. We think in something more akin to messy clouds of association, as constellations of neurons that fire together become wired together. If you were to map the way thoughts are constructed, you’d find our brains might fundamentally be simile machines—this is like this is like this—and when one thought lights up, everything connected to it starts to glow.
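“Fire together, wire together” is Hebb’s rule, and it’s simple enough to sketch in a few lines of Python. This toy associator is my own illustration, not anything from the study: strengthen the connection between any two units that are active at the same time, then cue a single unit and watch its associates glow:

import numpy as np

# Five "concept" units; all connections start at zero.
n = 5
W = np.zeros((n, n))

# Experiences: which concepts fire together (1 = active).
# Say unit 0 = campfire, 1 = smoke, 2 = warmth; units 3 and 4 are unrelated.
patterns = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
])

# Hebb's rule: strengthen W[i, j] whenever units i and j are co-active.
for x in patterns:
    W += 0.5 * np.outer(x, x)
np.fill_diagonal(W, 0.0)  # no self-connections

# Light up "campfire" alone and see what else starts to glow.
cue = np.array([1, 0, 0, 0, 0])
print(W @ cue)  # [0., 1., 0.5, 0., 0.] -- smoke and warmth activate

Cue “campfire” and its constellation lights up; the unrelated units stay dark.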

We evolved within a particular ecological niche, as scavengers scanning for fruit in the distance while avoiding snakes in the grass, and that evolutionary pressure has driven our mental capacity toward more and more visual processing—stealing capacity from other senses like smell. 60% of the human cerebral cortex is devoted to vision, and 40% of all nerve fibers connected to the brain are linked to the retina.

If we were dogs, the advice would be “smell don’t tell,” but we’re humans, so our cognition is oriented toward sight. Because of that, conveying emotion through visual stimulation gets more of our neurons firing, letting readers form stronger associations and feel a bigger emotional response.

Researchers at UNSW Sydney were able to measure “show don’t tell” for the first time this week. They put participants in a dark room and had them read a scary story presented on a screen, using skin conductance to measure their fear responses. Those with aphantasia—the inability to visualize mental images—showed no physiological fear response, while a neurotypical control group did, demonstrating that it’s the visualization within the reader that elicits the emotion, not the meaning of the text. Interestingly, the research was inspired by people with aphantasia recounting their difficulty enjoying novels.

So if you want your readers to feel something, show them. And it applies everywhere. Compare the emotional impact of “Build the Wall” with the similar but less visual phrase “Build Back Better”—even if that feeling is a sickness in the pit of your stomach, one is visceral and sticky and the other is not. It’s one of the things that makes Trump such an effective conman—he always uses visuals.

This is also why one of the best memory techniques is to imagine placing the things you want to remember into different rooms of a house. Images use more of the brain, giving them more opportunities to stick.

Anyway, here’s this week’s tiny sciku.

 

image—
a
nation

 

Sunday SciKu | Digital Universe

From the quantization of everything (including time?) to Fermi’s Paradox, retrocausality, collective consciousness, and subjective perception, there are plenty of observations that suggest all we see and seem might be a dream within a dream—and that was before Hong Qin at the Princeton Plasma Physics Laboratory was able to create a machine-learning algorithm that can predict the motions of planets without knowing Newton’s Laws.

The mathematics of discrete field theory and Lagrangian density is beyond me, so I could be misunderstanding the paper, but in layman’s terms it seems to treat the universe as a three-dimensional pixelated lattice of points—rasterizing it, essentially, rather than treating it as objects interacting with each other through the rules of physics. Qin then fed in the positional data of a few planets and let machine learning find an algorithm that could predict any other orbital pathway in the solar system. As he puts it: “Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data […] There is no law of physics in the middle.”
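I can’t reproduce discrete field theory here, but the “data to data” idea can be sketched in miniature. In the toy below (my stand-in, not Qin’s method), Newton’s law appears only to manufacture fake observations of one near-circular orbit; the predictor itself is a plain least-squares fit of next state from current state, with no force law anywhere inside it, and it still carries the orbit forward:

import numpy as np

GM, DT = 1.0, 0.01

def newton_step(s):
    """One velocity-Verlet step under gravity; used ONLY to fake the observations."""
    x, v = s[:2], s[2:]
    a = -GM * x / np.linalg.norm(x) ** 3
    x2 = x + v * DT + 0.5 * a * DT**2
    a2 = -GM * x2 / np.linalg.norm(x2) ** 3
    return np.concatenate([x2, v + 0.5 * (a + a2) * DT])

# "Observe" a near-circular orbit as a sequence of (x, y, vx, vy) states.
states = [np.array([1.0, 0.0, 0.0, 1.0])]
for _ in range(5000):
    states.append(newton_step(states[-1]))
S = np.array(states)

# Data to data: least-squares fit of a one-step map, s_next ≈ s @ A.
# No law of physics appears in A; it's pure regression on the observations.
A, *_ = np.linalg.lstsq(S[:-1], S[1:], rcond=None)

# Roll the learned map forward on its own and compare orbital radii.
s = S[0]
for _ in range(5000):
    s = s @ A
print("Newton's final radius:   ", np.linalg.norm(S[-1, :2]))
print("learned map final radius:", np.linalg.norm(s[:2]))

The linear fit only gets away with this because a near-circular orbit really is a rotation in state space; Qin’s discrete field theory is far more general. But the spirit is the same: the predictor never hears about gravity.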

On a practical level, this looks like the start of a new era of science, where black-box AI “knows” more than we do, providing us with extremely useful information while keeping us mortal humans completely in the dark about why its predictions are so accurate. Something similar was used last year to develop the bradykinin hypothesis for Covid-19 pathology. Qin next plans to use his technique on the plasma fields within experimental fusion reactors—something that could be incredibly useful as well.

Even more interesting, though, are the implications for the way consciousness projects meaning onto the underlying code of reality. As Qin puts it, “What is the algorithm running on the laptop of the Universe?” What we think of as the physical world is nothing more than a series of adaptive filters we construct, like icons on a desktop or beer bottles to the Australian jewel beetle, in Donald Hoffman’s great analogy. The reason we can’t seem to unify general relativity with quantum mechanics into a theory of everything, and why the Lambda-CDM model keeps digging itself into deeper and deeper holes as we fail to find dark matter and dark energy, might be that these are only functional programs running simultaneously on a more fundamental operating system that we can’t access. Yes, like The Matrix.

 

a blur of fur
bounding through the open
field theory