Mentoring the Machines: Surviving the Deep Impact of an Artificially Intelligent Tomorrow is a new book project I've partnered on with esteemed cognitive scientist John Vervaeke. Visit www.mentoringthemachines.com to learn more, or find it at Amazon.com.
Once upon a time, there was a fast-moving, ambitious young man intent on making something of himself, preternaturally endowed with a drive to become someone authoritative, worthy of emulation.
From a peripatetic and hardscrabble background, certainly not to the manner born, nor a hail-fellow-well-met, he was nevertheless innately curious and steadfast in finding the optimal pathway to social, even historical, relevance.
He was a distinctly American young man in a hurry, as writer Mark Twain so precisely characterized Tom Sawyer.
After teaching himself calculus at fifteen and, as a result, gaining early admittance to the California Institute of Technology at sixteen, he burned through his coursework with relative ease. Clearly on the fast track to achieve the stated ambition he’d included with his undergraduate application, “I intend to be a professor of mathematics” (Hayes and Morgenstern 2007, 94), he found himself facing the second act of the American dilemma, a nasty case of the Peggy Lee conundrum. The “Is That All There Is?” blues (Lee 1969).
By the time he received his bachelor of science and began his graduate math studies in the fall of 1948, the twenty-one-year-old’s lofty goal was pretty much in the bag. The thing he soon realized about climbing into the highest reaches of abstract math, though, was that since Kurt Gödel’s devastating incompleteness theorems in 1931, the discipline had lost its place at the top of Western civilization’s intellectual hierarchy. While absolutely integral to advancing science in toto, it was now understood by those truly in the know that math was not the be-all and end-all that rationalism’s ghosts (Pythagoras, Euclid, Galileo, Leibniz, Hume, and Hilbert, to name a few) had cracked it up to be.
Per Gödel, math was incomplete, resting ultimately on unprovable truths; it required an axiomatic faith that there was a way to know, understand, and comprehend reality. Math was foundationally indefensible. What was once considered the yellow brick road to certainty now required an incontrovertible acceptance of uncertainty.
Nevertheless, to be a capable mathematician in 1948 qualified you for special service. But make no mistake. Usually, it was a service position to another discipline. No matter how adept one was at manipulating mathematical abstractions, mathematicians no longer sat at the top of the science pyramid.
Physics had usurped it. It was the new top of the intellectual spear, the place to be.
Gödel may have been the oracle who’d foretold the changing of the guard. Still, the writing was very much on the wall ever since Einstein, Bohr, Heisenberg, Schrödinger, Fermi, and Oppenheimer blew up Sir Isaac Newton’s two-hundred-and-twenty-eight-year reign as the “settler” of the ways of mechanical interaction.
What was a poor intellectual to do when physics’ pantheon of geniuses was generally complete? Making a name for oneself in that crowd wasn’t entirely impossible; Richard Feynman managed it (Feynman, Leighton, and Hutchings 1997). But the big ideas, it seemed, were inventoried and accounted for. General and special relativity sat at the top (Einstein 1920), thermodynamics in the middle (Kondepudi and Prigogine 1998; Prigogine and Stengers [1978] 2018), and quantum theory (Hürter and Shaw 2022) at the bottom. All that was left to do was integrate the triplet into a coherent universal whole, contriving a theory of everything. And then it would all be over but the shouting.
Reification of robust theorems with nuanced particularities was considered mop-up work. And Einstein was already on the “theory of everything” case. Not much meat left on the physics bone, it would seem...
But one figure somehow rose above it all. He straddled math and physics, chemistry, sociology, whatever. He belonged at any academic table in the world. This wizardly polymath would end up as the architect who transformed Alan Turing’s abstract fever dream, the Turing machine, into an elegant three-component mechanism that looped together and bootstrapped itself into an energy and information processor, and someday perhaps even a meaning processor (Hodges [1983] 2014).
One day, the young man in a hurry noticed that John von Neumann (Bhattacharya 2022)—Gödel’s equal, even his savior, and often his caretaker—the Manhattan Project consigliere who visited and left Los Alamos as he pleased, and the mastermind behind the computer architecture that would transform the world, was participating in a lecture series at Caltech. Von Neumann was a featured attraction at the Hixon symposium “Cerebral Mechanisms in Behavior,” which would convene from Monday, September 20 through September 25, 1948 (Jeffress 1951).
Five days were dedicated to exploring the relationship between the brain and the mind, generally considering how brain matter generates mind motion.
This was something fresh. The young man blew off his classes and settled in.
The topics presented and discussed those five days, especially by von Neumann and the foremost Gestalt psychologist Wolfgang Köhler ([1970] 1992), lodged like splinters inside the young man’s mind. He’d listened and then speculated that what had been conjectured since time immemorial—knowing how our mind knows—was no longer a matter for fantastical science fiction storytelling, one of his hobbies, or armchair philosophy. It was now entirely scientifically frameable and within grasp.
A year later, still convinced that the brain/mind investigation was the future, he headed east to earn his PhD on the campus kitty-cornered to the think tank that von Neumann, Gödel, and even Einstein used as their home base, the Institute for Advanced Study in Princeton, New Jersey.
One day the excited young man spotted the great man himself, John von Neumann. Eagerly, perhaps maniacally, he approached him and told him he’d been thinking a lot about the professor’s symposium presentation back at Caltech the previous fall, entitled “The General and Logical Theory of Automata” (Jeffress 1951).
Automata derives from Greek. It’s the plural of automaton: an unconscious, inanimate thing that is self-moving, self-acting, even self-replicating. While capable of expressing novel behavior, “thinking,” automata would be theoretically programmable and unconscious. They would be zombie machines that acted like brains and minds but were not alive. Problem-solving servants, not problem-solving persons.
In theory.
Why was von Neumann involved in automata? Didn’t he have enough on his plate what with the EDVAC and nuclear fission and fusion projects?
In the spring of 1945, just before the A-bomb detonations, von Neumann confessed the monstrous achievement to his wife Klari. As Ananyo Bhattacharya reports in his biography The Man from the Future: The Visionary Life of John von Neumann:
But then von Neumann abruptly switched from talking about the power of the atom to the power of machines that he thought were “going to become not only more important but indispensable.”
“We will be able to go into space way beyond the moon if only people could keep pace with what they create,” he said. And he worried that if we did not, those same machines could be more dangerous than the bombs he was helping to build.
“While speculating about the details of future technical possibilities,” Klari continues, “he got himself into such a dither that I finally suggested a couple of sleeping pills and a very strong drink to bring him back to the present and make him relax a little about his own predictions of inevitable doom.”
Whatever the nature of the vision that possessed him that night, von Neumann decisively turned away from the pure maths to focus single-mindedly
on bringing the machines he feared into being. “From here on,” Klari concludes, “Johnny’s fascination and preoccupation with the shape of things to come never ceased” (2022, 102–103).
Now four years later, in the fall of 1949, on the Princeton campus, an excited young man approaches von Neumann, referencing specifics of his recent work, intent on discussing the very thing that had seized the forefront of his imagination.
Von Neumann stopped and listened to what his fellow automata nerd had to say.
Excitedly, the young man proposed to von Neumann that perhaps simulating behavior, the brain-to-mind input process and mind-to-brain output process, involved not one but two systems, specifically “two interacting finite automata, one playing the role of a brain and the other playing the role of the environment” (Nilsson 2012, 3).
Entertaining the ideas of an overexcited naïf who was expounding on concepts he himself had been pondering for decades and had recently placed at the top of his cognitive stack, von Neumann did what any wise professor would do under the circumstances.
He nodded encouragingly.
The young man sputtered, “Even if the ‘brain automaton’ could be made to act intelligently, its internal structure wouldn’t be an explicit representation of human knowledge.” The young man thought that somehow brains did explicitly represent and reason about “knowledge” (Nilsson 2012, 3). But these representations were not part of the internal organization itself: the neurons, or their interlinked networks, that made up the overall structure. There must be two processes, not one.
Considering the propositions, crunching the numbers, the towering figure, the wizardly polymath himself, John von Neumann said, “Write it up.” Then he went on his way (Nilsson 2012, 3).