In a recent Atlantic piece, Adam Kirsch examines developments in brain research that propose the possibility of uploading the complete human mind. Such an operation would involve a brain scanner able to detect and record an information pattern called the “connectome”: the map of the brain’s neural connections through its synapses, across all its levels and modes. All human cognition arises from these dynamic interactions. This map, the wiring diagram of the brain’s workings, is analogous to the human genome. The result would be an artificial substrate for thought, emotion, and reasoning that could replicate the thinking, feeling, and experience of a whole brain (almost more real than real), or at least a resource for connecting human sense-making with machine learning.
An uploaded mind won’t dwell in the same environment as we do, but that’s not necessarily a disadvantage. On the contrary, because a virtual environment is much more malleable than a physical one, an uploaded mind could have experiences and adventures we can only dream of, like living in a movie or a video game (“The End of Us,” Atlantic, Jan/Feb 2023, pp. 64-65).
This complete artificial intelligence, drawing on every affordance of human thinking, would enable a powerful merging of human and machine intelligence. In the investment world, AI has already demonstrated computer intelligence superior to human hunches in reading the market and tracking its movements. This intelligence is based on projecting the past, in fine-grained detail, into the future, incorporating more factors than even the best investors can recognize and trace. “The challenge facing the investment world is that the human mind has not become any better than it was a hundred years ago …the time will come when no human investment manager will be able to beat the computer” (Institutional Investor’s Alpha, online journal for the hedge fund industry).
However, the brain is organic, and its structures and dynamics are not computer programs. While a computer can beat the best human players at chess, Go, and even Jeopardy, we have yet to see computer programs perfect self-driving cars, body mobility, long-term planning, or hiring decisions. Herbert Simon, the political scientist who coined the term “bounded rationality” (1957), did so to counter the economics model of the completely rational brain (“rational man”) making purely rational money decisions. But Simon’s term can also describe the limitations of machines in achieving artificial general intelligence: as machines, they are severely limited in replicating human cultural and common-sense knowledge, cause and effect, symbolic recognition, implication finding, future projection, and decision making. This is why the simple ideal image of enhanced human thinking is a human being using a calculator. The interactive power of the digital plus the neural appears to offer the best promise of enhanced decision making, based on what each does best.
A few facts about the brain: one problem is that no grand unified theory of intelligence yet exists, and it takes mega-computing power even to approach simulating the general-intelligence scenarios we take for granted, such as meeting new people, learning a new language, telling jokes, handling a crisis (mental or physical), and dealing with the unknown outcomes of a new decision. Any single change or experience engages thousands of neurons out of the brain’s store of 86 billion, which together have a potential of some 100 trillion interconnections. The European Union launched the Human Brain Project in 2013 with the goal of a complete simulation of the entire human brain by 2023 (Epstein, “The Empty Brain,” Aeon, 2016). This has yet to be achieved.
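The scale of those figures can be checked with quick arithmetic. A back-of-envelope sketch (the per-neuron average is derived from the two numbers above, not stated in the source):

```python
# Back-of-envelope scale of the human connectome, using the figures
# cited above: ~86 billion neurons and ~100 trillion synaptic links.
neurons = 86_000_000_000
synapses = 100_000_000_000_000

# Derived estimate: average synaptic connections per neuron.
avg_connections = synapses / neurons
print(f"~{avg_connections:,.0f} connections per neuron")  # ~1,163
```

Even this crude average, over a thousand connections per neuron, hints at why a complete simulation by 2023 proved out of reach.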
That is because the human cognitive system is not just an information processor but something far more layered and interactive: a sophisticated universe of connected thinking and emotion. This includes informal logic, seeing the viewpoints of others (theory of mind), understanding implications, nuance, multiple interacting variables, modes and layers of reality, and hyperreality. Even a three-year-old’s cognition outstrips the capacity of sophisticated computer programs to read cultural reality.
Cade Metz, writing on the use of AI in medicine (AI, 2020), notes of the current state of the art: “Able to recognize patterns in data that humans could never identify on their own, [computer] neural networks can be enormously powerful in the right situation. But even experts have difficulty understanding why such networks make particular decisions and how they teach themselves.”
No computer program has yet been able to replicate the activity and accomplishments of human neural networks: the thousands of neurons involved in change, experience, and memory operate instinctively in humans but must be taught (by humans) to computers as deep learning. Computers operate by fixed focus on well-defined tasks; at the other end of the scale, humans use the whole brain (the model that Whole Brain Emulation machines follow) to deal with change, adaptability, and problems we have never encountered before, in situations that are also unique, with incomplete information and unknowable outcomes. Ever since we first emerged as Homo sapiens, we have been trying to find ways to understand our own intelligence and the brain that centers it.
Speech engines are one such attempt to understand natural language, as in voice recognition and translation. Language is a complex program in itself, like the brain, with multiple modes, levels, rules, and styles depending on purpose, context (both written and spoken), and the social relations involved. Because of this complexity, interpreting language intent requires a breadth of expression and meaning that stalls the computer, while the nimble brain fills in all the gaps creatively. Deep neural networks are now showing greater sophistication in handling the complexity of language analysis.
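The gap-filling problem can be made concrete with a toy sketch. The following is not any real speech engine, and the intent names and keyword lists are invented for illustration; it shows how a literal keyword matcher handles clear utterances but has no way to resolve the ambiguity a human settles from context:

```python
# Toy keyword-based intent matcher (hypothetical intents and keywords).
# Real speech engines are far more sophisticated; this only illustrates
# the limits of literal pattern matching on ambiguous language.
INTENTS = {
    "travel": {"book", "flight", "hotel"},
    "reading": {"book", "novel", "chapter"},
}

def classify(utterance: str) -> str:
    """Return the intent whose keyword list best overlaps the utterance."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    return max(scores, key=scores.get)

print(classify("book my flight to rome"))   # travel
print(classify("recommend a good novel"))   # reading
# But "I want a book" scores both intents equally: the matcher must
# guess, where the brain fills the gap from context without effort.
```

The ambiguous word “book” is exactly where the rule-based approach stalls; statistical and deep-learning models narrow, but do not close, this gap.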