A growing number of neuroscientists believe that our brain is a kind of “prediction machine”: it anticipates what is going to happen before it happens, and our perceptions are, in part, hypotheses. Experiments by computational neuroscientists with artificial neural networks (the components of AI algorithms) suggest that brains evolved as prediction machines to optimize their energy consumption. Life evolved by striking a delicate balance between computation and the energy that could be spent on it. By interweaving genes and forms, digital and analog, hardware and software, reason and emotion, the universe created a biological intelligence that uses far less energy than our digital computers.
Artificial intelligence progressed very differently from biological intelligence, obeying the laws of scientific geopolitics and industrial competition almost as much as those of physical science. The pioneers Alan Turing and John von Neumann, inspired by human biology and by Kurt Gödel’s mathematics on the limits of logic and of algorithms for understanding reality, built the first digital computers (funded by the Manhattan Project and by military money during the Cold War). Thanks to semiconductor physics, computers expanded their computing capability as chips shrank. The memory and computing capacity of microprocessors doubled every two years between 1980 and 2010. This created a separation between the work of chip manufacturers (hardware) and that of developers of software and algorithms. We scientists and programmers have become accustomed to thinking only about the algorithm, assuming it will run on machines capable of computing anything we throw at them.
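That doubling claim compounds quickly: thirty years at one doubling per two-year period is fifteen doublings, a factor of 2^15 = 32,768. A quick sanity check of the arithmetic:

```python
# Compounding of "doubles every two years" between 1980 and 2010.
years = 2010 - 1980          # 30 years
doublings = years // 2       # one doubling per two-year period -> 15
growth_factor = 2 ** doublings
print(doublings, growth_factor)  # prints "15 32768"
```

In other words, a capacity growth of more than thirty-thousand-fold over the period, which is what let software developers stop worrying about the hardware underneath.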
But we are reaching the limits of this model. On one hand, chips cannot be made much smaller (the 2-nanometer scale has already been reached; there is little room left to shrink). On the other, only Taiwan and South Korea know how to manufacture the most advanced chips, which creates a precarious geopolitical situation. And there is yet another problem: energy consumption, which is starting to become an insurmountable obstacle for fragile globalized production chains. An estimated 3% of the world’s electricity is consumed by data centres, more than the electricity used by the whole of the UK. Projections suggest this will rise to 13% by 2030.
The supercomputers we use to model the weather and to design medicines, planes and cars each consume as much electricity as a city of 10,000 inhabitants. The Summit supercomputer at Oak Ridge National Laboratory, for example, produces annual CO2 emissions equivalent to more than 30,000 round-trip flights between Washington and London. A single training run of a powerful AI algorithm (for example, a language translator) costs $4 million in electricity bills. A single cryptocurrency transaction uses as much electricity as a typical household does in a week.
These exorbitant costs can, and should, be reined in. Scientists are trying to rectify the situation, though in a disorganized way. Although little unites them, they look for inspiration to living structures, which are capable of computing with very little energy. Algorithm designers try to incorporate the brain’s predictive ability, mentioned at the beginning, and the physics of the process itself, to reduce the number of AI parameters. But the powerful are not running in that direction: the race for “artificial superintelligence” began in 2020, when OpenAI, co-founded by Elon Musk, unveiled GPT-3, with 175 billion parameters in its algorithm. It was followed in 2021 by Google (with 1.6 trillion parameters) and the Beijing Academy of Artificial Intelligence (with 1.75 trillion). But it is not clear that this route of ever-larger algorithms will lead to superintelligence, as energy consumption sets a limit that cannot be crossed.
Some scientists are clear about it: the only way forward is to revisit biology. As in our brains, hardware and algorithms/software must be closely related. One particularly interesting area that is starting to gain traction is “neuromorphic chips.” Neuromorphic designs mimic the architecture of the gelatinous blob in our heads, which places computing units alongside memory. Researchers use analog computing, which can process continuous signals the way real neurons do. Several analog neuromorphic computers are already in operation, two examples being NeuroRAM in the US (which uses 1,000 times less energy than a digital chip) and NeuroGrid from Stanford’s “brain in silicon” project. In Europe, IMEC built the world’s first self-learning neuromorphic chip and demonstrated its ability to learn to compose music. It is unclear how these new systems will reach the real world. The problem is that designing hardware is risky and expensive: developing a new chip costs $30–80 million and takes two to three years.
Perhaps it is precisely the geopolitical situation, as at the birth of the first computers, that will push us forward. China sees neuromorphic computing as one of the areas where it can surpass current digital systems, and has dedicated laboratories at all of its major universities. In the US, the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO) and other military institutions are already developing and funding neuromorphic hardware for use in combat. Applications include smart headsets and glasses, drones and robots.
In an unstable world once again threatened by war, geopolitics may inspire us to reinvent computing and to reconnect with Gödel, Turing and von Neumann, pushing beyond our limits. As they well knew, reality cannot be fully simulated by digital algorithms. We return to the physics of reality, which has always escaped the full control of human logic, to try to move forward.
You can follow EL PAÍS Tecnología on Facebook and Twitter, or sign up here to receive our weekly newsletter.