Cambridge scientists have shown that imposing physical constraints on an artificially intelligent system – in the same way that the human brain must develop and function within physical and biological constraints – allows it to develop characteristics of the brains of complex organisms in order to solve tasks.
As neural systems such as the brain organize themselves and make connections, they must balance competing demands. For example, energy and resources are needed to grow and maintain the network in physical space, while at the same time the network must be optimized for information processing. This trade-off shapes all brains, within and across species, which may explain why many brains converge on similar organizational solutions.
Jascha Achterberg, Gates Research Fellow at the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge, said: “Not only is the brain excellent at solving complex problems, it does so using very little energy. With our new work, we show that considering the brain’s problem-solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look the way they do.”
Co-lead author Dr Danyal Akarca, also from MRC CBSU, added: “This follows from a general principle: biological systems generally evolve to make the most of the energy resources available to them. The solutions they arrive at are often very elegant and reflect the trade-offs between the different forces imposed on them.”
In a study published in Nature Machine Intelligence, Achterberg, Akarca and their colleagues created an artificial system intended to model a much-simplified version of the brain, and applied physical constraints to it. They found that the system developed certain key characteristics and tactics similar to those found in human brains.
Instead of real neurons, the system used computational nodes. Neurons and nodes serve a similar function, in that each takes an input, transforms it, and produces an output, and a single node or neuron may connect to multiple others, with all incoming information being computed together.
However, in their system, the researchers applied a “physical” constraint. Each node was assigned a specific location in a virtual space, and the farther apart two nodes were, the more difficult it was for them to communicate. This is similar to how neurons in the human brain are organized.
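A minimal sketch of what such a spatial embedding could look like in code (the node count, coordinate range and use of plain NumPy are illustrative assumptions, not details from the paper): each node is pinned to a fixed position, and the pairwise distances between those positions are what later make long-range communication costly.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 100                                      # illustrative network size
coords = rng.uniform(0.0, 1.0, size=(n_nodes, 3))  # fixed 3-D position for each node

# Euclidean distance between every pair of nodes; the larger the distance,
# the more "expensive" a connection between the two nodes will be treated as.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

print(dist.shape)  # (100, 100): symmetric, with zeros on the diagonal
```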
The researchers gave the system a simple task to complete: in this case, a simplified version of a maze navigation task typically given to animals such as rats and macaques in brain studies, where it must combine several pieces of information to decide on the shortest route to reach the end point.
One of the reasons the team chose this particular task is that, to complete it, the system must hold a number of elements in mind (start location, end location and intermediate steps), and once it has learned to perform the task reliably, it is possible to observe, at different points during a trial, which nodes are important. For example, one particular group of nodes may encode the arrival locations, while others encode the available routes, and it is possible to track which nodes are active at different stages of the task.
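As a rough illustration of the target behaviour only (not the study's actual training setup), the shortest route through a small grid maze can be computed directly with a breadth-first search; the network has to learn to produce the equivalent choice from its inputs rather than run such a search explicitly.

```python
from collections import deque

def shortest_route(grid, start, goal):
    """Breadth-first search over a small grid maze: 0 = free cell, 1 = wall."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_route(maze, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```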
Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback, it gradually learns to get better at it. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.
With their system, however, the physical constraint meant that the further apart two nodes were, the more difficult it was to build a connection between them in response to this feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.
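One simple way to express such a constraint in code is to add a penalty to the training loss that scales each connection's cost by the distance between its two nodes. The sketch below assumes the `dist` matrix from the earlier snippet and a plain distance-weighted L1 term; the published model's regulariser is more elaborate, so this only conveys the basic idea.

```python
import numpy as np

def spatial_penalty(weights, dist, strength=0.01):
    """Hypothetical distance-weighted L1 penalty on the recurrent weights.

    Long-range connections contribute more to the cost, so an optimiser
    minimising (task_loss + spatial_penalty) is pushed to keep far-apart
    nodes weakly connected, or to prune those connections entirely.
    """
    return strength * np.sum(np.abs(weights) * dist)

# During training, the total loss would be combined roughly as:
#   loss = task_loss + spatial_penalty(recurrent_weights, dist)
# so gradient descent trades task accuracy against wiring cost.
```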
When the system was asked to perform the task under these constraints, it used some of the same tricks that real human brains use to solve the task. For example, to get around the constraints, the artificial system began to develop hubs, highly connected nodes that serve as conduits for passing information across the network.
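After training, such hubs can be spotted with a simple measure like total connection strength; the sketch below uses that measure purely for illustration (network neuroscience offers several alternative hub definitions, and the paper's exact analysis may differ).

```python
import numpy as np

def find_hubs(weights, top_k=5):
    """Return the indices of the top_k most strongly connected nodes.

    'Strength' here is the sum of absolute incoming and outgoing weights,
    a simple proxy for how much traffic a node can relay.
    """
    strength = np.abs(weights).sum(axis=0) + np.abs(weights).sum(axis=1)
    return np.argsort(strength)[::-1][:top_k]
```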
What is more surprising, however, is that the response profiles of individual nodes themselves began to change: in other words, rather than having a system in which each node codes for one particular property of the maze task, such as the goal location or the next choice, the nodes developed a “flexible coding scheme”. This means that, at different moments in time, a node might fire for a mix of the maze's properties. For example, the same node might be able to encode multiple locations in the maze, rather than specialized nodes being needed to encode specific locations. This is another feature seen in the brains of complex organisms.
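A crude way to probe such flexible coding is to ask, for each node, how many different task variables its activity tracks. The sketch below uses simple correlations and an arbitrary threshold, which are illustrative choices rather than the paper's method.

```python
import numpy as np

def selectivity_profile(activity, task_variables, threshold=0.2):
    """List which task variables a single node's activity correlates with.

    activity:       (n_trials,) array of the node's responses
    task_variables: dict mapping a variable name (e.g. 'goal location',
                    'next choice') to an (n_trials,) array of values

    A node that correlates with several variables is 'mixed selective';
    one that correlates with only a single variable is a specialist.
    """
    return [name for name, values in task_variables.items()
            if abs(np.corrcoef(activity, values)[0, 1]) > threshold]
```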
Co-author Professor Duncan Astle, from the Cambridge Department of Psychiatry, said: “This simple constraint – it is more difficult to wire nodes that are far apart – forces artificial systems to produce quite complex features. Interestingly, these are features shared by biological systems like the human brain. I think this tells us something fundamental about why our brains are organized the way they are.”
Understanding the human brain
The team hopes their AI system could begin to shed light on how these constraints shape differences between people’s brains and contribute to differences seen in those experiencing cognitive or mental health difficulties.
MRC CBSU co-author Professor John Duncan said: “These artificial brains allow us to understand the rich and puzzling data we see when the activity of real neurons is recorded in real brains.”
Achterberg added: “Artificial ‘brains’ allow us to ask questions that would be impossible to examine in a real biological system. We can train the system to perform tasks and then experiment with the constraints we impose, to see whether it starts to look more like the brains of particular individuals.”
Implications for the design of future AI systems
The results are likely to be of interest to the AI community as well, as they could enable the development of more efficient systems, particularly in situations where there are likely to be physical constraints.
Dr Akarca said: “AI researchers are constantly trying to work out how to create complex neural systems that can encode and operate flexibly and efficiently. To achieve this, we believe that neurobiology will give us a lot of inspiration. For example, the overall wiring cost of the system we created is much lower than that of a typical AI system.”
Many modern AI solutions involve the use of architectures that only superficially resemble a brain. The researchers say their work shows that the type of problem solved by AI will influence the most powerful architecture to use.
Achterberg said: “If you want to build an artificially intelligent system that solves problems similar to those humans solve, then ultimately the system will end up looking much more like a real brain than systems running on large computing clusters that specialize in tasks very different from the ones humans perform.”
This means that robots that must process a large amount of constantly changing information with limited energy resources could benefit from brain structures similar to ours.
Achterberg added: “The brains of robots deployed in the real physical world will likely look more like our brains, because they might face the same challenges as us.”
“They must constantly process new information coming in through their sensors while controlling their bodies to move through space toward a goal. Many of these systems will need to run all their computations on a limited supply of electrical energy, and so, to balance these energy constraints with the amount of information they need to process, they will probably need a brain structure similar to ours.”
More information:
Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings, Nature Machine Intelligence (2023). DOI: 10.1038/s42256-023-00748-9
Journal information:
Nature Machine Intelligence