Like a Child, This Brain-Inspired AI Can Explain Its Reasoning

Children are natural scientists. They observe the world, form hypotheses, and test them out. Eventually, they learn to explain their (sometimes endearingly hilarious) reasoning.

AI, not so much. There’s no doubt that deep learning—a type of machine learning loosely based on the brain—is dramatically changing technology. From predicting extreme weather patterns to designing new medications or diagnosing deadly cancers, AI is increasingly being integrated at the frontiers of science.

But deep learning has a massive drawback: The algorithms can’t justify their answers. Often called the “black box” problem, this opacity stymies their use in high-risk situations, such as in medicine. Patients want an explanation when diagnosed with a life-changing disease. For now, deep learning-based algorithms—even if they have high diagnostic accuracy—can’t provide that information.

To open the black box, a team from the University of Texas Southwestern Medical Center tapped the human mind for inspiration. In a study in Nature Computational Science, they combined principles from the study of brain networks with a more traditional AI approach that relies on explainable building blocks.

The resulting AI acts a bit like a child. It condenses different types of information into “hubs.” Each hub is then transcribed into coding guidelines for humans to read—like CliffsNotes for programmers, explaining in plain English the conclusions the algorithm drew from patterns in the data. It can also generate fully executable programming code to try out.

Dubbed “deep distilling,” the AI works like a scientist when challenged with a variety of tasks, such as difficult math problems and image recognition. By rummaging through the data, the AI distills it into step-by-step algorithms that can outperform human-designed ones.

“Deep distilling is able to discover generalizable principles complementary to human expertise,” wrote the team in their paper.

Paper Thin

AI sometimes blunders in the real world. Take robotaxis. Last year, some repeatedly got stuck in a San Francisco neighborhood—a nuisance to locals, though still good for a chuckle. More seriously, self-driving vehicles have blocked traffic and ambulances and, in one case, severely injured a pedestrian.

In healthcare and scientific research, the dangers can be high too.

When it comes to these high-risk domains, algorithms “require a low tolerance for error,” the American University of Beirut’s Dr. Joseph Bakarji, who was not involved in the study, wrote in a companion piece about the work.

The barrier for most deep learning algorithms is their inexplicability. They’re structured as multi-layered networks. By taking in tons of raw information and receiving countless rounds of feedback, the network adjusts its connections to eventually produce accurate answers.

This process is at the heart of deep learning. But it struggles when there isn’t enough data or if the task is too complex.

Back in 2021, the team developed an AI that took a different approach based on “symbolic” reasoning: the neural network encodes explicit rules and experiences by observing the data.

Compared to deep learning, symbolic models are easier for people to interpret. Think of the AI as a set of Lego blocks, each representing an object or concept. They can fit together in creative ways, but the connections follow a clear set of rules.
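For a concrete feel for that difference, here is a toy rule-based classifier in Python. It is not from the study—the animal features and rules are invented purely for illustration—but it shows why symbolic models are easier to interpret: every prediction can be traced back to an explicit, human-readable rule rather than to millions of opaque weights.

```python
# A toy rule-based (symbolic) classifier. Every decision can be traced to an
# explicit, human-readable rule, unlike the learned weights of a deep network.
# The features and rules below are invented purely for illustration.
RULES = [
    ("has whiskers and retractable claws",
     lambda f: f["whiskers"] and f["retractable_claws"], "cat"),
    ("has whiskers but no retractable claws",
     lambda f: f["whiskers"] and not f["retractable_claws"], "dog"),
]

def classify(features: dict):
    """Return a label along with the exact rule that produced it."""
    for description, condition, label in RULES:
        if condition(features):
            return label, f"matched rule: {description}"
    return "unknown", "no rule matched"

print(classify({"whiskers": True, "retractable_claws": True}))
# -> ('cat', 'matched rule: has whiskers and retractable claws')
```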

By itself, the AI is powerful but brittle. It heavily relies on previous knowledge to find building blocks. When challenged with a new situation without prior experience, it can’t think out of the box—and it breaks.

Here’s where neuroscience comes in. The team was inspired by connectomes, which are models of how different brain regions work together. By meshing this connectivity with symbolic reasoning, they made an AI that has solid, explainable foundations, but can also flexibly adapt when faced with new problems.

In several tests, the “neurocognitive” model beat other deep neural networks on tasks that required reasoning.

But can it make sense of data and engineer algorithms to explain it?

A Human Touch

One of the hardest parts of scientific discovery is observing noisy data and distilling a conclusion. This process is what leads to new materials and medications, deeper understanding of biology, and insights about our physical world. Often, it’s a repetitive process that takes years.

AI may be able to speed things up and potentially find patterns that have escaped the human mind. For example, deep learning has been especially useful in the prediction of protein structures, but its reasoning for predicting those structures is tricky to understand.

“Can we design learning algorithms that distill observations into simple, comprehensive rules as humans typically do?” wrote Bakarji.

The new study took the team’s existing neurocognitive model and gave it an additional talent: The ability to write code.

Called deep distilling, the AI groups similar concepts together, with each artificial neuron encoding a specific concept and its connection to others. For example, one neuron might learn the concept of a cat and know it’s different from a dog. Another type of neuron handles variability when challenged with a new picture—say, a tiger—to determine whether it’s more like a cat or a dog.

These artificial neurons are then stacked into a hierarchy. With each layer, the system increasingly differentiates concepts and eventually finds a solution.

Instead of having the AI crunch as much data as possible, the training is step-by-step—almost like teaching a toddler. This makes it possible to evaluate the AI’s reasoning as it gradually solves new problems.

Unlike standard neural network training, the self-explanatory aspect is built directly into the AI, explained Bakarji.

In a test, the team challenged the AI with Conway’s Game of Life, a classic cellular automaton. First developed in 1970, the game evolves a grid of digital cells into various patterns according to a simple set of rules (try it yourself here). Trained on simulated gameplay data, the AI was able to predict potential outcomes and translate its reasoning into human-readable guidelines or executable computer code.
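For readers who want a feel for the task, here is a minimal Python implementation of the game’s standard rules. It is a reference sketch for experimenting on your own, not the code generated by deep distilling.

```python
# Conway's Game of Life: a minimal implementation of the standard rules.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """Advance a 2D grid of 0s (dead) and 1s (alive) by one generation."""
    # Count the eight neighbors of every cell by summing shifted copies
    # of the grid (edges wrap around, i.e. a toroidal board).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Classic rules: a live cell survives with 2 or 3 neighbors;
    # a dead cell becomes alive with exactly 3 neighbors.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Example: a "glider," a small pattern that travels across the board.
board = np.zeros((8, 8), dtype=int)
board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
for _ in range(4):
    board = step(board)
print(board)
```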

The AI also worked well in a variety of other tasks, such as detecting lines in images and solving difficult math problems. In some cases, it generated creative computer code that outperformed established methods—and was able to explain why.

Deep distilling could be a boost for physical and biological sciences, where simple parts give rise to extremely complex systems. One potential application for the method is as a co-scientist for researchers decoding DNA functions. Much of our DNA is “dark matter,” in that we don’t know what—if any—role it has. An explainable AI could potentially crunch genetic sequences and help geneticists identify rare mutations that cause devastating inherited diseases.

Outside of research, the team is excited at the prospect of stronger AI-human collaboration.

“Neurosymbolic approaches could potentially allow for more human-like machine learning capabilities,” wrote the team.

Bakarji agrees. The new study goes “beyond technical advancements, touching on ethical and societal challenges we are facing today.” Explainability could work as a guardrail, helping AI systems sync with human values as they’re trained. For high-risk applications, such as medical care, it could build trust.

For now, the algorithm works best when solving problems that can be broken down into concepts. It can’t deal with continuous data, such as video streams.

That’s the next step in deep distilling, wrote Bakarji. It “would open new possibilities in scientific computing and theoretical research.”

Image Credit: 7AV 7AV / Unsplash 
