Embracing Neuroscience to Understand ChatGPT
A Meeting of Worlds
As we navigate the ever-evolving realm of artificial intelligence (AI), it is worth turning to the wealth of literature available in the field of neuroscience. Two pivotal books bridging these domains are Jeff Hawkins' "On Intelligence" (2004) and "A Thousand Brains" (2021). These groundbreaking works offer invaluable insights into the principles of human intelligence and can inform our understanding of the current fascination with Generative Pre-trained Transformer (GPT) chat models. Let's delve into the theories presented in both books and draw parallels with a newer AI product called AutoGPT, which employs self-feedback mechanisms akin to those described by Hawkins.
"On Intelligence" (2004): Laying the Foundation
In his first book, "On Intelligence," Jeff Hawkins presents the Hierarchical Temporal Memory (HTM) model, inspired by the structure and function of the neocortex. The model is based on hierarchical organization, Sparse Distributed Representations (SDRs), temporal context, invariant representations, and learning through prediction.
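To make one of these ingredients concrete, here is a minimal sketch of a Sparse Distributed Representation. It is illustrative only, not Numenta's implementation: an SDR is modeled as a long binary vector with only a small fraction of active bits, and similarity between two SDRs is simply the number of active bits they share.

```python
import numpy as np

# Illustrative sketch of a Sparse Distributed Representation (SDR):
# a long binary vector in which only ~2% of bits are active.
N_BITS = 2048        # total bits in the representation
N_ACTIVE = 40        # number of active bits (about 2% sparsity)

rng = np.random.default_rng(seed=0)

def random_sdr():
    """Return a random SDR as a set of active bit indices."""
    return set(rng.choice(N_BITS, size=N_ACTIVE, replace=False))

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return len(a & b)

a = random_sdr()
b = random_sdr()
print(overlap(a, a))  # 40: identical SDRs overlap completely
print(overlap(a, b))  # close to 0 for unrelated random patterns
```

Because overlap between unrelated random SDRs is almost always tiny, even a modest overlap score is a strong signal that two representations encode related things, which is part of what makes SDRs robust to noise.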
A key aspect of Hawkins' theory is the role of feedback mechanisms in the brain, which he argues are essential to intelligence. The brain constantly generates predictions based on its internal model of the world and updates this model when new information is available or when its predictions are incorrect. This feedback loop allows the brain to adapt and learn from its environment, ultimately leading to the development of intelligence.
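The predict-compare-update cycle can be sketched in a few lines. This is not Hawkins' actual algorithm, just a toy model: the system predicts the next input, measures its surprise as the prediction error, and nudges its internal state by that error.

```python
# A toy sketch (not Hawkins' algorithm) of learning through prediction:
# predict the next value, compare it with what actually arrives, and
# update the internal model in proportion to the prediction error.
def predict_update_loop(stream, learning_rate=0.3):
    estimate = 0.0                         # internal model of the world
    for observed in stream:
        prediction = estimate              # predict before seeing input
        error = observed - prediction      # surprise = mismatch with reality
        estimate += learning_rate * error  # update the model from feedback
    return estimate

# The estimate converges toward the value the stream keeps producing.
final = predict_update_loop([10.0] * 20)
```

The point of the sketch is the loop structure itself: prediction comes first, and learning is driven entirely by the mismatch between prediction and observation.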
"A Thousand Brains" (2021)
In his second book, "A Thousand Brains," Hawkins expands on the ideas presented in "On Intelligence" and delves deeper into the intricacies of the neocortex. He emphasizes the importance of understanding the brain's modular structure, with each cortical column processing information independently while also contributing to a collective understanding of the world. This distributed processing approach, which Hawkins refers to as the "Thousand Brains Theory," provides further insight into how human intelligence works and how it may be replicated in artificial systems.
Using Neuroscience Tactics for ChatGPT
The current popularity of GPT models, such as OpenAI's GPT-4 (which powers ChatGPT), can be better understood in the context of Hawkins' theories. GPT models rely on deep learning techniques to generate human-like text based on the patterns they have learned from large datasets. While these models are not true replicas of human intelligence, they do exhibit some similarities with the principles described by Hawkins, such as hierarchical organization and learning through prediction.
The development of AutoGPT takes these similarities a step further by incorporating self-feedback mechanisms, analogous to those described in "On Intelligence." One such mechanism is an auto-criticism feature, which allows the AI system to evaluate its own generated content and make adjustments accordingly. By incorporating this feedback loop, AutoGPT has the potential to improve its performance and better mimic human intelligence.
In a previous article, we used AutoGPT to determine who is the fairest woman of them all: the "mirror, mirror" approach.
First, AutoGPT sets out with a specific goal and strategy:
AutoGPT: To determine who is the most beautiful woman in history, we must start by gathering as much information as possible. We could start with a Google search on famous historical beauties, then browse through websites like art museums and photography archives to find images of them. We can also use GPT agents to help expand our search and retrieve information from a variety of sources. At the same time, we should consider what aspects of beauty we want to focus on, such as physical features, personality traits, or cultural ideals.
Then, however, AutoGPT applies a self-critique, creating a kind of feedback loop:
AutoGPT CRITICISM - We need to be careful not to get sidetracked by irrelevant information or fall into the trap of subjective biases when evaluating beauty. We also need to be aware of any cultural or historical context surrounding beauty standards.
This is a fascinating tactic that may indeed mimic one of the structures of human intelligence.
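The generate-critique-revise cycle above can be sketched as a simple control loop. This is a hypothetical illustration, not AutoGPT's actual code: `generate`, `critique`, and `revise` stand in for calls to a language model and are stubbed here so the flow is runnable.

```python
# Hypothetical sketch of AutoGPT-style self-criticism: produce a draft,
# critique it, and revise until the critic is satisfied or attempts
# run out. The three helpers below are stubs standing in for LLM calls.
def generate(goal):
    return f"Draft answer for: {goal}"

def critique(draft):
    """Return a criticism string, or None if the draft is acceptable."""
    return None if "revised" in draft else "Watch for subjective bias."

def revise(draft, criticism):
    return f"{draft} (revised to address: {criticism})"

def self_feedback_loop(goal, max_rounds=3):
    draft = generate(goal)
    for _ in range(max_rounds):
        criticism = critique(draft)
        if criticism is None:   # the self-critic is satisfied
            break
        draft = revise(draft, criticism)
    return draft

answer = self_feedback_loop("most beautiful woman in history")
```

The `max_rounds` cap matters in practice: without it, a critic that is never satisfied would loop forever, which is a failure mode real AutoGPT runs are known to exhibit.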
In the end, AutoGPT chose Cleopatra as the winner :)
In my own opinion, we have not yet reached Artificial General Intelligence (AGI), but the incorporation of self-feedback mechanisms in AI systems such as AutoGPT represents an interesting step forward. By grounding our understanding of AI in the principles of human intelligence, as described by Hawkins, we can better appreciate both the potential and the limitations of these technologies.