From Dogs to Consciousness: Why Everything is in a Graph


Category: AI Research

Tags: AI, brain, consciousness, graph, learning

Entities: Brain Simulator 3, Charles Simon, ChatGPT, Future AI Society, Universal Knowledge Store


Summary

    Brain and AI Comparison
    • Charles Simon discusses the advantages of the human brain over AI, particularly in energy efficiency, common sense, and one-shot learning.
    • The brain organizes signals into thoughts through a graph-like structure, with nodes representing clusters of neurons and edges as synapses.
    Graph Structure in the Brain
    • The brain's graph structure supports object-attribute associations, reverse relationships, and inheritance with exceptions.
    • This structure allows for efficient data compression and fast retrieval of information.
    • Agents in the brain help maintain relevance by bubbling up attributes and pruning outdated information.
    Human Learning and Imagination
    • Human learning is efficient, allowing for quick generalization from limited experiences due to the graph structure.
    • Imagination is the ability to simulate outcomes in the mental graph, facilitating understanding and prediction.
    Consciousness and the Graph
    • The graph structure is vital for consciousness, enabling a mental model with the self at the center and supporting bi-directional reasoning.
    AI Limitations and Future Directions
    • Current AI techniques cannot be directly implemented in the brain due to speed and mechanism limitations.
    • Brain Simulator 3 and the Universal Knowledge Store aim to integrate these graph-based features into AI for a more human-like intelligence.
    Takeaways
    • The brain's graph structure is key to its efficiency and flexibility.
    • Understanding and imagination are integral to human cognition and stem from the brain's graph organization.
    • AI's current limitations highlight the need for graph-based approaches to achieve more natural intelligence.
    • Consciousness requires a dynamic, self-centered graph structure.
    • Participate in the Future AI Society to explore these concepts further.

    Transcript

    00:00

    Welcome back to the channel. Our brains have some huge advantages over today's AI, particularly in the areas of energy efficiency, common sense, and one-shot learning.

    So, can we look at the human

    00:17

    brain and copy its design for better AI? We know the brain contains about 86 billion spiking neurons, each connected by synapses to possibly thousands of others.

    But knowing about neurons and

    00:32

    synapses doesn't explain how we think, how we reason, or how we understand the world. If we step back, we find that there is only one structure that could possibly organize this flood of signals into thought, and that's a mathematical

    00:50

    graph. A graph is a set of nodes connected by edges.

    In the brain, the nodes are clusters of neurons, and the edges are the many synapses linking them. The graph may sound like a mathematical abstraction, but in

    01:05

    reality, it's the most natural explanation. It's the only way the brain can represent what we so clearly observe in everyday human behavior.

    I'm Charles Simon, longtime AI researcher, software developer, and

    01:21

    manager. Beyond AI, I've developed software for neurological test instruments and neuro simulators.

    I created the Future AI Society to explore how neuroscience can inform smarter, more human-like AI.

    01:38

    I'm using our open-source Brain Simulator project for simulations and demonstrations throughout this video series. Think about what you know.

    You don't think in terms of neural spikes. You

    01:54

    think in terms of objects and attributes. A dog has fur, four legs and a tail.

    Your friend John is tall, wears glasses, and likes jazz. These are simple object attribute associations, and the only neuron-based

    02:10

    structure that can support them with flexibility and retrieve them in a fraction of a second is the graph. Each concept is a node and the connections to other nodes represent relationships or attributes.
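    [Editor's illustration] This object-attribute structure is easy to sketch in code. The following minimal Python sketch is an illustration of the idea, not code from the Brain Simulator project, and all the node and attribute names are invented:

```python
# Minimal sketch: each concept is a node whose outgoing edges point at
# attribute nodes. Names and attributes are invented for the example.
edges = {
    "dog":  {"fur", "four legs", "tail"},
    "john": {"tall", "glasses", "jazz"},
}

def attributes_of(node):
    """Follow a node's outgoing edges to collect its attributes."""
    return edges.get(node, set())
```

    Retrieving attributes_of("dog") is a single hop from the dog node to its attribute nodes, which is the point of the graph representation.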

    02:28

    This structure also supports reverse relationships. You don't just know that Fido has fur.

    You know that fur is an attribute of Fido. Your memory works in both directions.

    If you're asked which of your pets has

    02:44

    fur, you can retrieve the answer immediately. That kind of reverse lookup falls naturally out of a graph where every node knows its edges.
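    [Editor's illustration] One way to picture that reverse lookup in code: record every edge in both directions. This is a toy sketch of the idea, not the project's actual data structure, with invented pet names:

```python
# Sketch: each edge is recorded both ways, so "which of my pets has
# fur?" is a single lookup from the "fur" node rather than a scan.
forward = {}   # node -> attributes
reverse = {}   # attribute -> nodes

def add_edge(node, attribute):
    forward.setdefault(node, set()).add(attribute)
    reverse.setdefault(attribute, set()).add(node)

add_edge("fido", "fur")
add_edge("fido", "tail")
add_edge("tweety", "feathers")

def things_with(attribute):
    """Reverse lookup: follow an attribute's edges back to its owners."""
    return reverse.get(attribute, set())
```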

    Because Fido and fur have direct connections, retrieval of the information can take a small fraction of

    03:01

    a second, even though the individual neurons can fire at most 250 times a second. Another key feature is inheritance with exceptions.

    You know that most birds can fly, but you also know penguins can't.

    03:19

    In a graph, this is handled beautifully. The bird node passes down attributes like flying and feathers to its subnodes.

    But the penguin node carries an exception: does not fly. This mirrors exactly how humans think.

    03:36

    No other structure fits the way knowledge is inherited and modified. From a computational perspective, this is a huge mechanism for data compression.

    Your brain doesn't know all the

    03:52

    attributes of anything. Each thing inherits most of its attributes from its ancestor nodes.

    So it only needs to reference the attributes which make it unique. Let me repeat that another way.

    You don't know explicitly all the attributes

    04:09

    of anything. You only know the unique attributes.

    All the others are inherited on the fly. Your brain makes this process invisible and seamless.

    But you know it's happening because if you learn a new attribute of an ancestor node, it

    04:26

    propagates immediately to all the child nodes. If I tell you that dogs typically have 18 toes, you'll immediately assume that Fido has 18 toes, too.
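    [Editor's illustration] A toy sketch of inheritance with exceptions, with invented node names: each node stores only its unique attributes and exceptions, and everything else is computed on the fly from its ancestors.

```python
# Toy sketch of inheritance with exceptions: each node stores only its
# unique attributes plus explicit exceptions; the rest is computed on
# the fly from its ancestors. Node names are invented for the example.
nodes = {
    "bird":    {"parent": None,   "attrs": {"flies", "feathers"}, "except": set()},
    "robin":   {"parent": "bird", "attrs": {"red breast"},        "except": set()},
    "penguin": {"parent": "bird", "attrs": {"swims"},             "except": {"flies"}},
}

def attributes(name):
    """Unique attributes plus inherited ones, minus any exceptions."""
    node = nodes[name]
    inherited = attributes(node["parent"]) if node["parent"] else set()
    return (inherited | node["attrs"]) - node["except"]

# Learning a new attribute of an ancestor propagates instantly,
# because descendants inherit on the fly:
nodes["bird"]["attrs"].add("two legs")
```

    Because children inherit on the fly, the new bird attribute is immediately visible on robin and penguin, just like the 18-toes example.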

    Now consider biology.

    04:42

    Your DNA doesn't contain enough explicit information to hardcode the connections for every neuron. It's impossible.

    Instead, evolution relies on repeating a standard design pattern: clusters of neurons, likely cortical columns, wired

    05:00

    with a general-purpose architecture. These clusters then optimize themselves as they receive input.

    Within each cluster, neurons and synapses already form the framework of nodes and edges. Then adding information requires changing the

    05:17

    weights of just a few synapses. And when millions of clusters link together, the graph grows into the vast network we call the mind.

    This repeating design is the only way to scale brain complexity without requiring

    05:34

    impossibly detailed genetic instructions or taking impossibly long to process. The graph structure is unbelievably efficient.

    You can learn the attributes of an object in a single moment. You can

    05:50

    also identify a thing based on attributes in just a handful of steps. If I describe something as a yellow fruit that grows in bunches, you jump to banana or perhaps grapefruit almost immediately.
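    [Editor's illustration] That identification step can be sketched as an intersection of reverse lookups, one per described attribute. The attribute sets below are invented for the illustration:

```python
# Sketch: identify a thing from its attributes by intersecting
# reverse-edge lookups, a handful of edge traversals rather than a
# search of everything you know. Attribute sets are invented.
by_attribute = {
    "yellow":           {"banana", "lemon", "grapefruit"},
    "fruit":            {"banana", "lemon", "grapefruit", "grape", "apple"},
    "grows in bunches": {"banana", "grape", "grapefruit"},
}

def identify(*attributes):
    """Intersect the candidate sets reached from each attribute node."""
    candidates = [by_attribute.get(a, set()) for a in attributes]
    return set.intersection(*candidates)
```

    Each lookup is one edge traversal from an attribute node, so the whole identification takes only a handful of steps.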

    In graph terms, you have

    06:07

    traversed perhaps only 20 edges before settling on the right nodes. In terms of scaling, observe that in your area of expertise, you know a lot, but in other areas, you know just a little.

    But knowing a lot doesn't seem

    06:22

    to slow your brain. If you're a programmer, does it take you longer to answer programming questions than it does to answer questions about cats and dogs?

    Of course not. Apparently, the time it takes to search your mental graph doesn't increase much with the

    06:38

    amount of information in the graph, and it certainly doesn't exhibit the combinatorial explosion observed in many other data structures. This efficiency also explains why human learning is so fast.

    Unlike today's AI

    06:55

    which requires millions of training examples, you can learn from a single experience which builds a few neural relationships. Even young children do this.

    They know things in terms of objects, attributes,

    07:12

    and inheritance from a very young age. A child who learns that a robin is a bird instantly assumes it can fly without ever having seen one fly.

    That assumption comes from inherited attributes in their mental graph. When

    07:28

    they later learn about penguins, they add an exception. We might call this deductive reasoning, but it's a natural result of the way your brain stores information.

    07:43

    As we grow, we move on from factual graphs to learned algorithms. Adults use sequences of nodes to perform math procedures or writing techniques.

    These sequences are really just paths through the graph. Whether it's adding up

    07:59

    numbers, composing poetry, or playing a symphony, every complex skill is broken down into nodes and edges organized in time. But the graph isn't static.

    It needs

    08:14

    mechanisms to stay useful. This is where we introduce agents.

    Local processes that act on the graph. Agents allow attributes to bubble up so that properties of subnodes can inform the parent class.

    If your brain notices that

    08:31

    Fido, Spot, and Rover all share some attributes, it bubbles those attributes up to the dog node and removes them from the individual dogs in your brain.

    Perhaps this happens during sleep. Other agents create subclasses when new

    08:47

    categories are discovered. A guitar, violin, and ukulele are all musical instruments, but your brain may notice that they all have strings and necks and tuning pegs.

    It can create a subclass of

    09:02

    musical instruments, make these instruments its descendants, and bubble the common attributes up to it. Later you may learn a name for the subclass like stringed instruments.
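    [Editor's illustration] A toy version of such a bubble-up agent, using the three invented dogs from earlier; this is a sketch of the idea, not code from the Brain Simulator:

```python
# Sketch of a "bubble up" agent: attributes shared by every child of a
# class move up to the class node and are removed from the children,
# leaving each child with only what makes it unique.
children = {"dog": ["fido", "spot", "rover"]}
attrs = {
    "dog":   set(),
    "fido":  {"fur", "tail", "barks", "brown"},
    "spot":  {"fur", "tail", "barks", "spotted"},
    "rover": {"fur", "tail", "barks", "large"},
}

def bubble_up(parent):
    shared = set.intersection(*(attrs[c] for c in children[parent]))
    attrs[parent] |= shared          # promote common attributes
    for c in children[parent]:
        attrs[c] -= shared           # keep only the unique ones

bubble_up("dog")   # now "fur", "tail", "barks" live on the dog node
```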

    09:18

    Agents can also create instances of classes. My guitar is an instance of the guitar subclass with some specific attributes including its location.

    But this whole class subclass instance organization

    09:34

    isn't fixed or formal. It's just your brain's way of generalizing from observations and keeping its storage efficient.

    Of course, agents can also prune stale information when it's no longer useful.

    09:49

    These agents are how the brain keeps its knowledge fresh, flexible, and adaptable. Without them, the graph would collapse under its own weight.

    Contrast

    10:05

    this with modern AI. Popular techniques simply cannot map onto neurons.

    They either require speed neurons don't have or mechanisms neurons don't support. Take backpropagation and gradient descent.

    These require microsecond

    10:22

    updates with precise error gradients sent backwards across synapses. Neurons fire in milliseconds, not microseconds,

    and no such backward error mechanism exists. Or look at the transformer attention of

    10:38

    large language models like ChatGPT. It computes all-to-all products across thousands of tokens, something neurons are far too slow to do.

    The brain implements attention with actual focus,

    10:54

    not giant matrix multiplication. The list goes on.

    Reinforcement learning, Monte Carlo methods, symbolic logic, Bayesian inference, predictive coding. None of these can be implemented directly with neurons.

    The brain doesn't

    11:11

    do them because it can't. The only structure left is the graph.

    And that's exactly what your brain uses. A graph can store any kind of information.

    Of course, factual

    11:26

    information, as I've used in the previous examples, but also visual information becomes nodes and attributes: red, round, and shiny as attributes for an apple.

    Segment, segment, segment for an A. Sounds are stored the same way.

    High-pitched, loud,

    11:43

    vibrating. There are also attributes of relative sizes, positions, pitches, and volumes.

    When you recognize an apple, its apparent size depends on how close it is. By only storing relative sizes, you

    11:59

    can recognize it regardless. Likewise, you can recognize a tune regardless of what key it's being played in.

    The graph can also handle situation-action-outcome triples. If you touch a hot

    12:16

    stove, the graph stores situation: hot surface, action: touch, outcome: pain. That becomes a few learned edges you can recall instantly whenever you detect a similar situation.
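    [Editor's illustration] A minimal sketch of storing and recalling such a triple, with invented names:

```python
# Sketch: a situation-action-outcome triple stored as a learned edge
# and recalled when a similar situation is detected.
experience = {}  # (situation, action) -> outcome

def learn(situation, action, outcome):
    experience[(situation, action)] = outcome

def predict(situation, action):
    """Recall the stored outcome, if any, for this situation/action."""
    return experience.get((situation, action))

learn("hot surface", "touch", "pain")
```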

    12:32

    It can even store algorithms: a sequence of nodes with if/goto-style edges forms a procedure. This is how you know how to cook a meal, drive a car, or add up numbers.
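    [Editor's illustration] A procedure can likewise be sketched as nodes joined by conditional edges. The steps below are invented and deliberately simplified:

```python
# Sketch: a procedure as a path through the graph. Each node names a
# step, and an edge may branch ("if/goto"). The steps are invented and
# not a claim about real cortical wiring.
steps = {"add": "add the digits", "carry": "carry the one", "write": "write the result"}

def next_node(node, carry):
    """Conditional edges: from "add", branch on whether there is a carry."""
    edges = {
        "add":   "carry" if carry else "write",
        "carry": "write",
        "write": None,
    }
    return edges[node]

def run(carry):
    node, trace = "add", []
    while node:
        trace.append(steps[node])
        node = next_node(node, carry)
    return trace
```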

    Most importantly, the graph can create a

    12:49

    mental model of your surroundings. This model places you at the center with connections to nodes representing objects around you.

    It updates continuously with sensory input, sight, sound, and touch. It integrates them

    13:07

    into a coherent model. So you can hear Fido barking at night and picture his relative position.

    This mental model allows you to predict outcomes. You can imagine what will happen if you drop a glass or swing a

    13:23

    bat. You don't need to try it.

    You simulate it in the mental graph. That's what imagination really is and what it's for.

    It also explains understanding. You understand something when you can

    13:40

    connect it to other nodes in your graph. Understanding, as I define it here, isn't about memorizing facts.

    It's about integration and using your imagination to make predictions using that integrated information.

    13:55

    This is a key factor differentiating our thought from a large language model. By this definition, ChatGPT understands nothing and attempts to make up for this lack by storing trillions of cases.

    In

    14:11

    the same way that inheritance and exceptions allow for huge knowledge compression, understanding can replace thousands of samples with a few generalized rules. This is why humans can generalize so

    14:28

    well. You've had a limited number of experiences, but your imagination lets you make predictive use of them in new contexts.

    This ability comes directly from the graph structure.

    14:44

    Finally, let's tackle consciousness. Why is the graph structure necessary for it?

    Because the mental model, which includes your point of view, underpins your sense of self. Consciousness requires a

    14:59

    representation of the world with yourself at the center. It requires bidirectional reasoning so you can reflect on your own thoughts.

    It requires context sensitivity so you know not just the facts but their relevance to you.

    15:18

    No other structure can do this. A giant matrix multiplication can't tell you what you are.

    Only a graph, continuously updated and centered around the self node, can sustain consciousness.

    15:34

    So let's review. The brain can't run modern AI techniques.

    It's just too slow or doesn't have the mechanisms. But it doesn't need them.

    By organizing itself as a graph, the brain achieves storage of objects and attributes, reverse

    15:51

    relationships, inheritance with exceptions, efficient one-shot learning, agents to maintain relevance, mental models that integrate all the senses, imagination for prediction and

    16:06

    understanding through this integration and imagination. And finally, consciousness itself.

    Every observation about human behavior, every constraint of biology and every comparison with AI leads us to the same

    16:24

    conclusion. The brain must be a graph and not just any graph, but a richly structured dynamic graph of neuron clusters.

    It's the only possible model of knowledge in the human brain. That's

    16:40

    why Brain Simulator 3 and its underlying structure, the Universal Knowledge Store, are so important to the future of AI. It's an open-source project that already implements many of the features I've described in this video.

    It demonstrates

    16:56

    the efficiency and common sense processes which could be added to our artificial intelligence to enable a safer, more sustainable future for all of us. Next time I'll dive into the unique features of the brain simulator and the

    17:14

    universal knowledge store. If these ideas resonate with you, if you want to see where they lead, please like, subscribe, and hit the notification bell because the YouTube algorithm won't surface videos like this

    17:30

    unless you ask for them. And if you want to dig deeper, join the Future AI Society.

    It's free. Help us shape the next generation of intelligent systems.

    And you can participate in our online conversations and our Discord

    17:45

    server. And as always, thanks for watching.