How We Save It in Our Heads
The human brain is complex, agreed. But as the saying goes, the laws of the universe are the same everywhere and fundamentally simple: energy is transferred either intact or with some loss, and it flows from regions of high density to regions of low density. In reality, most regions are at a lower density than wherever energy currently sits. So how is the direction of a particular energy flow decided, and how is energy channeled from one place to another without harming the surrounding environment?
Anyway, we are not here to discuss strange concepts of energy propagation but to understand how the human brain, and each part of it, works both individually and collectively. When we think about the brain, we often picture it as a central processing unit. The brain is made up of specialized cells called neurons, which either have certain functionality from the start or develop it over time for more efficient operation. Or perhaps neurons function the same way from the beginning, and it is the type of information being processed that makes all the difference.
In artificial neural networks, time is typically ignored. Information is processed in static snapshots, meaning the model doesn't consider the order or sequence in which inputs arrive. However, in real life, many tasks, like speaking, walking, or understanding language, depend on the temporal relationship between events. Biological neurons handle this naturally through spike timing: when a neuron fires is itself part of the signal, which is crucial for tasks like coordinating movements or forming coherent sentences in conversation.
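To make spike timing concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the textbook spiking-neuron model. All parameter values below are illustrative, not taken from any particular system:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    The membrane potential leaks toward rest while integrating
    incoming current; when it crosses the threshold, the neuron
    emits a spike and resets. *When* spikes occur is the signal.
    """
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Euler step: leak toward rest, plus injected current.
        v += dt / tau * (v_rest - v) + i_in
        if v >= v_thresh:
            spike_times.append(t * dt)
            v = v_reset
    return spike_times

# A brief strong input early, then silence: the output spike
# pattern encodes the timing of the stimulus, not just its size.
current = np.concatenate([np.full(20, 0.2), np.zeros(80)])
print(lif_neuron(current))
```

Notice that a standard artificial neuron fed the same total current in one static snapshot would produce one number and lose the timing entirely.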
To address this limitation, recurrent neural networks (RNNs) were developed to handle sequential data by introducing memory into the model. RNNs allow information to be passed from one step to the next, making them suitable for tasks like speech recognition or time-series forecasting. However, even though RNNs introduce time dependency to artificial networks, they still fall short compared to the brain's real-time adjustments. Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs) attempt to improve time-sequence processing by storing longer-term dependencies, but their efficiency remains limited when compared to the human brain’s ability to process time. We will explore how these technologies work and consider ways to improve time-based computations, aiming to achieve more human-like capabilities.
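The recurrence itself is simple to write down. Below is a minimal sketch of a vanilla RNN forward pass in plain NumPy, with made-up dimensions, just to show how the hidden state carries information from one step to the next:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 input features, 5 hidden units.
n_in, n_hid = 3, 5
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden (the "memory")
b_h = np.zeros(n_hid)

def rnn_forward(inputs):
    """Run a vanilla RNN over a sequence.

    The hidden state h is carried from one step to the next, so the
    state at each step depends on the order of everything seen so far.
    """
    h = np.zeros(n_hid)
    states = []
    for x in inputs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return states

sequence = [rng.normal(size=n_in) for _ in range(4)]
states = rnn_forward(sequence)
# Reversing the sequence changes the final state: order matters.
print(np.allclose(states[-1], rnn_forward(sequence[::-1])[-1]))  # False
```

LSTMs and GRUs keep this same loop but add learned gates that decide what to keep and what to forget, which is what lets them hold onto longer-term dependencies.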
Additionally, the plasticity of biological neurons, where synaptic connections strengthen or weaken over time based on experience, allows real-time learning and adaptation. Artificial neurons, though powerful, typically do not adapt dynamically once the model is trained; their weights are fixed unless retrained. However, neural networks manage learning in various ways, such as through reinforcement learning (RL), which leverages feedback loops and reward systems like Q-learning and policy gradients to improve performance over time. While these methods allow for gradual improvement, they don't yet match the full complexity and flexibility of biological neurons. By examining existing technologies like RNNs and reinforcement learning, we can explore how artificial networks handle sequential data and feedback and identify areas for improvement. Enhancing real-time learning in artificial neurons could make them more adaptive to new environments, bringing them closer to the responsiveness of biological systems.
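As a concrete example of the reinforcement-learning side, here is a minimal sketch of tabular Q-learning on a made-up five-state chain environment; the environment, rewards, and hyperparameters are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy chain: 5 states in a row; pushing right at the last state
# yields a reward of 1. Purely illustrative.
n_states, n_actions = 5, 2   # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def step(state, action):
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if (state == n_states - 1 and action == 1) else 0.0
    return next_state, reward

for episode in range(500):
    s = 0
    for _ in range(20):
        # Epsilon-greedy: mostly exploit, occasionally explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # The Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best next action.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))  # learned policy: move right in every state
```

This is feedback-driven improvement, but note how coarse it is compared to a synapse: one scalar reward updates one table entry per step, with no ongoing structural change to the network itself.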
Moreover, biological neurons rely on neurotransmitters, chemical signals that modulate behavior based on emotional and physical feedback, such as dopamine reinforcing successful actions. While artificial neurons communicate through numerical values, they still incorporate feedback mechanisms using techniques like reward systems and reinforcement learning, where agents adjust their behavior based on rewards or penalties. Though these methods are effective, they do not yet reach the intricacy of the feedback loops found in biological brains. By focusing on existing approaches in artificial neural networks, we can explore how these systems handle feedback and discuss ways to improve them with more advanced contextual learning systems that can dynamically adjust based on factors beyond simple numerical rewards.
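One direction researchers explore for richer, dopamine-like feedback is a three-factor, reward-modulated Hebbian rule, where a global reward signal gates local correlation-based weight updates. The sketch below is purely illustrative, not a method proposed in this article:

```python
import numpy as np

rng = np.random.default_rng(2)

# One weight matrix between two tiny layers; all sizes illustrative.
n_pre, n_post = 4, 3
W = rng.normal(scale=0.1, size=(n_post, n_pre))

def reward_modulated_update(W, pre, post, reward, baseline, eta=0.01):
    """Three-factor Hebbian rule: pre-activity x post-activity x reward.

    The (reward - baseline) term plays the role of a global,
    dopamine-like signal: correlated activity is strengthened only
    when the outcome was better than expected, and weakened when worse.
    """
    return W + eta * (reward - baseline) * np.outer(post, pre)

pre = rng.random(n_pre)       # presynaptic activity
post = np.tanh(W @ pre)       # postsynaptic response
W = reward_modulated_update(W, pre, post, reward=1.0, baseline=0.2)
```

The appeal is that the update is local to each synapse yet still steered by a brain-wide chemical-style signal, which is closer in spirit to neuromodulation than a single scalar loss.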
The energy efficiency of biological neurons also stands out. The human brain learns continuously on roughly 20 watts of power, whereas artificial neural networks require massive computational resources and are far from efficient. While biological neurons fire only when necessary, conventional artificial networks activate every unit on every forward pass, consuming significantly more energy than their biological counterparts. We will also dive into how hardware systems can be adapted to reduce energy consumption, looking at the current hardware for humanoids and robotics, such as neuromorphic chips like IBM's TrueNorth or Intel's Loihi, which are designed to mimic the energy efficiency of the brain. These chips aim to reduce the power needs of artificial systems, but there is much room for improvement in scaling these technologies to broader applications.
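A back-of-the-envelope illustration of why event-driven computation saves work: with spike-like, sparse inputs, only the weights attached to active inputs need to be touched. The sizes here are made up, and real neuromorphic hardware is far more involved, but the arithmetic saving is the core idea:

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_out = 1000, 100
W = rng.normal(size=(n_out, n_in))

# Spike-like input: only 2% of inputs are active at this instant.
x = np.zeros(n_in)
active = rng.choice(n_in, size=20, replace=False)
x[active] = 1.0

# Dense path: all 100,000 weights participate, active input or not.
dense_out = W @ x

# Event-driven path: only the columns for active inputs are touched,
# roughly 2% of the work. This is the kind of saving that
# spike-based neuromorphic designs are built around.
event_out = W[:, active].sum(axis=1)

print(np.allclose(dense_out, event_out))  # True: same result, far fewer operations
```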
Lastly, in the brain, neurons function within a spatial structure, where different regions handle specialized tasks. For instance, the motor cortex controls movement, while the visual cortex processes sight. Artificial neural networks, on the other hand, usually lack this kind of specialization, treating neurons in a generalized, uniform way. This makes them less capable of handling simultaneous, diverse tasks the way the human brain does. We will explore how integrating concepts like modular neural architectures could allow artificial brains to process specialized tasks more efficiently, potentially bringing them closer to how the human brain handles multitasking. Hierarchical networks or spiking neural networks (SNNs), which simulate the temporal aspect of neuron firing, may also offer solutions for improving artificial systems' ability to integrate various processes dynamically.
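As a toy illustration of modular specialization, the sketch below wires two small "specialist" sub-networks behind a gate that weighs their outputs per input, loosely in the spirit of mixture-of-experts routing. All names and sizes are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(4)

def make_module(n_in, n_hid, n_out):
    """A tiny two-layer network standing in for a specialized region."""
    return {"W1": rng.normal(scale=0.1, size=(n_hid, n_in)),
            "W2": rng.normal(scale=0.1, size=(n_out, n_hid))}

def run_module(m, x):
    return m["W2"] @ np.tanh(m["W1"] @ x)

n_in, n_out = 8, 4
modules = {"motor": make_module(n_in, 16, n_out),   # movement-like tasks
           "visual": make_module(n_in, 16, n_out)}  # vision-like tasks
W_gate = rng.normal(scale=0.1, size=(len(modules), n_in))

def forward(x):
    """Soft routing: a gate weighs each specialist's output per input.

    In a trained system the gate would learn which module to trust
    for which kind of input; here everything is random, so this only
    shows the wiring, not learned specialization.
    """
    logits = W_gate @ x
    weights = np.exp(logits) / np.exp(logits).sum()  # softmax over modules
    outputs = [run_module(m, x) for m in modules.values()]
    return sum(w * out for w, out in zip(weights, outputs))

print(forward(rng.normal(size=n_in)))
```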
If we were to build an artificial brain where neurons worked similarly to biological ones—handling various types of data simultaneously—we would require systems much more advanced than current computational technologies. Without concepts like real-time learning, temporal processing, plasticity, and chemical modulation, artificial neurons still fall short of the incredible capabilities of the human brain. In this exploration, we will look deeper into what exists, what makes biological and artificial systems different, and how we can bridge these gaps to improve the performance of ambitious humanoid brains.
Written by
MindlessMind
I am a machine learning engineer with a deep curiosity about how the human brain works. During my studies, I often found myself asking questions about the remarkable capabilities of the brain and how they compare to the mechanical systems we create. This curiosity led me on a path of exploration—looking at what we currently lack in our understanding and how we might bridge that gap. In this blog, I aim to explore the possibilities of creating a more human-like, or "humanoid," brain. I’ll be discussing existing algorithms, sharing new ideas, and diving into the complexities of neural networks, learning mechanisms, and what it takes to replicate the remarkable abilities of the human mind.