This is an automated archive.

The original was posted on /r/singularity by /u/AlterandPhil on 2023-08-18 16:19:23+00:00.


If we think about it, current machine learning paradigms have a lot to learn from the neuroscience of our brains. Don’t get me wrong, artificial neural networks were loosely inspired by the brain’s neural networks, but even today they are built on principles that are quite simple compared to what neuroscience has uncovered.

To get what I mean, let’s first try to imagine the following:

An artificial neuron is essentially a mathematical function, but simple functions on their own do not represent complex phenomena well. The trick with most modern neural networks is to build large networks of these neurons, where the inputs of one layer of neurons are the outputs of the previous layer. The idea is to compose a much more complex function out of these neurons and then use an algorithm called backpropagation to find, step by step, the set of weights that produces the correct answers. Of course, this is oversimplified, but it gets the general idea across.
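
To make that a little more concrete, here is a minimal sketch of the idea, assuming nothing beyond NumPy: a tiny two-layer network fit to XOR with hand-written backpropagation. The layer sizes, learning rate, loss, and sigmoid activations are arbitrary choices made just for illustration, not the only (or best) way to do it.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a simple function that a single neuron cannot represent on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for two layers of "neurons" (simple functions composed together).
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: the outputs of the first layer become the inputs of the second.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): with a cross-entropy loss the gradient at the
    # output pre-activation is simply (out - y); the chain rule then pushes that error
    # back through the hidden layer so every weight knows how to nudge itself.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```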

Now if we think about it, the recipe above is a relatively simple learning algorithm compared to what nature might have to offer. For example, look no further than the biological neuron itself. Biological neural networks have the following properties:

  • Synapses: The axon of one neuron releases neurotransmitters (chemicals like dopamine and acetylcholine) onto receptors on another neuron’s dendrites in order to propagate a neural signal.
  • Integrate and fire: Neurons receiving signals from other neurons tend to “build up charge,” for lack of a better phrase, but this charge continuously leaks away. You can think of it like a leaky pool being periodically filled by buckets of water: if the pool overflows, the neuron fires. This isn’t a universal rule, though, as different neurons serve different functions and so have different thresholds for firing. The closest digital counterpart we currently have is the spiking neural network (a rough single-neuron sketch of the “leaking pool” appears after this list).
  • Cortical columns: Neurons are arranged into layers where each neuron might have its outputs connected to the inputs of the next layer’s neurons, but this isn’t a hard rule, and a neuron’s outputs can also connect to the inputs of neurons quite far away. Cortical columns can also be bundled together and continuously expanded upon. Of course, current artificial neural networks also have layers, though that doesn’t capture the full complexity of these cortical columns.
  • Glia: These cells largely support the neurons of the brain, but they can also influence those neurons in ways we don’t fully understand. For example, glial cells might help shape how certain neurons fire in a given environment. The closest equivalent I can think of for artificial neural networks is reinforcement learning.
  • There is a multitude of different neuron types in the brain, each with its own function: unipolar, bipolar, multipolar, basket cells, Betz cells, Lugaro cells, pyramidal cells, the whole shebang. There is a whole host of things we could vary between artificial neurons to get similar diversity, like the activation function, the hyperparameters, the addition of convolutional or attention layers, and many more.
  • Neuroplasticity: Each individual neuron can alter its own connections through really complicated chemical signaling. It can strengthen, weaken, or prune connections, grow new ones, or even migrate between different regions. The closest things I can think of are the NEAT algorithm and liquid neural networks (a toy sketch of the strengthen/weaken/prune idea follows after this list).
  • To top it all off, our brain is subdivided into various regions that divide up individual tasks and then integrate the results later on. The closest thing we have for artificial neural networks is the mixture-of-experts architecture (a bare-bones sketch of it is included after this list as well).
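
For the integrate-and-fire bullet, here is the promised single-neuron sketch of the “leaking pool” picture. The time constant, threshold, and input currents are made-up numbers chosen just so a few spikes show up; real spiking models (and spiking-network libraries) are far richer.

```python
import numpy as np

tau = 20.0        # leak time constant (ms): how quickly the "pool" drains
v_thresh = 1.0    # firing threshold: the pool "overflows" here
v_reset = 0.0     # membrane potential right after a spike
dt = 1.0          # simulation step (ms)

v = 0.0
spike_times = []
inputs = np.random.default_rng(1).uniform(0.0, 0.12, size=300)  # buckets of water

for t, i_in in enumerate(inputs):
    # Charge builds up from the input but continuously leaks away.
    v += dt * (-v / tau + i_in)
    if v >= v_thresh:          # the pool overflows: the neuron fires
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes at steps {spike_times}")
```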
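
For the neuroplasticity bullet, here is the toy sketch of the strengthen/weaken/prune idea. This is not NEAT or a liquid neural network, just a Hebbian-style rule with pruning and occasional growth of new connections, and every constant in it is invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
W = rng.uniform(0.0, 0.2, size=(n, n))   # connection strengths between n neurons
np.fill_diagonal(W, 0.0)                 # no self-connections

eta, decay, prune_below = 0.05, 0.01, 0.02

for step in range(200):
    activity = (rng.random(n) < 0.3).astype(float)   # which neurons fired this step

    # Strengthen connections between co-active neurons; let everything else slowly weaken.
    W += eta * np.outer(activity, activity)
    np.fill_diagonal(W, 0.0)
    W *= (1.0 - decay)

    # Prune connections that have withered away.
    W[W < prune_below] = 0.0

    # Occasionally grow a brand-new connection between two random neurons.
    if step % 50 == 0:
        i, j = rng.integers(0, n, size=2)
        if i != j:
            W[i, j] = 0.1

print(f"surviving connections: {int((W > 0).sum())} of {n * n - n}")
```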
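
And for the last bullet, a bare-bones sketch of the mixture-of-experts idea: a gating function scores each “expert” (a separate little model, loosely analogous to a brain region) for a given input, and their outputs are blended back together. The experts here are just random linear maps, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_out, n_experts = 4, 2, 3

experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]  # the "regions"
gate = rng.normal(size=(d_in, n_experts))                             # routing weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def moe_forward(x):
    # The gate divides the task: it scores each expert for this particular input...
    scores = softmax(x @ gate)
    # ...and the expert outputs are integrated back into a single answer.
    outputs = np.stack([x @ W for W in experts])          # shape: (n_experts, d_out)
    return scores @ outputs, scores

y, scores = moe_forward(rng.normal(size=d_in))
print("gate weights:", np.round(scores, 2), "combined output:", np.round(y, 2))
```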

This doesn’t even scratch the surface of the treasure trove of neuroscience research currently out there. All of this is to say that modern neural networks aren’t very sophisticated or complex compared to the brain. The biggest takeaways we should get from this are:

  1. Introducing some of these additional complexities might give artificial neural networks a greater degree of freedom in how they learn from the datasets being fed to them.
  2. There is some isolated work being done on various neuroscience-inspired neural architectures, but it is scattered and hasn’t been integrated into a single system before. More work needs to be done on improving these initial discoveries and on implementing such integrated models in real-world applications.

Alright, that’s all I have to say. Thank you for reading this far.