The Future of AI is Really Not Self-Assembling


A response to the post: The Future of Artificial Intelligence is Self-Organizing and Self-Assembling

Intelligence is solved. There are simple definitions of an intelligent system that are broad enough in coverage yet specific enough in scope that there exists no complete argument against them.

Def. 1 - Intelligence - Any method by which a system attempts to model the external world internally, e.g., a "model" (compressed representation) of an environment, ecosystem, or even an abstract context.

Another common way to summarize any intelligent system is:

Def. 2 - Intelligence - Any system that can perform information compression and decompression.
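To make Def. 2 concrete, here is a toy illustration (mine, not a formal claim from anyone): treat a lossless compressor as a minimal "intelligent" system, with the compression ratio measuring how much structure of its "world" it has internalized.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over raw size; lower means more structure internalized."""
    return len(zlib.compress(data)) / len(data)

structured = b"the cat sat on the mat. " * 100  # a very regular "environment"
noise = os.urandom(len(structured))             # an environment with no structure

print(f"structured: {compression_ratio(structured):.2f}")  # well below 1
print(f"noise:      {compression_ratio(noise):.2f}")       # ~1: nothing to model
```

By this reading, even zlib carries a sliver of (extremely minimal, highly contextualized) intelligence, which is exactly the point.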

Again, these are not detailed descriptions of any particular intelligent system, but they satisfy an internal intuition and sedate the cynic.

Why is this important? Because there is often a discussion of "artificial intelligence" premised on the idea that "intelligence" is not already a thoroughly solved problem in computation. It was solved before deep learning and perceptrons, by Control Theory (and Information Theory). Just because those systems have extremely minimal and highly contextualized intelligence does not make them unintelligent.
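The control-theory version is easy to show. A scalar Kalman filter is the textbook instance of Def. 1: a system maintaining an internal model (an estimate plus its uncertainty) of an external state it can only observe through noise. A minimal sketch, with all noise parameters chosen purely for illustration:

```python
import random

def kalman_track(true_states, q=0.01, r=0.5):
    """Scalar Kalman filter: maintains an internal model (estimate + variance)
    of an external state observed only through noise."""
    x_est, p = 0.0, 1.0                      # the internal model of the world
    estimates = []
    for x_true in true_states:
        p += q                               # predict: uncertainty grows
        z = x_true + random.gauss(0.0, r ** 0.5)  # noisy observation
        k = p / (p + r)                      # Kalman gain
        x_est += k * (z - x_est)             # pull the model toward evidence
        p *= (1.0 - k)                       # uncertainty shrinks
        estimates.append(x_est)
    return estimates

# The external "world": a slow random walk.
world, x = [], 0.0
for _ in range(50):
    x += random.gauss(0.0, 0.1)
    world.append(x)

est = kalman_track(world)
print(f"true state: {world[-1]:+.3f}, internal model: {est[-1]:+.3f}")
```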

So, what is often the aim of looking for new "intelligence"? It is driven by the search for AGI. However, it can easily be argued that if intelligence is a solved problem, then what we lack is not size and scope of intelligence but efficiency. It does not take a human trillions of dataset iterations and long inference-crunching periods to say the next thing on their mind. So why does it take that much time and effort for the most advanced multi-billion-parameter models? Why is scaling models and scaling learning algorithms so inefficient? Because intelligence is easy. Efficient intelligence is a PIT-royal-A.
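To put rough numbers on that gap (both figures below are order-of-magnitude assumptions of mine, not measurements from any source):

```python
# Back-of-envelope only; every figure here is an assumption.
llm_training_tokens = 1e13  # tokens seen by a frontier-scale LLM (assumed order)
human_words_by_20 = 2e8     # words a person hears/reads by age 20 (assumed order)

print(f"data-efficiency gap: ~{llm_training_tokens / human_words_by_20:,.0f}x")
```

Even if either assumption is off by an order of magnitude, the gap remains enormous.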

In subconscious recognition of this fact, many are led to bio-mimetic "inspirations" that are often non-rigorous and vaguely defined. For example, we coin buzzword terminology that could mean a plethora of things, use it as if it were already well-defined in the field of Biology, and attempt to "build (towards) it" with existing ML algorithms.

What even is "self-assembling" or "self-organizing"? No one actually knows. We can only attempt to create examples or point to something that exists in nature and say "(like) that thing". Great.

"one of the most fascinating aspects of nature is that groups with millions or even trillions of elements can self-assemble into complex forms based only on local interactions and display, what is called, a collective type of intelligence."

Which is fun to think about. But it could also be argued that all intelligent systems are "collective". How many parameters were in that last ML model you designed? How many neurons are in the brain? In both cases you have individual, modular elements and global aggregate behavior.

Cellular Automata

Are a pain to run and extremely inefficient.
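To see why, a minimal sketch (mine): one synchronous step of Conway's Game of Life must visit every cell and its whole neighborhood, so the cost is O(width x height) per step, paid again on every single step.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous Game of Life update; every cell is touched every step."""
    # Sum the 8 neighbors by shifting the whole grid (toroidal wrap-around).
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(256, 256), dtype=np.uint8)
for _ in range(100):  # 100 steps = 100 * 256 * 256 cell updates, no shortcuts
    grid = life_step(grid)
print(grid.sum(), "cells alive after 100 steps")
```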

"Neural Cellular Automata ... the same neural network is applied to each grid cell, resembling the iterative application of a convolutional filter"

Are even more inefficient to run, and just add complexity to existing algorithms. It is no different from asking: what if trillions of tiny neural networks all interacted and could each depend on some neighborhood of what the other networks are doing? gasp
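Mechanically, the quoted update is nothing more exotic than this (a minimal sketch with assumed shapes; the two-layer rule and zero-initialized final layer echo the common NCA setup, but nothing here is any paper's exact code):

```python
import numpy as np

def nca_step(state, w1, b1, w2):
    """One NCA update: the SAME tiny network applied at every grid cell."""
    # Perception: each cell sees itself plus its 4 neighbors (toroidal wrap).
    nbrs = [np.roll(state, s, axis=a) for a in (0, 1) for s in (-1, 1)]
    percept = np.concatenate([state] + nbrs, axis=-1)   # (H, W, 5 * C)
    # Per-cell two-layer rule, identical everywhere -- a stack of 1x1 convs.
    hidden = np.maximum(percept @ w1 + b1, 0.0)         # ReLU
    return state + hidden @ w2                          # residual update

rng = np.random.default_rng(0)
C = 8                                  # channels per cell (assumed)
w1 = rng.normal(0.0, 0.1, (5 * C, 32))
b1 = np.zeros(32)
w2 = np.zeros((32, C))                 # zero-init final layer: start as a no-op

state = rng.normal(size=(64, 64, C))
for _ in range(10):
    state = nca_step(state, w1, b1, w2)
print(state.shape)                     # (64, 64, 8)
```

It is convolutional weight-sharing, iterated.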

Do we not see the same games we are playing?

Add more parameters. Add more cells. Add more networks. Add more training time. Add more machines.

It's the same scaling game. "Maybe if we just add more complexity, then more complex things will emerge." Yes, we can study emergent phenomena with CAs (including NCAs), but this is not an end goal. It's a research tool for understanding algorithms of such high complexity that we lack the math (and possibly even the brain power or computing power) to study them without running them. But that is it. And, in fact, in many domains of science this will become truer and truer. Some algorithms are not compressible. Some math is not reducible. Some programs must be run to be "understood", or simply studied. There is a discomfort in this realization: human lifespans are relatively short, and we invented computers precisely to speed things up, to get more bang for our buck in those short lifespans. The conclusion feels counter-intuitive to all of our efforts up to this point, but it stands as a simple fact of our universe nonetheless.
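The poster child for this claim is Rule 110, an elementary cellular automaton that is Turing-complete; as far as anyone knows, there is no shortcut formula for its state at step t. You simply run it:

```python
# Rule 110: a Turing-complete elementary CA. No known closed form exists for
# the state at step t -- the only general way to "know" it is to run all t steps.
RULE = 110

def step(cells):
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 63 + [1]  # a single live cell on a ring of 64
for _ in range(30):
    print("".join(" #"[c] for c in cells))
    cells = step(cells)
```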

"... NCAs is that they can only generate (and regenerate) the single artifact they are trained on ... A very recent approach called a Variational Neural Cellular Automata mitigates this issue"

Yet even more complexity. Let's produce a multi-parameter distribution instead of a single output. More parameters.
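Mechanically, the "distribution instead of a single artifact" move looks roughly like this (my sketch; the latent size, projection, and seeding scheme are all assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
Z, C = 16, 8                             # latent and channel sizes (assumed)
proj = rng.normal(0.0, 0.1, (Z, C))      # assumed latent-to-cell projection

def sample_latent(mu, log_var):
    """VAE-style reparameterization: sample z ~ N(mu, exp(log_var))."""
    return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

def seed_grid(z, h=32, w=32):
    """Plant the latent in a center seed cell; an NCA rule would 'decode' it."""
    grid = np.zeros((h, w, C))
    grid[h // 2, w // 2, :] = z @ proj
    return grid

mu, log_var = np.zeros(Z), np.zeros(Z)   # a trained encoder would produce these
z1, z2 = sample_latent(mu, log_var), sample_latent(mu, log_var)
print(np.abs(seed_grid(z1) - seed_grid(z2)).sum() > 0)  # different z, different seed
```

The output is now indexed by z rather than memorized, at the cost of an entire encoder, a latent space, and, yes, more parameters.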

"During evolution, by iteratively using simple K-Means clustering to combine rules, an approach we call Evolve & Merge was able to reduce the number of trainable parameters"

That's great. It really is. Model compression algorithms can make for much more efficient inference. But it doesn't really help with the underlying problem: we required a ton of extra inefficiency to arrive at a system that is more efficient -- but not provably more robust, nor provably anything else we were not explicitly trying to preserve in our provable compression, conveniently ignoring the massive number of requirements that complex biological systems preserve while undergoing constant fluctuations (e.g., neurology) -- for one particular, singular task: inference.
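The compression mechanics themselves are easy to picture. A toy version (mine, not the paper's exact Evolve & Merge procedure): cluster the per-cell rule parameters with K-Means and let every rule share its cluster's centroid.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 1000 per-cell "rules", each a 32-dim weight vector (shapes are assumptions).
weights = rng.normal(size=(1000, 32))

# Merge: cluster the rules, then share one centroid per cluster.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(weights)
merged = km.cluster_centers_[km.labels_]  # every rule now points at a centroid

before = weights.size                               # 1000 * 32 weights
after = km.cluster_centers_.size + len(km.labels_)  # 8 * 32 weights + 1000 indices
print(f"trainable parameters: {before} -> {after}")
print(f"mean substitution error: {np.abs(weights - merged).mean():.3f}")
```

On random weights the substitution error is large; the bet is that trained rules are redundant enough to cluster cheaply. Nothing about the clustering preserves robustness, only the weights' geometry.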

"it is possible to directly use an NCA as a controller for an agent"

Ok, this one seems interesting. That is all that can be said at this time. But then, of course, the paper's abstract has to ruin it with its first sentence:

"Neural cellular automata (Neural CA) are a recent framework used to model biological phenomena emerging from multicellular organisms" -- https://arxiv.org/abs/2106.15240

No, they do not model biological phenomena from highly complex multicellular organisms. I digress ...
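Still, to give the one interesting idea its due: the controller setup plausibly amounts to writing observations into some cells, stepping the grid, and reading an action out of other cells. A rough mock-up under exactly those assumptions (grid size, channels, and read/write conventions are all mine, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
GRID, C = 8, 4                        # grid size and channels per cell (assumed)
w = rng.normal(0.0, 0.1, (5 * C, C))  # one shared per-cell rule (assumed shape)

def nca_controller(obs, steps=5):
    """Write obs into input cells, iterate the local rule, read out an action."""
    state = np.zeros((GRID, GRID, C))
    state[0, :len(obs), 0] = obs                    # input cells: top row
    for _ in range(steps):
        nbrs = [np.roll(state, s, axis=a) for a in (0, 1) for s in (-1, 1)]
        percept = np.concatenate([state] + nbrs, axis=-1)
        state = np.tanh(percept @ w)                # same rule at every cell
    return float(state[-1].mean())                  # output cells: bottom row

obs = np.array([0.1, -0.3, 0.05, 0.2])              # e.g., a cart-pole observation
print(nca_controller(obs))                          # a scalar action in (-1, 1)
```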

This article, and others like it, is optimistic (and filled with pretty pictures and animations) but not actually going anywhere. It's a lot of (buzz)words -- and a lot of attempted status-preserving flexing of paper-publishing skills -- for very little progress. Sometimes it genuinely seems like these "researchers" could be automated by a simple combinatorial optimization algorithm that rehashes existing ML concepts in different combinations. And that is not even to mention that this Risi person seems to get funding -- likely from local taxpayers who have no idea how much of their hard-earned money he is wasting -- to sit around and think (poorly) about "creative AI". Cute.

Again, intelligence is a solved problem. Completely. Full stop. Efficient intelligence, however, is far from solved.
