We can brute force AGI with a realistic physics simulator

“An unknown, but potentially large, fraction of animal and human intelligence is a direct consequence of the perceptual and physical richness of our environment, and is unlikely to arise without it.” – Beattie et al., “DeepMind Lab” (2016)

AGI is arguably the hardest problem mankind can work on, and we do not yet know how to solve it.

An AGI cannot understand our world the way we do unless it has access to the same data and world that we do. Think about concepts like “rough”, “cold”, “slippery slope”, “uphill battle”, “get crushed”, “crumble”, “I’ll cover for you”, “you’re on a collision course”, “accelerating”, “dodging a bad situation”, “on top of that”, etc. Computer algorithms currently have no way to understand the meaning of these things. The idea that grounded perception and embodiment are required for intelligence has been studied in philosophy and science for a while, but only recently has computer science warmed up to it. I’ve written about grounded learning in a few articles. If we assume this idea is correct, then it is imperative that we build a physics simulator that mimics our real world as closely as possible so that we can train computers inside of it.

We currently train AI programs by feeding them millions of lines of text, which is completely different from how we learn. It’s the equivalent of taking a newborn baby, strapping it to a chair where it is completely immobile, and feeding it thousands of books. No movement is allowed, not even eye saccades. Do you think a baby could be bootstrapped to intelligence if it were trained like that? Of course not, but that is how we train computers.

So we absolutely must bridge that gap so we can at least try to teach computers the way we learn ourselves. With a realistic world simulator, we could quickly iterate over millions of variations of neural networks, algorithms, and objective functions. We could brute force our way to a model that does have a grounded understanding of our concepts, even if it’s a simulated grounded understanding.
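To make the brute-force idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `evaluate_in_simulator` is a stand-in for rolling a candidate model out inside a world simulator and scoring it on grounded tasks, and the configuration space is a toy with two knobs instead of millions.

```python
import random

def evaluate_in_simulator(config):
    """Stand-in for evaluating one candidate model inside a simulator.

    Here the 'score' is just closeness to an arbitrary target config;
    a real run would measure performance on grounded tasks.
    """
    target = {"layers": 4, "lr_exp": -3}
    return -(abs(config["layers"] - target["layers"])
             + abs(config["lr_exp"] - target["lr_exp"]))

def brute_force_search(trials=10_000, seed=0):
    """Randomly sample network variations and keep the best one."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        # One of the millions of network/objective variations to try.
        config = {"layers": rng.randint(1, 16),
                  "lr_exp": rng.randint(-6, -1)}
        score = evaluate_in_simulator(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = brute_force_search()
print(best, score)
```

With a fast enough simulator, the interesting part is not the search loop (random search is the crudest possible choice) but how cheaply each `evaluate_in_simulator` call can run.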

Many of the recent breakthroughs in machine learning occurred after large amounts of data became available. With the availability of MNIST, there was an explosion of innovation in optical character recognition (OCR). With the release of ImageNet, there was an explosion of image classification models using deep learning. There are datasets for autonomous driving, and the list goes on. But we have no such dataset for grounded meaning. On top of that, it is unlikely that you could create a static raw dataset that a computer could consume and come away with grounded meaning. MNIST is a collection of thousands of digits. ImageNet is a collection of millions of images. They are static data dumps that you process. To learn grounded understanding, AI programs will most likely need to explore and run millions of experiments to understand relationships and cause and effect. To run those experiments, you need a simulator the AI programs can inhabit and interact with.
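The contrast between a static data dump and an interactive simulator can be sketched in a few lines of Python. The environment and agent here are hypothetical toys of my own invention, but they show the key difference: the agent chooses an action and only then observes the consequence, which is what lets it learn cause and effect rather than passively absorbing examples.

```python
import random

class HotStoveEnv:
    """A toy interactive environment (hypothetical): the agent only
    learns the stove is 'hot' by experiencing the outcome itself."""

    def reset(self):
        return "near_stove"

    def step(self, action):
        # Cause and effect: touching hurts, avoiding does not.
        reward = -1.0 if action == "touch" else 0.0
        return "episode_over", reward, True  # observation, reward, done

class TabularAgent:
    """Remembers the average outcome of each action it has tried."""

    def __init__(self, rng):
        self.rng = rng
        self.value = {"touch": 0.0, "avoid": 0.0}
        self.count = {"touch": 0, "avoid": 0}

    def act(self, observation):
        return self.rng.choice(["touch", "avoid"])  # explore by trial

    def update(self, action, reward):
        # Running average of the reward each action produced.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

# Unlike training on a static dataset, the agent runs its own experiments.
env = HotStoveEnv()
agent = TabularAgent(random.Random(0))
for _ in range(100):
    obs = env.reset()
    action = agent.act(obs)
    obs, reward, done = env.step(action)
    agent.update(action, reward)

print(agent.value)  # touching should now look worse than avoiding
```

No frozen collection of (image, label) pairs can substitute for that loop, because the data the agent needs depends on the actions it chooses to take.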

So if we had a realistic physics simulator, we could run millions of experiments, essentially brute forcing our way to computers with grounded understanding. That may be enough for AGI; at a minimum, it would be a life-changing achievement.

The problem is that our current world simulators are not good enough.

Many scientists already use general world physics simulators for experiments. If you have ever played a video game like Grand Theft Auto, Microsoft Flight Simulator, or Fortnite, then you have already used a world physics simulator. Simulators and models are approximations of our real world; they are never 100% exact. Simulators are large-scale models, and as we know, all models are wrong, but some are useful. Game engines are built not with the goal of realism, but of running fast enough that the game feels fluid. The physics are usually exaggerated so that characters in the game jump higher, run faster, or have weapons that cause more damage. Microsoft Flight Simulator aims for realism rather than exaggerated physics; people even train to fly in it! But its goal is realistic flying, so other physical phenomena are missing: combustion modeling, fluid modeling, temperature modeling, particle physics, chemistry models, and so on. There are physics engines focused more on realism and scientific research, such as MuJoCo, DeepMind Lab, and PyBullet. They all make different tradeoffs between accuracy and speed.
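One way to see the accuracy-versus-speed tradeoff these engines make is to vary the integration timestep. This self-contained sketch (my own toy, not code from any engine above) drops an object from 10 m using a semi-implicit Euler integrator, the kind of simple integrator game engines commonly use: a coarse timestep is cheap but lands far from the analytic fall time, while a fine timestep is accurate but costs roughly a thousand times more steps.

```python
def fall_time(height, dt, g=9.81):
    """Semi-implicit Euler integration of a dropped object.

    dt is the tradeoff knob: a large timestep is cheap but inaccurate,
    a small one is accurate but needs many more steps per second of
    simulated time.
    """
    y, v, t, steps = height, 0.0, 0.0, 0
    while y > 0.0:
        v += g * dt   # update velocity first (semi-implicit Euler)
        y -= v * dt   # then position, using the new velocity
        t += dt
        steps += 1
    return t, steps

exact = (2 * 10.0 / 9.81) ** 0.5               # analytic fall time from 10 m
coarse, coarse_steps = fall_time(10.0, dt=0.1)     # fast but sloppy
fine, fine_steps = fall_time(10.0, dt=0.0001)      # ~1000x the work
print(exact, coarse, fine)
```

Now imagine tuning that knob not for one falling ball but for combustion, fluids, temperature, and chemistry simultaneously, and the engines' compromises start to look inevitable.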

Due to our current computer architectures, it is currently impossible to build physics simulators that fully mimic our world. CPUs are not fast enough to simulate everything; many dynamics and models are left out because they take too long to compute to be useful. On top of that, our computer chips still largely run sequentially, one instruction after another. In the real world everything happens concurrently, but we still do not have true parallelism in our chips. Our physics models will need to run in parallel to produce more accurate results.
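The cost of simulating everything at once is easy to quantify with a back-of-the-envelope sketch (my numbers, not drawn from any specific engine): a naive n-body timestep must evaluate every pair of interacting bodies, so the workload a sequential core grinds through grows quadratically with the size of the simulated world.

```python
def pairwise_interactions(n):
    """Force pairs a naive n-body timestep must evaluate, one at a
    time on a sequential core: n choose 2 = n * (n - 1) / 2."""
    return n * (n - 1) // 2

small = pairwise_interactions(1_000)   # 499,500 pairs per step
large = pairwise_interactions(2_000)   # 1,999,000 pairs per step
print(large / small)                   # doubling the world ~4x the work
```

Truly parallel hardware attacks exactly this: the pairs are independent within a step, so in principle they could all be evaluated at once instead of one after another.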

Alternatively, you could try to build a purposely stripped-down physics simulator that has only what is needed to teach computers grounded meaning. The problem is that we don’t know which minimal features of our physical world are required. I have started researching that question for another article.

What about robots?

If you can’t go down the simulator route, you could try bringing robots into the real world. That is also extremely hard. Robots with sensors that can interact with our world are still too primitive. The state of the art for most robots is still wheeled locomotion. Touch sensors are extremely primitive, almost nonexistent. Our sensors for sight (cameras) and hearing (microphones) are good, though, and are used every day. Since we do not know which senses are needed for grounded learning, it would be painfully slow to build hardware prototypes and test them in the real world. On top of that, once you had a robot that seemed decent, you would need to build thousands of them to run enough experiments.

So, in summary: if we had a realistic physics simulator that mimicked our world, we could brute force grounded learning and (maybe) get to AGI, but it’s not possible with current technology.
