2021-10-03

Knowledge Creation and Induction

Here are my thoughts after listening to Bruce Nielson's excellent breakdown of the current state of artificial knowledge creation in The Theory of Anything podcast episode 26: Is Universal Darwinism the Sole Source of Knowledge Creation?

Hume explains why induction can't work. Induction is the belief that we can derive theories from observations. At least that is my understanding of the concept. This is the definition I will use as a starting point here as I dive into the subject of the podcast.

The only way to perceive reality is through observation. We also know that our theories work pretty well. So where do they come from, if not from observation? Popper says they are guesses. Conjectures. We just make shit up. That sure explains why people are so weird. We make advances in our understanding of reality by combining theories that support each other and the observations we make. We definitely use observations; we have no other way to get access to reality. But they come second, after the theory is conjectured. The more a theory depends on support from other theories and observations, the better it is: it is constrained from every direction. Theories that don't rely on such support are bad because they are easy to vary.

Some critics of current AI algorithms say that they are based on induction, that they are inductive. This is a bit puzzling, because induction doesn't exist and machine learning does. And it does create knowledge. Also, just because a process uses observations doesn't make it inductive, as we have seen above. I think the claim that an algorithm is inductive is just plain wrong. Induction is a failed philosophical approach with no support in the physical world, where machine learning lives. An alternative formulation of the criticism could be that current AI researchers believe that induction works, and therefore they will never be successful. We would reach that conclusion because machine learning uses observations. But so does Popper's epistemology, which leads me to believe we can all agree to drop that claim as well.

Another criticism is that machine learning doesn't create any new theories, no new explanatory knowledge. Therefore it cannot solve the problem of AGI, Artificial General Intelligence, which requires universal theory creation. Machine learning does create knowledge, but not of the explanatory kind. The theory involved in machine learning, e.g. how the neurons work and how they are connected, comes from the programmer, not from the algorithm.
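This division of labor can be made concrete with a toy sketch (my own illustration, plain Python, made-up data): the programmer supplies the theory, here a linear model with a step activation and the perceptron update rule, and only the weights are adjusted in response to observations.

```python
def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) observations."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = step(w1 * x1 + w2 * x2 + b)
            err = label - pred
            # The learning rule itself is theory chosen by the programmer,
            # not something the algorithm discovered.
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# "Observations" of the logical AND function.
observations = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(observations)
predict = lambda x1, x2: step(w1 * x1 + w2 * x2 + b)
```

The trained model reproduces AND, but nothing explanatory came out of the training: the model class, the features, and the update rule were all conjectured by the programmer in advance.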

I'm not sure the critics will be right here either. I do not rule out that it is possible to find a topology that can excel at the task of universal theory creation. It would be yet another jump to universality to put under our belt. We don't appear to have found it yet, though. Whatever theory we need to come up with to realize AGI, it will be created by a programmer. On its own, the fact that programmers create theories cannot be held against an approach that attempts to solve AGI. If it could, the task of creating AGI would be proven impossible, and our own existence is proof of the opposite. Once created, the AGI algorithm will be able to rediscover itself, similar in kind to a metacircular interpreter that can interpret itself.
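To give a feel for the interpreter analogy, here is a toy evaluator for a tiny Lisp-like language, written in Python (my own sketch; the language and names are made up). A true metacircular interpreter would be this same eval function written in the very language it interprets, so this only conveys the flavor of the idea.

```python
import operator

def evaluate(expr, env):
    """Evaluate a nested-list expression in an environment dict."""
    if isinstance(expr, str):          # variable reference
        return env[expr]
    if not isinstance(expr, list):     # number: self-evaluating
        return expr
    op, *args = expr
    if op == "if":                     # (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "lambda":                 # (lambda (params ...) body)
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)             # function application
    return fn(*(evaluate(a, env) for a in args))

global_env = {"+": operator.add, "-": operator.sub,
              "*": operator.mul, "<": operator.lt}

# ((lambda (n) (* n n)) 7)  ->  49
square_of_seven = evaluate([["lambda", ["n"], ["*", "n", "n"]], 7], global_env)
```

The point of the analogy: once an evaluator exists, it can in principle run a program that is a copy of itself, just as an AGI, once written, could rediscover the theory it embodies.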

It doesn't seem to be very easy to recreate oneself, though. Just look at our own struggles. Are we not universal explainers after all? Do we have a blind spot that will prevent us from achieving AGI? We have the theory of computation, the Turing machine, and our ability to program it. A blind spot appears to be ruled out. Theories that solve AGI just seem to be very rare, and all we can do is keep guessing.
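The universality we lean on here comes from the theory of computation: anything computable, including whatever program realizes AGI, can run on a Turing machine. As a reminder of how little machinery that takes, here is a minimal simulator (my own sketch; the transition table is a made-up example that inverts a binary string).

```python
def run_turing_machine(tape, transitions, state="start", blank="_"):
    """tape: list of symbols; transitions: {(state, symbol): (write, move, next)}."""
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = transitions[(state, symbol)]
        if head == len(tape):          # extend the tape on demand
            tape.append(blank)
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(tape).rstrip(blank)

# Example program: scan right, flip every bit, halt on blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "N", "halt"),
}
result = run_turing_machine(list("1011"), flip)
```

The hard part of AGI is clearly not the substrate, which fits in a dozen lines, but conjecturing the right program to run on it.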


Error-free product development

Just kidding. When we design and manufacture products, it is always possible to make mistakes. The design doesn't work as it was...