Cause and effect is one of the central ideas that came out of my model-building-engine thought experiment. If humans create models of our world, and these models are rich interactive simulations that we can reason about, then they are causal models.
Judea Pearl has made two huge contributions to computer science and artificial intelligence. He pioneered Bayesian networks, a simple way to model which factors may have contributed to an observed event. They are probabilistic graphical models built from conditional dependencies over directed acyclic graphs (DAGs).
Later, he invented the do-calculus for representing cause and effect. He believes machine learning has hit a wall because current systems do not model cause and effect; they can't build models of the world the way humans do. When I read do-calculus examples, they look like regular probability equations P(Y|X) with a do(X) operator introduced, as in P(Y|do(X)). The do(X) means that an action or experiment is actually performed, not merely observed. [I'm writing a detailed do-calculus article]
Pearl has an idea called the ladder of causation, which has three rungs. The lowest rung is seeing: observing data and associations. The second rung is doing: performing actual experiments and interventions. The highest rung is imagining: asking counterfactual questions about what would have happened.
If we look at machine learning, computers are at the seeing level, with humans assisting by feeding in the data. Animals reach level 1, but not the other levels. Computers can't reach the other levels yet because of our primitive technology and our primitive understanding of the human mind. Humans can do all three levels, and imagination is probably THE core contributor to our intelligence.
Pearl thinks causal explanations, not dry facts, make up most of human knowledge, and should be the cornerstone of machine intelligence. Current machine learning algorithms are fed only dry facts.
He has a great interview with Lex Fridman here. Some insights from it: he thinks quantum mechanics is useless to us from a causality point of view, and that life is basically deterministic.
He invented the do-calculus because ordinary mathematics can't model cause and effect. If you look at an algebra equation like X = 5Y, you can't tell whether X caused Y or Y caused X; the equals sign is symmetric.
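A tiny sketch of that asymmetry (my own illustration, not Pearl's notation): a structural assignment like X := 5Y has a direction that the symmetric algebra equation lacks, and an intervention performs "surgery" on one variable while leaving the other mechanisms intact:

```python
# The algebra X = 5Y can be rearranged freely, but a structural model
# fixes a direction: here Y is the cause and X := 5Y is the mechanism.
def intervene_on_y(y):
    """do(Y=y): Y is set, then the mechanism X := 5Y produces X."""
    return {"Y": y, "X": 5 * y}

def intervene_on_x(x):
    """do(X=x): X is set by surgery; Y keeps its own mechanism
    (here just a constant, Y = 1), unaffected by forcing X."""
    y = 1.0
    return {"Y": y, "X": x}

print(intervene_on_y(2.0))   # X follows Y: {'Y': 2.0, 'X': 10.0}
print(intervene_on_x(10.0))  # Y does not follow X: {'Y': 1.0, 'X': 10.0}
```

Intervening on the cause changes the effect, but intervening on the effect leaves the cause alone, an asymmetry the equation X = 5Y simply cannot express.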
He thinks consciousness is just an agent having a model of the world and of itself. I also believe that this is enough to model some form of consciousness.
His advice for creating intelligent systems: “There are no dumb questions. Ask your question. Solve them your own way. Don’t take no for an answer. Keep trying and you will know if your answer is right or wrong eventually. Try to understand things your way.”
He has been driven by a singular goal for many years: to create true artificial intelligence that can reason like a scientist.
He already has his tombstone ready: “The fundamental law of counterfactuals. Put a counterfactual in terms of a model surgery.”
To him, metaphors are expert systems, and humans are great at making them. We use metaphors to take a problem we don't understand well and map it onto a problem we know very well, so we can understand the original problem better. This theory fits well with my ideas of processing information through different lenses. Unfortunately, we do not yet know how to program metaphors into computers. Douglas Hofstadter also believes that the core of human intelligence is analogies and metaphors, and he has written about this extensively. One of my favorite books is Hofstadter's Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. Hofstadter attempted to program analogy-making algorithms when he was still an active professor.
I’d love to read his whole book, “The Book of Why”, but it’s 423 pages and I barely have time to read books these days; fortunately I did browse through some of it. He proposes a “mini Turing test”: we give a computer a story (encoded in any way) and test whether it can answer causal questions that a human could answer. Before a computer can acquire causal reasoning, we must find a way to represent this data in a computer. He mentions: “One major contribution of AI to the study of cognition has been the paradigm: ‘Representation first, acquisition second.'”
I’m so glad he mentions that, it is a core problem I have been thinking about and researching. A related quote I like to think about from Linus Torvalds: “Bad programmers worry about the code. Good programmers worry about data structures and their relationships.”
You can tell Judea is an intellectual genius, he mentions: “Passing this minitest has been my life’s work—consciously for the last twenty-five years, and subconsciously even before that.”
Judea Pearl is a great scientist, engineer, philosopher, and physicist. He expresses his true thoughts, invents new paradigms, has the will to work on core problems for long periods of time, and is actually making real progress in the direction he believes will achieve his dream. My type of hero.
Some papers in my reading list:
Introduction to Judea Pearl’s Do-Calculus https://arxiv.org/abs/1305.5506
The Do-Calculus Revisited https://arxiv.org/abs/1210.4852
Replacing the do-calculus with Bayes rule https://arxiv.org/abs/1906.07125
Pearl’s Calculus of Intervention Is Complete https://arxiv.org/abs/1206.6831
Towards Causal Representation Learning https://arxiv.org/abs/2102.11107
Artificial Intelligence is stupid and causal reasoning won’t fix it https://www.frontiersin.org/articles/10.3389/fpsyg.2020.513474/full
Machine Learning for Causal Inference in Biological Networks: Perspectives of This Challenge https://www.frontiersin.org/articles/10.3389/fbinf.2021.746712/full
Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution https://arxiv.org/abs/1801.04016
Causal Reasoning from Meta-reinforcement Learning https://arxiv.org/abs/1901.08162