There is a fierce debate about whether autonomous vehicles are possible, because driving seems to require all the sophistication of human intelligence. The argument goes that there are too many variables and situations demanding human-level judgment, so either we never get autonomous vehicles or we first solve artificial general intelligence (AGI). But we will absolutely have autonomous vehicles in our lives on a daily basis. Tesla already ships its "beta" self-driving mode, although as of 2021 it's a little too early. Mercedes just received approval for Level 3 autonomy, in which they take full responsibility for a crash. Waymo has opened its taxi service to riders in certain cities. The pieces are falling into place; the result will just look a little different from what we expected.

Computer science is the discipline of taking problems from our world and putting them into a form a computer can optimize. At a high level, the self-driving problem can be formulated as "get to the destination without causing harm." Getting to a destination works most of the time, but there are many edge cases where harm can happen, and researchers are actively working to fix them. We do not need to solve AGI to have autonomous vehicles. Autonomous vehicles will look like all the other problems that computer science and artificial intelligence have solved. We thought chess, Go, speech recognition, and other problems required real AGI, but after they were solved with new algorithms, they brought us nowhere near closer to understanding human intelligence or AGI. With self-driving cars, each of the remaining problems can be optimized with a combination of technology, policy changes, and infrastructure improvements.

People think that because cars can go anywhere, our self-driving cars have to go everywhere before self-driving is solved. Now think about a train system: it runs on tracks, it is restricted, it can't go everywhere, and that is exactly what makes it easy to automate.
A car can go anywhere, so it feels impossible to fully automate. In practice, though, most cars only go where roads have been paved for them, so the infrastructure is already purposely built for cars. If we restrict or avoid certain difficult situations, such as the road to Hana in Maui or highly dangerous intersections, while we work out the issues, then we can accelerate self-driving in the safe areas. It doesn't need to be an all-or-nothing situation. If we think of the autonomous vehicle system as a constantly expanding train system, then we can accelerate the road to autonomous vehicles.
Eventually, autonomous vehicle infrastructure will look like train infrastructure on steroids: there will be certain "tracks" that cars can drive on, just with 1,000× more tracks, and it will not look like true artificial intelligence. Using the same analogy, you could say that all industrial artificial intelligence research is about taking the car problem and turning it into a train problem, because it is much easier to optimize for a restricted set of tracks.
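The "turn the car problem into a train problem" idea can be sketched as ordinary graph search: once routing is restricted to a certified subset of roads (the "tracks"), it becomes a well-understood optimization rather than an open-world AI problem. Here is a minimal illustration using Dijkstra's algorithm; the toy map, edge weights, and `certified` set are invented for this example, not taken from any real system:

```python
import heapq

def shortest_route(edges, start, goal, allowed=None):
    """Dijkstra over a road graph given as (src, dst, cost) triples.
    If `allowed` is given, only edges in that set are searchable --
    this is the "train" restriction on the "car" problem."""
    graph = {}
    for u, v, w in edges:
        if allowed is not None and (u, v) not in allowed:
            continue  # road exists but is not certified for autonomy
        graph.setdefault(u, []).append((v, w))

    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None  # no route inside the allowed network

# Toy map: a risky shortcut A->D versus a longer certified route A->B->C->D.
roads = [("A", "D", 2), ("A", "B", 1), ("B", "C", 1), ("C", "D", 1)]
certified = {("A", "B"), ("B", "C"), ("C", "D")}  # the "tracks"

shortest_route(roads, "A", "D")                     # unrestricted: cost 2
shortest_route(roads, "A", "D", allowed=certified)  # restricted: cost 3
```

The restricted search may return a longer route, or none at all, but every route it does return stays on roads that have been vetted. Expanding the certified set over time is exactly the "constantly expanding train system" described above.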