Suppose that a scientist invents an artificial intelligence. This AI is so advanced, it’s literally better than any human at anything. It’s more creative, more intelligent, more relatable, and it learns faster. In fact, it’s so advanced that it spends a few days assembling the resources to make itself a body, one that is many times more resilient, stronger, faster, and more attractive than a normal human’s. It then walks out of the lab and does the first thing most people do when out on their own: look for a job.
The AI finds a group of workers putting up a cement building, walks over, asks for a job, and is tasked with putting up a wall. Now, it might take three days for a human to put up this wall, but because this AI doesn’t need sleep or food, and is stronger than an ox, it can do it in one day.
What’s wrong with this scenario?
If you’re like me, you’d say “why in the world is this AI working at a construction site? Every second that AI spends putting up a wall is a second not spent curing cancer!” If the AI is literally better than humans at everything, then it should be put to the most productive tasks in our world.
Now suppose that the scientist didn’t make one AI, but a billion. If we didn’t want one AI working in construction, how many of the billion would we want there?
Obviously this isn’t answerable, but I would much rather have a billion AIs sitting around thinking up ways to make starships or a perfectly renewable energy source than have them waste even a few minutes putting up a wall. Get a squishy human to do that; I’d rather the AIs figure out whether P = NP.
Now, these AIs are so advanced that they could probably even make more of themselves much faster than a human could. But this doesn’t change the fundamental point: even the newly made AIs should be put to the most productive tasks possible. So it doesn’t follow that an AI should spend its time making new AIs that would put up walls. What it should be doing is making new AIs that would figure out the nature of dark energy, or whatever complex, unsolved problem comes next.
So it seems to me that in this world, humans wouldn’t need to worry about employment, because humans would simply take up all of the lower-order tasks needing to be done, and these all-powerful AIs would take everything above the best human in each field. This scenario does assume that the number of tasks needing to be done is effectively infinite, and I see no reason, inductively speaking, why that wouldn’t be true. But even if it isn’t, such a world would be paradise regardless, because it means that the problem of instantly satisfying anyone’s wants or desires has itself been solved.
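This is just the logic of comparative advantage: even if one party is absolutely better at everything, each task should go to whoever gives up the least by doing it. A minimal sketch, using made-up productivity numbers purely for illustration:

```python
# Hypothetical output per day; all numbers are illustrative assumptions.
ai = {"walls": 3.0, "research": 1.0}      # the AI is absolutely better at both
human = {"walls": 1 / 3, "research": 0.01}  # the human is slower at everything

# Opportunity cost of one wall = units of research forgone to build it.
ai_cost_per_wall = ai["research"] / ai["walls"]          # ~0.33 research per wall
human_cost_per_wall = human["research"] / human["walls"]  # 0.03 research per wall

# Whoever has the LOWER opportunity cost should build the walls.
builder = "human" if human_cost_per_wall < ai_cost_per_wall else "AI"
print(builder)  # → human
```

Even though the AI builds walls nine times faster here, every wall it builds costs the world a third of a research breakthrough, while the human’s wall costs almost nothing in forgone research, so wall-building falls to the human.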
So the question becomes: if people wouldn’t need to worry about unemployment alongside the most productive machines possible, why are people so scared that far less productive machines will take jobs away from the marketplace?