Although many science fiction stories have successfully fed us the idea that robots will one day think at a higher level than we do, there’s a good chance that future never arrives. I know that seems crazy, especially given the rate at which technology has advanced, but there are signs that there may be a limit to what can be achieved computationally.
The first of these signs is the set of physical limits that computers are starting to hit. Did you know that a computer was developed that wasn’t able to store information properly due to quantum tunneling (a particle traveling through what is essentially a solid wall)? The components that made up the system were so small that a 1 could change to a 0 (or vice versa) simply because a particle happened to appear outside the box it was trapped in. Although I’m sure someone will find a way to push things a little further, we are quickly running into problems that may have no solution because they stem from fundamental laws of the universe. Since we can’t push past those laws, there will come a day when we have achieved the maximum possible density of information and computation, and it will no longer be possible to get more out of less. At that point, the only way to add processing power will be to make the device physically larger. Although this may work for some applications, increasing the distance between components also increases the time required for information to travel through the system.
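To make the size trade-off concrete, here is a rough back-of-the-envelope sketch (my own numbers, not from the text): even at the speed of light, a signal crossing a larger device takes longer, which caps how quickly a physically bigger computer can coordinate its parts.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def crossing_time_ns(distance_m: float) -> float:
    """Best-case one-way signal travel time across distance_m, in nanoseconds."""
    return distance_m / SPEED_OF_LIGHT_M_PER_S * 1e9

# A ~3 GHz processor has a clock period of roughly 0.33 ns; light covers only
# about 10 cm in that time, so components must stay physically close together.
for distance in (0.01, 0.1, 1.0):  # chip-, board-, and cabinet-scale distances, in metres
    print(f"{distance} m -> {crossing_time_ns(distance):.3f} ns")
```

Spreading components over a metre instead of a centimetre makes the best-case signal delay a hundred times worse, before accounting for real wires being slower than light in a vacuum.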
Much of what has been achieved in recent years can be attributed to either computational power or data acquisition. Many AIs depend on access to enormous sets of data, or must perform countless repetitions of exploration to identify patterns. On closer consideration, however, it becomes evident that AI and machine-learning systems do not learn in ways that are especially novel compared to what we do. Yes, they may be faster, especially when designed for a single specific task, but the data and information they have access to is no different from our own.
What humans have been exceptional at is creative thinking and the ability to recognize patterns in complex data. In essence, we have designed computers to perform exactly the same task: identifying patterns in complex datasets. Computers may have the advantage of calculating and interpreting information millions or billions of times a second, but they are also well behind in the game, in that humans have already assessed many of the complex systems that exist. This has given humans ample time to identify patterns that AIs could likely also find, but which aren’t needed because those datasets are already solved. In cases of unsolved datasets, however, computers have the speed to outpace a human working toward the same conclusion. This doesn’t mean the automation was smarter, just that it was faster.
Let me use an example: in the game of chess, millions of humans have spent countless hours exploring possible games and game states, leading to the identification of many winning strategies. This has been possible largely because of just how many hours of chess have been played, pushing the game toward its upper limits. Now, although a chess-playing computer can come along and beat the world’s champion players, this isn’t because the computer itself is smarter; it is simply able to outpace humans in finding a slightly more optimal strategy. The key is that the computer merely had more resources at its disposal to test untested plays, letting it stumble into new winning strategies. Once those strategies were found, humans quickly came to understand why they worked and began folding them into their own move sets. So although the computer found the solution first, it did not show that it was any smarter than a human; it just had the speed to find the solution faster.
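The "faster, not smarter" point can be illustrated with a toy game (my own example, not from the text). A brute-force search over Nim "discovers" the winning strategy simply by testing every position, and the pattern it finds is exactly the one humans derived by reasoning about the game.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(stones: int) -> bool:
    """True if the player to move can force a win in Nim
    (take 1-3 stones per turn; whoever takes the last stone wins)."""
    # A position is winning if some move leaves the opponent in a losing position.
    return any(not is_winning(stones - take) for take in (1, 2, 3) if take <= stones)

# Exhaustive search rediscovers the human-known rule: a position is winning
# exactly when the pile size is not a multiple of 4.
for n in range(1, 13):
    assert is_winning(n) == (n % 4 != 0)
```

The machine finds the strategy by sheer enumeration; a human, shown its output, can immediately explain *why* multiples of 4 lose, which is the distinction the paragraph above is drawing.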
Pulling this all together, the point is that computers are not, and likely won’t ever be, more intelligent than humans; they will just be faster at finding the answers we are looking for. They will likely perform tasks we are incapable of, but only in ways we’ve already seen. Humans don’t have perfect recall, and there’s a clear limit to how much information we can actively hold in our minds, so we have built ourselves tools (like writing and mathematics) that supplement our finite memories. These tools are inherently slow, but nothing about them indicates that we haven’t already pushed the upper limits of what could be considered ‘smart’ with them. Although computers may have faster tools (wires, logic boards, memory), they are still limited by the constraints of real-world tasks. The computer may have thought of it first, but that doesn’t mean it used intelligence to get there.
Image generated by a text-to-image AI with the inspiration “Blurring the lines between humans and AI”