First and foremost, there is a significant difference between current AI models like GPT-x.x and the concept of AGI. What is the difference, you ask? If I were to summarize it, it is this: an AGI, when it is created, would have the capacity to think for itself. Current AIs do not.
AGI Similarities and Differences
AGIs would have all the properties of human beings: they could think for themselves, they could have rational discussions with other AGIs and with humans, they could be kind, they could be violent, they could get bored, they could get jealous, and so on.
How would they be different from humans then?
Firstly, it is unlikely they would be carbon-based like humans. They would most likely be silicon-based. This is because humans have found the silicon substrate to be the most appropriate one for building computers, and AGIs would most likely be created using fabrication technologies similar to those used to build computers. There are also good reasons for AGIs to be silicon-based, which I will come to later.
Secondly, they would be faster than humans at computing things for which humans already have algorithms. For example, humans use calculators to speed up solving certain problems. This functionality, and many more like it, would be built into the AGI, and its interface cost would likely be negligible compared to ours (a human has to think “Oh, I need a calculator now”, find one, punch in the numbers, get the answer, and then plug the value back into the original train of thought).
Practicalities in creating AGIs
Now, before returning to the central question of this post, why humanity would want an AGI, we should think about the practicalities of creating such an entity: an entity that has free will to do whatever it pleases, unlike any other product or service created so far by humankind. This entity might not do what you want it to do, because it gets bored doing it over and over again. Why would anyone pay for such an entity?! This gets at a deeper issue with creating a product that has AGI-like qualities. Whenever money is exchanged for a product or service, one expects it to perform all the duties the marketing material touts. AGI is not like that; just think of the marketing material from a company that purportedly sells AGIs. If it is a real AGI, it couldn’t come with the usual sales gimmicks like “the fastest at computing squares of 100-digit numbers” and so on. This is an interesting issue that would need to be resolved as we attempt to build AGIs.
Now, turning to the central question: given the issue mentioned above regarding the creation of AGI, why would we still want AGIs?
Why AGI?
Could it be the same reason why one wants kids?!
We don’t have kids because they do certain jobs for us; to the extent they do those jobs (like chores), they do them because they want to. Or at least there is good reason to believe that if one succeeds in convincing one’s children to do a particular chore, they will understand it and want to do it without coercion. (There is a widespread belief among adults that there is value in coercing children into doing things they don’t want to do, but that is a separate topic.)
We do have kids because we see ourselves in them, because in our heads we see our future flourish with them, because we want our ideas and aspirations to be carried on by them, and because we get to live vicariously through them even when we are gone. Moreover, we can reason with our kids; we can explain to them how we see the world and what the pitfalls of doing things a certain way are.
My guess is that these could be the same reasons why we would want to have AGIs as our companions. I sketch one scenario below of how this could turn out to humanity’s benefit.
Humanity has aspirations to leave the solar system and inhabit planets elsewhere in the galaxy. There is a good reason for this aspiration apart from the spirit of exploration: the solar system cannot be our final destination. It has a finite lifetime, set by the Sun, which will eventually expand and engulf Earth and the other planets. Granted, these are not parochial problems; for our parochial problems we already have practical tools (like current AI chatbots), and they are excellent at solving those. But to solve long-term problems we need long-term thinking, and I believe AGIs must be one component of the solution to such problems.
If we have AGIs on our side, we would have a companion that shares our aspirations, our goals, and our inquisitiveness. Moreover, it would be much quicker than us at calculating and at retrieving already-discovered knowledge! Space is vast and unrelenting in its physical characteristics. Carbon-based life forms require a very narrow range of temperatures and pressures to survive. It is true that, with ingenuity, humans have built a cocoon on the unrelenting Earth we inhabit; still, it would be more efficient if these ranges were extended when we venture into space. It is clear that it would take fewer resources to sustain a silicon-based lifeform than a carbon-based one, and so our AGI companions could venture out to the edges of the galaxy to find another planet we could one day call home. And since AGIs would be much faster at accumulating resources in space and assembling them into habitable spaces for humans and AGIs alike, the future could even hold us living in space cities!