In order to build a general AI, we can take inspiration from how a biological intelligent agent (e.g., a human infant) utilizes knowledge given to it at birth and builds upon it to survive and thrive. Here is a list of the types of innate knowledge that such an agent possesses:
Innate capability to interact with the world - Embodiment - This is essential to detect problems and to test solutions to those problems (see below for what these problems are)
Innate capability to learn (universal) language and use it effectively - Language Acquisition and Use - This is essential to get feedback from the environment and other intelligent agents and provide feedback to them.
Innate capability to be curious - to ask questions and seek explanations - Curiosity - This is essential to move from one problem to another and to improve one's own situation (i.e., to move toward better and better problems)
Innate capability to want to survive - survival instinct
No ultimate objective function needs to be achieved - goal-less open-ended exploration
With embodiment, language, and curiosity in hand, the agent sets out on a lifelong quest of survival and thriving. This can be imagined as a continuous loop of solving problems as they appear, with no need for an ultimate state to be achieved. Survival and thriving are all about problem-solving at a given moment.
“All life is problem-solving” - Karl Popper
Let us imagine an infant's first minutes of life on Earth. The survival of this infant depends solely on micro-loops of steps that the baby repeats, which allow her to move from problem to problem, gain knowledge along the way, and thrive. In this process, the baby utilizes the innate expectations and abilities she was given at birth.
The process looks like this:
A. Encounter a problem (driven by the survival instinct, with embodiment, language, and curiosity as priors) - the baby encounters the problem of discomfort, or of wanting food, and does not yet know how to get the food or to comfort herself.
B. Conjecture (guess) a solution to the current problem with her current expertise in (crude) language and her curiosity - e.g., use language (crying) to solve the problem. As a sidebar, this use of language is primitive; its more sophisticated descriptive and argumentative functions develop as the infant grows.
C. Test the solution against reality (using embodiment) - e.g., the unconscious observation of whether someone comes to help when she cries.
D. If the solution solves the problem, store this newfound knowledge (this could also be called learning; hold it as a "tentative theory"); else go back to B. (This is unconscious learning of the relation between using language - crying - and resolving the problem at hand - discomfort, wanting food.)
The learned knowledge gradually becomes part of the new “expectations” upon which further new knowledge could be learned.
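The loop above (steps A–D, with learned knowledge folded back into the agent's expectations) can be sketched in code. Everything here is an illustrative assumption, not an established algorithm or API: the class name, the toy `reality` function, and the three innate actions are all invented for the example.

```python
import random

class PopperianAgent:
    """A minimal sketch of the encounter -> conjecture -> test -> learn loop.

    All names here are hypothetical; the 'reality' callback stands in
    for embodied feedback from the environment.
    """

    def __init__(self, innate_actions):
        self.actions = list(innate_actions)  # innate capabilities at "birth"
        self.knowledge = {}                  # learned expectations: problem -> action

    def conjecture(self, problem):
        # B: reuse a previously learned "tentative theory" if one exists,
        # otherwise guess among the innate actions (curiosity-driven trial).
        if problem in self.knowledge:
            return self.knowledge[problem]
        return random.choice(self.actions)

    def solve(self, problem, reality, max_tries=100):
        # A: a problem has been encountered; loop B -> C -> D until it is solved.
        for _ in range(max_tries):
            action = self.conjecture(problem)      # B: guess a solution
            if reality(problem, action):           # C: test against reality
                self.knowledge[problem] = action   # D: store as tentative theory
                return action
        return None                                # unsolved within the budget

# Toy "reality": crying summons a caregiver, which resolves hunger.
def reality(problem, action):
    return problem == "hunger" and action == "cry"

random.seed(0)  # for reproducibility of the random guessing
agent = PopperianAgent(innate_actions=["cry", "kick", "sleep"])
agent.solve("hunger", reality)
```

After the loop succeeds, `agent.knowledge` maps `"hunger"` to `"cry"`: the solved problem has become part of the agent's expectations, so the next encounter with the same problem skips the guessing entirely, mirroring how learned knowledge becomes the base for further learning.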
This "new" knowledge could be existing knowledge that is simply unknown to the agent - this is what every general intelligent agent goes through when learning a task for the first time, like riding a bicycle.
Another possibility is that the new knowledge is net-new - not yet known to anyone - and this is how discoveries and inventions take place.
There is a lot to learn from the biological counterparts of a future AGI. It may turn out that some of these traits are not directly transferable when building AGI; however, biological intelligence is the only working instance we have to learn from as we begin integrating generality into our current AIs.