There is a concept called the “Ladder of Abstraction” created by American linguist S. I. Hayakawa in his 1939 book Language in Action. It describes the way that humans think and communicate in varying degrees of abstraction.
Humans constantly interact with our environment, automatically referencing concepts (objects, ideas) at different levels of abstraction. That sounds very abstract... To be more concrete, let me give some examples of what I'm trying to express. Let's take a key. A key can be thought of as something to open doors, a metal object, a thing, matter, a sharp tool to puncture things, a lever for opening bottles, a hard solid object, a weapon to hurt someone, an object to write in the dirt, and probably other things.
How about an apple tree? It's a living organism, a plant, a tree, material you can convert into construction wood, a food source, firewood for warmth, something to climb to see high above, fertilizer for other plants, a source of income as lumber you sell, a place to sleep if you didn't want to touch the ground, and more.
Those examples use objects at different levels of abstraction, but I think this idea applies to ideas as well.
"Let's meet for coffee" is an abstract statement; there are no solid objects involved, yet it could mean different things: let's go drink actual coffee, let's meet up somewhere to discuss an issue, let's meet up and drink something that doesn't have to be coffee, or let's just hang out.
How do we choose the appropriate level of abstraction when referring to an object: Honda Civic > car > vehicle > thing > this?
How about shiba inu > dog > Canis familiaris > four-legged creature > mammal > living being > organism > thing > this? How is a specific class of an item chosen? Part of it has to do with context: if we are discussing animal kingdom classification, we will probably think of "Canis familiaris." Is it the most familiar word that pops into our mind first, or the most relevant one? By moving up and down the abstraction ladder, you will most likely change how you perceive the concept, and that may change your behavior.
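Programmers may recognize these ladders: a class hierarchy is one (loose) way to model them in code. The sketch below is purely illustrative, with hypothetical class names invented for this example; it only captures the "is-a" rungs of the ladder, not the functional readings (a key as a lever, a tree as firewood) described above.

```python
# A minimal sketch of an abstraction ladder as a Python class hierarchy.
# All class names here are illustrative, not from any real library.

class Thing:
    """The most abstract rung: just 'a thing'."""

class LivingBeing(Thing): pass
class Mammal(LivingBeing): pass
class Dog(Mammal): pass
class ShibaInu(Dog): pass

rex = ShibaInu()

# "Moving up the ladder" is asking which more abstract categories
# the same concrete object belongs to. Python records this chain
# in the method resolution order (__mro__).
ladder = [cls.__name__ for cls in type(rex).__mro__ if cls is not object]
print(" > ".join(ladder))  # ShibaInu > Dog > Mammal > LivingBeing > Thing
```

Note how a single concrete object (`rex`) is simultaneously a valid instance of every rung above it, which is exactly the point: the object doesn't change, only the level at which we refer to it.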
The main point I want to make is that any sort of computer intelligence must be able to fluidly move up and down this ladder of abstraction if it is to reach human-level intelligence. Not just to be able to communicate with humans, but to be able to see our world from different points of view. And as of now, this is not possible with computers.