I think the human mind is an autonomous, interlinked, model-building database of the world. I define models and concepts to be the same thing. One important aspect of this system that has been nagging me is the idea that the human mind can examine ideas/models/concepts through different filters. In my article about the mind as a simulation engine, I briefly discussed this idea, but I want to investigate it further.
The mind can literally view things from different angles: I can imagine a dog from the side, from above, standing in front of it, from up close, from far away, or from any angle I choose. This can also be called changing your perspective. Changing the angle changes the sensory input even though we are still looking at a dog.
If we continue with the camera analogy and expand the idea further, the human mind has the ability to view ideas through different points of view that are related not to angles and distance, but to any model we already have. Viewing things through filters or lenses transforms the subject of focus in multiple ways. Imagine adding a special-effect lens to your camera: a picture of your best friend turns into an anime cartoon. Continuing with the dog example, I can filter it through the concept of “food” and all of a sudden I can think of it as “moving meat”, or through the lens of “I’m cold” and now I might go snuggle with my dog. There are countless models we can use as filters, such as “companion”, “animal”, “intelligent”, “stupid”, and “fun”, that change the meaning of the main concept of “dog”.
Another way to look at this idea is as “operating modes”. Most organisms only operate in the mode of “homeostasis”. Everything is viewed through the lens of eating, resting, and surviving. Humans can change their operating mode all the time: “survival mode”, “study mode”, “vacation mode”, “save the relationship mode”, “healthy mode”, etc. Switching modes changes how we react to everything.
It could be that this idea is the same as, or related to, other concepts I wrote about in previous articles, but I think there is something special here.
Every model in the human mind seems to have cognitive-computational properties that can modify other concepts, changing behavior and thought. If we look at it like a computer program, the code might look something like:
```
mode = "hungry";                  hunt_for_food()  # hunt for delicious stuff
mode = "stocking up for winter";  hunt_for_food()  # can take your time
mode = "about to die of hunger";  hunt_for_food()  # eat anything you can find ASAP
mode = "bored";                   hunt_for_food()  # may hunt for a few minutes, then give up
```
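To make the pseudocode above concrete, here is a minimal runnable sketch in Python. The mode names come from the example; the specific behavior parameters (selectivity, patience) are my own illustrative assumptions, not a claim about how minds actually encode modes:

```python
# Hypothetical sketch: the same routine, hunt_for_food, behaves differently
# depending on the current mode. All parameter values are illustrative.

MODE_BEHAVIORS = {
    "hungry": {"selectivity": "delicious only", "patience_minutes": 60},
    "stocking up for winter": {"selectivity": "anything storable", "patience_minutes": 600},
    "about to die of hunger": {"selectivity": "anything edible", "patience_minutes": 5},
    "bored": {"selectivity": "whatever looks interesting", "patience_minutes": 3},
}

def hunt_for_food(mode):
    """The hunting 'code' stays the same; the active mode parameterizes it."""
    behavior = MODE_BEHAVIORS[mode]
    return (f"hunting with selectivity={behavior['selectivity']!r}, "
            f"giving up after {behavior['patience_minutes']} minutes")

print(hunt_for_food("hungry"))
print(hunt_for_food("bored"))
```

The point of the sketch is that the action itself is never rewritten; only the mode changes, and the same procedure produces different behavior.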
At first glance, it looks like compositionality, but compositionality is really about combining meanings: “smelly + dog”, “rocking + chair”, “flying + vehicle”, “plastic + house” give completely new meanings. What I am proposing is that models/concepts can alter the cognitive computation of other models/concepts. It is similar to how an adverb modifies a verb: “quickly eat”, “safely drive”, “victoriously punched”, etc. But adverbs only modify verbs, only change actions; this system is more general than that.
This is an important topic to me because current reinforcement learning is brittle. We train our models to achieve one goal and they can usually do it, but if we change the goal slightly, the model completely falls apart. Humans and other living organisms can change their goal and usually still achieve it. I’m always wondering how to design agents that can achieve any goal, and this idea of “different filters” has me wondering if we could build this property into reinforcement learning systems to allow for flexible goals. Imagine an original goal of “go home” passed through the filter of “hungry”, modifying the goal to “pick up pizza on the way home”. The original goal of “buy a coffee” through the filter of “I’m hot” may become “get an iced coffee”.
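The goal-plus-filter idea above can be sketched as filters that rewrite goals before an agent plans. This is a toy illustration, not a real RL API; the filter functions, goal strings, and rewrite rules are all hypothetical:

```python
# Hypothetical sketch: internal-state "filters" rewrite a goal before the
# agent acts on it. The rewrite rules below are hard-coded for illustration.

def hungry_filter(goal):
    # The internal state "hungry" injects a food sub-goal into the plan.
    if goal == "go home":
        return "pick up pizza on the way home"
    return goal

def hot_filter(goal):
    # The internal state "I'm hot" biases choices toward cold options.
    if goal == "buy a coffee":
        return "get an iced coffee"
    return goal

def apply_filters(goal, active_filters):
    """Pass the goal through each active filter in order."""
    for f in active_filters:
        goal = f(goal)
    return goal

print(apply_filters("go home", [hungry_filter]))
print(apply_filters("buy a coffee", [hot_filter]))
```

In a real system the rewrite rules would presumably be learned rather than hard-coded, but the structure is the interesting part: the original goal survives, and the agent's state transforms it rather than replacing it.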
Another way to look at this idea is as a state modifier. The definition of state is: mode or condition of being. This is very similar to “operating mode” from above; the difference is that with “operating modes” you purposely put yourself into a mode, while your state is changed naturally by internal and external factors. The state you are in will directly modify how you view things. You want to eat less if you are sick, you will want to sleep if it’s late at night, you will want to run faster if a mother bear is chasing you than if a baby bear is. In some ways, the state something is in acts like an adjective modifying a noun, and that adjective + noun relationship has certain properties and consequences. If you see two bears, you will be very scared. If you look closer and notice one is a hurt bear in pain and the other is a smart bear, you will probably focus on getting away from the smart bear as fast as possible.
Another way to look at this idea is as a focus or ignore mechanism. Similar to the camera analogy, if we are looking at a scene at a concert, we may want to focus on a single person, but this mechanism is a lot more powerful than that. If the concept of “that person is in danger” becomes active, it will modify our view and behavior so that we ignore everyone else and try to help them. By changing the point of view, we can tune our mind to focus on only certain parts of an idea we are examining. If we are freezing and looking at our dog as a warming mechanism, we don’t care if the dog smells bad or how smart it is.
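The focus/ignore mechanism can be sketched as a lens that masks out irrelevant attributes of a concept. The attributes and lens definitions below are invented for illustration; nothing here claims to model real attention:

```python
# Hypothetical sketch: a "lens" selects which attributes of a concept are
# attended to; everything else is ignored. All feature names are illustrative.

DOG = {"warmth": "high", "smell": "bad", "intelligence": "medium", "loyalty": "high"}

LENSES = {
    "I'm freezing": {"warmth"},                      # only warmth matters
    "choosing a guard dog": {"intelligence", "loyalty"},
}

def view_through(concept, lens):
    """Return only the attributes the active lens considers relevant."""
    relevant = LENSES[lens]
    return {k: v for k, v in concept.items() if k in relevant}

print(view_through(DOG, "I'm freezing"))
print(view_through(DOG, "choosing a guard dog"))
```

The same underlying concept stays intact; the lens only determines which of its attributes reach the rest of the computation.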
Another way I sometimes think about this idea is as an abstraction ladder, which I’ve written about before here.
This idea of modifying our behavior and understanding of things by changing the point of view is powerful. Oftentimes we will ask someone else for an opinion on a topic we have been thinking about. We do this because they have a different mental model and may be able to provide new insights into our issue.
Just look at this article: I examined this idea from several different points of view:
- cognitive computational modifier
- angles and points of view
- operating modes
- camera filters
- abstraction ladder
- behavior and reinforcement learning
I don’t know what this phenomenon is called in neuroscience or cognitive science, but there must be a specific name and research into this area.
I bet if you could build a computer system that had this “interchangeable lens” property, you would open up a whole new world of computer intelligence.