Tag: composability
Representing Moving vs Doing
The formal definition of “to move” is: “to change from one place or position to another” or “to set or keep in motion”. This moving usually happens by the mover’s own volition, but a thing can also be moved by something else, making it a “patient”. It should be easy to know whether something is moving, and all animal intelligence… Continue reading Representing Moving vs Doing
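To make the agent/patient distinction in this excerpt concrete, here is a minimal sketch (my own toy example, not from the post) of how a “move” event could be recorded so that self-propelled motion and being-moved-by-another stay distinguishable; the `MoveEvent`, `mover`, and `cause` names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MoveEvent:
    """One possible encoding of the agent/patient distinction for motion.

    mover: the entity that changes place or position.
    cause: the entity that set it in motion, if different from the mover.
    """
    mover: str
    cause: Optional[str] = None

    @property
    def self_propelled(self) -> bool:
        # Moving "by its own volition": no external cause recorded.
        return self.cause is None or self.cause == self.mover


# The dog runs across the yard: it moves by its own volition.
running = MoveEvent(mover="dog")

# The ball is kicked: the ball is the "patient", moved by the dog.
kicked = MoveEvent(mover="ball", cause="dog")

print(running.self_propelled)  # True
print(kicked.self_propelled)   # False
```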
The Language of Thought hypothesis and Compositionality were worked on hundreds of years before computers existed!
I just learned that Gottfried Wilhelm Leibniz (1646-1716), famous for inventing calculus, was also interested in the philosophy of mind. He proposed a concept called the Alphabet of human thought: the idea that there is a universal way to represent all human thought with a small set of primitive components. With these primitive components, we should… Continue reading The Language of Thought hypothesis and Compositionality were worked on hundreds of years before computers existed!
Testing compositionality on GCKR
Can we just represent each prime as a point in a vector space? We would need a programming language that could interpret the vector, so we would be moving the meaning of the vector into the interpreter. We need to test compositionality: can we have multiple vectors in a sequence to represent new concepts/words? The… Continue reading Testing compositionality on GCKR
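A minimal sketch of the idea in this excerpt, under my own reading of it: each prime gets a fixed vector, composite concepts are sequences of those vectors, and the meaning lives in an interpreter program rather than in the vectors themselves. The `PRIMITIVES`, `encode`, and `interpret` names are illustrative, not from GCKR.

```python
import numpy as np

# Hypothetical primes, each assigned a fixed point in a small vector space.
PRIMITIVES = {
    "red":   np.array([1.0, 0.0, 0.0]),
    "small": np.array([0.0, 1.0, 0.0]),
    "ball":  np.array([0.0, 0.0, 1.0]),
}

def encode(words):
    """A composite concept as a sequence of primitive vectors."""
    return [PRIMITIVES[w] for w in words]

def interpret(vectors):
    """A toy interpreter: the meaning lives here, not in the vectors.

    It maps each vector back to the nearest prime and returns the
    recovered sequence, i.e. the composite concept it denotes.
    """
    names = list(PRIMITIVES)
    mat = np.stack([PRIMITIVES[n] for n in names])
    decoded = []
    for v in vectors:
        idx = int(np.argmin(np.linalg.norm(mat - v, axis=1)))
        decoded.append(names[idx])
    return decoded

# "red small ball" is not a new prime; it is a sequence the interpreter can unpack.
print(interpret(encode(["red", "small", "ball"])))  # ['red', 'small', 'ball']
```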
Notes on Neural Module Networks (NMN)
I’ve been studying Neural Module Networks from Jacob Andreas. He focuses on compositionality and grounding, my two favorite subjects. I just read two of his papers: https://openaccess.thecvf.com/content_cvpr_2016/html/Andreas_Neural_Module_Networks_CVPR_2016_paper.html and https://openaccess.thecvf.com/content_iccv_2017/html/Hu_Learning_to_Reason_ICCV_2017_paper.html I’ll go over the first paper, “Deep Compositional Question Answering with Neural Module Networks”. He has created a new type of neural network architecture where he builds… Continue reading Notes on Neural Module Networks (NMN)
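For intuition, here is a toy, hand-wired sketch of the compositional layout idea behind Neural Module Networks: small reusable modules (find, combine, measure) are assembled into a pipeline per question. In the actual paper the layout is predicted from a parse of the question and the modules are learned neural networks; this non-neural version only illustrates the composition, and all names and the grid “image” are made up.

```python
import numpy as np

# A toy 4x4 "image": each cell holds a color label.
image = np.array([
    ["red",  "blue", "blue", "red"],
    ["blue", "red",  "blue", "blue"],
    ["blue", "blue", "red",  "blue"],
    ["red",  "blue", "blue", "blue"],
])

# Modules in the NMN spirit: each consumes/produces an attention map over the image.
def find(color):
    """find[color]: attend to cells matching the color."""
    return (image == color).astype(float)

def combine_and(att1, att2):
    """combine[and]: intersect two attention maps."""
    return att1 * att2

def measure_count(att):
    """measure[count]: reduce an attention map to an answer."""
    return int(att.sum())

# "How many red things are in the top row?" parsed (by hand here) into a layout:
top_row = np.zeros_like(image, dtype=float)
top_row[0, :] = 1.0
answer = measure_count(combine_and(find("red"), top_row))
print(answer)  # 2
```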
Notes on “Creativity, Compositionality, and Common Sense in Human Goal Generation”
Just read this short and sweet paper from Guy Davidson et al. Brenden Lake, several of whose papers I have read, is also a co-author. Paper link: https://psyarxiv.com/byzs5/ They built mini programs as DSLs that represent game rules. The programs act as reward-generating functions (goals as reward-generating programs). Their line of reasoning follows the… Continue reading Notes on “Creativity, Compositionality, and Common Sense in Human Goal Generation”
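A hedged sketch of “goals as reward-generating programs”: tiny predicate functions form a DSL-like vocabulary, and a goal is a program that composes them into a reward function over game state. The state keys and helper names (`agent_at`, `reward_when`, etc.) are invented for illustration and are not the paper’s actual DSL.

```python
# A toy game state: positions of objects and the agent.
state = {
    "agent_pos": (2, 3),
    "ball_pos": (2, 3),
    "ball_in_bin": False,
}

# Goal predicates form a tiny DSL; a goal program composes them into a reward function.
def agent_at(obj_key):
    return lambda s: s["agent_pos"] == s[obj_key]

def is_true(flag_key):
    return lambda s: bool(s[flag_key])

def reward_when(*predicates, reward=1.0):
    """Goal as a reward-generating program: emit reward when all predicates hold."""
    return lambda s: reward if all(p(s) for p in predicates) else 0.0

# "Stand next to the ball and get it into the bin" as a composed goal program.
goal = reward_when(agent_at("ball_pos"), is_true("ball_in_bin"))
print(goal(state))                           # 0.0, the ball is not in the bin yet
print(goal({**state, "ball_in_bin": True}))  # 1.0
```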
A computational definition of grounded compositionality
“The meaning of a whole is a function of the meaning of the parts and of the way they are syntactically combined.” It seems there is no clear consensus on how we would implement compositionality in computers. I have gone through several research papers and collected several different definitions. Systematic: the most common definition… Continue reading A computational definition of grounded compositionality
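To make that textbook definition computational, here is a toy example (arithmetic standing in for language, my own illustration) where the meaning of a whole phrase is computed from the meanings of its parts plus the way they are combined; the same parts in a different combination yield a different meaning. The `LEXICON` and `meaning` names are mine.

```python
# Meanings of the parts: words map to simple denotations.
LEXICON = {
    "two": 2,
    "three": 3,
    "plus": lambda a, b: a + b,
    "minus": lambda a, b: a - b,
}

def meaning(phrase):
    """Meaning of the whole = function of the parts' meanings
    and their syntactic combination (here: infix 'NUM OP NUM')."""
    left, op, right = phrase.split()
    return LEXICON[op](LEXICON[left], LEXICON[right])

print(meaning("three minus two"))  # 1
print(meaning("two minus three"))  # -1: same parts, different combination, different meaning
```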
Candidate data structures and algorithms for representing ground-able compositional knowledge in computers
Some of the most interesting ways to represent knowledge in computers using mathematical structures:
- Graphs and graph neural networks
- Multi-spatial grid cell representations
- Grid representations
- Probabilistic programming
- Holograms
- Voronoi tessellations – they allow distance calculations between any concepts (see the sketch after this list): https://towardsdatascience.com/the-geometry-of-thought-700047775956
- word2vec – it’s such a simple model, it’s worth thinking about how to scale it up.
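As a concrete example of the Voronoi item above: if concepts are points in a vector space, the Voronoi cell of a concept is just the set of points closer to it than to any other concept, so distance queries reduce to nearest-neighbor lookups. The embeddings below are made up for illustration.

```python
import numpy as np

# Hypothetical 2-D concept embeddings; each concept's Voronoi cell is the region
# of points closer to it than to any other concept.
concepts = {
    "dog":   np.array([1.0, 1.0]),
    "cat":   np.array([1.2, 0.9]),
    "truck": np.array([5.0, 4.0]),
}

def nearest_concept(point):
    """Compute the distance from a point to every concept and return the closest,
    i.e. the concept whose Voronoi cell contains the point."""
    names = list(concepts)
    dists = [float(np.linalg.norm(point - concepts[n])) for n in names]
    i = int(np.argmin(dists))
    return names[i], dists[i]

print(nearest_concept(np.array([1.1, 1.0])))  # ('dog', 0.1...)
```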
Variables in grounded language learning
I keep reading grounded language learning papers, trying to figure out what the minimal ingredient is to achieve a grounded representation in computers. One theme I keep seeing and coming back to is that while everything is grounded, compositionality may need more abstract thinking. It seems like there must definitely be some recursive computation or… Continue reading Variables in grounded language learning
My notes on “A Benchmark for Systematic Generalization in Grounded Language Understanding”
I had been meaning to read this paper for a long time and finally got around to it. Paper link: https://arxiv.org/abs/2003.05161v2 Laura Ruis is the main author; I hope she goes on to do more work in this “general direction” 🙂 Jacob Andreas of NMN fame is a co-author, awesome! Brenden Lake, who does tons of interesting… Continue reading My notes on “A Benchmark for Systematic Generalization in Grounded Language Understanding”
Examining concepts through different filters
I think the human mind is an autonomous, interlinked, model-building database of the world. I define models and concepts to be the same thing. One important aspect of this system that has been nagging me is the idea that the human mind can examine ideas/models/concepts through different filters. In my article about the mind as… Continue reading Examining concepts through different filters
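A rough sketch of the “different filters” idea, assuming a filter is simply a different view over the same stored concept/model; the concept record and filter names below are hypothetical, not from the post.

```python
# A concept/model as a plain record; "filters" are views that examine it
# from different angles.
concept = {
    "name": "bicycle",
    "parts": ["frame", "wheels", "pedals", "chain"],
    "function": "human-powered transport",
    "typical_size_m": 1.8,
}

FILTERS = {
    "structural": lambda c: c["parts"],
    "functional": lambda c: c["function"],
    "physical":   lambda c: c["typical_size_m"],
}

def examine(concept, filter_name):
    """Examine the same concept through a chosen filter."""
    return FILTERS[filter_name](concept)

print(examine(concept, "structural"))  # ['frame', 'wheels', 'pedals', 'chain']
print(examine(concept, "functional"))  # 'human-powered transport'
```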