Quotes from the interview between
Lex Fridman and Gary Marcus.
“Intelligence is a multidimensional number”
Most past work dwells on IQ but does not go into other quotients such as creativity, emotional intelligence, and fluid versus crystallized intelligence. You can imagine intelligence as a point in a multidimensional space rather than as a scalar. Many papers have been written on measuring the quotient of AI systems, yet they offer a narrow scalar to encompass the lossless wonder of “intelligence”. Many people are considered intelligent because they know certain keywords and have associated them with definitions or with a set of other terms. Unfortunately, they are unable to compute on that knowledge and form new representations of the information that is hyperconnected in their minds. Richard Feynman famously said, “What I cannot create, I do not understand,” and there is a whole “learning by doing” philosophy in educational systems that has no counterpart in how AI systems are trained. How are these systems supposed to understand how something works from word-definition correlations alone? There must be something more subtle and more nuanced. Creating in order to understand is missing from the curriculum for artificial intelligence.
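The vector-versus-scalar point can be made concrete. A minimal sketch, assuming a hand-picked set of dimensions (the names here are illustrative, not a real psychometric model), shows how collapsing a profile to one number hides the shape of the mind being measured:

```python
# A sketch of intelligence as a vector of quotients rather than a scalar.
# The dimension names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class IntelligenceProfile:
    reasoning: float      # symbolic / logical ability
    creativity: float     # generating novel output
    emotional: float      # social and affective skill
    fluid: float          # solving unfamiliar problems
    crystallized: float   # accumulated knowledge

    def scalar_iq(self) -> float:
        # Collapsing to one number discards the shape of the profile.
        vals = (self.reasoning, self.creativity, self.emotional,
                self.fluid, self.crystallized)
        return sum(vals) / len(vals)

chess_engine = IntelligenceProfile(0.95, 0.10, 0.00, 0.30, 0.90)
novelist     = IntelligenceProfile(0.40, 0.70, 0.60, 0.35, 0.20)

# Two very different "minds" collapse to the same scalar score:
print(round(chess_engine.scalar_iq(), 2))  # 0.45
print(round(novelist.scalar_iq(), 2))      # 0.45
```

The scalar says the two profiles are equal; the vector shows they are nothing alike.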
“Some types of intelligence, machines have mastered, some they haven’t”
Once again, what really is intelligence? Is it the ability to reason over symbolic knowledge, to find patterns in subsymbolic information, or both in unison, creating an association between the symbol and the subsymbolic information? Most production AI systems today use one of the two but not both. There are also many “ways of thinking” about symbolic systems and many “ways of finding patterns” in subsymbolic systems. Why is this important?
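One way to picture “both in unison” is a pipeline where a subsymbolic layer grounds raw vectors to symbols, and a symbolic layer then reasons over them. This is a toy sketch under invented assumptions (hand-made 2-D embeddings, a tiny is-a rule base), not a real neuro-symbolic architecture:

```python
# Toy coupling of subsymbolic pattern-matching with symbolic reasoning.
# Embeddings and rules are hand-made for illustration.
import math

# Subsymbolic layer: toy 2-D word embeddings.
EMBEDDINGS = {
    "cat":    (0.90, 0.10),
    "kitten": (0.85, 0.20),
    "car":    (0.10, 0.90),
}

def nearest_symbol(vec):
    """Ground a raw vector to the closest known symbol (cosine similarity)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return max(EMBEDDINGS, key=lambda w: cos(vec, EMBEDDINGS[w]))

# Symbolic layer: explicit is-a rules we can chain over.
IS_A = {"kitten": "cat", "cat": "mammal", "mammal": "animal"}

def entails(symbol, category):
    """Follow is-a links transitively: classic symbolic inference."""
    while symbol in IS_A:
        symbol = IS_A[symbol]
        if symbol == category:
            return True
    return False

# Perception yields a vector; grounding turns it into a symbol;
# the symbolic layer then reasons about it.
observed = (0.90, 0.12)
symbol = nearest_symbol(observed)
print(symbol, entails(symbol, "animal"))  # cat True
```

Neither layer alone answers the question: the embeddings cannot chain is-a rules, and the rules cannot interpret raw vectors.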
“Machines don’t have commonsense”
Commonsense reasoning is quite an old field, and many researchers have approached it from a logical angle using a variety of mathematical constructs. Yet most current production AI systems cannot handle counterfactuals or imagine the world being different than it is. They are simply very brittle systems.
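What “doing counterfactuals” means can be shown with a minimal structural model. This sketch uses an invented toy equation (grass is wet if it rains or the sprinkler runs); answering the counterfactual requires intervening on the model while holding everything else fixed, something pure pattern-matching over observed data cannot do:

```python
# Toy structural model for a counterfactual query.
# The variables and the equation are illustrative assumptions.

def wet_grass(rain, sprinkler):
    # Structural equation: grass is wet if it rains or the sprinkler runs.
    return rain or sprinkler

# Observed world: it rained, the sprinkler was off, the grass is wet.
observed = {"rain": True, "sprinkler": False}

# Counterfactual: "Would the grass still be wet had it not rained?"
# We intervene on rain while holding the sprinkler state fixed.
intervened = dict(observed, rain=False)

print(wet_grass(**observed))    # True
print(wet_grass(**intervened))  # False
```

A system that only memorizes correlations between “rain” and “wet grass” has no handle for the intervention step; the brittleness shows up exactly there.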
“Many different aspects of language”
The same sentence can be interpreted in multiple ways from multiple angles. Minsky’s whole notion of a society of minds could be adopted by a newer class of AI. Language is not simply symbol manipulation; it is deep understanding of nuance. One example: how can you build an AI system that understands jokes? Today’s systems fail miserably because they make shallow annotations of the data and then associate a bag of words with a meaning, which can differ from person to person. There is no notion of universality, nor of using crowdsourcing techniques to run a large survey across everyone.
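The shallowness of the bag-of-words association is easy to demonstrate: two sentences with opposite meanings can have identical bags, so any system built on that representation literally cannot tell them apart. A minimal example:

```python
# Why bag-of-words misses the nuance that jokes and word order carry.
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    # Count words, discarding order entirely.
    return Counter(sentence.lower().split())

a = "the dog bit the man"
b = "the man bit the dog"

# Opposite meanings, identical bags:
print(bag_of_words(a) == bag_of_words(b))  # True
```

Whatever meaning gets attached to that bag is attached to both sentences at once, which is why such systems cannot get the joke.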
“Language abstracts away”
Do we have machines that create language? Do we have machines that create new kinds of poetry, breaking the rules and notions of earlier literature? Abstraction is subjective and depends on the language; having a universal, collision-free, and lossless abstraction is very important for preventing misinterpretation, along with deep understanding at a homunculus level, out of body and viewed from the sky.
“One word encompasses many possibilities”
As many researchers have observed, one word can have many meanings. Universal languages with a universal Turing machine would help situate where in the hyperspace the person conceiving the language stands. How do you reverse-engineer the actual point and its neighboring, similar thoughts? They say one picture is worth a thousand words, but what about a single word? Could it be more of an interplay between neighboring words, as in more NLP/NLU states? The tone of the spoken word and many other lossless attributes are lost when only a lossy representation ends up in textbooks. As I have stated in the past, we need datasets that contain all the information (lossless), not just processed information like the end result of text.
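The interplay between a word and its neighbors is exactly what classic word-sense disambiguation exploits. This is a toy sketch in the spirit of a simplified Lesk overlap; the two-sense inventory for “bank” is a hand-made assumption, not a real lexicon:

```python
# Toy word-sense disambiguation: neighboring words select one meaning.
# The sense inventory below is a small hand-made assumption.

SENSES = {
    "bank": {
        "financial_institution": {"money", "deposit", "loan", "account"},
        "river_edge": {"river", "water", "fishing", "shore"},
    }
}

def disambiguate(word, context_words):
    """Pick the sense whose signature overlaps the context most."""
    context = {w.lower() for w in context_words}
    return max(SENSES[word],
               key=lambda s: len(SENSES[word][s] & context))

print(disambiguate("bank", "I need a loan from the bank".split()))
# financial_institution
print(disambiguate("bank", "we went fishing on the river bank".split()))
# river_edge
```

The word alone is a lossy signal; the surrounding words are what pin down the point in the space of meanings.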
TO BE CONTINUED…