Born to do Math 190 – Watson and Google: Associational Engines
October 22, 2020
[Beginning of recorded material]
Scott Douglas Jacobsen: If at all, how conscious are Watson and Google Translate, both of which are association engines?
Rick Rosner: The way Watson answers questions on Jeopardy!, the question would be put into its system. The words it had learned and the word associations, e.g., word order, would bring up candidates for possible correct answers within Watson’s system.
I guess Watson might, or eventually would, get a tally of possible answers with a probability of each answer being right. If some answer broke a threshold of 80% or 90%, Watson would ring in with that answer.
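A minimal sketch, in Python, of the threshold logic described here; the candidate tally, probabilities, and 85% cutoff are illustrative assumptions, not IBM’s actual DeepQA/Watson implementation:

```python
# Illustrative sketch of threshold-based answering as described above.
# The candidate tally, probabilities, and cutoff are made up; this is not
# IBM's actual DeepQA/Watson code.

def ring_in(tally, threshold=0.85):
    """Return the top answer only if its estimated probability clears the threshold."""
    if not tally:
        return None
    best_answer = max(tally, key=tally.get)
    return best_answer if tally[best_answer] >= threshold else None

# A tally of possible answers with a probability of each being right.
tally = {"What is Chicago?": 0.91, "What is Toronto?": 0.06, "What is Detroit?": 0.03}
print(ring_in(tally))  # -> "What is Chicago?" (confidence clears the bar, so ring in)
```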
Google Translate works somewhat the same way. Actually, I don’t know exactly how it works. Probably, with Google Translate, you have to build each system, so it’s not that it knows stuff, but that there are words in each system.
Then you build the associations with each word. With Google Translate, they probably built in a lot of language-to-language dictionaries, so that the word for bread in English and the word for bread in French relate in this or that way. They might have started with that.
Once it started running, it had access to all sorts of literature in each language that it works with. It can reach conclusions about what word you’re looking for, even if it doesn’t have it in its system, as long as it has contextual clues in each language and statistical likelihoods.
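As a toy illustration of “dictionaries plus contextual clues and statistical likelihoods,” here is a short sketch; the dictionary entries and co-occurrence counts are invented for the example and say nothing about how Google Translate is actually built:

```python
# Toy sketch: pick a translation using a dictionary plus context statistics.
# The dictionary entries and co-occurrence counts are invented for illustration.

dictionary = {"bread": ["pain", "fric"]}  # "pain" = bread; "fric" = slang for money
cooccurrence = {  # how often each candidate appears near a given context word
    ("pain", "boulangerie"): 50, ("pain", "argent"): 1,
    ("fric", "boulangerie"): 1,  ("fric", "argent"): 40,
}

def translate(word, context):
    """Choose the candidate most strongly associated with the surrounding context."""
    candidates = dictionary.get(word, [])
    if not candidates:
        return None
    return max(candidates,
               key=lambda c: sum(cooccurrence.get((c, ctx), 0) for ctx in context))

print(translate("bread", ["boulangerie"]))  # -> "pain"
print(translate("bread", ["argent"]))       # -> "fric"
```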
In each case, a set of inputs brings up associations of varying strengths. Google Translate has gotten better and better. I used it yesterday. It is crazy good. I was looking for a word in French, not having the exact word I wanted in English.
I was poking around. I plugged in a close word in English. It translated it into French. When I translated it back into English, it gave me the word I was looking for in English. It did my thinking for me, in the language I was starting in, based on context.
So, we can ask the question, “How conscious, if at all, are these association engines?” It’s been said, and I haven’t read a paper on it or anything, that inside Google Translate there’s a metalanguage that expresses relationships among words that are common across all languages.
Somehow, Google Translate finds it. I don’t know if this is true or not, but it sounds plausible that Google Translate finds it efficient to catalogue the relationships among words in a meta way that isn’t dependent on any single language, but is really an outgrowth of all the languages it is working with.
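One way to picture that language-independent catalogue is a shared vector space: words from different languages map to nearby points when they mean similar things. The two-dimensional vectors below are made up purely to illustrate the idea; they are not Google’s representation:

```python
# Illustrative shared "metalanguage": words from several languages mapped into
# one common vector space and related by proximity. All vectors are made up.
import math

shared_space = {
    ("en", "bread"):   (0.90, 0.10),
    ("fr", "pain"):    (0.88, 0.12),
    ("en", "car"):     (0.10, 0.95),
    ("fr", "voiture"): (0.12, 0.90),
}

def nearest(lang, word, target_lang):
    """Find the target-language word whose point lies closest in the shared space."""
    query = shared_space[(lang, word)]
    candidates = [(w, v) for (l, w), v in shared_space.items() if l == target_lang]
    return min(candidates, key=lambda wv: math.dist(query, wv[1]))[0]

print(nearest("en", "bread", "fr"))    # -> "pain"
print(nearest("fr", "voiture", "en"))  # -> "car"
```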
It is a sophisticated associative net. So, to the extent that these engines are aware of anything, they are probably not aware, but there is something going on where they reliably bring up the right association, even when that wasn’t plugged in there by someone early on.
It is an association that has been developed via machine learning. So, it has the mechanics, the associational mechanics. It has the ability to associate things and to bring up things based on association the way consciousness does.
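The “associational mechanics” can be pictured as a weighted graph: nodes for words, weighted edges for association strength, and retrieval as pulling up the strongest neighbors. A minimal sketch with made-up words and weights:

```python
# Minimal weighted association net; the words and weights are made up.
associations = {
    "wheel": {"tire": 0.9, "rotate": 0.8, "steering": 0.7, "bread": 0.05},
    "bread": {"bakery": 0.9, "flour": 0.8, "wheel": 0.05},
}

def strongest_associations(word, top_n=3):
    """Bring up the nodes most strongly associated with a given input."""
    neighbors = associations.get(word, {})
    return sorted(neighbors, key=neighbors.get, reverse=True)[:top_n]

print(strongest_associations("wheel"))  # -> ['tire', 'rotate', 'steering']
```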
But it is missing so many other ingredients of consciousness that it is unlikely to be what we’d consider conscious. Among the things that it is missing are, maybe, the biggest things, like real-world correlations. E.g., Google Translate knows that there’s a relationship among the parts of a car and the word “car.”
It knows that it can group other words, e.g., it knows wheels are associated with tires and the words “rotate” and “grip the road” and “steering,” all these things associated with wheels and driving. But it probably lacks any kind of imagery library that explains in any way what rotate means and what steer means, or even what circular means.
Although, you have to figure, with Google being a big science-fictioney sinister company, and ditto for IBM, they have probably made attempts to associate visual imagery and plug it into the associative net.
We know Google Image search is pretty good at visual associations. So, it is possible that Google Translate might have had visual imagery, pictures, entered into its system.
That may have increased its effectiveness at coming up with the right word. Who knows if Google would tell us about it? Because that would make people nervous. I still don’t think it is multiplicitous enough, seen from enough different angles, that Google Translate could come up with any real understanding of how wheels work at this point.
Because I don’t think that the nodes, the words and, maybe, images in its system, are associated enough with what we would consider sensory input, say video, that it would have a well-developed enough associative net with aspects of the world, working as they do in the world, to have any real kind of understanding.
That is big thing one that it might be missing. Thing two is judging. I don’t think either system has any way of judging. First of all, neither system has an idea of itself. Neither system has any means of judging whether something is good for itself or good for some kind of aesthetic or some multiplicitous set of values.
In each case, the system is looking for the optimal word or the answer to a question. That’s a simple enough measure of relevance. It makes its best guess, its best calculation, as to how relevant a word choice or an answer is.
If it is high enough for Watson, Watson rings in and answers the question on Jeopardy!. I think, with Google Translate, it gives you its best stab at what it thinks you’re trying to say, even if it is a bad stab, I assume.
I don’t think the measure of relevance is tied into enough of an associative system to support a judgment; that it can be truly said to judge or to experience things that it likes versus things that it doesn’t like.
I don’t think it has the experiential and associative net to do that. Beyond that, it doesn’t have emotions. At the very least, emotions are also an associative net.
[End of recorded material]
Authors[1]
American Television Writer
(Updated July 25, 2019)
*High range testing (HRT) should be taken with honest skepticism grounded in the limited empirical development of the field at present, even in spite of honest and sincere efforts. The higher the general intelligence score, the greater the variability in, and margin of error of, the score, because of the greater rarity of such scores in the population.*
According to some semi-reputable sources gathered in a listing here, Rick G. Rosner may have among America’s, North America’s, and the world’s highest measured IQs at or above 190 (S.D. 15)/196 (S.D. 16), based on several performances on high range tests created by Christopher Harding, Jason Betts, Paul Cooijmans, and Ronald Hoeflin. He earned 12 years of college credit in less than a year and graduated with the equivalent of 8 majors. He has received 8 Writers Guild Awards and Emmy nominations, and was titled 2013 North American Genius of the Year by The World Genius Directory with the main “Genius” listing here.
He has written for Remote Control, Crank Yankers, The Man Show, The Emmys, The Grammys, and Jimmy Kimmel Live!. He worked as a bouncer, a nude art model, a roller-skating waiter, and a stripper. In a television commercial, Domino’s Pizza named him the “World’s Smartest Man.” The commercial was taken off the air after Subway sandwiches issued a cease-and-desist. He was named “Best Bouncer” in the Denver Area, Colorado, by Westwood Magazine.
Rosner spent much of the late Disco Era as an undercover high school student. In addition, he spent 25 years as a bar bouncer and American fake ID-catcher, and 25+ years as a stripper, and nearly 30 years as a writer for more than 2,500 hours of network television. Errol Morris featured Rosner in the interview series entitled First Person, where some of this history was covered by Morris. He came in second, or lost, on Jeopardy!, sued Who Wants to Be a Millionaire? over a flawed question and lost the lawsuit. He won one game and lost one game on Are You Smarter Than a Drunk Person? (He was drunk). Finally, he spent 37+ years working on a time-invariant variation of the Big Bang Theory.
Currently, Rosner sits tweeting in a bathrobe (winter) or a towel (summer). He lives in Los Angeles, California with his wife, dog, and goldfish. He and his wife have a daughter. You can send him money or questions at LanceVersusRick@Gmail.Com, or a direct message via Twitter, or find him on LinkedIn, or see him on YouTube.
Scott Douglas Jacobsen
Founder, In-Sight Publishing
Footnotes
[1] Four format points for the session article:
- Bold text following “Scott Douglas Jacobsen:” or “Jacobsen:” is Scott Douglas Jacobsen & non-bold text following “Rick Rosner:” or “Rosner:” is Rick Rosner.
- Session article conducted, transcribed, edited, formatted, and published by Scott.
- Footnotes & in-text citations in the interview & references after the interview.
- This session article has been edited for clarity and readability.
For further information on the formatting guidelines incorporated into this document, please see the following documents:
- American Psychological Association. (2010). Citation Guide: APA. Retrieved from http://www.lib.sfu.ca/system/files/28281/APA6CitationGuideSFUv3.pdf.
- Humble, A. (n.d.). Guide to Transcribing. Retrieved from http://www.msvu.ca/site/media/msvu/Transcription%20Guide.pdf.
License and Copyright
License
In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at www.in-sightjournal.com and www.rickrosner.org.
Copyright
© Scott Douglas Jacobsen, Rick Rosner, and In-Sight Publishing 2012-2020. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen, Rick Rosner, and In-Sight Publishing with appropriate and specific direction to the original content.