In-Sight Publishing

Ask A Genius 336 – AI's Wants and Types

November 1, 2017

[Beginning of recorded material]

Scott Douglas Jacobsen: What is going on with AI, and if/when it’s eventually developed more and more, what will it want, if we can use that phrasing?

Rick Rosner: You and I coincidentally both read the same essay, which attacked the science-fiction views of robots running amok, trying to kill humanity and take over the world. This essay rightfully did that, but then it didn’t get into specifics.

It went off in a different direction; the author started talking about a novel that he’s written, which has some AI in it. So, we could talk about what AI might want. Let’s skip ahead to the mid-future, like 50 to 100 years from now, more like 80 years from now.

There will be a point at which AI could be given human-like abilities, and one of the reasons it would be given those abilities is that humans like companions and human-like interfaces; existing ones already do that to an extent.

We can talk with Siri and Alexa and all that junk, and by right around the end of the century we should be able to build pretty decent robot girlfriends, robot butlers, robot advisors that at least understand what humans want and, to some extent, can be made to at least simulate those wants in themselves.

It could be for the purposes of being a sassy girlfriend, but the deal by that time, I predict, is that we will have a good mathematical model of consciousness, which will allow us to reasonably accurately predict the different ways in which AIs that have been built to be human-like might behave and misbehave.

Given that model, there will be regulations about the prudent construction of AI. So, you don’t have an AI that’s been built to run amok; of course, there are all the assholes who will do that anyway. There will be AIs that have been programmed to act in malicious ways, whether human-like or not.

That will be an issue to deal with. But what you won’t get, I don’t think, is mission creep, which is the way that almost all malevolent AIs in science fiction turn bad; it’s robots or AIs being made to be our servants, but then they start thinking about their duties in more wide-ranging or more general ways.

They start following chains of reason to the point where they decide that the real problem with preserving the Earth is humans. Then they decide to kill all humans, which is pretty much Skynet and dozens of other science fiction things.

But the AIs that we’ll be building for most of this century won’t have that much mission creep. As for the AIs with the intellectual potential, the mental potential, to do that level of reasoning: by the time we’re able to build those, we’ll have enough of a mathematical understanding of the mental landscape of what we’re building that we can pretty much engineer AIs that don’t have dangerous levels of mission creep.

So, you’ve got two areas of non-threat. You’ve got AIs that are built after the end of the century that are highly sophisticated and powerful but have been prudently engineered. So, no threat there, or little threat there.

Then you have AIs built during this century that aren’t powerful enough to be a threat, so you have two non-threats. Then the threat is AIs built by assholes in order to cause mischief or worse. That’s a medium- to low-level threat, on the level of terrorism today, which is bad, but it’s not freaking World War II bad.

So, then you have one major threat that we haven’t talked about, which is piggybacked AI. That will be, for the next 100 years or more, the most powerful form of AI: whatever tech we have in terms of information processing, in conjunction with people who are good at using that tech and eventually merging with that tech.

So, it’s not robots on their own; it’ll be people, rich people, smart people, using whatever AI exists to obtain further advantages over other people by being able to think faster, being able to find patterns faster and deeper, and engaging in normal human competition with increasing advantages via AI.

And so, of the four forms of AI that can cause problems, that one is probably the one I’d worry about the most.

Jacobsen: When you talk about prudently engineered AI in the next 80 years or so, the mathematical model of consciousness as an information-processing complex will likely include a moral system akin to a tighter, more precise, and well-defined Golden Rule, one which trims potentially harmful choices toward people and other living things on the part of AI systems, however sophisticated.

Rosner: I mean, you’ve got people who will be able to mathematically implement something along the lines of Asimov’s Three Laws of Robotics when it becomes necessary. We won’t have that mathematical model for, say, another ten, twenty, or thirty years possibly.

We won’t need the model to control AI or predict possible glitches in AI for decades after that. But yes, I don’t know if you’ll build in the Golden Rule. My buddy says there will be a trillion AIs in the world by 2100.

Most of that AI won’t be sophisticated enough or conscious enough to be sent out into the world meeting Golden Rule standards, with those standards deeply embedded. Most AIs won’t be philosophers, but there will be some people who will want AI companions.

Some artists will strive to make AI as human as possible, but most AIs will be engineered for some specific sets of tasks and won’t be that deep. Although you can picture a point, say, 140 or 150 years from now, where it’s cheap to build conscious AIs.

So, there may be some sloppy work and some abuse, where you build AIs that have a full complement of feelings even where it’s not necessary, because slapping a sophisticated AI consciousness into a system might cost the equivalent of five of today’s dollars.

So, yes, that crap is going to be going on. You’d want a bunch of sophisticated controls either engineered into the AI itself or else roving AI sniffers that’ll look for AIs that are overpowered and could go bad; overpowered and under-controlled.

I mean, this is all part of a landscape that in some ways will be a hyper version of today’s landscape, with hacker wars and cybercrime sponsored by all sorts of entities, from private A-holes to governments. One form of AI that’s already messing stuff up is almost too banal to mention: the disruption caused by automation taking jobs.

It’s not as bad as it’s going to be. We’re already suffering from it, but that’s not the threat people talk about when they talk about the risk of robots taking over, which leads to what you noticed, which is… go ahead.

Jacobsen: I watched a panel of middle-aged, white, smart people who specialize in some form of AI or who’ve done some thinking on AI, or, more properly, some panicking about it. The demographics were middle-aged white dudes. There is a hype.

So, I feel as though it’s mostly a North American phenomenon, barring Demis Hassabis and a couple of others. It’s white or Caucasian men in the 35-to-55 range. I don’t know if that’s a thing for any particular reason, but it does seem like it’s a thing, for whatever reason.

Rosner: For one thing, those are the guys who are most qualified to think about it. They’re the early adopters; they’re the guys who’ve been successful in the world of tech. People like them to talk about tech issues.

Their attitude is, “I’ve decided it’s prudent at this point to start talking about the possible risks of AI.” And I agree with them. I’m a middle-aged white guy, though not as successful as those guys. I agree that it’s prudent.

I predict that we will have good controls and understanding in place by the time we need them, but that’s a guess. We won’t get those controls and that understanding unless we start working on it now.

So, I agree it makes sense to start thinking about what the issues are, however remote they are; they’re fairly remote, but the cost of making a mistake that turns the world over to malevolent AI is, obviously, the entire world.

So, even if it’s a remote possibility, you’ve got to look at it. It’s similar to another threat that, if not remote, looks like it won’t become a threat for many years in the future.

Take nanotechnology, where people have been worried for decades about grey goo: you make a little teeny molecular machine, a tiny little automaton, that eats whatever’s in its path to make more copies of itself.

Then all these copies eventually infect the world, turning the entire world into these little machines that, in swarms, resemble grey goo.

We don’t know whether that’s a reasonable possibility, but, I mean, there are no steps in the imagined process that seem completely impossible. So, that’s something that we will have to investigate and guard against; though at this point in history, it’s remote because we don’t have the tech yet.

It might turn out upon analysis to be an unlikely occurrence, but the cost of that occurrence, which is, again, the death of everything on the planet, makes it merit serious investigation.

[End of recorded material]

Authors[1]

Rick Rosner

American Television Writer

RickRosner@Hotmail.Com

Rick Rosner

Scott Douglas Jacobsen

Editor-in-Chief, In-Sight Publishing

Scott.D.Jacobsen@Gmail.Com

In-Sight Publishing

Footnotes

[1] Four format points for the session article:

  1. Bold text following “Scott Douglas Jacobsen:” or “Jacobsen:” is Scott Douglas Jacobsen & non-bold text following “Rick Rosner:” or “Rosner:” is Rick Rosner.
  2. Session article conducted, transcribed, edited, formatted, and published by Scott.
  3. Footnotes & in-text citations in the interview & references after the interview.
  4. This session article has been edited for clarity and readability.

For further information on the formatting guidelines incorporated into this document, please see the following documents:

  1. American Psychological Association. (2010). Citation Guide: APA. Retrieved from http://www.lib.sfu.ca/system/files/28281/APA6CitationGuideSFUv3.pdf.
  2. Humble, A. (n.d.). Guide to Transcribing. Retrieved from http://www.msvu.ca/site/media/msvu/Transcription%20Guide.pdf.

License and Copyright

License
In-Sight Publishing and In-Sight: Independent Interview-Based Journal by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Based on a work at www.in-sightjournal.com and www.rickrosner.org.

Copyright

© Scott Douglas Jacobsen, Rick Rosner, and In-Sight Publishing and In-Sight: Independent Interview-Based Journal 2012-2017. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen, Rick Rosner, and In-Sight Publishing and In-Sight: Independent Interview-Based Journal with appropriate and specific direction to the original content.
