Ask A Genius 842: Did you know? Women can take their bras off without taking their shirts off.

[Recording Start] 

Rick Rosner: As you may know, since Elon Musk bought Twitter, it’s been a mess. One of the ways it’s a mess is that maybe it was porn-y before and I just didn’t know it, but now there’s a lot of porn-y stuff on Twitter. I came across it because, being who I am, once I see a little bit of that stuff I’ll click around and look for more. I follow a lot of AI art on Twitter, thinking it’ll give me insight into AI, and I had the thought that with all the AI art that’s out there, and it’s a lot, there must be a ton of AI porn. So I tracked some down, and it’s instructive, because we’re currently in the era of stupid AI: AI that does some things well enough to make people nervous, to freak people out, but when you actually look at it, it’s pretty dumb. Take ChatGPT and all the AI apps that write sentences and essays. They’re grammatically sound, but they’re vacuous and often inaccurate. There’s no insight that the AI came up with itself.

Any insight in there is culled from other people, and because it’s using large language models, which means it’s using big data, the insights are bland and obvious. One of the main demonstrations that current AI is stupid is self-driving cars, which fuck up a lot more than people do. The accidents and fatalities per mile are higher for self-driving vehicles than they are for people, which is scary considering how crappily people drive. So, looking at the AI porn was instructive, because it shows how much AI doesn’t understand. It doesn’t understand underwear: that underwear stays on people because it wraps around you. AI will throw all sorts of underwear onto the model, like scraps of fabric, stuff that would not stay on because it’s not attached to anything. It just throws it up there as if it makes sense, but it doesn’t. AI often doesn’t understand that penises are attached to a guy and often makes the mistake of drawing the penis as part of the vagina that sticks out.

I feel like AI porn that makes sense is the result of humans getting in there. I don’t know how you edit AI art, but humans get in there and edit out the nonsense, the things that just don’t comport with reality. So there are all sorts of errors. Sometimes you’ll have the top half of the model facing 180 degrees away from how the bottom half faces, and I’m not sure whether that’s the AI misunderstanding or whether that’s a perversion of the person who created the porn. You have people with extra legs and extra fingers. You see this in non-porn AI art too; AI just doesn’t really have a good understanding of human anatomy, or really a deeper understanding of how objects exist in space, which led me to think about what we have, which is a lot.

Human brains run on a big data model, the same as AI, except that our models are informed across many more aspects of reality. I was thinking about how that happened, how we understand how underwear works. We understand material objects. We understand fabric and stretchy fabric, and we understand that you have to put your limbs through the holes in the underwear and then pull it up, and it stays in place because it’s stretchy and it wraps around you, because we’ve been in the world with agency. Now, agency isn’t necessary to understand stuff, but it really helps, because when you can go out and interact with the parts of your world that are pertinent, you get the necessary information, and you get it in big doses, in big-data doses. I’ve put on underwear 18,000 times, and I’m not confused by underwear. Most people are confused by the topological tricks you can do with underwear; guys are shocked that women can take off their bras without removing their shirts.

Scott Douglas Jacobsen: What?!

Rosner: We’ve seen this in movies: at the end of a long day, a woman walks into her apartment, unhooks her bra, slides the straps down her arms even though her arms remain in her shirt, and removes her bra without removing her shirt. Didn’t you just make a surprised sound?

Jacobsen: Correct. Also, this happened in The Simpsons, where Grandpa Simpson took off his underpants with his pants still on and ripped them through.

Rosner: Well, you can certainly do it by ripping your underwear.

Jacobsen: Then the kids go, “Grandpa, how did you take your underpants off without taking your pants off?” And then he keels over, going, “I don’t know.”

Rosner: So did he tear them off, or did they come out intact?

Jacobsen: I think they came out intact.

Rosner: Because if you have stretchy enough underpants, you can take them off without taking off your pants. They just have to be stretchy enough that you can pull them down one leg, over your foot, and then back up and out the other leg.

Jacobsen: They were intact [Laughing].

Rosner: So, I mean, it’s not like women are mathematicians. Either somebody taught them how to do that, or just by necessity and exposure they developed the understanding that a bra can come off without taking everything off.

Every aspect of our experience informs every other aspect, so we get these deep understandings. We have models of the world based on understanding how the world works in lots of little ways, and AI understands nothing. It draws probabilistic conclusions. It has a rough idea: it knows where underwear goes, and it knows what guys like in terms of underwear configurations. Also, the bodies in AI porn are, for the most part, the same body: huge overflowing breasts, a smallish waist, and a huge billowy round butt that tends to almost overwhelm any clothing being worn. But these are all probabilistic conclusions, not deep understandings. Still, the shallow understandings of AI are pretty indicative, and as we’ve talked about, the limitations that make AI dumb now will eventually, and probably sooner rather than later, be overcome.

One problem with self-driving cars: I don’t know how many freaking servers it takes to hold the data set for ChatGPT, but it probably fills some big-ass room. Maybe I’m wrong, I don’t know, but that’s what a Tesla needs: a big-ass data set. I’m not sure you can fit a big-ass data set, using current circuitry, into a Tesla. In some ways we have very efficient information-processing circuitry, even though it’s really sloppy. Complaining about how sloppy human information processing is, is a little like complaining about how there aren’t any straight lines in the human body. Even our very longest bones have these long curves, and those curves have evolved out of efficiency, and the apparent sloppiness of our cognition is a product of hundreds of millions, billions, of years of evolved cognitive efficiency. It’s how we can fit everything we know into our fucking heads. Any comments?

Jacobsen: Daniel Dennett looks at consciousness as something like a user illusion; it’s like a screen that presents this information to us, but it’s really just an illusion. I think that may be true, though I’m not sure it is.

Rosner: Well, I like it, because it’s “as if” consciousness. We’re conscious because our brains act as if they are conscious. Our thoughts are presented to us as if they’re conscious thoughts, and we process them as if we’re conscious. And yeah, it’s an illusion, because we don’t have magic juice in ourselves that gives us this magical thing called consciousness. So anyway, keep going.

Jacobsen: Well, with that user illusion, that thin screen of presentation, there’s a whole system underneath that makes it possible. Now imagine you inverted that image: you still have the screen, but you’ve taken out the base. That’s what these AI generation systems are right now.

Rosner: Okay, that makes sense, yeah, because I’d argue that it’s not a thin screen of presentation; it’s a thick-ass screen of presentation that pervades our conscious information processing.

Jacobsen: And these AIs, they’re all screen. So it’s like a magician’s trick: it’s presenting to us the immediate interpretation of things readily available, without any requirement of understanding.

Rosner: Or any mediation by the rest of your brain. The sensory information comes in and is processed; say you see something, and what you see is processed unconsciously. A lot of processing happens before the image hits your consciousness. If somebody could analyze the images coming into consciousness before they’re consciously processed, I think, and you’ve just made the point, that shit would look like AI art. It would look pretty good, pretty processed, but it would have a lot of dumb misunderstandings, because it hasn’t hit consciousness for consciousness to clean it up, to say, “Well, you thought you saw somebody with three fucking arms, but that’s not how people are, so we’re just going to clean that up.” Like when you see a ghost out of the corner of your eye in your house, and you’re like, what was that? Your pre-conscious processing drew some conclusion that said “guy in the doorway,” and you look at the doorway, and it was a glitch. Pre-conscious processing made a guess as to what most probably was in the doorway and said “guy,” and that was just a bad guess, but one that’s helpful, because you need to know if there’s a guy in your doorway on the occasions when there is.

Jacobsen: So, in that sense, it’s like you’re just dealing with the neocortex. I mean, it’s an argument for consciousness arising only in the context of a deep understanding of the world around the system’s self, it being embodied somehow. We’re not just talking about the brain giving input to itself and talking within itself; we’re talking about the whole body acting, being embodied, having systems integrated into all of that, and then feeding that information in a particular way to that central processing system.

Rosner: So it makes it a lot easier to develop deeper understandings.

Jacobsen: Yeah. I’m not saying there’s any magic. I don’t think there is. I think we’re at the cusp of the start of something new. It’s in a very far orbit; it’s out in the Oort cloud of consciousness. It’s there, but it doesn’t have the sort of depth and fluidity you’d see in normal consciousness.

Rosner: I mean, it’s the substrate; it’s the pre-conscious processing, the probabilistic conclusions. Watson, 15 years ago now, I think, was just like having a probability network: when a Jeopardy clue has these words in it, then the correct question (because that’s how Jeopardy is set up, they give you the answer and you supply the question) is likely this. If the clue has Tycho Brahe in it, say, and something about the Czech Republic, then that gets you, or gets Watson, maybe 60% of the way to saying the answer is going to be Prague. The only fucking thing people know about the Czech Republic is Prague. There might be some grammatical clues too, and I forget what percent certainty Watson had to reach before it would ring in; it was something like 75 or 80%, maybe higher, I don’t know. Watson didn’t understand shit; Watson was just coming to probabilistic conclusions in some kind of Bayesian network.

And you can play Jeopardy that way. Say the clue is about Wisconsin; there are only a few things people know about Wisconsin. Madison is the capital; the state slogan is, I think, “Land of a Thousand Lakes.” So the answer is likely to be among the handful of things people know about the thing being asked about, which requires no deep understanding.
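For anyone curious, here is a minimal sketch of the kind of threshold-gated, probabilistic guessing Rosner describes. The keyword weights, candidate answers, and 75% ring-in threshold are all invented for illustration; the real Watson combined hundreds of evidence scorers, not a keyword lookup table.

```python
# Toy sketch of threshold-gated probabilistic answering, loosely in the
# spirit of the Watson description above. The keyword weights, candidate
# answers, and ring-in threshold below are all hypothetical.

RING_IN_THRESHOLD = 0.75  # assumed confidence needed before "buzzing in"

# Hypothetical evidence table: keyword -> (candidate answer, weight).
EVIDENCE = {
    "tycho brahe": ("Prague", 0.6),
    "czech republic": ("Prague", 0.3),
    "wisconsin": ("Madison", 0.5),
    "capital": ("Madison", 0.2),
}


def ring_in(clue: str) -> str | None:
    """Accumulate keyword evidence; answer only if the score clears the bar."""
    clue = clue.lower()
    scores: dict[str, float] = {}
    for keyword, (candidate, weight) in EVIDENCE.items():
        if keyword in clue:
            scores[candidate] = scores.get(candidate, 0.0) + weight
    if not scores:
        return None  # no evidence at all: stay silent
    best, confidence = max(scores.items(), key=lambda item: item[1])
    # No understanding here, just an accumulated score crossing a threshold.
    return best if confidence >= RING_IN_THRESHOLD else None


print(ring_in("This astronomer, Tycho Brahe, worked in the Czech Republic."))
# "Prague": 0.6 + 0.3 = 0.9 clears the 0.75 threshold
print(ring_in("This Wisconsin city hosts the state's flagship university."))
# None: "wisconsin" alone scores only 0.5, so the system stays quiet
```

The point of the sketch is that nothing in it models cities, astronomers, or states; it just adds up scores and buzzes in when a number crosses a bar, which is the shallowness being described.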

[Recording End]

Authors

Rick Rosner

American Television Writer

http://www.rickrosner.org

Scott Douglas Jacobsen

Founder, In-Sight Publishing

In-Sight Publishing

License and Copyright

License

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Based on a work at http://www.rickrosner.org.

Copyright

© Scott Douglas Jacobsen, Rick Rosner, and In-Sight Publishing 2012-Present. Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Scott Douglas Jacobsen, Rick Rosner, and In-Sight Publishing with appropriate and specific direction to the original content.
