Can AI surpass "good enough" human cognition as it reshapes industries and creative work amid uneven quality and relentless growth demands?
In this conversation, Rick Rosner dissects the economic and cognitive assumptions driving today’s AI discourse. He explores how growth narratives allow tech companies to justify high valuations, even amid uneven profits, and why AI is used to reduce labour while often producing error-prone work requiring human correction. Rosner critiques the term “AI slop,” arguing it obscures the difference between careless and conscientious use of generative tools. He then challenges what he sees as misplaced reverence for human cognition. For Rosner, consciousness is not exceptional but evolutionary, clumsy, and replicable — a level of thinking artificial systems may one day match.
Rick Rosner: People who run companies are happy to be convinced that AI allows them to let go of a large number of employees and save a significant amount of money. That is happening and will continue to happen.
As Doctorow explains, a company can be valued using its P/E ratio — the price-to-earnings ratio.
A normal company — say, a chain of auto shops or auto-parts stores across America — should sell for a reasonable multiple of its annual earnings. Let us say this is a stable business, expected neither to explode nor to collapse. It is going to continue, growing at the steady pace the market anticipates, for decades to come. So an auto-parts store or chain should perhaps sell for a multiple in the mid-teens of annual earnings; call it about 14 times annual earnings in this example.
A company that has one hundred million dollars in annual earnings could reasonably be priced at fourteen times that amount — 1.4 billion dollars — with the share price being that figure divided by the number of shares outstanding. That applies to an established business. A growth business like Nvidia might sell for several dozen times annual earnings, because investors expect much faster growth than from a mature chain of auto-parts stores.
A growth company might also, for long stretches, barely make a profit or even run losses. For years, a company like Uber fit that description: it spent a long time losing money while still being valued very highly, and reported its first full-year profit as a public company only in 2023. For a long time, the honest answer to “Has Uber ever made a profit?” would have been: “I don’t fucking know, probably not yet,” which is precisely the point about how growth stories can dominate valuation.
Such a company can sell for a very high multiple — fifty, seventy times earnings or more — because people think the sky is the limit, that the company can keep growing more or less indefinitely. Even though it has annual sales of one billion dollars now, ten years from now it might have yearly sales of twenty billion dollars. You want to get in on the ground floor, and you are willing to pay a high P/E for the stock.
Doctorow explains that once the growth story disappears and people decide that a company has become mature — like the auto-parts company — its P/E drops from fifty to fifteen, which means you have just lost about seventy percent of the company’s value, assuming earnings stay the same. So AI companies have to continue to hype their products to show that they are still growth companies, that they have not come close to reaching their mature potential. That means they are always pushing to sell new applications.
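To make the arithmetic concrete, here is a minimal sketch in Python using the illustrative figures above (the numbers are hypothetical examples, not real company data):

```python
def market_value(annual_earnings: float, pe_multiple: float) -> float:
    """Price a company as a multiple of its annual earnings."""
    return annual_earnings * pe_multiple

earnings = 100e6  # $100 million in annual earnings, as in the example above

# A mature business at a mid-teens multiple.
print(f"Mature valuation: ${market_value(earnings, 14) / 1e9:.1f}B")  # -> $1.4B

# The same earnings priced as a growth story, then re-rated as mature.
growth = market_value(earnings, 50)
mature = market_value(earnings, 15)
print(f"Value lost on re-rating: {1 - mature / growth:.0%}")  # -> 70%
```

With earnings unchanged, the entire loss comes from the multiple: dropping from 50 to 15 times earnings wipes out seventy percent of the market value.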
I agree with that part of his argument: AI purveyors will relentlessly make new claims and find new fields in which, they say, AI can render large numbers of employees redundant.
I also accept Doctorow's argument that, in many cases, they replace good human work with poor AI work — work that humans then have to check, a task more tedious than doing the work themselves, and whose errors are harder to find.
AI can stupidly leave all sorts of backdoors and other vulnerabilities that humans then have to find. In some ways, finding those errors is harder than having a human write the original code in the first place. So that is a mess.
But I think "slop" is the wrong term, because it does not differentiate between the quality use of AI and the lazy use of AI. Do you think that is a reasonable criticism of the term "slop"?

Scott Douglas Jacobsen: It is a colloquialism meant to convey an idea, but the spirit of what you are saying is generally correct.

Rick Rosner: I have seen poor AI work, and I have seen work that you could almost consider conscientious — where someone has sat down, worked with the AI, and removed the obvious nonsense.
What do I look at? Midjourney has a daily sampler of work. You can look at short clips — two seconds, five seconds — of video generated by the AI from human prompts. Some examples are purely illustrative and not meant to replicate reality: stylized animations that resemble magazine illustrations, animated for video. Because they are not aiming for realism, there are fewer opportunities for obvious nonsense.
In videos that do try to replicate reality — for example, a model walking down a runway — if you look closely enough, you can still find bits of nonsense. Two years ago, everyone joked about hands having the wrong number of fingers. Now fingers are mostly corrected. Instead, you might find bad physics or joints that do not move in ways human joints move. You have to look harder, but you can still find nonsense in almost every clip that purports to be realistic.
However, a conscientious user of AI can go through many iterations until the obvious nonsense has been designed out. You could end up with AI-generated work that matches the quality of a quarter-million-dollar shoot for a television ad.
Doctorow is a smart guy who understands tech. He does not think the tech is that good, or that it can become that good. He can make a great argument, but I do not buy it. The error people make when they say AI cannot be as good as, or better than, human cognition is in calling human cognition good.
Saying AI cannot live up to human cognition assumes that human cognition is pretty good — or that consciousness is too special.
Let us assume consciousness will eventually be fully understood, because you should not be able to use “consciousness not being understood” as part of your argument. So if you want to claim consciousness is special, you almost have to default to the idea that it has so many ingredients — or one special ingredient, like quantum neurons — or that it is so precisely balanced that even if we eventually figure it out, human consciousness remains too excellent to be surpassed by mechanical consciousness.
That is the part I do not buy: that human consciousness is excellent. Consciousness in general is something that will evolve given the right, not uncommon, circumstances.
You have a complex environment. You have organisms that already have specialized brains, which precede the more generalist brains that eventually arise.
I argue that brains are an advantage at any level of complexity. Anytime you can have a brain — or evolve a more complex one — it offers an advantage if you can keep it within your physiological budget. It cannot be so expensive to grow and operate that it harms the animal. But if you can build a brain cheaply and evolve it relatively cheaply, it is an advantage to have one.
Now, maybe there is a limit on that — a reasonable biological limit.
There is a biological limit to the human brain. One limit is that you cannot build a head so large that it kills every woman trying to give birth because the head will not fit through the vaginal canal. Human heads are already so big that the pelvic bones have to separate in the middle, and the skull of a newborn is made of plates that can be compressed so it fits through the birth canal. It already takes a lot of engineering to fit our big heads out of there. That is a limit on skull and brain size, at least until birth.
There may be another limit: would it really be enough of an advantage for us to walk around with giant “brainiac” skulls — Mars Attacks–style heads with basketball-sized brains? I do not know. But in any case, given a variety of environments and organisms, the evolutionary push is going to be for bigger brains, and those brains are going to be conscious.
My argument is that they will be conscious, but also bad. Consciousness is not especially noble; it is often clumsy. It will not take much for AI to achieve those clumsy levels of cognition.
Rick Rosner is an accomplished television writer with credits on shows like Jimmy Kimmel Live!, Crank Yankers, and The Man Show. Over his career, he has earned multiple Writers Guild Award nominations—winning one—and an Emmy nomination. Rosner holds a broad academic background, graduating with the equivalent of eight majors. Based in Los Angeles, he continues to write and develop ideas while spending time with his wife, daughter, and two dogs.
Scott Douglas Jacobsen is the publisher of In-Sight Publishing (ISBN: 978-1-0692343) and Editor-in-Chief of In-Sight: Interviews (ISSN: 2369-6885). He writes for The Good Men Project, International Policy Digest (ISSN: 2332–9416), The Humanist (Print: ISSN 0018-7399; Online: ISSN 2163-3576), Basic Income Earth Network (UK Registered Charity 1177066), A Further Inquiry, and other media. He is a member in good standing of numerous media organizations.
Last updated May 3, 2025. These terms govern all In Sight Publishing content—past, present, and future—and supersede any prior notices. In Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons BY‑NC‑ND 4.0; © In Sight Publishing by Scott Douglas Jacobsen 2012–Present. All trademarks, performances, databases & branding are owned by their rights holders; no use without permission. Unauthorized copying, modification, framing or public communication is prohibited. External links are not endorsed. Cookies & tracking require consent, and data processing complies with PIPEDA & GDPR; no data from children < 13 (COPPA). Content meets WCAG 2.1 AA under the Accessible Canada Act & is preserved in open archival formats with backups. Excerpts & links require full credit & hyperlink; limited quoting under fair-dealing & fair-use. All content is informational; no liability for errors or omissions: Feedback welcome, and verified errors corrected promptly. For permissions or DMCA notices, email: scott.jacobsen2025@gmail.com. Site use is governed by BC laws; content is “as‑is,” liability limited, users indemnify us; moral, performers’ & database sui generis rights reserved.