“From an evolutionary perspective, what appears wasteful at the individual level may be functional at the population level.” — Jacobsen
Rick Rosner: “The trajectory is empirical, not mystical, and it deserves careful measurement rather than panic or denial.”
“High-range IQ testing is a sport that very few people play. At this point, with individual intelligence increasingly intertwined with technologically assisted intelligence, you would have to convince me there is still a compelling use for this subculture.”
“Out of the roughly 12,000 underage people I removed, simple math suggests I may have prevented several dozen sexual assaults.”
“When bad actors face no practical constraints, that is always dangerous. Language let us compress reality into symbols, freeing our minds to roam more widely. That same generalist power makes us adaptable—but it also means our systems can fragment when pressure rises. The slope does not always look slippery, until it does.”
AI replaces good work with bad work, yet people still expect limitless growth. Rosner argues the real mistake is assuming human cognition is excellent. If consciousness is evolutionary rather than sacred, artificial systems could eventually reach our clumsy level. The question is not whether AI becomes conscious, but how competently it does so.
In this conversation, Rick Rosner and Scott Douglas Jacobsen examine whether humans can maintain meaningful understanding in an AI-driven world. Rosner argues that advanced intelligence will force people either to merge with AI or accept a diminished grasp of reality, comparing non-integrated humans to household dogs navigating a world they cannot interpret. Jacobsen responds that many communities—such as the Amish—function pragmatically within limited worldviews, even when those frameworks are false. Together, they discuss religion, pseudoscience, and functional ignorance, concluding that long-standing human tendencies toward siloed understanding will likely intensify as AI accelerates the pace of complexity.
In this interview, Scott Douglas Jacobsen talks with Rick Rosner about movies, mega-IQ tests, AI, and the future of consciousness. Rosner explains why Long Shot succeeds as sharp wish-fulfillment, reflects on the brutal difficulty of Cooijmans and Hoeflin high-range tests, and worries that humans may become like dogs—immersed in sensation but missing understanding. He sketches consciousness as a crisis-response system that allocates attention under pressure and predicts that only tightly AI-augmented people will ride the coming tsunami of complexity, while most drift through frictionless entertainment, sporadic insight, and increasingly outsourced thinking, with ethics and meaning left dangerously unresolved for everyone.
Scott Douglas Jacobsen and Rick Rosner outline potential compute futures: dystopian “cruel” systems, protective “conservative” networks, uncompromising optimization that turns everything into infrastructure, leisure-driven “endless fun,” passive “idiocracy,” market-driven “capitalist” allocation, adaptive “contextual” orchestration of CPUs, GPUs, and QPUs, and a speculative “Darwinistic” evolution in which compute outlives humanity, optimized for cost, time, and energy efficiency.
Scott Douglas Jacobsen interviews Rick Rosner in a wide-ranging conversation starting with swear words and diving into utilitarianism, longtermism, effective altruism, AI ethics, simulated consciousness, moral uncertainty, and capitalism. Rosner critiques modern frameworks, explores future consciousness, and calls for ethical clarity amid rapid technological change.