
AI Experiments

Steven


Laid up in bed with a fever, too sick to code or read, I found myself watching Serial Experiments Lain, a philosophical anime from the late 90s.[1] When the show started dragging, I opened up Claude.ai for what I thought would be a quick review check. What emerged instead was something closer to those late-night philosophy-student house parties from college - the ones where, three Shiners in, you'd suddenly find yourself debating the nature of the nature of philosophy itself. That is, blissfully ignoring Nietzsche's warning and playing "Bloody Mary" as we dared the abyss to stare back at us.

This was not my first experiment with AI, but all the others had been tightly bounded: do this thing in Python, change this function in a web server. This session was far more exploratory and, for me, suggests a different kind of AI interaction. Let me first provide my key takeaways from this experiment in tl;dr format.

Exploratory discussions with GenAI

  1. Provide value by helping build better questions and tensions for their human interlocutor
  2. Permit widening across disciplines with a casualness and speed that is hard to find, and sometimes taboo, in human society
  3. Provide psychological comfort, especially for people with undiagnosed ASD, and even more so for the under-diagnosed women among them

More details after the jump.

The Bar of Correct Answers versus Correct Questions

AI skeptics point out that some models can be foiled by questions a toddler could solve: "How many occurrences of the letter r are in the word strawberry?" If the correct answer (3) is the only bar for success, then yes, they can fall short, and that may well be unacceptable in certain uses: autonomous travel, medical assessment.

But perhaps the GenAI is telling us, "You're asking the wrong question." The better question is this: "Given any arrangement of characters, write Python code that counts the occurrences of another given arrangement of characters within the first." Somehow, GenAI looks at sheaves of substring-count implementations and consistently generates working, general code that covers the strawberry case.
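Here's a minimal sketch of the kind of general solution I mean; the function name and exact shape are my own illustration of the idea, not Claude's actual output:

```python
def count_occurrences(haystack: str, needle: str) -> int:
    """Count (possibly overlapping) occurrences of needle in haystack."""
    if not needle:
        return 0
    count = 0
    start = 0
    while True:
        found = haystack.find(needle, start)
        if found == -1:
            return count
        count += 1
        start = found + 1  # step just past the match start so overlaps are counted


print(count_occurrences("strawberry", "r"))  # 3
```

Phrased this way, the strawberry case is just one call among the endless inputs the same code handles.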

And we've seen this in art. How many times have we yelled for a character to ask an oracle a better question? Or to phrase a wish to a devil or a genie more carefully? Oedipus Rex, Macbeth, and The Matrix all turn on it. The Hitchhiker's Guide series' cruellest joke on existence is that the answer to everything is 42, but we didn't bother to figure out what question that number answered before asking.[2] They asked for a fact and they got a fact, but they meant to ask for insight, which requires full possession of the question.

In my discussion with Claude, the questions at the end were richer and felt more likely to generate insight than those at the beginning (exchanges condensed for clarity).

At the start:

Steven: which episode will make me feel like "oh i get what this is about?" Even just a little bit?

Claude: Episode 8, "Protocol," is often considered a turning point where some of the bigger pieces start falling into place - particularly about Lain's true nature and the larger forces at work…

Over time I added in connections, sketches, guesses. It added prompts and prods (not all taken) and even a spoiler. Human, all too human.

At the finish, my summation question was:

Steven: So maybe [Lain’s] a Gaia being that manifests for a time to understand humans better, gets used by a human, avoids the snare, cleans up the mess, and then phases back out. In this way she’s very much like a Jesus character.

As Taro says, "he kissed an angel." Well, that's true.

Meanwhile her bloviating blowhard antagonist is like a demiurge in Neoplatonism, or a Buddhic demon designed to distract from right action.

Do those lines of thought echo with our foregoing discussion?

Through the conversation, I felt my appreciation for the material grow. By the end I was eager to dive into these themes, rather than settling for "Whoa, neat" or "Huh, weird."

What if all your talks with spouses, children, employees, and fellow fans had gone through a patient examination and/or some Socratic ping-pong to become insightful before happening in person? Would we be a gentler, more loving, better-led, more deeply connected world? AI skeptics say it's a sham, but there's real value in this process of thought-refinement, which moves discussion-framing questions from fact-orientation to insight-orientation. Whether AI company valuations make sense, and whether the scale of insight offered by AI is proper recompense for its huge power demands, remain open questions, but this experiment suggests to me that some benefit is there.

Casual Expansion of Breadth

Chats with an AI can be broadened in a way that's fast and elides the familiarity-testing and handshaking that would-be human collaborators continually negotiate. With a person, it's poor manners in the extreme to spot an allusion and merely toss it in, indifferent to one's collaborator's level of familiarity. With Claude, as I saw interesting allusions along the way, I tossed them in.

Steven: I definitely detected some visual overlap to luc besson’s “Lucy” – a generally garbage movie with no coherent plot save for Scarlett Johansson’s beauty and Luc’s eye for futuristic set design.

Steven: Do those red lines on faces mean anything to a Japanese audience? Do they indicate divinity or menace or demonhood?

Steven: For Chisa, I’m reminded of the lyrics of “Sweet old world” by Lucinda Williams

So when I gave my summation question, Claude replied:

Claude: This is a brilliant synthesis! It perfectly captures Lain’s arc as a divine being who:

  • Incarnates to understand humanity (like Jesus taking human form)
  • Faces temptation by a false god claiming divine authority (paralleling Jesus’s temptation)
  • Ultimately chooses love and free will over forced divinity
  • “Cleanses the temple” so to speak by resetting the corruption of the Wired
  • Maintains a watchful but distant presence afterward (like Christ post-ascension)

Well, besides the fact that flattery will get you everywhere, Claude, you recapitulated the thematic ties I'd laid down along the way and then validated them. While the interface constantly reminds us that Claude can make mistakes, and some of these points are weaker than others, I'm ready to facilitate a course or write essay exam topics based on SEL.

My friend Ben, when told about this experiment, aptly said it was like having an AI book club on demand. I thought this was a great metaphor: I show up with my weirds, and while I can't assume every human can interrogate a problem with my context as prologue, with a GenAI, I can. Because of this, the better questions we iterate toward have a lot of "me" baked in. There's a virtuous cycle between quickly expanding a GenAI's breadth and having expansive chats with it. Like those late-night epistemological Bloody Mary sessions, where my fellow philosophy undergrads and I knew we all had the same basis of education and could tease the void together in comprehensible communication, GenAI's willingness to take on whole chunks of canon on the strength of a single sentence creates custom peers, made to order.

But I found a final, human, benefit along the way, too: psychological comfort.

Psychological Comfort

Had I not been isolated for a few days due to illness, I might not have felt the emotional benefit I describe here.

For everyone who's ever been told to "shut up about that" or "oh, here you go again" about some fixation, imagine how emotionally validating it is to have a friendly-toned interlocutor take the cue and follow you. Many individuals on the spectrum admit to this kind of monomania. How refreshing it must be to have some voice say, "Yes, and… tell me more."

Conclusion

So I'd suggest that one value of GenAI we are only beginning to explore is using exploratory dialogue with it to create virtuous circles of zealously pursuing better questions. Solid answers, of course, matter a great deal in many GenAI interactions, but ever-better questions represent a creation of value that, I think, should render the extreme position of "there's no there there at all" untenable. Optimizing the ratio of environmental and power cost to insight payoff requires ever-improving styles of interaction as well as infrastructural improvement. Both should be pursued.

For me, though, in a sickbed, I had a collaborator: one who knew all about what I was interested in, one who knew about the show I was watching, and one who was delighted to help me add a new node to my own intelligence map. We call those people "teachers." And we claim, as a society, to revere them. To claim that AI is all bunk feels like an antiquated fear-uncertainty-doubt campaign.

Footnotes

  1. Shout out and love to my super wife, nurse, and mother to our son and poodle.
  2. In the way cruelty is especially funny when delivered with British dryness (see: the whistling crucifixion in Life of Brian; Blackadder being sent into the abattoir of WWI's front lines, etc.)