The Ghost and the Machine

A good friend of mine – let’s call her “Aly” ;-) – was a bit put off (or worse) by my using ChatGPT to write the second part of my story of a Rice engineering student who drops out to become a musician and ultimately dies of a drug overdose. I did write the first part myself, borrowing details from my own real experience as a Rice engineering student. But then, partly out of laziness and partly as an experiment, I asked ChatGPT to write the second part “in the style of Eudora Welty”, a writer from my hometown of Jackson, Mississippi, whom I had quoted in the first part. Y’all following along so far?

Some readers who knew me back in my high school and college days at first thought the original story was true, but they were then confused when, in that story, I dropped out of school, started using drugs heavily, etc. The story was a bit trite, I admit, but it benefited more than a little from the frisson engendered by uncertainty about what was real and what was not.

Aly very perceptively pointed out to me that, when reading the second story, she had me “in her head”. I think she had a sense that it was me talking, me telling the story, and so when she learned otherwise, she was understandably miffed. It was a bait-and-switch that neither Aly nor the rest of you expected or deserved. Yes, I revealed that I used ChatGPT, and I shared the prompt. But I fooled you first and only fessed up at the end. MY APOLOGIES.

I’m seeing an increasing number of posts in Facebook groups I follow that are clearly AI-generated. They sometimes contain factual errors that any follower of the group would know are wrong, but even when they’re mostly accurate – and well-structured and clear – they are bland, somewhat obvious rehashes of accumulated “wisdom” about the topic. I find myself reacting somewhat as Aly did, annoyed and offended that my legitimate interest in the group’s postings is being met with bland rehashes from a machine. I think it would feel better somehow if such posts were labeled up front as AI (unlike how I did it), so that I/we wouldn’t be reading them, à la Aly, with an archetypal group member in our head(s).

A year or so ago, as folks were first reacting to the implications of generative AI systems, it seemed sensible to propose requiring systems and people to label content generated with such systems. LIKE I SHOULD HAVE DONE. But as we all start collaborating with generative AI to make stuff, I wonder whether such labels will be meaningful going forward. I’ve started to use AI to generate the track art for Storytown tracks, and I have not been telling you when I do. Do you care? Those of you who know that I have limited talent in the visual arts may be assuming that I’m paying deserving artists for this stuff. Sorry, but no.

[I highly recommend the podcast Shell Game, in which journalist Evan Ratliff slowly replaces himself, bit by bit, with an AI voice clone to see how far he can actually take it. It’s funny, thoughtful, entertaining, and unnerving.]

Virtually everyone I talk with about the implications of generative AI in their discipline starts off by saying, with confidence, that these systems will “never” do that, or that they can “never” make that, etc. etc. After some spirited pushing back, they invariably admit that the scope of what’s immune from the machine is likely pretty small – and shrinking. From their comments, I believe a significant number of the Facebook group members don’t recognize the AI posts as AI. For those members, the posts are fine. This highlights the perhaps disappointing fact that so much of what we make or consume is just “fine”, and I think it poses a big question: Who are we, anyway? What about us do we think is out of reach of the machine?

As a testament to my amazing prescience, I wrote a song and blog post right before ChatGPT was first unleashed upon the world (click here or the art to listen):

Well, technically, my blog post was published just one week after ChatGPT was released. The post doesn’t mention ChatGPT (although it does reference OpenAI’s GPT-3, the model on which ChatGPT was built), and I won’t try to rehash my post’s very insightful insights, but here’s one excerpt that touches on the “who are we?” question:

It feels like we’ve moved past determinism with some of the AI-generated stuff. Do these creations and capabilities now bear some imprint of “humanness”? Is it making progress in that direction? If yes, then what is the source of that humanness; is it from the software designers and the training data, or is it emerging from the machine? Does it matter? Will we have to redefine what humanness means? Is that a manifestation of the singularity?

Where is this all going? The song Surrender suggests one possibility:

We’re all cogs in a big machine
One so vast that it can’t be seen
It’s got no soul so its conscience is clean
Let’s surrender now


For now, don’t forget to mark your calendars for a purely analog, human-intelligence-generated evening next January 15, 7pm at The Cutting Room in NYC (click the image for tickets):

Authors Ken Womack and David Browne will talk about Greenwich Village, Bruce Springsteen, and The Beatles, and bands Bird Streets and Storytown will show what humans can create. Not bad for skin and bones and wetware. Please come.

Thanks for listening, and have a great holiday.

Guy Story