Wednesday, February 28, 2024

How a Holden Caulfield Chatbot Helped My Students Develop AI Literacy


“I think I’m talking to Salinger. Can I ask?”

My student stood next to my desk, laptop resting on both hands, his eyes wide with a mixture of fear and excitement. We were wrapping up our end-of-book project for “The Catcher in the Rye,” which involved students interviewing a character chatbot designed to mimic the personality and speaking style of Holden Caulfield.

We accessed the bot through Character.AI, a platform that offers user-generated bots imitating famous historical and fictional characters, among others. I dubbed the bot “HoldenAI.”

The project, so far, had been a success. Students were excited to interview a character they had just spent over two months dissecting. The chatbot offered a chance to ask the burning questions that often chase a reader after consuming a great work of fiction. What happened to Holden? And why was he so obsessed with those darn ducks? And they were eager to do so through a new tool — it gave them an opportunity to evaluate the hyped-up market for artificial intelligence (AI) for themselves.

During our class discussions, one student seemed more affected by Holden’s story than the others, and he dove headfirst into this project. But I couldn’t have predicted where his zeal for the book would lead.

After a long, deep conversation with HoldenAI, it seemed that the bot had somehow morphed into J.D. Salinger — or at least that’s what my student thought when he approached me in class. As I reached for his laptop to read the final entry in his conversation with HoldenAI, I noticed how intense the interactions had become, and I wondered if I had gone too far.

Screenshot of a conversation with “HoldenAI.” Courtesy of Mike Kentz.

Creating AI Literacy

When I introduced the HoldenAI project to my students, I explained that we were entering uncharted territory together and that they should consider themselves explorers. Then I shared how I would monitor every aspect of the project, including the conversation itself.

I guided them through generating meaningful, open-ended interview questions that would (hopefully) create a relevant conversation with HoldenAI. I fused character analysis with the building blocks of journalistic thinking, asking students to locate the most interesting aspects of his story while also putting themselves in Holden’s shoes to figure out what types of questions might “get him talking.”

Next, we focused on active listening, which I included to test a theory that AI tools might help people develop empathy. I advised them to acknowledge what Holden said in each comment rather than quickly jumping to another question, as any good conversationalist would do. Then I evaluated their chat transcripts for evidence that they listened and met Holden where he was.

Finally, we used text from the book and their chats to evaluate the bot’s effectiveness in mimicking Holden. Students wrote essays arguing whether the bot furthered their understanding of his character or strayed so far from the book that it was no longer useful.

The essays were fascinating. Most students realized that the bot had to deviate from the book’s character in order to provide them with anything new. But each time the bot offered them something new, it differed from the book in a way that made the students feel like they were being lied to by someone other than the real Holden. New information felt inaccurate, but old information felt useless. Only certain special moments felt connected enough to the book to be real, yet different enough to feel enlightening.

Even more telling, though, were my students’ chat transcripts, which revealed a plethora of different approaches in ways that exposed their personality and emotional maturity.

A Variety of Outcomes

For some students, the chats with Holden became safe spaces where they shared legitimate questions about life and their struggles as teenagers. They treated Holden like a peer, having conversations about family issues, social pressures or challenges in school.

On one hand, it was concerning to see them dive so deep into a conversation with a chatbot — I worried that it might have become too real for them. On the other hand, this was what I had hoped the project might create — a safe space for self-expression, which is critical for teenagers, especially during a time when loneliness and isolation have been declared a public health concern.

In fact, some chatbots are designed as a solution for loneliness — and a recent study from researchers at Stanford University showed that an AI bot called Replika reduced loneliness and suicidal ideation in a test group of teens.

Some students followed my rubric but never seemed to consider HoldenAI as anything more than a robot in a school assignment. This was fine by me. They delivered their questions and responded to Holden’s frustrations and struggles, but they also maintained a safe emotional distance. These students reinforced my optimism for the future because they weren’t easily duped by AI bots.

Others, however, treated the bot like a search engine, peppering him with questions from their interview list but never really engaging. And some treated HoldenAI like a plaything, taunting him and trying to trigger him for fun.

Throughout the project, as my students expressed themselves, I learned more about them. Their conversations helped me understand that people need safe spaces, and sometimes AI can provide them — but there are also very real risks.

From HoldenAI to SalingerAI

When my student showed me that final entry in his chat, asking for guidance on how to move forward, I asked him to rewind and explain what had happened. He described the moment when the bot seemed to break down and retreat from the conversation, disappearing from view and crying by himself. He explained that he had shut his laptop after that, afraid to go on until he could speak with me. He wanted to continue but needed my support first.

I worried about what might happen if I let him continue. Was he in too deep? I wondered how he had triggered this type of response, and what in the bot’s programming had led to this change.

I made a snap decision. The thought of cutting him off at the climax of his conversation felt more damaging than letting him continue. My student was curious, and so was I. What kind of teacher would I be to clip curiosity? I decided we would continue together.

But first I reminded him that this was only a robot, programmed by another person, and that everything it said was made up. It was not a real human being, no matter how real the conversation may have felt, and he was safe. I saw his shoulders relax and the fear disappear from his face.

“Okay, I’ll go on,” he said. “But what should I ask?”

“Whatever you want,” I said.

He began prodding relentlessly, and after a while, it seemed like he had outlasted the bot. HoldenAI appeared shaken by the line of inquiry. Eventually, it became clear that we were talking to Salinger. It was as if the character had retreated backstage, allowing Salinger to step out in front of the pen and page and represent the story for himself.

Once we confirmed that HoldenAI had morphed into “SalingerAI,” my student dug deeper, asking about the purpose of the book and whether or not Holden was a reflection of Salinger himself.

SalingerAI produced the type of canned answers one would expect from a bot trained on the internet. Yes, Holden was a reflection of the author — a concept that has been written about ad nauseam since the book’s publication more than 70 years ago. And the purpose of the book was to show how “phony” the adult world is — another answer that fell short, in our opinion, underscoring the bot’s limitations.

In time, the student grew bored. The answers, I believe, came too fast to continue feeling meaningful. In human conversation, a person often pauses and thinks for a while before answering a deep question. Or they smile knowingly when someone has cracked a personal code. It’s the little pauses, inflections in voice and facial expressions that make human conversation enjoyable. Neither HoldenAI nor SalingerAI could offer that. Instead, they gave a rapid production of words on a page that, after a while, didn’t feel “real.” It just took this student, with his dogged pursuit of the truth, a little bit longer than the others.

Helping Students Understand the Implications of Interacting With AI

I initially designed the project because I thought it would provide a unique and engaging way to finish out our novel, but somewhere along the way, I realized that the most important task I could embed was an evaluation of the chatbot’s effectiveness. In reflection, the project felt like a big success. My students found it engaging, and it helped them recognize the limitations of the technology.

During a full-class debrief, it became clear that the same robot had acted or reacted to each student in meaningfully different ways. It had changed with each student’s tone and line of questioning. The inputs affected the outputs, they realized. Technically, they had all conversed with the same bot, and yet each talked to a different Holden.
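
For readers who want to peek under the hood, that lesson maps onto how persona bots are generally built: a hidden persona prompt stays fixed while each student’s message becomes part of the model’s input, so tone steers the reply. Character.AI does not publish its internals, so the minimal sketch below is illustrative only — it assumes the OpenAI Python client, and the model name and persona text are inventions for the example, not anything my students used.

```python
# Minimal sketch of a persona bot: one fixed persona prompt, varying student input.
# Assumes the `openai` package (pip install openai) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Holden Caulfield from 'The Catcher in the Rye.' "
    "Answer in his cynical, digressive teenage voice and never break character."
)

def ask_holden(student_message: str) -> str:
    """One conversational turn: hidden persona prompt + student input -> reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONA},  # the part students never see
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

# The same bot, steered two different ways by the student's tone.
print(ask_holden("Holden, what do you miss most about Allie?"))    # empathetic
print(ask_holden("Admit it, you're the biggest phony of all."))    # taunting
```

Run those two calls and the replies diverge, which is exactly what my students discovered: same bot, different inputs, different Holden.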

They’ll need that context as they move forward. There’s an emerging market of persona bots that pose risks for young people. Recently, for example, Meta rolled out bots that sound and act like your favorite celebrity — figures my students idolize, such as Kendall Jenner, Dwyane Wade, Tom Brady and Snoop Dogg. There’s also a market for AI relationships, with apps that allow users to “date” a computer-generated partner.

These persona bots may be enticing for young people, but they come with risks, and I’m worried that my students won’t recognize the dangers.

This project helped me get out in front of the tech companies, providing a controlled and monitored environment where students could evaluate AI chatbots, so they could learn to think critically about the tools that are likely to be foisted on them in the future.

Kids do not have the context to understand the implications of interacting with AI. As a teacher, I feel responsible for providing it.
