In the digital muddle

Deepfakes, generative AI, and the futures of truth

What happened

In the lead-up to the Women’s World Cup this month, the French telecom company Orange launched a new ad campaign celebrating France’s national team. For the first minute of the video, we see moment after moment of brilliant soccer executed by French superstars like Antoine Griezmann and Kylian Mbappé . . . until we learn that the players are from the French women’s team. This ad, as Business Insider says, uses “deceptive editing in all the right ways,” subverting our expectations and communicating how exciting, athletic, and dynamic women’s soccer is.

So what

Our first reaction to the ad was simply “wow.” The quality of the deepfake is stunning, and we applaud using video editing technology to champion women’s sports and surface our biases. But then we reflected on the wider implications of such technologies and began to worry.

Some of you might have been following the @deeptomcruise TikTok account, in which a deepfake Tom Cruise participates in interviews, sings, dances, visits Harvard, and celebrates Canada. The team behind the account, Metaphysic.ai, appeared at this April’s TED conference, unveiling a new technology that enables them to produce deepfakes in real time. As Tom Graham, the co-founder of Metaphysic.ai, said on the TED stage, “we build this stuff and I'm worried, right? Worried is the right instinct for everybody to have.”

This point was brought home to us when we listened to an episode of the podcast The World as You’ll Know It about how AI will turbocharge misinformation. The episode opens with a speech by Joe Biden articulating why he is reinstating the military draft for the first time in more than 50 years. Don’t recall that speech? That’s because it never happened: it was created out of whole cloth by AI-driven editing tools, as were the images of the Pentagon on fire that briefly rattled stock markets.

Combine those editing tools with the power of large language models like ChatGPT and you can generate an avalanche of authentic-seeming false content quickly and at almost no cost. As NYU professor Gary Marcus explained in March, “For decades, troll farms had hundreds or thousands of iPhones working in parallel to make stories. Now, you don’t just make one story, you can make 100,000 even a million in five minutes or an hour.”

In 2020, the Journal of Futures Studies dedicated an issue to exploring what it called an “epistemological fracturing” that has emerged as “new forces . . . lead people to inhabit mutually unintelligible worlds.” The idea that we are losing our collective ability to agree on what is true is not new. We have been hearing for years about filter bubbles, for example. However, the emergence of deepfake technologies and generative AI has the potential to accelerate and deepen this fracturing. What happens to our civic spaces when we can no longer trust what we see and hear? What might the impacts on elections, public policy, and social cohesion be?

Now what

Bill Gates believes generative AI is as “fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other.” It has the potential to help us achieve great things. It also has the potential to help us fall further apart. Right now, says Gary Marcus, “we teach media literacy so that students can understand what they see on TV or read online. [Now] we need AI literacy as well.”

Pranshu Verma, a reporter on The Washington Post's technology team, adds that we need to be thinking and talking together about “what do we do from a very early age to create the critical reasoning and create the skeptical thinking” we will need to navigate the AI future. Teachers and educators, he asserts, “are going to have to also grapple [with the fact] there will always be a new thing that tricks us, and . . . that we need to be creating thinkers that are a bit less trickable.”

Food for thought

As educators, we have a pivotal role in creating a future where individuals are well-versed in AI and adept at discerning the nuances of AI-augmented reality. But to create this future, we need to start thinking carefully and deliberately about how we help children from the earliest grades through graduation understand, assess, and use AI.

Here are a few questions to consider:

  • For elementary educators

    • How might we teach young students to think critically about the media they consume, even as AI technology makes it more difficult to distinguish truth from fiction?

    • Should we be concerned about exposing young children to certain AI technologies, like chatbots, that could provide harmful information or influence their development? How might we balance protecting students while still allowing them to engage with innovative technologies?

    • What kind of curriculum could help young students build a healthy skepticism and develop their own inner sense of truth and ethics to guide them through an information landscape shaped by AI?

    • How might we encourage a balance between curiosity and discernment in young students when interacting with AI technologies?

  • For secondary educators

    • How might we update media literacy programs to address the new misinformation threats posed by deepfakes and AI-generated content?

    • Should computer science and tech courses include ethics units on the responsible development and use of AI systems?

    • How might cross-curricular projects have students anticipate and propose solutions to AI's potential societal impacts?

    • How might we encourage students to engage in open dialogues about the societal and personal consequences of AI, ensuring a diverse range of voices and perspectives?

Scene from 2036: In the digital muddle

Jamie, a 26-year-old digital content analyst, starts his day as his smart wall displays a video clip causing quite a stir: an alleged impromptu speech by the state governor. Authentic or digitally manipulated? Jamie earmarks it for a detailed analysis later.

On his way to his favorite café on Boston Avenue, AR glasses perched on his nose display layers of information: snippets of Tulsa's rich history, pop-up messages from friends, and a gentle reminder about a virtual seminar later in the day. One invitation catches his attention: a "Real Meet" gathering at the Guthrie Green that evening, where the authenticity of the governor's video will be the topic for debate.

At the café, he meets Maya, his longtime friend and a digital archivist at the Tulsa Historical Society and Museum. Over coffee, Maya animatedly discusses her upcoming exhibit: "AI's Interpretation of the 2020s". It’s a deep dive into how AI can recreate or interpret historical events, at times blurring the lines of what really occurred. "The challenge," she notes, "lies in discerning between what AI believes happened and what truly did."

The afternoon sees Jamie engrossed in the virtual seminar titled "Deciphering the Digital Mirage." Participants, represented by diverse avatars, find themselves in a detailed virtual recreation of the historic Cain’s Ballroom. The seminar centers on AI's escalating role in shaping both local and global narratives. As part of the event, attendees test their wits against real-time simulations, where the task is distinguishing genuine historical recordings from their AI-enhanced versions.

Come evening, Guthrie Green is alive with the energy of Tulsa's young adults. This is the “Real Meet”, where digital aids are put aside for genuine, passionate human interaction. The focal point of the evening's spirited debate: the governor's video. Is it an honest depiction of the governor's words, or an AI-enhanced facsimile designed to deceive? Printed material and documents — some with QR codes linking to databases and archives (a nod to the bridging of the digital and analog) — are brandished as evidence in the impassioned arguments.

Later, as the night envelops Tulsa, Jamie settles down to pen a post for his blog. Under the headline "Navigating Authenticity in Tulsa's Dual World", he contemplates the city's intricate balance between the concrete and the digitally reconstructed, pondering the relentless human pursuit of true experiences in an era of blended realities.

Generative AI Disclosure

We asked ChatGPT to take on the role of an experienced futurist and suggest several critical uncertainties related to deepfakes and the future of truth. It gave us three sets of uncertainties, and we chose to explore a future marked by high AI literacy and low responsibility by media platforms. We had ChatGPT generate several ideas for scenes set in such a future and worked with it to develop the story of Jamie navigating the digital muddle in 2036. Then we had ChatGPT play the role of an experienced editor and offer constructive feedback on the entire piece. As with any feedback from editors, we incorporated some of its suggestions and ignored others. Finally, we worked with Bing Image Generator to create an image of the “Real Meet” gathering on Guthrie Green.
