Artificial Intelligence and the Religious Studies Classroom

Ask E.B. Tylor what religion is and he would say it is “belief in Spiritual Beings.”
Ask William James and he would tell you it is the “feelings, acts, and experiences of individual men in their solitude.”
Ask Catherine L. Albanese, and she would say it is “a system of symbols (creed, code, cultus) by means of which people (a community) orient themselves in the world with reference to both ordinary and extraordinary powers, meanings, and values.”
Ask Émile Durkheim, Clifford Geertz or others, and you’d get still different answers from the perspectives of sociology, anthropology, theology, philosophy and more.
But what if you ask ChatGPT?
Well, you get a mix of the above, it turns out.
When I asked my friendly neighborhood artificial intelligence (AI) chatbot, it defined religion as “a system of beliefs, practices, symbols, and moral codes that connects individuals or communities to the sacred, the divine, or some ultimate reality or truth.”
In the wake of ChatGPT’s viral launch in late 2022, the question of how to integrate AI into the classroom — and the religious studies classroom at that — has been at the forefront of educators’ minds. With the global expansion of machine learning, big data, and large language models (LLMs), AI has the potential to radically impact teaching and learning, revolutionizing the way students interact with knowledge and how educators engage course participants.
There are, however, significant concerns about its ethical use, technology infrastructures, and fair access.
In this post, I share how I recently used AI as part of my pedagogy to prompt deeper understanding of religion in the United States, and what we might learn from chatbots about how we define and discuss religion.
The AI Unessay
AI is a technology that enables machines and computers to emulate human intelligence and mimic human problem-solving abilities.
The umbrella term “AI” encompasses various machine-based systems that produce predictions, recommendations or content based on direct or indirect human-defined objectives.
Based on LLMs, AI generators like ChatGPT, Jasper or Google Gemini are tools that have been trained on vast amounts of data and text to provide predictive responses to requests, questions and prompts entered by users like you, me or our students.
As with other advances in technology — from mobile phones and social media to enhanced graphics calculators and Wikipedia — educators have responded to AI in various ways. Some have moved quickly to ban its use and bemoan the submission of essays and other coursework clearly created with the help of AI.
Others have moved to integrate AI into their religious studies pedagogy, inviting students to create videos or infographics with the assistance of AI to explain the elements and role of rituals, stimulating class discussion, or to treat “AI as a tool for lessons that go beyond academics and also focus on the whole person.”
When I recently taught a course on American religion, I decided to assign what I called an “AI Unessay.”
The usual unessay invites students of varying learning modalities and expressions to create final projects that demonstrate their grasp of course material and discussions beyond the traditional essay. These can be hands-on demonstrations, mini-documentaries, artistic visualizations, performative projects or social media campaigns.
The AI Unessay invited course participants to design a set of prompts for an AI generator (e.g., ChatGPT) to write a 2,000-word essay on a topic in American religion, broadly defined. Then, participants were asked to write their own critical response to this AI essay, analyzing its strengths, weaknesses, sources and the process itself.
AI’s Religious Illiteracy
Students chose a variety of topics to cover, from religious themes in metal music and superhero comics to the “trad wife” trend among members of the Church of Jesus Christ of Latter-day Saints and digital seances.
Throughout the semester, I worked with participants to refine their topic selections, come up with AI prompts and conduct secondary-source research and firsthand “digital fieldwork.”
Meanwhile, course lectures and readings were provided to supply helpful context on how each of these themes might be better placed within the wider currents of American religion and its intersections with U.S. politics, economics and society.
Not all course participants opted for the AI Unessay. Some wrote traditional papers or put together a different kind of creative final project. But the majority of students chose the AI-based project, saying they not only wanted to learn how to work productively, and critically, with AI, but also wondered whether the technology was up to the challenge of understanding, parsing and pontificating on America’s religious diversity.
Though participants did learn some new information from the AI essays and discovered some data they had previously been unaware of, they were — on the whole — disappointed with the results. They found, like many others, that AI responses were often biased, inaccurate or even harmfully ignorant.
They also found that citations and sources were a decidedly mixed bag, with the chatbots often manufacturing fake data and made-up books or articles.
And finally, numerous students reflected that it was a challenge to get the AI chatbot to write at the appropriate length (2,000 words). The technology was often too efficient, churning out well-structured, but far too brief, answers to questions about metal music’s spiritual intimations or the nuances of the “trad wife” trend on TikTok. When asked to elaborate, participants found the AI became overly repetitive, fabricated information outright, or invented concrete details and data that were inaccurate or exaggerated. It also regurgitated implicit and explicit biases against marginalized religious communities or intra-faith minorities.
As one participant summed it up, “I found AI to be more religiously illiterate than me, which is saying something!”
Where to from here?
Asked to consider why AI was found wanting in its accuracy in depicting American religious diversity, participants surmised that because AI is trained on what internet publics “know” and share about religion, it is just as religiously illiterate as the rest of us. They suggested it takes students of religion who are paying careful attention to help it along, correct its mistakes and continue to critically question the just-so narratives about religion, religions and the religious that can be found online.
In other words, participants discovered how AI amplifies and compounds some of the worst in religious illiteracy.
Writing for the Religion, Agency and AI forum, digital religion scholar Giulia Evolvi reminds us that in an age of hypermediation, “religious communication, like all modern communication, is no longer mediated linearly. Instead, digital media amplifies and reshapes it, creating intensified networks and narratives.”
Thus, in an age when more people will turn to AI to answer their questions about religion and spirituality, it is important that we engage with the technology, critique its biases and weaknesses and continue to pay attention to the ways humans employ the concept of “religion” to make sense of the world around them and their place in it.
Even with the advent of AI technologies — and religious studies students’ use of them — the why of studying religion doesn’t change. Religion remains interesting, intricate and important.
We might just need to shift some of the ways we go about making sense of it and adapt our classrooms and conversations accordingly.