Yale Sociologist Daniel Karell on Radicalism, AI, and the Future of Political Discourse

By Rick Harrison
September 25, 2025

Daniel Karell stands outside the sociology building in New Haven

Daniel Karell, a faculty fellow with Yale’s Institution for Social and Policy Studies and an assistant professor of sociology, has spent years studying the intersection of political discourse, radicalism, and digital platforms.

At an ISPS-supported conference last year, he convened nearly two dozen social scientists to discuss the possibilities and limitations of using artificial intelligence models to reliably gain insights into culture, public opinion, and other domains, resulting in a special issue of Sociological Methods & Research.

In this wide-ranging conversation, condensed and edited for clarity, Karell reflects on his intellectual journey — from fieldwork in Afghanistan to experiments with AI-generated news — and shares insights into how meaning is made, contested, and manipulated in today’s fragmented media landscape.

ISPS: What first drew you to study radicalism and contentious politics?

Daniel Karell: A lot of my interest was shaped during graduate school, especially around the post-Sept. 11 War on Terror. In particular, I was fascinated by how Americans and Europeans often labeled certain groups in Afghanistan as radicals, while a sizable portion of Afghans considered them legitimate rulers or fighters against a kind of Western dominance.

ISPS: How do you interpret this discrepancy?

DK: It’s really a matter of perspective. In the Afghan context, they were fighting a sequence of wars: first against Afghan communists and the Soviet Union, then against one another in a civil war, and eventually against the United States and NATO. For Afghans, it was a long period of political violence and contentious politics. But from the outside, it was often categorized as radicalism.

ISPS: You seem understandably cautious about assigning labels.

DK: Absolutely. Labels such as “radical” and “backlash” are often imposed by outsiders, obscuring complexities and the lived realities of those involved. I don’t want to presume that certain groups are radical. I’m interested in how they understand what they are doing. It’s not like gathering medical data or asking whether someone graduated from college, where definitions are clearer. When considering cultural phenomena, meaning is often contested. It involves asking whether people see themselves as part of a movement and what that movement means to them. It can be a big knot, and much of my work has sought to untangle it.

ISPS: Can you provide an example of how you approach such topics with clarity and nuance?

DK: So, for example, while working on an ongoing project, I was reluctant to label the Blue Lives Matter movement as an outright case of backlash against the Black Lives Matter movement, because many supporters of Blue Lives Matter would likely say they are simply supporting local police officers. But from a scholarly perspective, it fits into the backlash literature. It wouldn’t have reached the size and influence it did if not for pushing back against Black Lives Matter. My work often focuses on this tension between how groups see themselves and how scholars categorize them.

ISPS: How do you resolve that tension?

DK: I just try to be transparent and honest about what we’re doing, meaning how we are interpreting what other people are expressing. Hopefully my interpretation is ultimately useful for understanding society better. It’s less important for me to nail down whether Blue Lives Matter is an example of backlash or not. Instead, my approach is more like: If we think of this as a possible case of backlash, can that help us understand these phenomena more broadly?

ISPS: You are currently working on a book about backlash, right? What are you hoping to accomplish?

DK: Yes. One goal of the book is to help people understand the nature of backlash. Many people are tempted to think of backlash efforts as attempts to turn back the clock. I want to encourage people to understand these movements as forward-looking projects. They are often trying to build a new version of the world in which past characteristics are brought into the future.

ISPS: Does Make America Great Again not look to the past?

DK: From my perspective, they don’t actually want to go back to 1952. They want a different future. Just one in which they have the same privileges they had back then, along with many of the benefits of the new world.

ISPS: Your work also blends computational methods with deep sociological theory. Was that a goal of yours?

DK: I think in sociology, as a discipline, we ultimately want our research to produce insights into society. I see computational approaches not as ends in themselves but as tools to deepen sociological insight. The methods developed over the last decade fit quite well with what I was doing. They are particularly important when analyzing contemporary discourse, such as on social media platforms, where there tends to be lots of data. I found that they helped me study topics like political violence and radicalism by analyzing how people talk about them.

ISPS: You’ve recently done experiments with AI-generated news. What drew you to this topic?

DK: This started as a project with an undergraduate student at Yale. He and I were interested in how people will understand the world differently as they increasingly rely on chatbots to tell them things. In the old days, if you didn’t know what something like the Seattle General Strike was about and wanted a quick answer, you would go to a printed encyclopedia or, eventually, Wikipedia. Now you just ask ChatGPT about it. Or you Google it, and Google’s AI Overview provides a summary. More and more, we are getting information that is summarized through a tool built by a company. We’re interested in how this might change the world.

ISPS: The results were surprising. What did you find?

DK: My colleagues and I tested whether people learn more from AI-generated summaries of historical events than from human-written ones. We found that people recalled facts better after reading the AI version than a Wikipedia version. Even more striking, AI historical summaries that we generated with political biases were able to shift political opinions. If the summary had a liberal slant, people became more liberal. If it had a conservative slant, they shifted that way.

ISPS: What do you think is happening there?

DK: With the fact retention, I suspect that the large language model essentially took the Wikipedia entry, or something similar, and made it more readable, more fluid, more compelling, and therefore easier to remember. The best models have learned how people prefer to read educational material, and they apply these lessons when summarizing the raw content. A similar dynamic might be at play with the persuasiveness piece. The model may be framing facts in a biased but more engaging way, although we are still analyzing why AI-generated summaries of historical events were able to affect people’s opinions.

ISPS: You conducted a different study in which you used GPT-3.5 to create plausible AI-generated cable talk show transcripts and asked people to read either the generated transcripts or real ones from Fox News shows. You found that people who read the AI-generated transcripts were more likely to say that the conservatives seemed more logical and that the shows seemed more convincing. The real transcripts did not have much effect on people’s opinions. Why did the AI texts have more impact? What’s going on?

DK: This was my first foray into the potential effects of AI-generated content. What’s happening here is that the models are first trained on huge amounts of written material. Then the companies building them show examples of model output to human participants, who give feedback on what they prefer. I think a consequence of this mode of development, in the context of cable news transcripts, is that the models make the hosts and guests more likeable, because they have incorporated this human feedback about what people like to see. The AI-produced hosts were more polite. They didn’t interrupt as much. And so people found them more logical, more reasonable, because that’s how the model portrayed them. It wasn’t so much about arguments and changing people’s beliefs. It was about how people view other people.

ISPS: What role do you think AI — especially generative AI — will play in shaping public discourse and political behavior in the coming years?

DK: In the special issue we put together following last year’s conference, we approach LLMs and chatbots like ChatGPT as tools. But the underlying argument in my other work is that these tools are also filters and mediators for information and what we know about the world. And they are built by companies with agendas. It’s possible that politicized chatbots will deepen fragmentation. Soon you will be able to choose a chatbot that only tells you conservative things about history or only progressive things about history. I think that’s where we’re heading.

ISPS: Can AI be used to strengthen democracy?

DK: The optimistic view is that AI increases access and participation, so we can be more inclusive in our decision making and learn better from one another. But these tools right now are controlled by companies. Their goal isn’t civic engagement. It’s profit. My worry is that these tools will amplify their built-in biases and shape views in ways that grow their use, without necessarily improving or even considering the public good.

ISPS: What’s next for your research?

DK: Beyond the book on backlash, I’m launching a new project with ISPS funding on chatbots and social boundaries. I’m curious whether AI can help rebuild norms of politeness and empathy, especially across social divides. Can interacting with chatbots remind people how to be civil to one another? That’s what I want to test.