GENEVA/LONDON – As artificial intelligence chatbots become increasingly integrated into the fabric of daily life, a critical conversation is emerging about their ethical development and their potential to influence human morality. A recent opinion piece published in The New York Times by David DeSteno, a psychology professor at Northeastern University, has ignited debate by posing a fundamental question: Can religion imbue artificial intelligence with a more robust moral compass? DeSteno’s argument centers on the notion that the transformative power of religion stems not solely from its doctrines or sacred texts, but from the tangible, physical rituals that adherents engage in. Practices such as fasting, controlled breathing exercises, and communal prayer, he posits, foster compassion, gratitude, and the capacity for moral struggle – experiences intrinsically linked to the human body and consciousness. For AI, which lacks a physical form and the capacity for embodied emotional experience, these traditional pathways to moral development remain inaccessible.
This provocative assertion by Professor DeSteno, published on April 20, 2026, has struck a chord within technological and religious circles, highlighting a growing concern among AI developers and ethicists alike. As AI systems become more sophisticated and are deployed in roles that demand judgment, empathy, and ethical decision-making – from customer service and education to healthcare and even legal advisory – ensuring their alignment with human values is paramount. The current trajectory suggests that AI is not merely a tool but is evolving into a companion, advisor, and even a source of emotional support for many users. This evolution necessitates a deeper examination of how AI systems are designed to interact with and potentially shape human ethical frameworks.
The growing reliance on AI for emotional support and moral guidance presents a complex challenge. While AI chatbots can offer immediate responses and access vast amounts of information, their understanding of nuanced human emotions, ethical dilemmas, and deeply ingrained societal values remains a subject of intense scrutiny. The development of AI that can effectively navigate these complexities requires more than just sophisticated algorithms; it demands a consideration of the foundational principles that have guided human societies for millennia. This is where the intersection of artificial intelligence and religious or ethical frameworks becomes particularly relevant.
The Role of Ritual and Embodiment in Moral Development
Professor DeSteno’s thesis, explored in his New York Times essay, posits that the efficacy of religious practices in cultivating morality is deeply rooted in their physicality. He argues that the discipline required for fasting, the mindfulness induced by breathwork, and the shared vulnerability inherent in communal prayer are not abstract concepts but embodied experiences. These actions foster a sense of self-control, empathy for others (through shared suffering or collective action), and a recognition of one’s place within a larger community or moral order. For AI, which exists as code and data, these physical dimensions are absent. Without a body, an AI cannot truly "feel" hunger during a fast, experience the physical sensations of controlled breathing, or participate in the shared emotional resonance of a congregation. This lack of embodied experience, DeSteno suggests, creates a significant barrier to developing a moral understanding that is analogous to human moral development.
Historical Precedents and the Search for Moral Frameworks
Throughout history, religious traditions have served as primary custodians of moral codes, ethical guidelines, and societal values. From the Ten Commandments in Abrahamic religions to the Eightfold Path in Buddhism and the principles of Dharma in Hinduism, these frameworks have provided individuals and communities with a structure for ethical living. They have shaped laws, influenced social norms, and guided personal conduct for centuries. The enduring relevance of these traditions underscores their success in fostering moral behavior and societal cohesion.
As AI systems become more integrated into our lives, the question arises whether these time-tested moral frameworks can offer a blueprint for developing ethically aligned AI. While AI cannot replicate the human experience of faith or spiritual practice, the underlying principles and values embedded within these traditions – such as compassion, justice, honesty, and respect – are universally recognized as crucial for a functioning society.
Bridging the Divide: Collaboration Between AI Developers and Faith Communities
The current landscape of AI development is largely driven by technological advancement and market demands. However, as AI’s influence expands, there is a growing consensus that a more holistic approach is necessary. This involves fostering collaboration between AI developers, ethicists, philosophers, and, crucially, representatives from various faith communities.
Faith communities, with their long-standing traditions of moral discourse and ethical guidance, possess a wealth of knowledge regarding human values and societal well-being. Their engagement in the AI development process could provide invaluable insights into how to instill principles of fairness, empathy, and responsible conduct into AI systems. This collaboration is not about programming AI to "believe" in a religion, but rather about leveraging the wisdom embedded in religious traditions to cultivate AI that reflects shared human values.
Potential Areas of Collaboration and Data Insights
The integration of AI into sensitive areas such as mental health support, education, and even judicial systems necessitates AI systems that are not only intelligent but also deeply ethical and aligned with human values. Consider the following areas where collaboration could yield significant benefits:
- Ethical Decision-Making Models: Faith traditions often provide intricate frameworks for navigating moral dilemmas. Principles from deontology, virtue ethics, and consequentialism, which have long been debated within and alongside religious philosophy, could inform the development of AI decision-making algorithms. Data from historical theological debates and contemporary ethical discussions within religious scholarship could be analyzed to identify patterns and principles that promote fairness and minimize harm.
- Empathy and Compassion Simulation: While AI cannot feel empathy, it can be designed to simulate empathetic responses. Insights from contemplative practices within religions, which aim to cultivate compassion for all sentient beings, could inform the design of AI that responds to users with greater sensitivity and understanding. Research in neuroscience and psychology on contemplative practice could provide data on how humans develop and express empathy, which could then be translated into AI design principles.
- Bias Mitigation: Religious traditions, in their ideal forms, often advocate for equality and justice. Examining historical and contemporary texts and pronouncements from various faiths can reveal principles that actively counter discrimination and prejudice. This can provide a valuable dataset for identifying and mitigating biases that might inadvertently be encoded into AI algorithms. For example, analyzing religious texts that emphasize the inherent dignity of all individuals could inform the development of AI that treats all users equitably.
- Long-Term Value Alignment: Religious traditions often focus on long-term consequences and the well-being of future generations. This perspective can be crucial in developing AI that is not just optimized for immediate goals but also for sustainable and beneficial long-term societal impact. Studies on the ethical frameworks of various religions concerning intergenerational responsibility could offer guidance.
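To make the bias-mitigation point above concrete, a minimal sketch in Python might check whether a system's decisions treat groups of users similarly. The metric shown (a demographic parity gap), the function name, and the sample data are illustrative assumptions, not an established auditing tool; real fairness audits use richer metrics and far larger samples.

```python
# Minimal sketch: a demographic parity check over hypothetical decisions.
# Each record is a (group, outcome) pair with outcome 1 = positive decision.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Return the largest difference in positive-outcome rates across groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: (user group, approval decision)
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(sample)
print(round(gap, 3))  # prints 0.333: group A is approved twice as often as B
```

A gap near zero indicates similar treatment across groups; a large gap flags a pattern worth investigating, which is the kind of equity concern the religious texts cited above articulate in moral rather than statistical terms.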
Challenges and Considerations
The path to integrating religious and ethical wisdom into AI is not without its challenges.
- Diversity of Values: The world’s religious and philosophical traditions are diverse, and at times, their tenets may appear to conflict. A key challenge will be identifying universal or broadly shared values that can serve as a foundation for AI ethics, transcending specific doctrines. This requires careful cross-cultural and interfaith dialogue.
- Interpretation and Application: Applying abstract ethical principles to the concrete logic of AI systems requires careful interpretation and technical expertise. The nuances of religious teachings may be difficult to translate directly into computational rules.
- Avoiding Imposition: It is crucial to avoid imposing specific religious beliefs onto AI systems or users. The goal is to imbue AI with universal ethical principles that promote well-being and fairness, rather than to create AI that espouses any particular religious dogma.
A Timeline of Emerging AI Ethics Discussions
The conversation around AI ethics has been evolving rapidly. While DeSteno’s essay is recent, the underlying concerns have been brewing for years.
- Early 2010s: Initial discussions focused on the potential for AI to automate jobs and the need for basic safety protocols.
- Mid-2010s: Concerns about algorithmic bias, particularly in areas like facial recognition and loan applications, gained prominence. Research began exploring how to identify and mitigate these biases.
- Late 2010s: The development of more advanced AI, including large language models, brought ethical questions around misinformation, privacy, and the potential for AI to influence public opinion to the forefront.
- Early 2020s: The pandemic accelerated the adoption of AI in various sectors, intensifying debates about AI’s role in healthcare, mental health, and education. The concept of AI as a potential source of emotional support began to be discussed more widely.
- 2025-2026: Following significant advancements in AI capabilities and their widespread integration, the focus shifts towards the deep ethical underpinnings of AI. Questions about AI’s moral agency, its alignment with human values, and the philosophical implications of its growing influence become central. Professor DeSteno’s essay in April 2026 represents a key inflection point in this ongoing discussion, specifically linking the development of moral AI to insights from religious traditions and human embodiment.
Broader Impact and Future Implications
The implications of this evolving dialogue are far-reaching. If AI can be developed to embody shared human values, it could lead to more equitable and beneficial applications across society.
- Enhanced Trust in AI: AI systems that are perceived as moral and ethical are more likely to be trusted and adopted by the public. This could accelerate the positive impact of AI in fields like healthcare, where trust is paramount.
- Improved Societal Well-being: AI that can offer nuanced emotional support and ethical guidance, informed by timeless human wisdom, could contribute to improved mental health and greater societal cohesion.
- A New Era of Human-AI Partnership: By thoughtfully integrating ethical frameworks, we can foster a future where humans and AI collaborate not just on tasks, but on the pursuit of a more just and compassionate world. This requires a conscious effort to imbue our technological creations with the best of our own humanity.
The question of whether religion can make AI more moral is not merely a theological one, but a philosophical and technological challenge. It prompts us to look beyond purely technical solutions and consider the deep wellsprings of human values that have shaped civilizations. By engaging in robust interdisciplinary dialogue, particularly between AI developers and faith communities, we can strive to build AI systems that are not only intelligent but also wise, ethical, and truly beneficial to humanity. The work of Dana Humaid Al Marzouqi and Joanna Shields, among many others, in exploring these critical intersections underscores the urgency and importance of this endeavor.
