ROME (OSV News) — In a room full of Dominican friars and Catholic philosophy professors, a priest and AI researcher read aloud excerpts pertaining to ethics from the guiding “constitution” of one of Silicon Valley’s most prominent artificial intelligence companies, drawing laughter from the audience of Thomists.
The moment came on March 6 when Father Jean Gové, coordinator of the European AI Research Group within the Vatican’s Dicastery for Culture and Education, cited passages from Anthropic’s internal guidelines. The company says it aims for its AI model, Claude, to be a “good, wise, and virtuous agent,” without wanting to define those “ethically loaded terms,” and expresses hope that the model might one day possess an understanding of ethics surpassing that of humans.
“I appreciate the laughter,” Father Gové told the conference. “This is a text coming from one of the leading AI companies, frontier companies in the world. … This is the company that … is doing the most comparatively when it comes to ethics, safety, and governance when it comes to AI. This is where we are. This is the state of play.”
Father Gové spoke at the two-day academic conference “Artificial Intelligence: A Tool for Virtue?”, held March 5–6 at the Pontifical University of Saint Thomas Aquinas, known as the Angelicum, in Rome. He said theologians, philosophers, academics and the Church are now being invited to engage with companies that hold ideas like these when grappling with the many issues raised by AI.
The conference comes as Catholic institutions are actively engaged with AI ethics. The Vatican issued a document on the technology in 2025, “Antiqua et Nova,” and Pope Leo XIV has made AI a focus since the first days of his pontificate.
Organized by the university’s Thomistic Institute Project for Science and Religion, the conference brought centuries of Dominican engagement with Aristotelian virtue ethics to bear on examining whether AI systems can be designed and used in ways that help people grow in virtue.
The answer, by most accounts, was a cautious and qualified no, though not without nuance.
Virtue requires more than good output
Dominican Father Alejandro Crosthwaite, a professor of social sciences at the Angelicum, argued that genuine virtue requires faculties no AI system possesses.
“Virtue is not correct output,” he said. “It is right reason embodied in a self-determining agent.”
A large language model, he continued, predicts tokens based on statistical patterns. It does not deliberate, does not possess will and does not apprehend the good as something it is ordered toward.
Father Crosthwaite emphasized that AI “is never a moral subject” and that “virtue ultimately belongs to persons.”
“Simulation is epistemic imitation,” he said. “Virtue is ontological possession. This is not a criticism of the technology. It’s simply a clarification of metaphysical categories.”
The more pressing question, he argued, is not whether AI can become virtuous, but what kind of persons AI helps form.
“If AI replaces prudential judgment, prudence weakens in the human person,” he said. “The ultimate question is not whether the machines become wise. It is whether we do.”
A safer tool, if not a virtuous one
Father Gové, who also serves as the Holy See’s representative to the Council of Europe on AI matters, acknowledged that Anthropic’s guidelines, which decline to commit to any specific ethical framework, leave Claude with “no definitions of what is the good,” with “no hierarchy of goods,” and “no end to which good actions are ordered toward.”
Thomistic virtue ethics would not recognize Claude as truly virtuous, he said. But Father Gové stopped short of dismissing Anthropic’s efforts.
“Does this make Claude a tool for virtue? Not exactly,” he said. “I hope it makes Claude a safer tool. So that’s already something, right?” He also argued that AI ethics requires “a triadic relationship between tool, virtue, and regulation, policy, governance,” describing the current state of AI governance legislation as a barren desert.
The risk of replacing teachers and friends with AI
Dr. Angela Knobel, a philosophy professor at the University of Dallas and author of “Aquinas and the Infused Moral Virtues,” warned that algorithms can work against virtuous habit formation.
“AI chatbots are doing what video games and TikTok and other things are designed to do,” she said. “They design it to make you want more of the same.”
Knobel pointed to how the algorithmic design of social platforms like TikTok tracks user behavior, saying, “TikTok is programmed to notice not just what you click on, but also what you pause on and don’t click on. And so, if you see the porn that it shows you and you don’t click on it, but you pause on it, it starts showing you more porn until you do click on it, which is, Aristotle tells us, a very good way to encourage you to do what you don’t want to do, right?”
“This is not to say that technology, including AI, can’t be used in helpful ways,” she said. “It’s just to say that it takes effort to make sure you use it in a non-detrimental way.”
She was especially concerned about AI’s potential to displace the irreplaceable role of human teachers and mentors in moral formation. Growing morally and intellectually, she said, is inherently uncomfortable, and “that is not something most of us can or even want to do on our own.”
“You teach someone to write by making them write, by trying to help them see the ways in which what they wrote falls short, and then asking them to do it again,” she said. “Computers are not very good at doing this.”
AI, she concluded, is “closer to an opiate — the kind of thing that requires extreme caution in its use.”
“I think we have to exercise extreme caution to ensure that we do not let it take the place of our teachers and friends, because if we do, and to the extent that we do, we will certainly allow it to make us worse,” she said.
The danger of disconnection
Dominican Sister Catherine Droste, a theology professor at the Angelicum, warned of what she called “the zombie effect” with people absorbed in devices, oblivious to those around them.
“AI has upped the ante,” she said. “At least with Twitter, Facebook, TikTok, et cetera, even though people were using technology, there was still something of a connection related to human beings, which we’ve lost.”
Still, Sister Catherine allowed that AI could be used prudently in certain contexts. “Before you’re using AI, there has to be prudence,” she said. “But that doesn’t mean you cannot use AI prudently in the sense that it can … give some information that can help you to be truly prudent.”
Courtney Mares is Vatican Editor for OSV News. Follow her on X @catholicourtney.