Earlier this month, the United Nations Security Council held its first-ever meeting on artificial intelligence's threat to international peace and security, where UN Secretary-General Antonio Guterres urgently called on member states to prohibit lethal autonomous weapons by 2026.

But the worst-case scenarios do not just entail AI triggering a catastrophic nuclear doomsday or even eliminating millions of jobs. AI could also manipulate the ideologies and beliefs that connect and influence billions.

Faith leaders are already warning that AI's ability to use facial recognition, create manipulative deepfakes, or even craft religious sermons could spark a new era in which religion is exploited for pernicious agendas, leading to increased persecution and violence. As a faith leader, I know that when extremists manipulate religion, it can become a tool for fractious division.

Far clumsier forces have already weaponized religion and tech to devastating effect. I should know. Much of my work globally has been dedicated to addressing the toxic impact of groups like ISIS that successfully leveraged social media to inflame religious tensions and pull off the world's largest terrorist recruitment campaign.

It's not difficult to see how AI could be a game changer for such pernicious actors on the world stage. Not only could it grant them the power to manipulate and weaponize the values billions around the globe hold dear, but also the power to do it anonymously, ensuring the world is kept blind to who is truly pulling the strings.

Indeed, unrestricted AI could be used to spread disinformation, recruit new members for terror groups, and ultimately inspire fresh attacks in unprecedented fashion. Guterres agreed, warning at the event that "the malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale."

Yet policymakers are woefully ill-equipped to deal with the scale of the threat.

The UK's Labour Party has already warned that British anti-terror laws are incapable of combatting the AI threat. And while the Biden Administration secured commitments from leading AI companies to mitigate AI risks, none of the "voluntary commitments" addressed AI's terror threat, such as deepfakes, which could be used to inflame religious tension and violence.

So, what can we do about it?

Regulation and responsible frameworks are, of course, an important part of countering this threat, but if we truly want AI to stay out of the hands of extremists, we need the involvement of senior faith leaders. Not only because religious leaders understand how nefarious actors could use AI for their own harmful ends, but because they also hold significant influence among those who are vulnerable to exploitation and recruitment.

Indeed, if faith leaders do not have a seat at events like the meeting earlier this month, or at the world's first global AI summit this fall, the evolving debate on AI will be missing the subject-matter experts needed to avert a new era of AI-powered extremism.

This could mean working with senior religious leadership and organizations like the Vatican or the Muslim World League, which has long countered violent extremism. It could even mean incorporating into AI systems important religious messaging like the Makkah Charter, a universal bill of rights for the global community based on Islamic scripture and endorsed by 1,200 leading global Islamic scholars.

Moreover, if religious leaders are given a seat at the table, AI could even be harnessed for good.

While there are obvious obstacles, AI could also be a profoundly positive, game-changing tool for religious practice. It could transform access to religious knowledge for rural and isolated communities around the world that are disconnected from credible sources of religious information or suffer from high levels of illiteracy. New tools could help such communities understand how religious rulings and scripture truly relate to their everyday lives in ways we have never seen before.

It could even be used as a tool to combat and prevent extremism.

For example, extremism has traditionally spread in environments of ignorance, where religious text is presented as black and white. Just consider how the Taliban, recipients of a crude religious education, have taken that simplistic teaching and disseminated it to the masses. AI platforms could instead revolutionize access, allowing more robust readings of Islam to be shared and combatting extremism born of ignorance.

Ultimately, it is crucial to navigate the integration of AI and religion thoughtfully, keeping in mind the potential negative consequences and striving to strike a balance that upholds the values, ethics, and deeper aspects of religious practices and beliefs.

Because when it comes to rapidly evolving tech, we currently have no fail-safes. Our alarm cannot fall on deaf ears. Tech giants, policymakers, and politicians cannot be the only ones to dictate the risks and rewards of AI.

And if religious leaders aren't involved in the AI debate, then it may come at a truly heavy cost.

H.E. Dr. Mohammad bin Abdulkarim Al-Issa is one of the world's most senior and respected Islamic religious figures. He is the secretary general of the Muslim World League, the world's largest Islamic NGO spanning 1,200 Islamic scholars across 139 countries.