William Edgar
Ordained Servant: August–September 2025
This is not the place to rehearse the long history of discussions between “science” and the Christian faith.[1] So we will focus on the rather recent phenomenon of AI (Artificial Intelligence). As with some of the previous issues I have examined, there is often a good deal of heat along with any light. But there is increasing attention addressed to this phenomenon, and it is pregnant with cries and whispers.
To begin with, it will help to define AI. It may surprise us to learn that the term dates back to 1955, when Professor John McCarthy defined it simply as “The science and engineering of making intelligent machines.”[2] In its earlier phases AI was applied to ordinary imitative skills, such as teaching a machine to play chess. We may remember how in 1997 a machine named “Deep Blue” beat the grandmaster Garry Kasparov.
That was weak AI, the ability to duplicate certain narrow skills. Think of Apple’s Siri or Amazon’s Alexa, which will recite facts and figures, such as historical battles or football scores, upon request. More recently, strong AI has pushed this ability to imitate toward the claim that the machine may surpass the human brain. Technically, we can say that narrow AI (sometimes called ANI, Artificial Narrow Intelligence) is moving toward AGI (Artificial General Intelligence), the claim that a machine can have intelligence equal to that of humans, including consciousness and the ability to learn and make plans.
It must be stated in the strongest terms that the goals of strong AI (AGI) are nowhere near being achieved. Researchers are certainly trying to realize them, and some even aspire to create a machine that surpasses human intelligence. So far, this is the stuff of science fiction. Think of the computer HAL in 2001: A Space Odyssey, which was able to exercise power over its creators.
Many developments have occurred, and surely many more are to come. For example, ChatGPT is a program that carries on human-like dialogue: you can ask the machine almost anything, and it will answer you. The popular app Snapchat, which allows you to send a picture, or “snap,” and even create an illustrated story, now builds a ChatGPT-powered assistant into its service. You can program Snapchat to destroy the picture after use, so no one may “steal” it. Another, related phenomenon is DALL-E (and DALL-E 2), a system that can create various images (and art) from a description in “natural” language.[3]
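To make this concrete, here is a minimal, hedged sketch (not from the article) of how a programmer might ask ChatGPT a question and request a DALL-E image through OpenAI’s Python library. The model names and the package interface assumed here are illustrative, not prescriptive:

# A sketch assuming the "openai" Python package (v1.x interface) and an
# API key supplied in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Ask the chat model "almost anything," as described above.
chat = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": "Who won the Battle of Waterloo?"}],
)
print(chat.choices[0].message.content)

# Create an image from a plain-English description, as DALL-E does.
image = client.images.generate(
    model="dall-e-3",  # assumed model name
    prompt="A lighthouse on a rocky coast at dawn, in watercolor",
    n=1,
    size="1024x1024",
)
print(image.data[0].url)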
One of the fastest-growing industries today is robotics. The use of robots has wide application, from medicine to surveillance to finding landmines. Robots often accomplish tasks that would be difficult or impossible for human beings.
Some experts estimate that, within a few years, much of the content on the internet will be AI-generated, as ChatGPT, DALL-E, and similar programs spill torrents of verbiage and images into online spaces.[4]
Space prohibits an extensive history and demographic analysis of AI.[5] The giant service organization Digital Aptech lists four crucial capabilities.
(1) Machine learning. This feature takes large amounts of statistics and data and “digests” them in ways that help solve certain problems and reach certain conclusions. The reason for the label “learning” is that the machine uses algorithms, procedures for solving problems in ways that can be stored and repeated. So-called clustering algorithms, for instance, are used to build profiles of customers. The frequently encountered suggestion, “customers who bought such-and-such will also enjoy such-and-such,” is produced by clustering algorithms (see the sketch after this list).
(2) Neural networks. A neural network is a web of interconnected units, loosely modeled on the human brain’s neurons. Information is received and passed among the units. Examples of neural networks at work would be the drones used in disaster relief or war and the GPS guidance systems in cars.
(3) Deep learning. These are simply larger and more complex versions of neural networks. Examples would be speech recognition and image recognition.
(4) Computer vision. This applies the techniques above to visual data, enabling a computer to identify objects and events in digital images and video. Some of the visuals we see in the news are made possible through computer vision, and it is essential to self-driving vehicles.
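To illustrate the first capability, here is a small, hedged sketch (not from the article) of a clustering algorithm of the sort described above. It groups customers by their purchase histories with k-means so that suggestions can be drawn from others in the same cluster; the data and the choice of the scikit-learn library are assumptions for illustration:

# A sketch assuming Python with numpy and scikit-learn installed.
import numpy as np
from sklearn.cluster import KMeans

# Rows are customers; columns count purchases of
# [theology books, biographies, commentaries, music].
purchases = np.array([
    [5, 0, 4, 1],
    [4, 1, 5, 0],
    [0, 6, 1, 5],
    [1, 5, 0, 6],
    [5, 1, 3, 0],
])

# "Digest" the data into two customer profiles (clusters).
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(purchases)

# A new customer is assigned to the nearest profile; whatever is popular in
# that cluster supplies the "customers who bought X also enjoy Y" suggestions.
new_customer = np.array([[4, 0, 5, 1]])
cluster = model.predict(new_customer)[0]
peers = purchases[model.labels_ == cluster]
print("cluster:", cluster, "peers' average purchases:", peers.mean(axis=0))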
Predictably, there are cheerleaders and naysayers, and most often a combination of both.
Cheerleaders point to the advantages of AI. They range from the ability to conduct research efficiently, to automating repetitive tasks, to faster decision-making. There are numerous educational benefits. One that caught my attention is the use of virtual reality to teach people about certain social issues. For example, a number of museums are using holograms to allow visitors to have imaginary “conversations” with victims of racism, antisemitism, and other forms of hatred.
At White Plains High School, holograms and other tools are being used to instruct students about hatred and the crimes it produces.[6] Teachers claim this is a better tool than textbooks for introducing them to the sad reality of the Holocaust, which some of them either ignore or deny. Virtual reality can also be used to dissuade people from prejudice against black athletes or Muslim airplane passengers.[7]
Naysayers abound. A surprising early worrier is Joseph Weizenbaum, one of the pioneers of the chatbot.[8] After an initial burst of approval for his work, Weizenbaum began to worry that the machine could supersede the “whole person,” that is, the human being in all its grandeur. He had created a program affectionately named Eliza, after Eliza Doolittle, the character in George Bernard Shaw’s Pygmalion, a Cockney who acquired such skill in speaking like a “lady” that she could fool any detractor. Since the program played the part of an amateur psychologist, Weizenbaum also worried that the computer could become a sort of father figure, encouraging “patients” toward Freudian transference.
Many critics simply worry that AI will lead to the loss of freedom. This could take the form of the invasion of privacy. Worse, it could manipulate people’s views by controlling data for nefarious purposes. Users could circumvent due process and orchestrate desired results, much as in the older propaganda of Nazi Germany.
For what it’s worth, Americans are divided in their views of AI. Take, for example, the use of facial recognition in crime solving. According to Pew, more people are concerned than excited about it. Many, some 45 percent, are ambivalent.[9]
The formidable dominance AI could exhibit poses a real potential for the loss of freedom. The Future of Life Institute has raised important questions: “Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart . . . and replace us? Should we risk loss of control of our civilization?”[10]
The Institute recommends a sane response to these potential threats. It recommends strong policies which control AI, without stifling its usefulness. It also recommends education: seminars, websites, information sessions, and the like. Such measures will help contribute to its mission, which is steering transformative technology toward benefiting life and away from large-scale risks.
But is this enough? Christians will need to draw on biblical wisdom to achieve a balance between legitimate caution and a proactive involvement.
There is already a considerable, often thoughtful, body of literature reflecting a biblical view of technology.[11] AI may appear to be new, but it is simply a very advanced form of what we already have. It helps to revisit the classic trilogy of Creation-Fall-Redemption. God commanded our first parents to replenish and subdue the earth (Gen. 1:26–31). This is sometimes known as the cultural mandate. That ordinance still holds, despite the cancer of sin that entered our world. One of the tools God has given us to accomplish this task is technology.
Definitions of technology are often vague or even circular. Consider this definition from Dictionary.com:
[Technology is] the branch of knowledge that deals with the creation and use of technical means and their interrelation with life, society, and the environment, drawing upon such subjects as industrial arts, engineering, applied science, and pure science.
What are “technical means”? Merriam-Webster defines them this way: “having special and usually practical knowledge especially of a mechanical or scientific subject.”
The words “mechanical” and, even, “scientific” are so nebulous as to evade any useful precision. It helps to look at the big picture. Jacques Ellul, who spent his life studying the subject, says this from the “Note to the Reader” in The Technological Society: “Technique is the totality of methods, rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.”[12] The expression “absolute efficiency” is somewhat pejorative. Yet efficiency is certainly a principal ingredient in technology as it has developed.
Thus, it is right to use technē, or “craft knowledge,” for the purposes of advancing human flourishing. It is an important component of the cultural mandate. But the ideal of efficiency is a double-edged sword, for the fall into sin has affected every part of creation, including the cultural mandate. Thus, every tool, including technology, has been compromised.
Not surprisingly, the wise biblical answer to our question is to embrace the advantages of AI and avoid the pitfalls. Derek Schuurman, a professor at Calvin University, provides some helpful guidelines. He says three things.[13] First, we should avoid two typical pitfalls: too much optimism or undue pessimism. Optimists see AI as a solution to most significant problems in life. Only Christ can do that. But pessimists will have nothing to do with AI, which is a shame, given some of its benefits. Used properly, features such as ChatGPT can help with research of all kinds.
Second, Schuurman tells us we should focus on the ontological issues rather than on what AI can do. We neglect at our peril the great answers to our deepest questions about attempts to substitute AI for human endeavor. Those answers are found in Genesis 1–2 and related texts. The ontological issue of the constitution of human beings as image-bearers of God cannot be overstressed. Comments on Genesis 1:26–31 abound.[14] These verses are the foundation for our understanding of human beings in their integrity and uniqueness. Though, of course, transhumanism and AI are not mentioned, by implication a critical approach to them is present.
As we saw, the tools for replenishing the earth under the cultural mandate include technology. Technology derives from the call of God, which in turn is rooted in the capabilities with which we are constituted as creatures made after God’s image. Genesis 1:26–27 contains an implicit critique both of the belittling of humans (as in the Babylonian myths, which make them slaves of the gods) and of the aggrandizing of them (since all depends on the blessing and commands of God).
Third, Schuurman asks that we develop proper norms for the responsible use of AI. One of the biblical accounts most apropos to our issue is Genesis 11:1–9, “The Tower of Babel.” Using the gift of technology, mankind overstepped its bounds and sought to magnify its name above God’s: “Let us make a name for ourselves, lest we be dispersed over the face of the whole earth” (v. 4). Their sin was not in taking a name for themselves but in seeking one that effectively replaced both the name of God and the name he had given them. Their fear of being dispersed was an aberrant challenge to the cultural mandate.
The well-known ensuing story contains both a judgment and a benediction. The judgment is the confusion of languages and the forced abandonment of the tower. The benediction is the preservation of mankind from the ruin that would have followed from the heedless construction. These accounts certainly contain norms for the use of AI, albeit implicit ones.
This biblical wisdom is reflected in the declaration of the European Parliament.[15] It is a full statement, but at its heart it strives to keep the balance between “supporting innovation and protecting citizens’ rights.”
Not surprisingly, the Gospel Coalition has many entries on AI. One of the most helpful is titled “How Not to Be Scared of AI,” an interview with Sarah Eekhoff Zylstra and Joel Jacob. Their safe but sane conclusion: “As Christians, we don’t want to run in fear—after all, God is sovereign over robots too. But neither do we want to be reckless or careless in how we approach it.”[16] They cite Proverbs 14:16, “One who is wise is cautious and turns away from evil, but a fool is reckless and careless.”
As in every ethical decision, careful testing is still needed in the relatively new field of AI. Hebrews 5:14 is pertinent here: “But solid food is for the mature, for those who have their powers of discernment trained by constant practice to distinguish good from evil.” These words tell us that spiritual maturity is attained by “constant practice” (in Greek, διὰ τὴν ἕξιν τὰ αἰσθητήρια γεγυμνασμένα). The word γεγυμνασμένα (from γυμνάζω, gymnazo), translated “trained,” is related to the English word gymnasium. Thus, ethical maturity can only be obtained in the “gymnasium of life.”
This principle should apply to decisions about AI. There are, of course, absolute principles, but in general they cannot be applied without trial and error. For example, how should we decide about particular algorithms? They must be tested. Contexts must be taken into account. Advantages, disadvantages, benefits, and possibilities for manipulation should all go into deciding whether and how to use them.
Considering AI’s relationship to apologetics, it is incumbent on us to discern both the places where AI’s claims amount to a denial of God’s sovereignty and the aspirations within it that point to divine revelation. Wanting to be God, as the builders of the Tower of Babel did, is clearly illicit. It is a sign confirming Romans 1:18, the desire to suppress the truth by unrighteousness. Yet at the same time, AI represents a quest for understanding, a quest for means of human flourishing, following the cultural mandate.
[1] There is a considerable body of literature on the intersection of science and faith. Predictably, some of it is skeptical. One thinks of the work of Richard Dawkins, The God Delusion (Harper Collins, Mariner Books, 2006). A much larger body of literature sees the two as, if not compatible, quite congenial. Examples include Francis Collins, The Language of God: A Scientist Presents Evidence for Belief (Free Press, 2007), and John Lennox, Can Science Explain Everything? (The Good Book Company, 2019).
[2] See https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf.
[3] See https://openai.com/dall-e-2.
[4] See https://futurism.com/the-byte/experts-90-online-content-ai-generated.
[5] A lively but brief history of AI can be found here: https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/. The Center for the Governance of AI, at the University of Oxford’s Future of Humanity Institute, provided in 2019 an accessible demographic study of AI users, fans, and detractors. See https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/executive-summary.html.
[6] See https://www.timesofisrael.com/back-to-school-exhibits-custom-tailored-for-us-pupils-make-the-holocaust-a-local-issue/.
[7] See https://www.axios.com/2023/05/15/new-vr-role-playing-insight-racism.
[8] See https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai.
[9] See https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence.
[10] See “How to Worry Wisely about Artificial Intelligence” in The Economist, https://www.economist.com/leaders/2023/04/20/how-to-worry-wisely-about-artificial-intelligence.
[11] Egbert Schuurman, Technology and the Future: A Philosophical Challenge (Cántaro, 2009); Jacques Ellul, The Technological Society (Vintage, 1964); Andy Crouch, The Tech-Wise Family: Everyday Steps for Putting Technology in Its Proper Place (Baker, 2017); Gregory Edward Reynolds, The Word Is Worth a Thousand Pictures: Preaching in the Electronic Age (Wipf & Stock, 2021).
[12] Ellul, The Technological Society, xxv.
[13] See https://christianscholars.com/chatgpt-and-the-rise-of-ai/.
[14] I am usually uncomfortable citing my own work, but the relevant pages in Created and Creating: A Biblical Theology of Culture (InterVarsity Academic, 2016), 161–62, contain my study and list many germane analyses of these crucial words.
[15] See https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence.
[16] See https://www.thegospelcoalition.org/article/potential-problems-ai/.
William Edgar is a minister in the Presbyterian Church in America and emeritus professor of apologetics and ethics at Westminster Theological Seminary, Glenside, Pennsylvania. Ordained Servant Online, August–September, 2025.
© 2025 The Orthodox Presbyterian Church