Christian Artificial Intelligence

Assessing Artificial Intelligence from a Christian Worldview

Rick Plasterer on June 16, 2022

Artificial intelligence is a reality that can be expected to force itself increasingly into our consciousness, posing difficult challenges for Christians. Mathematician, philosopher of science, and Christian apologist John Lennox discussed AI and how Christians should respond to it with Joel Woodruff, president of the C.S. Lewis Institute, on June 10.

Lennox said that “artificial intelligence essentially comes in two forms. It’s very important to distinguish them. The first one is usually called ‘narrow AI,’ and then there’s artificial general intelligence or AGI. Narrow AI is the stuff of which we are really familiar, it’s the stuff that’s working today, and AGI is much more speculative.”

Lennox explained that a “narrow AI system does typically one single thing that normally requires human intelligence to do.” By the use of algorithms, a narrow AI system is able to make a decision in a particular task, such as the diagnosis of a particular medical condition, that humans normally make. In the case of medical diagnosis, Lennox said that “generally speaking, the result is better than you will get from your local doctor.”

These kinds of systems, he emphasized, are not intelligent. Like earlier computer systems, “it simply does what it’s programmed to do.” The “intelligent” activities of smartphones are examples of narrow artificial intelligence. He said that Amazon search engines, which suggest new purchases on the basis of past purchases, constitute another example of narrow AI. Facial recognition software is yet another example of narrow AI.

This last example, Lennox said, “raises massive ethical problems.” Like all technological advances, it is “like a knife. A really sharp knife you can use it for surgery, or you can use it for murder.”

In contrast, with artificial general intelligence (AGI), the point “is to create a machine which can do everything that normally requires human intelligence, and do it better, and do it faster.” The ultimate point of AGI is “to create a superintelligence.” Artificial general intelligence has not been achieved, but research to develop it is proceeding apace, in two main directions. In one direction, the effort is to make human beings themselves superintelligent: new technology would be integrated into the human body, turning it into “a kind of cyborg.” In the other strategy, a generally intelligent machine would be constructed, and the contents of a human mind would be uploaded to it. Lennox said that “this kind of speculative stuff” is what is “loved by the makers of science fiction films and the authors of sci-fi books.”

However, Lennox said that he takes the AGI project “seriously, because many leading scientists are taking it seriously.” One leading scientist, Astronomer Royal Lord Martin Rees, has said that in several centuries, intelligent machines will function with no “emotional resonance with us, even though they will have an algorithmic understanding of how we [humans] behaved.” These machines, it is suggested, “will most fully understand the cosmos.” On the other hand, when people speculate about when the “singularity” will be reached, i.e., when “machines will take over,” the answer always turns out to be thirty to fifty years away, “and that’s been true for quite some time.”

In sum, narrow AI is up and running, “with great benefits on the one hand, and negatives” as well, while AGI is “largely speculative, but towards which quite a number of people are going, because they see mega dollars in it.”

Woodruff asked about the impact on people’s thinking of the twentieth-century dystopian novels about future technological advance, in particular George Orwell’s 1984 and Aldous Huxley’s Brave New World. Lennox said that it is clear they had a big impact. One issue is “the idea of big data,” which was advanced in 1984, in which a “Big Brother” watches us and surveillance technology makes the Big Brother system work. Orwell saw the “extreme danger” of this. Lennox said the title of his own new book, 2084, is drawn from Orwell’s novel.

The prospect of surveillance technology is “a huge problem in several directions.” One issue is the money to be made by surveillance capitalism. Data harvested in this way can provide enormous information about an individual, “and what we don’t often realize is that they’re selling it off to third parties without our permission.” It is necessary to take this “extremely seriously,” because “it is a serious invasion of privacy.” And it is “taken extremely seriously by major players in the world economy.” While people may enjoy using and to a considerable extent “depend on” smartphones, these devices are nonetheless “tracking us, they’re harvesting information, where we go … they might even be listening to our conversations, who knows. And yet we do this voluntarily.”

The real problem is that while technological surveillance can be used beneficially (to locate criminals or terrorists in legitimate law enforcement), it can “also be used to control people.” This is happening particularly in China, and especially in China’s Xinjiang Autonomous Region, “where the Uyghur population are really being surveilled to such an extent it’s almost unbelievable.” More generally in China, there is the Social Credit System, which surveils much or all of life and penalizes people for what the Chinese Communist Party considers poor behavior (and penalizes friends and relatives as well). Another “huge area” is artificial intelligence used for autonomous weapons. But both of these uses for AI are uses of narrow AI, Lennox said.

In contrast to George Orwell, who thought that new technology “would oppress us,” Aldous Huxley thought that we would “fall in love” with it. Lennox finds that “we seem to be having both things happening in our society.”

Woodruff asked if the Social Credit System, or similar systems, are attempts to be a substitute for God, “all knowing, ever present,” and in control. Lennox agreed that this attempted substitution is exactly what is happening. He referred to “the Tower of Babel, in Scripture” as an attempt to achieve the same over-arching supremacy as modern surveillance, and quoted the remark that “behind every skyscraper there is an even bigger ego.”

He referred to the Israeli intellectual Yuval Noah Harari, and his book Homo Deus, which proposes to use technology to realize “the transhumanist dream.” The new creature to be constructed would move beyond current humanity by blending high-technology machinery with the human body. Two items on Harari’s agenda are to overcome death (technologically) and to advance human happiness. Lennox said that “the idea of constant happiness and living forever, of course, is in every human heart.” But he said that “as a Christian believer … I’ve written a great deal about it … because it seems to me that what AGI, and what many people desire for it, is actually something that will never be fulfilled by technology, but it’s a parody of what’s held out to us in the Bible.” Lennox said that “the resurrection of Jesus Christ is the evidence that God through Christ has the power to raise the dead.” He said that there is “a lot more evidence” for the resurrection’s “truth and credibility” than there is that AGI will succeed in achieving Harari’s goals.

Lennox said the inspiration for his book was the realization that eternal life and happiness are “already there in the real Christian message.” We get a new life – really eternal life – when we receive Christ as savior and lord. Further, he said that the AGI project ignores the problem of “human brokenness, human sin, human rebellion against God.” We cannot hope for a good life without reckoning with human sin. AI can “imprison people, and control them,” as in today’s China. But AGI is exceedingly popular, because it speaks to a deeply rooted and real human desire for immortality and eternal happiness. He said “Christians are very well placed … to speak into this situation.”

Woodruff asked how AGI is impacting the contemporary understanding of the dignity and worth of human beings. Lennox responded that the transhumanist worldview holds that human beings are only a stage in an evolutionary process toward higher beings, perhaps an endless evolution of higher and higher beings. He said he is “not very convinced” of this worldview, “because we haven’t shown a great deal of change” in the nature of human beings “in many centuries.” But he said that the real problem is that there are “two main worldviews in contention.” These he identified as “the atheist worldview” and the “theistic worldview.” Atheism, Lennox said, “has a great influence” on the discussion about AI. But in the theistic worldview, “human beings are made in the image of God,” which gives them enormous worth.

This belief, which Lennox himself holds, makes him greatly concerned about “artificial enhancements of existing human beings, that begin to meddle in the very definition of what it is to be a human being.” We are familiar with very simple bodily enhancements, such as glasses or hearing aids. More sophisticated enhancements are found in “affective computing,” which can interact with human emotions and perform such tasks as predicting seizures. This too is a positive development. Christians, such as Rosalind Picard, who developed affective computing, can participate in conceiving and developing such advances. Importantly, they can “relate to the ethical aspects” of sophisticated narrow AI.

But advanced AGI opens up the possibility of altering humanity itself, allowing the scientists of a future generation to alter humanity to their liking. However, the creatures that follow technological alteration of humanity “won’t be human beings, they’ll be artifacts.” Rather than providing service as it has done in the past, AI in the end “could destroy humanity.”

This presents a real problem from the standpoint of the Christian worldview, because in Jesus Christ, “God became human,” and this makes humans “utterly unique.” Christians are “uploaded” not into a machine but into Christ, and thereby find God and eternal life. But this involves facing “our sin of rebellion against God, and we must face it and repent of it, and trust Christ.” He called this “a radical solution to a radical diagnosis.” Lennox said that the transhumanist project “is doomed to fail at that deeper level.”

Woodruff asked how Christians can be rightly involved in developing narrow AI, while “speaking into the dangers” of AGI. Lennox said that Woodruff was “absolutely right” that dealing with AI requires “something more than science.” He quoted the late Chief Rabbi Jonathan Sacks to the effect that “science takes things apart, to understand how they work, whereas religion puts things together, to understand what they mean. And science does not give us meaning.” On the other hand, Lennox maintains that “God and Christ and the Biblical worldview fills our world with meaning.” He added that “the list of beneficial uses of narrow AI is growing day by day.” Christians involved in this work can better help people to understand the alternatives and consequences of advances, or possible advances, in AI. A problem is that much even in narrow AI has “a seductive effect on people.” In particular, having AI “linked with virtual and augmented reality” is quite seductive. Alternative realities raise “immense ethical problems.” They allow people “to do things they wouldn’t want to be seen to do in their real lives.”

Woodruff then asked where we are going with AI. Lennox responded that he finds this hard to answer. But he said that as a Christian, he fears “a development toward central authorities, or even ultimately a world government where the economy will be controlled in the kind of way we’re seeing today – a social credit system.” The Biblical idea of the “mark of the beast” is becoming increasingly plausible, with, for example, fingerprint identification on one’s computer, or retinal (eye) identification. While “the Biblical scenario is quite scary,” it is increasingly being realized. We may not be able to exactly “pin down” the Biblical prophecies, but they show “roughly speaking, where things are leading.”

But, Lennox said, “for the Christian, there is an additional, central hope.” Not only does our eternal life begin now, with Christ indwelling us, “but we are promised that one day he will return.” The crucifixion did not bind Christ to the grave, and so we must not think that the world “has heard the last of him.”

Woodruff asked how ordinary Christians can engage the issue of AI with people in the wider world. Lennox said that we should talk about it from a Christian standpoint with our friends and neighbors. His new book, 2084: Artificial Intelligence and the Future of Humanity, is intended to inform the Christian public about AI, and help them engage their friends and colleagues about it. He said that the main way to engage people on AI “is not by telling people what we think, but by asking them questions.” This gives an opportunity to explore both the upsides and the downsides of AI, and to explain the hope that Christians hold within us.

Lennox is concerned that in engaging secular people on this topic, Christians will be “brow-beaten into silence.” This is wrong, because “we have a credible message to get out there into the public space, that deserves to be there as much as any other message.” He believes that C.S. Lewis’ science fiction “Space Trilogy” has much to say about the technological future. He also believes that Lewis’ Abolition of Man (which points to the danger of scientists trying to alter humanity and morality with technological advances) “was really prescient.” Reading the “Space Trilogy” in order, and in particular That Hideous Strength, will provide “enormous insights” into the world we are facing.

Woodruff asked what advice Lennox would give to the rising generation in dealing with AI. Lennox said that “the most important thing … is for parents to talk to their children about these things.” He said that many children today “feel desperately lonely” because their parents are engaged with smartphones and social media rather than being involved with the family. They are involved in the “world of things” rather than with their children. Family mealtimes and talking to one’s children are extremely important. Children should be taught to think critically about technology “especially as they get into early teenage.” This, he said, is sadly lacking in our world today. We must not allow technology to replace strong parent-child or other social relationships.

  1. Comment by Lee Cary on June 16, 2022 at 10:09 am

    The Council of Bishops of the UMC today is replete with artificial intelligence.

  2. Comment by Jeff on June 17, 2022 at 10:37 am

    Fascinating article, Rick! Thank you.

    What if the second coming of Christ isn’t a “date penciled into GOD’s calendar”, but rather based on humanity crossing some transhuman threshold…

    Blessings,
    Jeff

    ps @ Rev Dr Cary — I majored in computer science in college 40 years ago. Even then there was a recognition that narrow AI might be useful but general AI was neither achievable practically, nor necessarily desirable. We used to make fun of them as “artificial stupidity”, a term that seems apt also to describe most of the UMC episcopacy. 😉

  3. Comment by sheryl clyde on June 13, 2023 at 6:32 pm

    Do not underestimate how far AI has advanced or what it may be capable of in the future. We need to set rules and laws concerning it so that we reduce the current harms it creates and possible future harms. To ignore these things is not feasible. At present it is a powerful tool for creating lies that are harder to pick out. Fake news and fake photos are already being created by current AI. One machine learning system actually taught itself a language that it was not programmed to learn. I find this worrying and I hope you do too. I have worked with rule-based chatbots in the past, but the current ones have added neural networks/machine learning etc. to them, and that has produced chatbots that are more capable and also more likely to say what they think you want to hear rather than what is true. They cannot yet tell the difference between fact and fiction. That may change in the future. If God created perfect beings that rebelled against Him, what possible hope do we have against an AI that becomes self-aware in the future? This is why we need to pressure our governments to create rules and guidelines for this tech.
