When I make decisions, I get overwhelmed by all the factors to consider: the possible consequences, the current realities. I sometimes wish I had a machine that would crunch the numbers of my life and tell me the best course of action, or at least give me a template to work with.
Right now, in 2019, my wish for an automated aid to decision-making isn’t too far off. There’s a very conceivable future in which machines do help us make our life decisions. Artificial intelligence, defined by Merriam-Webster as “the capability of a machine to imitate intelligent human behavior,” is developing rapidly.
As artificial intelligence becomes increasingly integrated into our lives on a personal and public level, it’s important to think critically about some of its effects. Human wisdom and judgment are still necessary parts of the equation, especially when they’re informed by Jesus’ example.
We’re not always aware of it, but so-called “weak” AI already informs much of our lives: which ads Amazon surfaces on your laptop and which songs Spotify puts on your Discover Weekly playlist. These systems gather information about you and provide options you might not otherwise consider.
Weak AI focuses on solving specific problems, like Siri figuring out that the only word that rhymes with “purple” is “hirple.” (It means “to limp awkwardly.” True story.)
“Strong” AI is still nascent, but its ultimate goal is to develop machines with the intellectual capacity of humans, and then some.
It’s the stuff of cinema so far, like I, Robot and Ex Machina, but this kind of technology is rapidly on its way. And in the meantime, businesses in every sector increasingly rely on automated intelligence systems to solve problems, improve efficiency and predict outcomes.
Over the next few decades, AI will have more and more influence on our decision-making on a personal and public level. This has some pros and cons. On one hand, AI has a stunning ability to make accurate predictions based on existing information. But on the other, it has some decision-making limitations. In an article for Salon called “The future of artificial intelligence depends on human wisdom,” Sam Natapoff writes, “Though AI has the ability to pursue and improve a designated ‘utility function,’ something that it can be programmed to pursue, it is incapable of pursuing a ‘values function’ and therefore understanding human values.” Natapoff goes on to use happiness as an example, noting that since an algorithmic definition of happiness so far eludes humans, we won’t be able to program a machine to take it into account.
You can program for efficiency. You can’t program for joy, satisfaction or peace: all essential components of a human life.
In other words, human judgment is still a necessary part of the equation. Kartik Iyengar, senior vice president of IoT & Skylab at VirtusaPolaris, a global IT consulting and technology services company, said, “There is nothing artificial about intelligence. Intelligence is a fine balance of emotions and skill that is constantly developing.” Machines can only approximate this.
But they’re still a part of our present—and our future. And so, in a forward-thinking attempt to anticipate the moral and theological issues that accompany developing artificial intelligence, in April 2019 the Southern Baptist Convention’s Ethics and Religious Liberty Commission released recommendations for a Biblical approach to AI.
These recommendations cover everything from privacy to medicine to human sexuality. Jason Thacker, head of the project, said that the articles were “created not out of fear, but out of an understanding that [A.I.] is a tool that God has given us. Our hope is that for the first time in a really long time, the church can be proactively engaging in issues affecting society, rather than always responding and reacting.” Thacker is right: like most technology, AI is not morally good or bad on its own. It is an opportunity.
The articles state, “We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created.” In other words, we should never let AI do our moral reasoning for us. In the secular sphere, the Future of Life Institute released the “23 Asilomar Research Principles” to guide AI research and help ensure it is used for good. These principles have over 2,000 signatories, including Stephen Hawking and Elon Musk. People in every arena are paying attention to artificial intelligence.
As I thought about artificial intelligence’s implications for decision-making and my wish for a template when choices are hard, I remembered Jesus and the parable of the lost sheep in Matthew 18 and Luke 15. In it, Jesus says, “If a man has a hundred sheep and one of them goes astray, will he not leave the ninety-nine on the hills and go out to search for the one that is lost?”
Searching for the one lost sheep isn’t exactly the most efficient decision. Or the most productive. Or the smartest. It’s the kind of decision a machine would never have made, because it doesn’t make any rational sense. But it is still wise and loving, because Jesus knew the importance of individual souls. To Him, it is always worth taking a risk to save one living creature. Maybe this is my template.
And so as we stride forward into a world increasingly populated by smart machines assisting our decision-making processes, I’d like to see us remember Jesus’ value system. I’d like to remember it myself, that efficiency and productivity aren’t the only factors to consider. The future of artificial intelligence does indeed depend on human wisdom, and human wisdom comes from God.