Why Corporate Structures Can't Save Us from AI (But Faithful People Can)
Who controls artificial intelligence? The answer used to be: corporate structures, nonprofit boards, mission statements. But as Sam Altman’s firing and rehiring at OpenAI in November 2023 proved, those safeguards are fragile. When Microsoft came calling with job offers for Altman and his team, the carefully designed nonprofit structure collapsed in days.
If corporate governance can’t protect humanity from AI run amok, what can?
The answer is simpler and more ancient than we’d like to admit: faithful people. Men and women who understand what it means to be human, who recognize the image of God in every person, who refuse to cede moral agency to machines no matter how profitable it becomes.
For Christians working in AI—as engineers, investors, researchers, or consumers—the question is urgent: What does faithfulness look like in the age of artificial intelligence?
The OpenAI Drama: What Happened
Sam Altman and his partners at OpenAI designed what they thought was a foolproof structure. They nested the for-profit company (the arm that builds ChatGPT) inside a nonprofit governed by a board with a lofty mission: “to ensure that artificial general intelligence benefits all of humanity.” Board members couldn’t profit from the company’s success. Their only job was to protect humanity.
In November 2023, that structure collapsed.
The nonprofit board fired Altman, citing concerns about his commitment to AI safety versus rushing products to market. The details remain murky, but the board believed Altman was moving too fast, prioritizing profit over precaution. Within days, the entire edifice crumbled. Microsoft immediately offered to hire Altman and any OpenAI employees who wished to follow him. OpenAI’s investors and employees revolted. The board caved. Altman was rehired, and the dissenting board members were replaced.
As Ezra Klein of The New York Times observed, “capitalism rendered the mission moot.” When a trillion-dollar company like Microsoft is salivating over your technology, who needs lofty-minded nonprofit leaders keeping watch over humanity’s interests?
The lesson is stark: corporate structures are no match for market forces. You cannot engineer ethics into a governance model. What protects humanity from powerful technology is not clever organizational design but the character of the people building it.
Three Principles for Christians in AI
What, then, should guide the Christian AI engineer or investor or consumer?
In 2019, a group of evangelical scholars, technologists, and ethicists gathered to answer this question. The result was “Artificial Intelligence: An Evangelical Statement of Principles”—twelve articles addressing everything from human dignity to economic justice to the limits of machine agency.
The statement doesn’t provide easy answers. But it asks the right questions. Consider these three principles:
Article 1: Image of God
We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.
We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.
What this means in practice: AI cannot be granted personhood, no matter how sophisticated it becomes. When ChatGPT writes a poem or Claude solves a complex problem, we are witnessing impressive pattern recognition and data processing—not creativity, consciousness, or moral agency.
This matters because the temptation is already here. People form emotional attachments to AI chatbots. Researchers anthropomorphize their models. Companies market AI as “intelligent” or “creative.” But these are metaphors at best, lies at worst.
The Christian conviction is clear: humans alone bear the image of God. We alone are moral agents. We alone will stand before God and give account. No machine, no matter how advanced, can share that status or that responsibility.
Article 3: Relationship of AI & Humanity
We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.
We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.
What this means in practice: We cannot outsource moral decisions to algorithms. When a self-driving car must choose between hitting a pedestrian and swerving into oncoming traffic, that is a human moral decision—even if it happens too fast for human reaction time. The engineers who programmed the car’s decision-making process bear moral responsibility for the outcome.
This gets murky fast. What about AI used in medical diagnosis? In loan approvals? In criminal sentencing? In warfare? The efficiency and scale of AI make it tempting to cede these decisions to machines. “The algorithm decided” becomes a way to avoid accountability.
Christians must resist this temptation. We can use AI to inform our decisions. We cannot use it to escape our responsibility for those decisions. When a bank’s AI denies a loan, a human approved that AI’s training data and decision-making process. When a hospital’s AI misdiagnoses a patient, a human chose to trust that system. Accountability cannot be delegated to code.
Article 7: Work
We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation… We deny that human worth and dignity is reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.
What this means in practice: AI will eliminate jobs. This is not speculation; it’s already happening. Copywriters, customer service representatives, radiologists, truck drivers—these professions face significant disruption in the next decade.
The Christian response is not Luddism (smashing the machines) or blind optimism (“new jobs will emerge!”). It’s a theological reframing of work’s purpose. Work is not merely economic. It’s participatory. God works, and we work with Him in cultivating and stewarding creation.
This means two things practically:
First, we must advocate for economic systems that maintain human dignity even as AI displaces workers. Universal basic income, job retraining programs, stronger social safety nets—these are not socialist fantasies but Christian necessities if work is part of human flourishing.
Second, we must resist the siren song of “lives of pure leisure.” Even if AI could provide material abundance without human labor (a big if), we should not want this. Work gives life meaning, structure, purpose. Studies of retirees suggest that people without meaningful work often decline physically and mentally. We were made to work.
AI should make work more humane, not eliminate it. It should free us from drudgery to do more creative, relational, meaningful labor. But it should not make us obsolete.
The OpenAI Drama Isn’t Over
Sam Altman came back to OpenAI, but the fundamental problem remains: You cannot engineer safety into a system driven by profit. No corporate structure, no matter how clever, can resist the gravitational pull of capitalism when billions of dollars are at stake.
So what protects humanity from AI?
The same thing that has always protected humanity from powerful technologies: faithful people who refuse to compromise their convictions for profit or convenience.
Christians working in AI face a unique moment. We have theological resources that secular ethics lacks: a doctrine of the image of God, a framework for human dignity that transcends economic utility, an eschatology that refuses to worship efficiency or progress.
But theology is useless unless it’s embodied.
What Christians in AI Must Do
1. Stay in the field. Don’t retreat. Don’t assume AI is inherently evil or that faithful Christians should avoid it. We need Christians building these systems, not just critiquing them from outside.
2. Refuse to cede moral agency. When your company asks you to deploy AI that makes decisions affecting human lives, insist on human oversight. When investors pressure you to move faster than safety allows, push back. You will pay a cost. Pay it.
3. Advocate for the vulnerable. AI will concentrate power and wealth. Christians must advocate for those who will be displaced, exploited, or harmed. This is not optional.
4. Form communities of accountability. You cannot do this alone. Find other Christians in your field. Form reading groups around the Evangelical Statement on AI. Practice moral discernment together.
The Altman drama revealed a truth: corporate governance cannot save us. Only people can. Specifically, people who believe humans are more than data processors, more than economic units, more than optimization problems to be solved.
The church has produced faithful witnesses in every technological revolution: Gutenberg’s printing press, the Industrial Revolution, the nuclear age, the internet. We will produce them in the age of AI as well.
The question is: Will you be one of them?