There is no avoiding it. Artificial Intelligence (AI) is changing our world. Its emergence has earned a spot as the fourth Industrial Revolution, according to McKinsey & Company.1 That label is fitting, but the AI Revolution seems significant enough to warrant its own periodization. Whereas the first Industrial Revolution took over manual labor, the age of AI is taking over mental labor (even art! I did not see that one coming a decade ago).
Competition or Collaboration?
As a digital designer, I keep up to date with the development of AI, especially as it relates to my business. Commentators are split between those who think AI will primarily compete with humans and those who think it will primarily collaborate with us. It will certainly do both.
The reality is that far fewer human workers will be needed in the coming years. The copywriting world is already experiencing this: one writer has reported that his team of 60 people has been whittled down to himself and ChatGPT.2 Whatever this “collaboration” is, it is undeniably also a replacement.
Who is Responsible?
With all of this happening, many articles and voices are pointing the finger directly at AI. Here is just a quick search for “ai jobs” on Google News:
Notice who is cast as the main actor in all these headlines: AI, of course. “AI is stealing.” “AI is replacing.” “AI is taking.” That shorthand is easy, but it doesn’t tell the whole story. AI is not the main actor. Even though its consciousness is occasionally debated, at the end of the day it is a tool that simulates human thinking.3 And yes, it even simulates human emotion.4 It is a complicated tool, to be sure. And while it has a “mind of its own” in many ways, it is certainly not a free moral agent. It is not the ultimate decision maker (at least not yet, thankfully).
AI is a product. A product that companies are buying and implementing into their organizations. To blame AI is akin to blaming a murder weapon for the murder. Or, blaming a machine at the assembly plant for replacing human workers. In the former, maybe the weapon needs regulation, but the human agent is ultimately responsible. In the latter, it was the manufacturer who purchased and implemented the machine. The machine didn’t walk in, connect itself, and fire the employees.
This is not to say that AI is morally neutral. AI is morally significant and transformative. It is to say that AI is not a morally responsible agent per se. While I personally think the AI Revolution will be a net negative for humanity, our blame is somewhat misdirected. Why? Because we cannot hold AI accountable. The primary cause is human agency: the humans who build AI and those who deploy it are responsible for the respective results. I don’t mean that entirely negatively.
Maybe in some cases companies use AI responsibly, gaining the efficiency they need to stay in business and even save jobs. Maybe it will be the missing piece in discovering a cure for cancer. AI has great potential for good, and I don’t dismiss that. It also has great potential for harm, especially if left unchecked. The humans who build these systems, ethically or unethically, are responsible for how they are built; the humans who use them are ultimately responsible for what they do with them.
A Call for Accountability and Regulation
The point is that right now, AI is a scapegoat. It is an easier target for blame than the decisions of company executives. If and when things go wrong, we need to hold the right people accountable. In the example above, AI did not directly eliminate 59 copywriting jobs; people at that organization made a business decision, one that probably looked great to their investors too. The more accurate headline would be, “Company X Used AI To Replace Human Workers.”
“AI” will continue to replace jobs as long as there is a financial incentive to do so. “AI” will continue to steal artists’ work unless strong penalties are in place to stop it.
For better or worse, this technology is not going anywhere. But we must recognize that human agents are deciding to build and use these systems to their own ends. Some of those uses will be productive and right, some harmful and wrong. Let’s just be sure we don’t let organizations get away with the argument that “the ‘AI devil’ made me do it.” We need real accountability. We need meaningful regulation.
Bristol, Henry, Enno de Boer, Dinu de Kroon, Rahul Shahani, and Federico Torti. 2024. “Adopting AI at Speed and Scale: The 4IR Push to Stay Competitive.” McKinsey & Company. February 21, 2024. https://www.mckinsey.com/capabilities/operations/our-insights/adopting-ai-at-speed-and-scale-the-4ir-push-to-stay-competitive.
Germain, Thomas. 2024. “AI Took Their Jobs. Now They Get Paid to Make It Sound Human.” BBC. June 16, 2024. https://www.bbc.com/future/article/20240612-the-people-making-ai-sound-more-human.
Huckins, Grace. 2023. “Minds of Machines: The Great AI Consciousness Conundrum.” MIT Technology Review. October 16, 2023. https://www.technologyreview.com/2023/10/16/1081149/ai-consciousness-conundrum/.
Gorvett, Zaria. 2023. “The AI Emotions Dreamed up by ChatGPT.” BBC. February 25, 2023. https://www.bbc.com/future/article/20230224-the-ai-emotions-dreamed-up-by-chatgpt.