Ethical Innovation with Generative AI

Generative AI Series, Ep. 1

12.5.2023 by Salla Westerstrand

 

OpenAI released the latest version of its large language model, GPT-4, on March 14th, and the AI world exploded. The announcement was met with enthusiasm and the rapid introduction of a vast range of applications and adaptations that promise an easier life. For example, GitHub Copilot X lets programmers generate and modify code, and Microsoft 365 Copilot promises to boost existing Office workflows. GPT-4 is available for developers to build on via the Azure OpenAI Service, and a range of ChatGPT plugins was announced on March 23rd. The list grows day by day.
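If you want to experiment yourself, the entry barrier is low. Below is a minimal sketch of calling a GPT-4 deployment through the Azure OpenAI Service with the openai Python library as it worked in spring 2023; the endpoint, deployment name, and environment variable are placeholders you would replace with your own.

```python
# A minimal sketch of calling a GPT-4 deployment via the Azure OpenAI Service,
# using the openai Python library as it worked in spring 2023 (pre-1.0).
# The resource name, deployment name, and key variable below are placeholders.
import os

import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"  # your Azure OpenAI endpoint
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]  # never hard-code secrets

response = openai.ChatCompletion.create(
    engine="gpt-4",  # the name you gave your GPT-4 deployment in Azure
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the EU AI Act in two sentences."},
    ],
)
print(response["choices"][0]["message"]["content"])
```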

 

People are eagerly thinking of ways to make use of the novel opportunities this technology brings. Maybe you have already jumped in and added a new tool to your professional or personal toolkit. It has, after all, great potential to help us with many tasks.

 

Meanwhile, the discussion around ethics and societal impacts of these technologies has intensified. Are we missing the big picture?

 

In this article, I offer you three things:

 

  1. First, we’ll look into how different actors have reacted to the recent developments and what that tells us about risks and opportunities.
  2. Second, I’ll take you on a journey into ethics and inspect how two perspectives – human flourishing and freedom – could help us understand the impacts of generative AI on humans.
  3. Lastly, I'll leave you with an action plan for harnessing the benefits of innovating with generative AI – ethically.

This blog post is the first in a series of writings on generative AI technologies and their ethical implications. Subsequent posts will appear on the Harmless blog during the spring and summer of 2023.

Reactions from the industry: genuine concerns or effective PR?

 

You know something is happening when tech leaders step up and voice their worries in public. Let's illustrate with a couple of examples:

 

DeepMind CEO Demis Hassabis warned against the “move fast and break things” mentality in a recent interview with Time magazine. According to Hassabis, we have reached a point where AI “could be deeply damaging to human civilization.”

 

Bill Gates also commented on his blog that “market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely.”

 

Jaron Lanier stated in a Guardian interview that “the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”

 

Even OpenAI has recognised the risky nature of its business and announced that its previous mission of democratising AI through open sourcing is no longer an option. GPT-4 has been the least open of its models so far, and even CEO Sam Altman commented in an ABC News interview that he was “a little bit scared” of the direction in which generative AI is currently heading.

 

The discussion heated up further after an open letter, signed by several AI researchers and industry professionals, called for a pause on giant AI experiments to allow time to evaluate their impacts on humans and society, and to create policies and regulation for robust AI governance. While some of the concerns are probably genuine, the premise that a pause of at least six months would be enough to fix it all seems problematic.

 

For some, such statements may have been merely a means of PR, or a way to buy time to catch up. After all, the more powerful we consider these technologies to be, the better it is for their business.

 

As Baum et al. (2023) note in their recent opinion piece, the direction of AI should not be dictated merely by the AI labs working in the industry. Sure enough, the public sector has taken up the challenge and is working tirelessly to figure out how to regulate and guide AI development towards positive outcomes.

Public sector reactions

 

Worryingly, there is no real indication that OpenAI has conducted any large-scale analysis of the long-term impacts of its technology on society or individuals. In the public sector, the issues have already become evident: ChatGPT demonstrated privacy problems when it was found disclosing users' conversation histories to other users, leading Italy's data protection authority to ban the tool until OpenAI took the required corrective measures. The European Data Protection Board has followed suit, putting together a task force to take a deep dive into ChatGPT's privacy issues.

 

The European Parliament has recently reached a common understanding on how to address foundation models such as GPT-4 in the upcoming EU AI Act, “sealing the deal” on the Act's draft text. In early May, the lead committees published their compromise amendments and adopted them with a broad majority on May 11, 2023. This has made it ever clearer to the public that the long-awaited European AI regulation is advancing and will eventually come into force. The plenary vote on the draft is scheduled for June 2023.

 

In the US, the National Institute of Standards and Technology (NIST) published its Trustworthy & Responsible AI Resource Center in March, which includes an AI Risk Management Framework that helps organisations recognise and manage risks related to AI technologies. The Biden administration has announced plans for regulating AI technologies to ensure that AI innovation benefits humans, the results of which Silicon Valley must be eagerly – or perhaps anxiously – awaiting.

 

One of the big players in the global AI sphere, China, has not stayed silent, either. China introduced new AI regulation shortly after Alibaba and Baidu released their equivalents to ChatGPT.

 

Yet all of these actors have a limited mandate to influence the ethical direction of global AI development. We need to remember that complying with regulation does not equal being ethical.

 

Next, we’ll discuss these concerns from two perspectives of ethics.

Ethics in the age of generative AI: two perspectives

 

Human flourishing

 

In the field of virtue ethics, Bynum (2006) has suggested bringing the concept of human flourishing to situations involving technology. According to the flourishing ethics perspective, acting ethically requires cultivating human flourishing, which in turn demands theoretical and practical reasoning and intellect. Bernd C. Stahl (2021) and fellow scholars (2022) have demonstrated the potential of this perspective in the context of AI as well.

 

Let’s take a look at what could happen to the conditions of human flourishing if we externalise our thinking to digital tools such as generative AI.

 

It is tempting to think that outsourcing research, text production, or the synthesis of information makes our lives better. If I have a tool that fetches information for me, puts it into a nice, easily digestible form, and even communicates it to others, I have more time for other things I enjoy and can build my capacity around the things I like most.

 

However, will externalising text production, information processing, or synthesis affect our own capacity to flourish – as a virtue ethicist might put it – as human beings? Will we become less capable, less intelligent ourselves, if we stop using our brains for such activities?

 

What we are missing is that the very process of connecting pieces of information and exposing ourselves to different viewpoints – literally putting our brain cells to work – is what made humanity intelligent in the first place. It maintains our cognitive reserve and mental capacity, keeping our brains healthier and high-functioning.

 

We have received education, been exposed to literature and culture, and learned to process information and evaluate its meaning in different contexts. In other words, we have been challenging our brains, developing the neural connections and abilities that have enabled us to progress as humanity. If we take that for granted and outsource our thinking, reasoning, and search for information, we might be surprised by how quickly our intelligence fades. When unused, the connections between brain cells do perish, after all.

 

From the virtue ethics perspective described above, a trend of externalising thinking would indicate a decrease in our capacity to reason, and thus to act ethically. Maybe in the end AI will indeed reach human-level intelligence – not because it gets developed to our current level, but because humans regress towards the level of machines and meet them in the middle.

 

Or maybe we find a way to support our brain health and encourage people to keep engaging in brain-challenging activities, even when some burdensome tasks are externalised to AI. If so, this is the direction towards which we should steer our focus in exploring generative AI. Only by recognising the risks can we avoid rushing into “everything-AI” and eventually finding ourselves missing the erudition and intelligence we once had.

 

Freedom and human autonomy

 

Freedom is a broad concept. In the context of AI, I recommend Mark Coeckelbergh's recent book The Political Philosophy of AI, which gives one of the most approachable yet comprehensive overviews of its different layers. Here, I concentrate on two aspects: surveillance and influence over our decision architecture.

 

In a nutshell, attempts to influence our decision architecture are nothing new. However, generative AI seems to dramatically expand our capacity both to create content that can effectively manipulate the decision-making of others and to restrict freedom and human autonomy through optimised real-time surveillance.

 

Firstly, AI can be and has been used to create effective means of surveillance. Even though biometric surveillance is subject to strict restrictions in the upcoming EU AI Act, its ethical dimensions are under debate after the French government passed legislation allowing temporary AI surveillance during the Paris 2024 Olympics, making France the first EU country to legalise AI-powered surveillance.

 

Even if the French government does not use the technology in a blatantly destructive manner, ethical issues remain. Surveillance does not need to lead to concrete restrictions of freedom, or to infringements of other human rights, in order to be unethical. The mere threat of surveillance – knowing that someone might be watching – changes our behaviour, and that in itself can be considered detrimental to human autonomy.

Related reading

The Age of Surveillance Capitalism by Shoshana Zuboff

Platform Socialism by James Muldoon

The Digital Republic by Jamie Susskind

The Political Philosophy of AI by Mark Coeckelbergh

Second, let’s talk about AI-enhanced nudging and other measures that aim to change our decision architecture – to persuade us towards actions we would not necessarily otherwise take.

 

Our information infrastructure has long been shaped this way – perhaps it always has been. Governments implement policies that nudge us to act like good citizens, such as attractive bins that reduce littering, or simple road markings that guide traffic. Behavioural urban design can encourage us to choose a bike over a car, or a walk instead of the bus.

 

Still, you might not be surprised to hear it again: this coin has a flip side, too, and generative AI can play a part in it.

 

Hui (Max) Bai et al. (2023) conducted an experiment with GPT-3 and discovered that AI-generated content can persuade people on political issues just as well as human-written content can. With the new multimodal GPT-4, the persuasive power might be even greater (this has yet to be confirmed, though, so take it with a grain of salt).

 

An individual piece of content can seem harmless. Maybe someone would have posted it anyway. Yet a message affects our opinion formation differently when it appears to be supported by one person rather than by twenty, five hundred, or ten million.

 

Artificially created, interactive, and realistic-looking deepfakes can create the illusion of a large movement supported by the masses, even when such content is the work of a single actor whose identity and goals might never be revealed.

 

If we want to maintain that humans are autonomous beings who are capable of making their own decisions, and who hence deserve access to high-quality information, then we need to protect the structures that allow us to evaluate the information we gather and to trust that we are not being constantly manipulated.

 

In the age of generative AI, this means, for example, being transparent about the sources of information and clearly indicating when we are interacting with an algorithm. It means developing effective measures for detecting deepfakes, and educating people to recognise trustworthy sources of information.

 

Still, doing so in practice has proved difficult, and only a few concrete solutions have been suggested to tackle the issue. This complexity is why Kilovaty (2019) has argued that ever-more-realistic fake information threatens the very existence of modern democracies.

 

Yet again, we should not let the difficulty of the problem overwhelm us. We were able to develop impressive generative AI systems, so I see no reason why we could not also innovate solutions to detect and mitigate their misuse.

 

Why are protective technologies lagging behind, then? The companies holding the resources and knowledge seem to lack the incentive: there simply hasn't been a profitable enough market for the protection of democracy. (A topic I'll set aside for now, to be discussed in future writings – but if you are intrigued, Joseph E. Stiglitz's book The Price of Inequality is still topical, and a good place to start.)

What is the key to success?

 

Few would deny that the issues discussed above raise important questions. But why should individual businesses or organisations exploring the potential of generative AI care?

 

A short answer: ethics is no longer on the nice-to-do list. It is a necessity for anyone who wants to harness the benefits of generative AI.

 

It is as much about risk management and compliance as it is about building your brand and attracting and retaining talent.  

 

In this blog post, I have briefly discussed generative AI from the perspectives of human flourishing and freedom. Even this elementary ethical analysis shows how ethics can help us get a better grasp of risks and opportunities, and how we can use that understanding to strive towards truly beneficial technologies.

 

That is where the power of ethics in tech resides: ethics is a tool that can help us recognise underlying impacts and light the road to success.

 

Yes, ethics also gives you warnings and pinpoints concerns, which can feel discouraging. But so do technological, regulatory, and physical constraints. As Sekiguchi and Hori (2021) show, ethical constraints can be harnessed to support creative design. If you are unsure how – or how abstract-sounding ethics turns into real, tangible business processes – an ethicist would be happy to help you find out.

 

Ethical reasoning can make innovation with generative AI flourish, as it helps pave the way for high-quality, robust products from the very beginning.

 

What should you do if you want to harness the true benefits of generative AI?

 

  1. Recognise the real risk. In today's competitive landscape, no one can become a forerunner in AI without investing in ethics. Putting resources into ethics has become a prerequisite for success. Hiring an ethicist – whether internal or external – is not the risky option here. Not doing so is.

 

  2. It is time to walk the walk. Forerunners are genuinely concerned with ethics and are building their leading positions as we speak. Publishing mission statements and ethics-friendly communication without real action is called ethics washing – and real action requires resources. Work with ethicists to harness the benefits and mitigate the harms; you are not expected to do it all alone.

 

  3. Innovate. When you have a solid foundation for your generative AI journey, let your inspiration flow! With robust processes in place, you can tap into the best expertise of your teams and their innovative potential. Ethics helps align the work with people's values, which creates fruitful conditions for thriving innovation.

 

Lastly, we need to keep in mind that it is not just generative AI that brings opportunities and risks. Baum et al. (2023) put it well in their recent opinion piece:

“Just because the linguistic character of LLMs is particularly anthropomorphic and therefore appeals to the public's imagination and fear, as well as to the potential overestimation of the capabilities of these models, this does not necessarily imply that these models are inherently or overall more dangerous than others.” – Baum et al. (2023)

Therefore, the need for robust, ethical AI development is not restricted to any single hyped technology. These are processes that will adapt and serve you for a lifetime.

 

The truth is, we will probably not be discussing ChatGPT or other GPT-powered solutions forever. Even OpenAI's Sam Altman recently commented that the next step for AI is not growing the size of generative AI models but coming up with new ideas.

 

But for now, generative AI merits a bit more of our attention. In the coming episodes of this series, we will dive deeper into the ethical aspects of generative AI, elaborating on those related to human flourishing and autonomy.

 

Stay tuned!

Do you have a perspective on generative AI you would like to share with others? If you are interested in contributing to the Harmless blog, e-mail info@harmlessconsulting.com and tell us about your idea.

Sources

 

(Sources in this list are also linked in the blog.)

 

Bai, Hui (Max), Voelkel, Jan G., Eichstaedt, Johannes C. and Willer, Robb (2023). "Artificial Intelligence Can Persuade Humans on Political Issues." Preprint, https://doi.org/10.31219/osf.io/stakv.

 

Barulli, Daniel and Stern, Yaakov (2013). Efficiency, capacity, compensation, maintenance, plasticity: emerging concepts in cognitive reserve. Trends in Cognitive Sciences 17(10), https://doi.org/10.1016/j.tics.2013.08.012. 

 

Baum, Kevin, Bryson, Joanna, Dignum, Frank, Dignum, Virginia, Grobelnik, Marko, Hoos, Holger, Irgens, Morten, Lukowicz, Paul, Muller, Catelijne, Rossi, Francesca, Theodorou, Andreas and Vinuesa, Ricardo (2023). "From Fear to Action: AI Governance and Opportunities for All." Frontiers in Computer Science, vol. 5, doi: 10.3389/fcomp.2023.1210421.

 

Bertuzzi, Luca (2023). "AI Act: MEPs close in on rules for general purpose AI, foundation models.." Euractiv, April 20, 2023, https://www.euractiv.com/section/artificial-intelligence/news/ai-act-meps-close-in-on-rules-for-general-purpose-ai-foundation-models/. 

 

Bertuzzi, Luca (2023). "MEPs seal the deal on Artificial Intelligence Act". Euractiv, April 27, 2023, https://www.euractiv.com/section/artificial-intelligence/news/meps-seal-the-deal-on-artificial-intelligence-act/. 

 

Bynum, Terrell Ward (2006). "Flourishing Ethics." Ethics and Information Technology 8, 157–173. https://doi.org/10.1007/s10676-006-9107-1.

 

Coeckelbergh, Mark (2022). The Political Philosophy of AI: An Introduction.

 

Committee on the Internal Market and Consumer Protection, and Committee on Civil Liberties, Justice and Home Affairs (2023). "DRAFT Compromise Amendments on the Draft Report. Proposal for a regulation of the European Parliament and of the Council on harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9 0146/2021 – 2021/0106(COD))." https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf


Derico, Ben (2023). "ChatGPT bug leaked users' conversation histories". BBC News, 23 March 2023, https://www.bbc.com/news/technology-65047304.

 

Foroudi, Layli (2023). "France looks to AI-powered surveillance to secure Olympics." Reuters, March 23, 2023, https://www.reuters.com/technology/france-looks-ai-powered-surveillance-secure-olympics-2023-03-23/ 

 

Future of Life Institute (2023). "Pause Giant AI Experiments: An Open Letter". March 22, 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/. 

 

Gates, Bill (2023). "The Age of AI has begun." GatesNotes, March 21,  2023, https://www.gatesnotes.com/The-Age-of-AI-Has-Begun

 

Hattenstone, Simon (2023). "Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’." The Guardian, March 23, 2023, https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane?fbclid=IwAR165-_OrnTIa1IgpQiWn9QV2XMP0RR5ct1jBsGz04qJFu01qVI-eMGWu5o

 

Kharpal, Arjun (2023). "China releases rules for generative AI like ChatGPT after Alibaba, Baidu launch services." CNBC, April 11, 2023, https://www.cnbc.com/2023/04/11/china-releases-rules-for-generative-ai-like-chatgpt-after-alibaba-launch.html. 

 

Kilovaty, Ido (2019). "Legally Cognizable Manipulation." Berkeley Technology Law Journal, 34(2), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3224952.

 

Knight, Will (2023). "OpenAI CEO Sam Altman says the age of giant AI models is already over." Wired, April 17, 2023. https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/

 

McCallum, Siona (2023). "ChatGPT banned in Italy over privacy concerns." BBC News, April 1, 2023. https://www.bbc.com/news/technology-65139406

 

Muldoon, James (2022). Platform Socialism. 

 

NIST (n.d.). "NIST Trustworthy & Responsible Artificial Intelligence Resource Center (AIRC)." https://airc.nist.gov/Home

 

NIST (n.d.) "AI RISK MANAGEMENT FRAMEWORK." https://www.nist.gov/itl/ai-risk-management-framework. 

 

Ordonez, Victor, Dunn, Taylor and Noll, Eric (2023). "OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'." ABC News, March 16, 2023. https://abcnews.go.com/Technology/openai-ceo-sam-altman-ai-reshape-society-acknowledges/story?id=97897122

 

Perrigo, Billy (2023). "DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution." Time, January 12, 2023, https://time.com/6246119/demis-hassabis-deepmind-interview/

 

Peters, Jay and Roth, Emma (2023). "Elon Musk founds new AI company called X.AI". The Verge, April 15, 2023. https://www.theverge.com/2023/4/14/23684005/elon-musk-new-ai-company-x

 

Schechner, Sam (2023). "ChatGPT Ban Lifted in Italy After Data-Privacy Concessions". The Wall Street Journal, April 28, 2023, https://www.wsj.com/articles/chatgpt-ban-lifted-in-italy-after-data-privacy-concessions-d03d53e7.

 

Sekiguchi, K. and Hori, K. (2021). "Designing ethical artifacts has resulted in creative design." AI & Society 36, 101–148. https://doi.org/10.1007/s00146-020-01043-6.

 

Stahl, Bernd Carsten, Rodrigues, Rowena, Santiago, Nicole and Macnish, Kevin (2022). A European Agency for Artificial Intelligence: Protecting fundamental rights and ethical values. Computer Law and Security Review, vol 45.  https://doi.org/10.1016/j.clsr.2022.105661.

 

Stahl, B.C. (2021). Concepts of Ethics and Their Application to AI. In: Artificial Intelligence for a Better Future. SpringerBriefs in Research and Innovation Governance. Springer, Cham. https://doi.org/10.1007/978-3-030-69978-9_3

 

Sterling, Toby (2023). "European privacy watchdog creates ChatGPT task force." Reuters, April 14, 2023. https://www.reuters.com/technology/european-data-protection-board-discussing-ai-policy-thursday-meeting-2023-04-13/

 

Stiglitz, Joseph, E. (2012). The Price of Inequality. 

 

Susskind, Jamie (2022). The Digital Republic. 

 

The White House (2023). "FACT SHEET: Biden-⁠Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety." May 4, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/. 

 

Zuboff, Shoshana (2019). The Age of Surveillance Capitalism. Profile books.