Ever since GPT-3 and its AI relatives exploded onto the scene, there’s been one question that I’ve asked myself on repeat: how will these technologies affect our intelligence?
Put another way, I wonder what will be left of us as human beings after we’ve offloaded all of our skills and tasks onto the many neatly programmed AI technologies that are already changing the shape of our society.
Well, in this blog, I’ll attempt to find out.
Sam, M.Ph., is a digital marketer, tutor, and writer with a passion for technology and ethics.

This blog post is part of a series of writings on generative AI technologies and their ethical implications. Subsequent posts will appear on the Harmless blog during the spring and summer of 2023.
Underpinning my interest in this question are two interrelated philosophical theories: the theory of extended cognition and the theory of the extended mind. They provide an interesting context for us to explore the ways in which AI will change how we think and behave.
Championed by the philosopher Andy Clark, these theories suggest that our cognitive and mental processes literally extend outside of our brains and spill into artifacts in the environment. Clark provides two examples to explain what he means.
Let’s take a look.
To show how our cognitive processes extend into the environment, Clark uses the example of an individual playing a game of Tetris. When we play Tetris, the cognitive processing that goes on as we use the controller to quickly rotate the falling blocks to fit those at the bottom is not happening solely in our heads.
Clark argues that a coupling occurs between ourselves and the environment we are engaged with, and that within this coupling our cognitive processing extends outward. In the Tetris example, the relevant environment comprises the game controller, our body, and the screen we are looking at.
Here’s how Clark explains it:
"In playing Tetris, the player is not simply using the game as a tool or instrument to achieve some goal, but is rather enmeshed in a complex and dynamic system that involves the player's body, the game environment, and the rules of the game itself…the player's cognitive processes become tightly coupled with the game itself, forming a kind of 'cyborg' system that blends human and machine into a single, highly integrated cognitive unit."
To support his argument, Clark explains that when a component is removed from this cognitive unit, for example the video game controller in the Tetris example, a significant performance deficit occurs, and we are no longer able to function cognitively at the same level.
As a result, Clark argues, cognition must be extending outside of the brain.
In relation to the more specific argument about the extended mind, which Clark developed with the philosopher David Chalmers, he discusses a similar relation between human and artifact, this time in the form of a person named Otto and his notebook and pen. In this thought experiment, Otto’s biological memory is impaired, and he can only retain information by writing it down; he therefore records everything he needs to remember in his notebook.
As in the Tetris example, the argument goes that when you remove the external artifact (in this case, the notebook and pen), a performance deficit occurs, and as a result, it must be that mental processes are occurring outside the brain. Here’s how Clark puts it:
“Otto’s notebook is not just a passive repository of information, but an active part of his cognitive system, playing a crucial role in allowing him to navigate the world and achieve his goals. In this way, Otto's example suggests that the boundaries of the mind can be extended to include external resources that are intimately coupled with our cognitive processes.”
Collectively, these arguments put forward the claim that our cognitive and mental processes extend out into artifacts that are very much separate from our physical brains, and in some cases even, we are heavily reliant on these external artifacts to actually carry out those cognitive and mental processes.
Within the context of AI then, I find this very interesting. When we use AI, we aren’t just playing a game, and we aren’t jotting down things to remember; we are scaffolding several hyper-complex cognitive processes onto various devices in order to fulfil many different tasks.
In other words, when we use AI, we are extending so much more out to an external artifact. It is this that has prompted my interest in how this will affect our intelligence.
But first, before we dive further into the hypotheticals of AI extension, let’s take a look at a more familiar example of a day-to-day artifact that we all currently use to carry out tasks that we could otherwise do in our brain.
The question I’d like us to consider is the following: did using a calculator make you better at math?
While the data on this question is limited, what I found suggests that while using a calculator allows you to produce answers with greater accuracy and speed, it also increases your reliance on the calculator to carry out the cognitively demanding parts of the computation. So, does a calculator make you better at math?
Well, it seems that using a calculator makes you better at producing correct answers to mathematical questions, but worse at producing those answers without one. In this sense, we have effectively scaffolded our mathematical processing onto the calculator, and as a result, when we don’t have it, we are less able to perform the relevant computations by ourselves.
Given this, what happens when we scaffold our cognitive processing to the many tools that are provided to us by Generative AI? For it’s not only division and multiplication that we are now scaffolding to external objects, but also marketing strategies, music creation, and the art of conversation.
The worry is that doing this will make us less able to do these things for ourselves. Given that AI tools are now being produced to take over so many of our day-to-day tasks, there is a real risk that we will erode our own ability to carry out a whole range of tasks unaided.
Fundamentally, though, this worry is contingent on our AI tools suddenly being removed from the market, and therefore from our productivity toolboxes; that is what would cause the sharp drop in cognitive ability that Clark argues occurs in extended cognitive systems. As a result, some might consider it a problem not worth worrying about: AI is here, and it is here to stay.
And yet, we are all abundantly aware of the developing conversation around the regulation of AI, with world leaders and CEOs voicing their concerns about its safety and the need to slow or stop the rate of AI development.
Given this, it is entirely possible that one day, our access to these AI tools will be reduced or capped, and at that stage we will be left, once again, to grapple with many of our day-to-day tasks by ourselves.
It's in this way that we could see AI changing our intelligence in interesting, and perhaps unhelpful, ways. Imagine we successfully scaffold the majority of our duties onto various AI tools, and imagine that, as a result, we have much more spare time. Instead of using this time productively, to innovate and pursue new passions, we spend more of it engrossed in our phones, in video games, and in television.
AI is carrying out much of the ‘hard’ work we would usually be faced with, and as a result, we spend more time practicing escapism. In such a situation, you can see how our intelligence would be negatively affected by the emergence of AI. And yet, we could also imagine a situation in which we use our spare time to take on new challenges, develop new competencies, and carry out new tasks. In this scenario, AI would be seen as a tool that has positively affected our intelligence, and has contributed to the development of our character and the attainment of our goals.
As a result, it seems that what is really significant to the question of how AI will shape our intelligence is the power of our own choice in the matter. Fundamentally, we have this enormously powerful tool at our disposal, and we have to decide, very carefully, what to do with it.
Given its ability to take on many of our cognitive tasks, it is vital that we consider which tasks we would be happy for AI to carry out, and which tasks we, as human beings, would like to carry out independently. The answer will undoubtedly be unique to each individual or organisation, and will be contingent on their ethical principles, their values, and their goals. What would you like to achieve? Is it safe to offload this task onto an AI? What do you like doing for its own sake? What are the inherent benefits you receive from carrying out this task?
Asking and answering these questions empowers the human, and ensures that AI is still a tool that we use, and not something that consumes who we are.
Over the last few weeks, I have been considering such questions for myself. In particular, I’ve done this in relation to the various Generative AI tools that are now available for writers looking to supercharge their work.
After experimenting with these tools at various stages of the writing process, I realised that there were some parts of the process I felt compelled to carry out by myself. In other words, I realised that some tasks were intrinsically valuable to me, as a person and as a writer, and as a result, I would not want to offload them to AI, even if it could probably do a better job.
If there is one thing I would like you to take from this article, which is something I think is especially relevant to those working in the field of AI Ethics, it would be this:
It is now essential for us to consider how we would like certain AI technologies to integrate with our tasks and skills.
What functions and tasks do we carry out that hold too much value to ourselves, and our organisations, for us to consider offloading to AI tools?
Perhaps there are certain tasks that represent and cultivate values that are important to you or your business, or perhaps there are tasks that are too ethically saturated to entrust to a piece of software. In this way, and to answer the question we began with, generative AI has changed, and will continue to change, the type of intelligence that we possess.
And more importantly, it is our total responsibility to choose how this looks.
Do you have a perspective you would like to share with others? If you are interested in contributing to the Harmless blog, e-mail firstname.lastname@example.org and tell us about your idea.
Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19. http://www.jstor.org/stable/3328150
Garrison, J. (2023). President Biden warns AI could overtake human thinking. USA Today. https://eu.usatoday.com/story/news/politics/2023/06/01/president-biden-warns-ai-could-overtake-human-thinking/70277907007/
Bedingfield, W. (2023). Musicians, Machines, and the AI-Powered Future of Sound. Wired. https://www.wired.co.uk/article/generative-ai-music
Clayton, J. (2023). Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence. BBC News. https://www.bbc.co.uk/news/world-us-canada-65616866
Thaler, S. (2023). New dating app pairs users with AI chatbots designed to combat ‘ghosting’. New York Post. https://nypost.com/2023/05/10/dating-app-launching-this-month-will-have-users-chatting-with-ai-bots/
Doyle, K. (2023). AI for Marketing: How (and Why) You Should Build an AI Marketing Strategy. Jasper.ai. https://www.jasper.ai/blog/ai-for-marketing
Andy Clark, academic profile. University of Sussex. https://profiles.sussex.ac.uk/p493-andy-clark