AI is a Human Endeavor

An interview on AI regulation, existential-threat narratives, and the need for public discourse about AI

Mhairi Aitken
Aug 29, 2023

As AI advances, calls for regulation are increasing. But viable regulatory policies will require a broad public debate. We spoke with Mhairi Aitken, Ethics Fellow at the UK's Alan Turing Institute, about the current discussions on risks, AI regulation, and visions of shiny robots with glowing brains.

devmio: Could you please introduce yourself to our readers and tell us a bit about why you are concerned with machine learning and artificial intelligence?

Mhairi Aitken: My name is Mhairi Aitken, I’m an ethics fellow at the Alan Turing Institute. The Alan Turing Institute is the UK’s National Institute for AI and data science, and as an ethics fellow, I look at the ethical and social considerations around AI and data science. I work in the public policy program, where our work is mostly focused on uses of AI within public policy and government, but also in relation to policy and government responses to AI, that is, the regulation of AI and data science.

devmio: For our readers who may be unfamiliar with the Alan Turing Institute, can you tell us a little bit about it? 

Mhairi Aitken: The Institute is publicly funded, but our research is independent. Our work has three main aims. First, advancing world-class research and applying it to national and global challenges.

Second, building skills for the future. That applies both to technical skills, training the next generation of AI and data scientists, and to developing skills around ethical and social considerations and regulation.

Third, part of our mission is to drive an informed public conversation. We have a role in engaging with the public, as well as policymakers and a wide range of stakeholders, to ensure that there’s an informed public conversation around AI and the complex issues surrounding it, and to clear up some of the misunderstandings often present in public conversations around AI.

devmio: In your talk at Devoxx UK, you said that it’s important to demystify AI. What exactly is the myth surrounding AI?

Mhairi Aitken: There’s quite a few different misconceptions. Maybe one of the biggest ones is that AI is something that is technically super complex and not something everyday people can engage with. That’s a really important myth to debunk because often there’s a sense that AI isn’t something people can easily engage with or discuss. 

As AI is already embedded in all our individual lives and is having impacts across society, it’s really important that people feel able to engage in those discussions and that they have a say and influence the way AI shapes their lives. 

On the other hand, there are unfounded and unrealistic fears about what risks it might bring into our lives. There’s lots of imagery around AI that gets repeated, of shiny robots with glowing brains and this idea of superintelligence. These widespread narratives around AI come back again and again, and are very present within the public discourse. 

That’s a distraction and it creates challenges for public engagement and having an informed public discussion to feed into policy and regulation. We need to focus on the realities of what AI is and in most cases, it’s a lot less exciting than superintelligence and shiny robots.

devmio: You said that AI is not just a complex technical topic, but something we are all concerned with. However, many of these misconceptions stem from the problem that the core technology is often not well understood by laymen. Isn’t that a problem?

Mhairi Aitken: Most of the players in big tech are pushing this idea of AI being about superintelligence, something far-fetched, and that’s closing down the discussions. It creates the sense that AI is more difficult to explain, or more difficult to grasp, than it actually is. To have an informed conversation, we need to do a lot more work in that space and give people the confidence to engage in meaningful discussions around AI.

And yes, it’s important to enable enough of a technical understanding of what these systems are, how they’re built and how they operate. But it’s also important to note that people don’t need to have a technical understanding to engage in discussions around how systems are designed, how they’re developed, in what contexts they’re deployed, or what purposes they are used for. 

Those are political, economic, and cultural decisions made by people and organizations. Those are all things that should be open for public debate. That’s why, when we talk about AI, it’s really important to talk about it as a human endeavor. It’s something which is created by people and is shaped by decisions of organizations and people. 

That’s important because it means that everyone’s voices need to be heard within those discussions, particularly communities who are potentially impacted by these technologies. But if we present it as something very complex which requires a deep technical understanding to engage with, then we are shutting down those discussions. That’s a real worry for me.

devmio: If the narrative of superintelligence as an existential threat to humanity is a distraction from the real problems of AI, pushed by Big Tech, then what are those problems?

Mhairi Aitken: A lot of the AI systems that we interact with on a daily basis are opaque systems that make decisions about people’s lives, in everything from policing to immigration, social care and housing, or algorithms that make decisions about what information we see on social media. 

Those systems rely on or are trained on data sets, which contain biases. This often leads to biased or discriminatory outcomes and impacts. Because the systems are often not transparent in the ways that they’re used or have been developed, it makes it very difficult for people to contest decisions that are having meaningful impacts on their lives. 

In particular, marginalized communities, who are typically underrepresented within development processes, are most likely to be impacted by the ways these systems are deployed. This is a really, really big concern. We need to find ways of increasing diversity and inclusiveness within design and development processes to ensure that a diverse set of voices and experiences are reflected, so that we’re not just identifying harms when they occur in the real world, but anticipating them earlier in the process and finding ways to mitigate and address them.
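
To make that concrete, a first step in auditing such a system is often to compare decision rates across demographic groups. The following minimal sketch uses synthetic stand-in data; the column names, numbers, and metric choice are illustrative assumptions, not drawn from any real system:

```python
# Minimal sketch of a group-disparity check on an automated decision
# system. All data here is a synthetic stand-in; a real audit needs
# real outcome data, domain context, and more than one metric.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Selection rate per group: the share of positive decisions each group receives.
rates = decisions.groupby("group")["approved"].mean()
print(rates)  # A: 0.75, B: 0.25

# Demographic parity difference: the gap between the best- and
# worst-treated group. A large gap is a signal to investigate,
# not proof of discrimination on its own.
gap = rates.max() - rates.min()
print(f"demographic parity difference: {gap:.2f}")  # 0.50
```

A check like this only surfaces symptoms, of course; the deeper fix Aitken points to is diversity and transparency in the development process itself.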

At the moment, there are also particular concerns and risks that we really need to focus on concerning generative AI. For example, misinformation, disinformation, and the ways generative AI can produce increasingly realistic images, as well as deepfake videos and synthetic or cloned voices. These technologies are leading to the creation of very convincing fake content, raising real concerns about the potential spread of misinformation that might impact political processes.

It’s not just becoming increasingly hard to spot that something is fake. There’s also a widespread concern that it is increasingly difficult to know what is real. But we need access to trustworthy and accurate information about the world for a functioning democracy. When we start to question everything as potentially fake, we’re in a very dangerous place in terms of interference in political and democratic processes.

I could go on, but there are very real, concrete examples of harms AI is already causing today, and they disproportionately impact marginalized groups. A lot of the narratives of existential risk we currently see are coming from Big Tech and are mostly being pushed by privileged or affluent people. When we think about AI and how we address the risks around it, we shouldn’t center the voices of Big Tech, but the voices of impacted communities.

devmio: A lot of misinformation is already on the internet and social media without the addition of AI and generative AI. So potential misuse on a large scale is a big concern for democracies. How can western societies regulate AI, whether at the EU level or on a global scale? How do we regulate a new technology while also allowing for innovation?

Mhairi Aitken: There definitely needs to be clear and effective regulation around AI. But I think that the dichotomy between regulation and innovation is false. For a start, we don’t just want any innovation. We want responsible and safe innovation that leads to societal benefits. Regulation is needed to make sure that happens and that we’re not allowing or enabling dangerous and harmful innovation practices.

Also, regulation provides the conditions of certainty and confidence for innovation. The industry needs to have confidence in the regulatory environment and needs to know what the limitations and boundaries are. I don’t think regulation should be seen as a barrier to innovation. It provides the guardrails, clarity, and certainty that are needed.

Regulation is really important and there are some big conversations around it at the moment. The EU AI Act is likely to set an international standard for what regulation will look like in this regard. It’s going to have a big impact, in the same way GDPR did with data protection. Soon, any organization operating in the EU, or exporting an AI product to the EU, is going to have to comply with the EU AI Act.

We need international collaboration on this.

devmio: The EU AI Act was drafted before ChatGPT and other LLMs became publicly available. Is the regulation still up to date? How is an institution like the EU supposed to catch up to the incredible advancements in AI?

Mhairi Aitken: It’s interesting that over the last few months, developments with large language models have forced us to reconsider some elements of what was being proposed and developed, particularly around general purpose AI. Foundation models like large language models that aren’t designed for a particular purpose can be deployed in a wide range of contexts. Different AI models or systems are built on top of them as a foundation.

That’s posed some specific challenges around regulation. Some of this is still being worked out. There are big challenges for the EU, not just in relation to foundation models. AI encompasses so many things and is used across all industries, across all sectors in all contexts, which poses a big challenge. 

The UK approach to regulation of AI has been quite different to that proposed in the EU: the UK set out a pro-innovation approach, a set of principles intended to equip existing UK regulatory bodies to grapple with the challenges of AI. It recognized that AI is already being used across all industries and sectors, which means that all regulators have to deal with how to regulate AI in their own sectors.

In recent weeks and months in the UK, we have seen an increasing emphasis on regulation of AI and increased attention to the importance of developing effective regulation. But I have some concerns that this change of emphasis has, at least in part, come from Big Tech. We’ve seen this in the likes of Sam Altman on his tour of Europe, speaking to European regulators and governments. Many of the voices talking about the existential risk AI poses come from Silicon Valley, and this is now beginning to influence policy and regulatory discussions, which is worrying. It’s a positive thing that we’re having these discussions about regulation and AI, but we need them to focus on real risks and impacts.

devmio: The idea of existential threat posed by AI often comes from a vision of self-conscious AI, something often called strong AI or artificial general intelligence (AGI). Do you believe AGI will ever be possible?

Mhairi Aitken: No, I don’t believe AGI will ever be possible, and I don’t believe the claims being made about an existential threat. These claims are a deliberate distraction from the discussion of regulating current AI practices. The claim is that the technology, AI itself, poses a risk to humanity and therefore needs regulation, when in reality it is companies and organizations that are making the decisions about that technology. That’s why I think this narrative is being pushed, but it’s never going to be real. AGI belongs in the realm of sci-fi.

There are huge advancements in AI technologies and what they’re going to be capable of doing in the near future is going to be increasingly significant. But they are still always technologies that do what they are programmed to do. We can program them to do an increasing number of things and they do it with an increasing degree of sophistication and complexity. But they’re still only doing what they’re programmed for, and I don’t think that will ever change. 

I don’t think it will ever happen that AI will develop its own intentions, have consciousness, or a sense of itself. That is not going to emerge or be developed in what is essentially a computer program. We’re not going to get to consciousness through statistics. There’s a leap there and I have never seen any compelling evidence to suggest that could ever happen.

We’re creating systems that act as though they have consciousness or intelligence, but this is an illusion. It fuels a narrative that’s convenient for Big Tech because it deflects away from their responsibility and suggests that this isn’t about a company’s decisions.

devmio: Sometimes it feels like the discussions around AI have become a playing field for societal discourse in general: a space for modern society to discuss its current state, its relation to technology, its conception of what it means to be human, and even metaphysical questions about God-like AI. Is there some truth to this?

Mhairi Aitken: There’s lots of discussions about potential future scenarios and visions of the future. I think it’s incredibly healthy to have discussions about what kind of future we want and about the future of humanity. To a certain extent this is positive.

But the focus has to be on the decisions we make as societies, not on hypothetical, far-fetched scenarios of superintelligent computers. The conversations that focus on future risks have a large platform, but they give a voice only to Big Tech players and very privileged voices with significant influence. Instead, these discussions should be happening at a much wider societal level.

The conversations we should be having are about how we harness the value of AI as a set of tools and technologies. How do we benefit from them to maximize value across society and minimize the risks of technologies? We should be having conversations with civil society groups and charities, members of the public, and particularly with impacted communities and marginalized communities.

We should be asking what their issues are, how AI can find creative solutions, and where we could use these technologies to bring benefit and advocate for the needs of community groups, rather than being driven by commercial for-profit business models. These models are creating new dependencies on exploitative data practices without really considering if this is the future we want.

devmio: In the Alan Turing Institute’s strategy document, it says that the institute will make great leaps in AI development in order to change the world for the better. How can AI improve the world?

Mhairi Aitken: There are lots of brilliant things that AI can do in the area of medicine and healthcare that would have positive impacts. For example, there are real opportunities for AI to be used in developing diagnostic tools. If the tools are designed responsibly and for inclusive practices, they can have a lot of benefits. There’s also opportunities for AI in relation to the environment and sustainability in terms of modeling or monitoring environments and finding creative solutions to problems.

One area that really excites me is where AI can be used by communities, civil society groups, and charities. At the moment, there’s an emphasis on large language models. But actually, when we think about smaller AI, there’s real opportunities if we see them as tools and technologies that we can harness to process complex information or automate mundane tasks. In the hands of community groups or charities, this can provide valuable tools to process information about communities, advocate for their needs, or find creative solutions.

devmio: Do you have examples of AI used in the community setting?

Mhairi Aitken: For example, community environment initiatives or sustainability initiatives can use AI to monitor local environments, or identify plant and animal species in their areas through image recognition technologies. It can also be used for processing complex information, finding patterns, classifying information, and making predictions or recommendations from information. It can be useful for community groups to process information about aspects of community life and develop evidence needed to advocate for their needs, better services, or for political responses.
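
To give a sense of how low the technical barrier to this kind of use has become, here is a minimal sketch of species identification with an off-the-shelf pretrained image classifier. The model choice and file name are assumptions for illustration; a community project would more likely want a model fine-tuned on local species:

```python
# Illustrative sketch: identifying a plant or animal in a photo with an
# off-the-shelf pretrained classifier (ImageNet includes many common
# species). The model and the file name are assumptions for illustration.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT        # pretrained ImageNet weights
model = resnet50(weights=weights).eval()  # inference mode, no training
preprocess = weights.transforms()         # matching resize/crop/normalize

img = Image.open("local_wildlife_photo.jpg")  # hypothetical input image
batch = preprocess(img).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# Report the three most likely classes with their confidence scores.
top = probs.topk(3)
labels = weights.meta["categories"]
for p, idx in zip(top.values[0], top.indices[0]):
    print(f"{labels[idx]}: {p:.1%}")
```

The same pattern, a pretrained model behind a thin wrapper, is what puts the information-processing and classification uses Aitken describes within reach of small community groups.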

A lot of big innovation happens in commercially driven development, which leads to commercial products rather than to thinking about how these tools can be used for societal benefit on a smaller scale. Shifting that framing helps us think about who we’re developing these technologies for, and how that relates to different visions of a future that benefits from this technology.

devmio: What do you think is needed to reach this point?

Mhairi Aitken: We need much more open public conversations and demands for transparency and accountability relating to AI. That’s why it’s important to counter the sensational, unrealistic narratives and make sure we focus on regulation, policy, and public conversation. All of us must focus on the here and now and on the decisions of the companies leading the way, in order to hold them accountable. We must ensure meaningful and honest dialogue, as well as transparency about what’s actually happening.

devmio: Thank you for taking the time to talk with us and we hope you succeed with your mission to inform the public.
