“I believe that ethical AI is an important consideration for investors wanting to make a positive impact. Investing in ethical AI means investing in companies and initiatives aiming to ensure that AI-enabled technology meets its goals and is aligned to human values. In other words, investing in ethical AI means investing in a vision of this technology that serves the public and promotes fairness and equality in outcomes for the people using it. This is certainly in line with the theories of change of many impact investors I know.”
– Alison Fort
Why AI ethics?
In 2018 I joined Katapult with an ambition to extend my impact journey. For 10 years I had been active in the field of impact investing and sustainability, supporting investors moving capital for good and businesses building enterprises that address the world's global challenges articulated in the UN Sustainable Development Goals.
At Katapult, the exploration of technology as a force for good and the risks inherent in the large-scale adoption of transformative technologies, such as AI, was to become a growing part of my work.
As an investor in impact technology startups at Katapult, my impact role focuses on two key questions:
- How to fund companies that leverage technology to generate positive effects for people and the planet by virtue of their operations?
- How to steward technologies as a force for good, embedding ethical considerations into decision-making to ensure a positive trajectory and avoid outcomes where technology causes great harm?
Whilst working with wealth holders at Katapult Foundation and Toniic, I often meet impact investors with an appetite to invest in tech-for-good, but rarely do they have an understanding of the ethical risks posed by transformative technologies like AI. I started questioning what role these investors could play in promoting an ethical trajectory for technology if they were equipped with the tools to do so.
Faced with this challenge, at Katapult we wanted to expand our impact framework to include an optional layer of ethical AI screening tools to apply to companies that needed this extra scrutiny. This was my first exposure to the field of AI ethics.
Wanting to learn more, in September 2021 I embarked on a two-year MSt in AI Ethics & Society at the University of Cambridge. My motivation was to ‘up my game’ in the field of AI ethics: to gather the necessary experience, skills, knowledge and network to guide investors and tech founders in examining the ethical risks of technology and making better decisions. Having been part of the first cohort, I want to reflect further on my journey through the world of AI ethics and how we are integrating its key themes into the discourse in the Katapult community.
AI: reflections on its history and role today
Artificial intelligence is cited as a driving factor of the Fourth Industrial Revolution, whereby increases in automation and the digitalisation of operations previously conducted by humans have irreversible impacts on human society. Change does not stop at transforming how industries produce economic output. AI is set to shape how we live, how we interact with each other, and even how we think about our species (hello homo deus!).
Whilst for techno-optimists AI offers paths to solving the world’s most pressing problems by accelerating scientific progress and unlocking new avenues of research, there is a risk that it could exacerbate existing inequalities and consolidate new forms of authoritarianism.
How do we manage the AI revolution in a responsible and ethical way? Which policy frameworks and regulations will we need to address accountability challenges? How can social and economic models mitigate negative effects on labour markets? These questions require us to think carefully about the values we want to promote as a society. And when it comes to start-ups, how are they explicitly considering the capabilities of their AI systems – critically interrogating the alignment between what is being optimised for and the positive outcomes they actually want to see?
During one of my first weeks on the course, Stephen Cave, the Executive Director of the Leverhulme Centre for the Future of Intelligence, provided a lecture on the history of intelligence and its role in shaping power. It is a lesson that stuck with me.
The story begins with Plato. According to Plato, the only life worth living was a life seeking truth through reason. In his Republic, this led him to conclude that the ideal form of government would have a philosopher as the ruler, as only philosophers understand reality for what it is.
Back in Ancient Greece, a society permeated by mysticism and myths, this was a powerful statement. Aristotle took the role of intelligence to another level: reason was the determining factor in shaping a natural social hierarchy, with the intellectually gifted on top, and those governed predominantly by emotions, such as women, destined to serve in the domestic sphere. The fetishisation of intelligence and its role in shaping power structures defined Western thinking through colonisation (see the “mission civilisatrice” brought to people inhabiting “uncivilised” lands).
After this historical excursus, we were prompted to think about the contemporary narratives created around artificial intelligence: is AI just an instrument, like a screwdriver that can be put back on the shelf without determining the social and physical environment around it? Or is it an ideology, a myth, that promotes hype and techno-solutionist visions of society that exclude alternative futures, such as those lived by indigenous communities?
Engaging with these questions meant reflecting on what I can do with the tools and network available to me to promote a vision of AI advancing human well-being within planetary boundaries. In other words, I started worrying about what humans might do with AI, and the narratives around it, rather than what AI might one day do by itself. This meant seeking ways to connect AI ethics more closely to my day job.
Integrating AI ethics at Katapult Future Fest
To help spread awareness of AI ethics, I integrated a dedicated discussion track into Katapult Foundation’s annual festival. Held in Oslo every year, Katapult Future Fest (KFF) brings together investors, startups, and thought leaders to collaborate and take action towards reaching and transcending the UN Sustainable Development Goals. The themes of tech-for-good and tech-for-bad are long-standing at KFF, with previous speakers on tech & society including Tristan Harris [KFF Talk here], Rumman Chowdhury and Jamie Arbib.
What future society do we want, and how will technology carry us there? One of our opening speakers this year, Anders Sandberg, asked us to consider differential technologies and how the decisions we take in the next 5 years will impact the next 50. (We will publish an interview with Anders in October.) From there, content ranged from accounts of the impact of AI-driven technology on freedom of thought, narrated by international human rights lawyer Susie Alegre, to a workshop from Holistic AI challenging the audience to consider AI risk factors and mitigations in concrete product scenarios.
In her recent book “Freedom to Think”, Susie Alegre presents a radical account of the impact that big tech is having on public opinion as well as individual agency. She challenges the idea that “ethics” is a panacea: for her, regulation and outright bans, such as a ban on “surveillance advertising”, need to be put in place to limit the power that the adoption of AI-enabled technologies has over our freedom. Alegre relates:
“When my daughter asked why she couldn’t have an Alexa like her friends, I told her that it is because Alexa steals your dreams and sells them.”
Providing a pragmatic and perhaps more hopeful antidote, we heard from Holistic AI, a London-based AI auditing and risk management startup building a bottom-up framework to assess ethical consequences. Their portfolio of clients ranges from traditional consumer-goods multinationals to emergent fintechs.
Companies using AI are increasingly investing in control measures and procedures aimed at achieving ethical outcomes, not only because of increased regulatory pressure but also because of transforming industry-level standards that demand commitment to solutions beyond technical fixes. Holistic AI provides a toolkit to help firms operate in line with standards of fairness, privacy, explainability, and robustness.
During the workshop we assessed the weighting of such ethical values against specific case studies: from a marketing campaign using AI to predict consumer responsiveness to discounts, to a recruitment agency adopting algorithms for candidate screening, to a hair-care company leveraging personal data for targeted product recommendations. Whilst the stakes are clearly different, this exercise put us in the shoes of risk managers and product owners working with AI: to deliver fair and ethical outcomes, companies must understand and score the risks AI poses across the entire lifecycle of their individual businesses. There is no one-size-fits-all.
Finally, Sara Zannone, Head of Research at Holistic AI, gave us an ignite talk on Gödel’s incompleteness theorem: in any consistent formal system rich enough to express basic arithmetic, there will always be true statements that cannot be proved within the system. The proof rests on a carefully constructed self-reference: a sentence that, in effect, says “this sentence is unprovable”, and which is therefore true yet formally unprovable.
We often see maths and science as certain and perfect, but they too are incomplete. Perhaps we should be more ready to see technological advances, including AI, through a less monolithic lens than that of seemingly linear progress. Inviting paradox, and considering competing yet coexisting alternatives, can be an antidote to simplistic tick-box approaches to “solving” AI ethics.
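For readers curious about the formal shape of that self-reference, here is a rough sketch in standard logical notation (my own gloss on the textbook construction, not taken from the talk):

```latex
% Sketch of Gödel's construction for a consistent formal system F
% that can express basic arithmetic.
% Prov_F(x) reads: "x codes a formula provable in F".
% The diagonal lemma yields a sentence G asserting its own unprovability:
\[
  F \vdash \; G \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
\]
% If F is consistent, F cannot prove G; and since G says exactly
% that it is unprovable, G is true. Hence a true statement F cannot prove.
```

The striking part is that the self-reference is not a loose verbal paradox: “this sentence is unprovable” is encoded as a precise statement about numbers.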
It’s only the beginning…
Bringing it back to the potential for impact investors: my journey here continues.
I believe that ethical AI is an important consideration for investors wanting to make a positive impact. Investing in ethical AI means investing in companies and initiatives aiming to ensure that AI-enabled technology meets its goals and is aligned to human values. In other words, investing in ethical AI means investing in a vision of this technology that serves the public and promotes fairness and equality in outcomes for the people using it. This is certainly in line with the theories of change of many impact investors I know.
But what does this look like in practice? Year two of my Master’s programme will include research on the role of capital in AI startups. How are investors integrating impact into venture capital? Can ethical investing lead to ethical AI?
I will report back when I have more to say. In the meantime, if I reach out to you for a call or meeting to aid my work (you know who you are), please reply to the email!