A word from Matt Bain, Marketing Director at Spark

 


This month we explore the ethics of AI in relation to business, politics and data sovereignty. Future State speakers Danielle Krettek Cobb, Dr Jonnie Penn and Mikaela Jade share their insights.

 


Danielle Krettek Cobb
Google

Danielle is a trailblazing force in empathic design, with two decades of work grounded equally in science and soul. Her work has transformed some of the world's largest organisations and tech companies. At Google, Danielle founded the Google Empathy Lab. She works with teams from AI and Bard to ATAP, Devices, Inclusion and Crisis Response.

Jonnie Penn
University of Cambridge

Dr Jonnie Penn, FRSA, is a professor of AI Ethics and Society at the University of Cambridge. He is a historian of technology, a #1 New York Times bestselling author and a sought-after public speaker. He was formerly an MIT Media Lab Assembly Fellow, a Google Technology Policy Fellow, a Fellow of the British National Academy of Writing and a popular broadcaster.

Mikaela Jade
Indigital

Mikaela founded the Indigenous edu-tech company Indigital. She seeks to develop innovative ways to digitise and translate knowledge and culture from remote and ancient communities. Her aim is for Indigital to help create meaningful pathways for Indigenous people into the digital economy and the creation of future technologies.

 

Technology and emotional safety

How technology helps brands embrace emotional safety

Our chat, our banter, our kōrero is all “grist to the mill for AI”, according to Danielle Krettek Cobb. She has spent years working with the large language models that sit behind AI’s various applications, and she points out that chatbots trained on people’s conversations provide insight into how people feel because “language is really loaded with emotion.”
Speaking at the Future State event, Danielle noted that AI is also used to map where people go online, and that data is then used to steer them around the Internet – in good and bad directions. This leads to questions about emotional safety and what that means for businesses adopting AI and other technologies.
“For businesses, what’s cool is that because the social sciences have developed, we know how these things work for people, what’s healthy, we just need to apply that to these spaces,” she says.
With technology now such an intuitive part of our daily lives, the technology itself isn’t the challenge. For Danielle, this means “we’re up against our human stuff: where are our limitations? What’s hard for us to do?”

Brands moving into the well-being space

If successful brands can have an emotional impact on our lives, does it follow that business has an active role to play in individual and collective wellbeing?
Clothing brand Patagonia is, according to Danielle, a stand-out for walking the talk when it comes to its purpose and values. The brand is famed for saying, “Don’t buy this jacket”, the idea being that you don’t need a new item of clothing until the one you have is so worn out it can’t be sewn back together.
She says it’s courageous for companies to say they are interested in the whole person – the pretty and the not-so-pretty. To show that they care about everyone, not just those who contribute to their bottom line, and ask “how can we take a bigger view?”
 

AI and the state of play

We’ve only just begun

Of all the fictional heroes to conjure, it’s Tintin who springs to Jonnie Penn’s mind when asked about the state of AI today. Penn appears to have a penchant for the intrepid reporter, his dog and their motley band of problem solvers.

This simply drawn twentieth-century comic often explores politics, history, culture and technology, which makes it an apt metaphor for someone whose job it is to think about how AI and humanity will rub along together. Penn references a scene in which a character remarks that it’s been a tough week, and Tintin replies that it’s only Wednesday. And that, according to Penn, is where we are with AI.

“We are mid-swing on some major developments,” he says, noting that ChatGPT, which was only launched last November, is the first AI tool to go mainstream. 

Penn, who spoke at the Future State event in Auckland presented by Spark Lab and Semi Permanent, says we are in a phase of ‘capability overhang’, a term coined by AI policy expert Jack Clark. It means that AI is so big and complex that we don’t yet know all the uses it can be put to.

The next industries set to feel the impacts of AI

Penn says agriculture, logistics and construction will be impacted by AI because they are industries where precision really matters. He cites three examples of AI tools that are faster, more efficient, and better for the environment than current options. 

First up, in agriculture, is ‘laser weeding’. While it sounds like science fiction, it’s an AI application that has the potential to eliminate the use of chemicals on the land.

“If you can drive an autonomous vehicle over the crop and shoot lasers to kill the individual weeds budding among the lettuce leaves, then you don’t need to use herbicides. So, this sounds crazy, but these kinds of robotic weed killers can kill 200,000 weeds in an hour,” he says.

In the area of logistics, Penn cites a company using AI to take the air out of cardboard boxes. “They’ve built a system that automatically fabricates a cardboard box just for the parameters of the thing you’ve ordered,” he explains.

In construction, Penn explains how AI can be used for predictive maintenance. “If you want to survey the top of a chimney, getting a human up there to look can be dangerous, expensive, and laborious. Instead, you can fly a drone over it and use pattern recognition, AI, and machine learning tools to identify any degradation that is happening that will need to be addressed.”

Generative AI and compensation for artists

Feeding images of chimney rot into AI models is one thing, but what about works of art? What about the paintings, sketches and drawings that are feeding the AI tools that generate illustrations in seconds? How do the artists who created these singular visions get compensated for their hard work?

“I think the sooner we can figure out the kind of copyright questions around this tool, which is brand new… the sooner we can start to direct the benefits from these systems,” Penn says. 

He also notes that AI tools are creating something new artistically. “There’s also a lot of creative work being done with these tools that is seemingly novel,” Penn says. 

Greater participation in global commerce

Meanwhile, AI and the large language models that feed it can enable greater participation in the global economy for those who have been historically excluded. “Much of it [global commerce] is in English, and this will allow them [non-English speakers] a bit of agency that they might not have had before because of proofreading, and just getting that extra bit of effort can mean they will be taken seriously for their ideas,” he says.

“Of course, these products are free now. They may not be free in the future. But that’s a whole separate conversation.”  

 


Ethics of AI

Ethics in AI as a competitive advantage

Being ethical in the way your business approaches AI can be your competitive advantage. That’s the advice from Jonnie Penn.

We’re all abuzz about AI because of ChatGPT, which was unleashed on humanity just over six months ago. However, it wasn’t the first AI chatbot of its kind. Microsoft famously released a similar service a few years ago, but within hours it was spitting out vile statements and had to be withdrawn. This was because the tech giant hadn’t invested in guardrails to stop its AI tool from being corrupted by hate speech. OpenAI – the company that created ChatGPT, aided by Microsoft’s public cloud Azure – is doing that work and is enjoying unprecedented success as a result. However, ChatGPT is in no way a finished product.

“If you’re a business interested in AI, it’s important to remember that it is a socio-technical tool. So, it’s not purely technical, it has a social component,” he says. 

AI isn’t another version of being human

“The best hack I have for businesses thinking about using AI… is to let go of the neural metaphor,” Penn says. 

That is, to view AI as a logistical tool, not as a cognitive tool that processes information in the same way humans do. AI might not be sentient, but it is powerful, so businesses need to be thoughtful in how they approach its vast capabilities.

Penn advises organisations to start small: to think of one area of their business that could benefit from what is essentially a statistical tool capable of ‘pattern recognition on steroids’.

What can’t AI do?

Conversely, Penn says that businesses need to invest time in creating a ‘Decomputerisation Statement’ – a list of all the things a company does that can’t be automated. These could be areas grounded in an organisation’s culture or history that give its product or service a unique selling point.

“It may be there are lots of different things that would give a brand or business its identity. But if you can capture that, I think it helps as a kind of counterbalance to this excitement about automation. You need the best of the human and the best of these automated tools,” he says. 

Targeted, not blanket, pause on AI

The field of AI ethics, in which Penn participates, has been slow to develop the idea of consent, he says. “Until one can comfortably say ‘No’ to AI tools, the relationships involved are likely not premised on informed consent.”

With AI development dominated by a handful of companies, there is talk of a need for global governance. Many experts are now calling for a six-month pause on AI development. Is any of this necessary?

According to Penn, we need to think about AI as it is applied in different ways, rather than take a blanket approach. “I think the energy should go towards targeted kinds of prohibitions,” he says. 

An example of a ‘fundamentally dubious’ use of technology is predictive policing, the idea that AI can identify potential crime and, by extension, potential criminals. “That’s just unsound, they’re not good ideas, you cannot predict crime,” he says.

Penn points out that a London municipality has voted to ban facial recognition technologies because of their negative impacts. That community decided collectively that just because you can use AI tools doesn’t mean you always should.


 

The jobs at risk from AI

Employment in the age of AI

Everyone in every occupation – from teachers to cleaners, artists to journalists – is wondering what AI will mean for their ability to earn a living. They ask: if a machine can do part, or all, of my job, where does that leave me?

“The knee-jerk reaction to conversations about AI and job loss should be retraining,” says Jonnie Penn.

He says that AI tools can’t replace the knowledge an employee has. So instead of replacing people with AI, it’s better to help people learn to work alongside these tools. An example might be to turn today’s contact centre representatives into tomorrow’s prompt engineers, feeding questions into AI models. 

“Human employees are not totally replaceable, it’s more about working with the strengths of what humans can do – empathy, care, judgement – alongside these tools, which are more equipped to fast-paced decision making,” he says. “We should be mindful of keeping humans in the loop.” 

In the US, Penn says teachers can earn up to $40 an hour training a model to do their job, but there is more to teaching than imparting rote learning. “Teachers help you discover who you can be,” he says. 

Talk to the people targeted for replacement

In theory, an AI tool might be able to perform a role faster, more effectively and for far less money, but what is the intangible benefit that a human brings? While it might seem like an awkward conversation to have, Penn recommends that before deploying an AI solution, businesses talk to the people whose jobs will be impacted. 

“The first call I would make if I was the CEO of a company thinking about using AI is to go to the people I’m thinking of replacing, and say: ‘Here’s what this can do. How can we work together to make the most of what you know and what it can do?’ I think that will give businesses an advantage that is higher than just layoffs,” he says.

He cites the example of a New York hotel that wanted VIPs to be able to check into their rooms early, so it used an AI tool to work out which rooms needed cleaning first, making more rooms available for early arrivals. But the workers on the ground explained that obeying the AI tool’s instructions would mean pushing heavy carts from one side of the large hotel to the other after each room was cleaned – and that didn’t make sense from an efficiency perspective.

Cultural appropriation

A fierce legal battle is raging over the use of artists’ work – even the essence of what they produce – to create AI imagery, and over how this appropriation can be compensated in a way that is fair and equitable.

If artists have no ability to earn a living, we may find there are fewer of them in a world beset by climate anxiety and challenges to democracy – a time when their talent and perspective are more important than ever, Penn says.

“I think it mistakes the role of an artist in society to overemphasise just the output, because it’s more about what being an artist means… the kind of brilliance they bring to the world, and how they see what is important, what they value.”

Journalism matters

Along with artists, journalists have struggled in the age of the internet, and now face new challenges in the age of AI. Their emphasis on balanced and accurate reporting can look quaint when you consider the misinformation and fake news generated by AI tools. 

Penn is hopeful that more people will begin to question the diet of news they consume and seek out credible information sources. In doing so, they may force tech companies to accept the same accountabilities ascribed to publishers.

“A difference in terms of the future of quality information versus disinformation is to hold tech companies to account, to treat them as publishers. The same as newspapers, because they will have the kind of infrastructure to start to do the work of weeding out the kind of garbage that unnecessarily disrupts our daily lives or our democratic societies,” Penn says. 

 


 

Digital twins and the challenge of data sovereignty

 

Entire Pacific nations are creating digital twins of their countries – full-scale, photorealistic digital copies of something that exists in the real world – because they know the land will go underwater due to the impact of climate change.

These digital twins will contribute to the Metaverse, a virtual space where people can have experiences beyond what is possible in the physical world. Technologies like virtual reality and augmented reality, aided by developments in machine learning and artificial intelligence, are bringing the Metaverse to life. 

It’s a space that Mikaela Jade, founder of Indigital, is intimately familiar with, as a member of the World Economic Forum Metaverse Governance Steering Group and a delegate to the United Nations Permanent Forum on Indigenous Issues.

“As First Nations people we have a very large, vested interest in what is on our country. So, we therefore have a large, vested interest in the digital copy of the country also,” she says. 

A huge movement focused on data sovereignty has begun across the world. This has implications for anyone creating a digital twin, as they need to think about Indigenous rights when using data and information.

Otherwise, Indigenous people will become as dispossessed in the virtual world as they have been in the physical world. Mikaela cites an example near the Parramatta River, where a woman created a maternity hospital for Aboriginal and Torres Strait Islander women who could not otherwise give birth safely. The site was later earmarked for destruction and, despite Indigenous people protesting, it was destroyed in December 2021.

“Because it had a heritage building on it, there was a requirement to create a digital twin before they destroyed it. So, there is a digital twin of that very important and special country… but we don’t have access to the digital twin, and we also now can’t go to the site.” 

Mikaela points to this example to show the differences between Western cultures and First Nations cultures when it comes to the significance they attach to objects and places. To the New South Wales government, the digital twin of the maternity hospital is a collection of data about a building that once existed. For First Nations people, though, it is a representation of their ancestors, “our knowledge systems and our language, and everything.”

Questions remain as to how Indigenous people can participate in the process of creating a digital twin, how they can access the data, and how they can have agency and autonomy over how that digital twin is used in the future.


Tips for handling misinformation in the age of AI


Learn how to stay on the path of reality when, all around us, AI is being weaponised to spread fake news and misinformation.

Read more

Hear more from Jonnie Penn and Danielle Krettek Cobb


Explore last month's theme of Technology, Sustainability and Community to hear more insights from our Future State speakers.

Read more

Future State: New Realities

Want to hear more from our Future State key speakers?

Sign up to Spark Lab to get notified when we release new articles, podcasts, videos, tips and more from the event. 

Explore Future State