This month we explore the ethics of AI in relation to business, politics and data sovereignty. Future State speakers Danielle Krettek Cobb, Dr Jonnie Penn and Mikaela Jade share their insights.
Of all the fictional heroes to conjure, it’s Tintin that springs to Jonnie Penn’s mind when asked about the state of AI today. Penn appears to have a penchant for the intrepid reporter, his dog and motley band of problem solvers.
This simply drawn cartoon from the twentieth century often explores politics, history, culture and technology, making it an apt metaphor for someone whose job it is to think about how AI and humanity will rub along together. Penn references a scene in which a character remarks that it’s been a tough week, and Tintin replies that it’s only Wednesday. And that, according to Penn, is where we are with AI.
“We are mid-swing on some major developments,” he says, noting that ChatGPT, which was only launched last November, is the first AI tool to go mainstream.
Penn, who was at the Future State event in Auckland presented by Spark Lab and Semi Permanent, says we are in a phase of ‘capability overhang’, a term coined by AI policy expert Jack Clark. It means that AI is so big and complex that we don’t yet know all the uses it can be put to.
Penn says agriculture, logistics and construction will be impacted by AI because they are industries where precision really matters. He cites three examples of AI tools that are faster, more efficient, and better for the environment than current options.
First up, in agriculture, is ‘laser weeding’. While it sounds like science fiction, it’s an AI application that has the potential to eliminate the use of chemicals on the land.
“If you can drive an autonomous vehicle over the crop and shoot lasers into budding lettuce leaves and kill the individual weeds, then you don’t need to use herbicides. So, this sounds crazy but these kinds of robotic weed killers can kill 200,000 weeds in an hour,” he says.
In the area of logistics, Penn cites a company using AI to take the air out of cardboard boxes. “They’ve built a system that automatically fabricates a cardboard box just for the parameters of the thing you’ve ordered,” he explains.
In construction, Penn explains how AI can be used for predictive maintenance. “If you want to survey the top of a chimney, getting a human up there to look can be dangerous, expensive, and laborious. Instead, you can fly a drone over it and use pattern recognition, AI, and machine learning tools to identify any degradation that is happening that will need to be addressed.”
Images of chimney rot being fed into AI models are one thing, but what about works of art? What about the paintings, sketches and drawings that are feeding AI tools that generate illustrations in seconds? How do the artists, who created these singular visions, get compensated for their hard work?
“I think the sooner we can figure out the kind of copyright questions around this tool, which is brand new… the sooner we can start to direct the benefits from these systems,” Penn says.
He also notes that AI tools are creating something new artistically. “There’s also a lot of creative work being done with these tools that is seemingly novel,” Penn says.
Meanwhile, AI and the large language models that underpin it can enable greater participation in the global economy for those who have been historically excluded. “Much of it (global commerce) is in English, and this will allow them (non-English speakers) a bit of agency that they might not have had before because of proofreading, and just getting that extra bit of effort can mean they will be taken seriously for their ideas,” he says.
“Of course, these products are free now. They may not be free in the future. But that’s a whole separate conversation.”
Being ethical in the way your business approaches AI can be your competitive advantage. That’s the advice from Jonnie Penn.
We’re all abuzz about AI because of ChatGPT, which was unleashed onto humanity just over six months ago. However, it wasn’t the first AI chatbot of its kind. Microsoft famously released a similar service a few years ago, but within hours it was spitting out vile statements and had to be withdrawn. This was because the tech giant hadn’t invested in guard rails to stop its AI tool from being perverted by hate speech. OpenAI – the company that created ChatGPT, which is aided by Microsoft’s public cloud Azure – is doing that work, and is enjoying unprecedented success as a result. However, ChatGPT is in no way a finished product.
“If you’re a business interested in AI, it’s important to remember that it is a socio-technical tool. So, it’s not purely technical, it has a social component,” he says.
“The best hack I have for businesses thinking about using AI… is to let go of the neural metaphor,” Penn says.
That is, to view AI as a logistical tool, not as a cognitive tool that processes information in the same way as humans. AI might not be sentient, but it is powerful, so businesses need to be thoughtful in how they approach using its vast capabilities.
Penn advises organisations to start small. To think of one area of their business that could benefit from what is essentially a statistical tool that is capable of ‘pattern recognition on steroids’.
Conversely, Penn says that businesses need to invest time in creating a ‘Decomputerisation Statement’ – that is, a list of all the things a company does that can’t be automated. Penn says it could be in areas that are grounded in an organisation’s culture or history that give their product or service its unique selling point.
“It may be there are lots of different things that would give a brand or business its identity. But if you can capture that, I think it helps as a kind of counterbalance to this excitement about automation. You need the best of the human and the best of these automated tools,” he says.
The field of AI ethics, in which Penn participates, has been slow to develop the idea of consent, he says. “Until one can comfortably say ‘No’ to AI tools, the relationships involved are likely not premised on informed consent.”
With AI being dominated by a handful of companies, there is talk of a need for global governance. Many experts are now calling for a six-month pause on AI development. Is any of this necessary?
According to Penn, we need to think about AI as it is applied in different ways, rather than take a blanket approach. “I think the energy should go towards targeted kinds of prohibitions,” he says.
An example of a ‘fundamentally dubious’ use of technology is predictive policing, the idea that AI can identify potential crime, and by extension, potential criminals. “That’s just unsound, they’re not good ideas, you cannot predict crime,” he says.
Penn points out that a London municipality has voted to ban facial recognition technologies because of their negative impacts. That community decided collectively that just because you can use AI tools, it doesn’t always mean you should.
Everyone in every occupation – from teachers and cleaners to artists and journalists – is wondering what AI will mean for their ability to earn a living. They ask the question: ‘If a machine can do part, or all, of my job, where does that leave me?’
“The knee-jerk reaction to conversations about AI and job loss should be retraining,” says Jonnie Penn.
He says that AI tools can’t replace the knowledge an employee has. So instead of replacing people with AI, it’s better to help people learn to work alongside these tools. An example might be to turn today’s contact centre representatives into tomorrow’s prompt engineers, feeding questions into AI models.
“Human employees are not totally replaceable, it’s more about working with the strengths of what humans can do – empathy, care, judgement – alongside these tools, which are more equipped to fast-paced decision making,” he says. “We should be mindful of keeping humans in the loop.”
In the US, Penn says teachers can earn up to $40 an hour training a model to do their job, but there is more to teaching than imparting rote learning. “Teachers help you discover who you can be,” he says.
In theory, an AI tool might be able to perform a role faster, more effectively and for far less money, but what is the intangible benefit that a human brings? While it might seem like an awkward conversation to have, Penn recommends that before deploying an AI solution, businesses talk to the people whose jobs will be impacted.
“The first call I would make if I was the CEO of a company thinking about using AI is to go to the people I’m thinking of replacing, and say: ‘Here’s what this can do. How can we work together to make the most of what you know and what it can do?’ I think that will give businesses an advantage that is higher than just layoffs,” he says.
He cites the example of a New York hotel that wanted VIPs to be able to check into their rooms early, so it used an AI tool to work out which rooms needed cleaning first, making more rooms available sooner for arriving guests. But the workers on the ground pointed out that obeying the AI tool’s instructions would mean pushing heavy carts from one side of the large hotel to the other after each room was cleaned, and that didn’t make sense from an efficiency perspective.
There is a fierce legal battle raging over the fairness of using artists’ work – even the essence of what they produce – to create AI imagery, and how this appropriation can be compensated in a way that is fair and equitable. If artists have no ability to earn a living, we may find there are fewer of them in a world beset by climate anxiety and challenges to democracy – a time when their talent and perspective are more important than ever, Penn says.
Along with artists, journalists have struggled in the age of the internet, and now face new challenges in the age of AI. Their emphasis on balanced and accurate reporting can look quaint when you consider the misinformation and fake news generated by AI tools.
Penn is hopeful that more people will begin to question the diet of news they consume and look for credible information sources. In doing so, they may force tech companies to accept the same accountabilities ascribed to publishers.
“A difference in terms of the future of quality information versus disinformation is to hold tech companies to account, to treat them as publishers. The same as newspapers, because they will have the kind of infrastructure to start to do the work of weeding out the kind of garbage that unnecessarily disrupts our daily lives or our democratic societies,” Penn says.
Entire Pacific nations are creating digital twins of their countries – full-scale, photorealistic digital copies of something that exists in the real world. This is because they know their land will go underwater due to the impact of climate change.
These digital twins will contribute to the Metaverse, a virtual space where people can have experiences beyond what is possible in the physical world. Technologies like virtual reality and augmented reality, aided by developments in machine learning and artificial intelligence, are bringing the Metaverse to life.
It’s a space that Mikaela Jade, founder of Indigital, is intimately familiar with, as a member of the World Economic Forum Metaverse Governance Steering Group and a delegate to the United Nations Permanent Forum on Indigenous Issues.
“As First Nations people we have a very large, vested interest in what is on our country. So, we therefore have a large, vested interest in the digital copy of the country also,” she says.
A huge movement focused on data sovereignty has begun across the world. This has implications for anyone creating a digital twin, as they need to think about indigenous rights when using data and information.
Otherwise, indigenous people will become as dispossessed in the virtual world as they have been in the physical world. Mikaela cites an example near the Parramatta River, where a woman created a maternity hospital for Aboriginal and Torres Strait Islander people who were not able to give birth safely. It was then earmarked for destruction, despite indigenous people protesting, and in December 2021 it was destroyed.
“Because it had a heritage building on it, there was a requirement to create a digital twin before they destroyed it. So, there is a digital twin of that very important and special country… but we don’t have access to the digital twin, and we also now can’t go to the site.”
Mikaela points to this example as a way of showing the differences between Western cultures and First Nations cultures when it comes to the attachment they place on objects and places. The digital twin of the maternity hospital is, to the New South Wales government, a collection of data about a building that once existed. For First Nations people, though, it is a representation of their ancestors, “our knowledge systems and our language, and everything.”
Questions remain about how indigenous people can participate in the process of creating a digital twin, how they can get access to the data, and how they can have agency and autonomy over how that digital twin is used in the future.