Exploring questions about AI
"The thing is, we don't have all the answers." This was the concise summary by James Finister at the end of the AI webinar panel which he, Katrina Mcdermid and Simone J Moore had just concluded. Maybe that's what makes the world of AI such an intriguing space at the moment. We are aware of a world of opportunity, but how is it going to unfold? At least we know the main questions.
Are we ready for AI?
Katrina's session on Human Centered Design had encouraged us to apply the fundamental techniques that we would apply to other problems. If AI amplifies the result, we had better make sure we are laying a firm foundation so it amplifies the right result. This became an underlying theme that we kept coming back to. The biggest favour AI may be doing us at the moment is forcing us to improve our organisational maturity. One of the benefits of being part of itSMF is that it gives us a space to share our experiences in applying theory to real practical situations. And we know from that experience that organisations everywhere struggle with things we feel we should be doing better at. But let's not dress things up; we would only be deluding ourselves. James shared the results of a survey in which over 90% of respondents said they were investing in AI, but only 1% were confident it would be a successful investment. That is quite alarming. But it is also a hopeful indicator: maybe enough alarm bells will be rung to bring action on the underlying issues.
Getting emotional?
Simone homed in on the blurred lines that result when AI pretends to be human. Are we designing for emotional mimicry rather than emotional insight? Surely the latter is what we want? AI can't care for us, but it should be equipping us so we can care better. Katrina and James also talked about some of the effects of this. It is easy for us to design a process that works from our perspective (e.g. three contacts before closing a ticket) without asking whether the user actually wants to be contacted, or whether our approach is wrong. Are we the right people to be designing those processes in the first place? Meanwhile we try to improve how calls are raised, but when the most common cause for calls is "I'm confused", does our style of gathering information to aid categorisation actually help? AI can do a certain level of analysis, but it only uses certain limited cues and doesn't have the situational sensitivity to pick up the sort of things an emotionally aware human could identify. So let's use what AI can do, but not over-sell it, and let's not under-value what our staff can do and how we can support them to do it better.
What's the standard?
And then there is the question of what assumptions AI makes about us. If someone who isn't autistic answers a question in the way an autistic person might, does that lead to AI labelling them incorrectly? There are ethical challenges for those of us setting the rules for how AI is applied. One of the challenges in this space is the sheer volume of standards that come into play when you use AI. Making sense of this is difficult, not least because it keeps changing. It confuses those who are immersed in the standards, so what chance do those of us who aren't have? If you are looking for somewhere to start, you could do worse than consult ISO/IEC 42001, a management system standard for AI, and IEEE P7014.1, which defines ethical considerations and good practices regarding the use of emulated empathy in general-purpose artificial intelligence.
The right question?
We have struggled for years with persistent challenges, and AI doesn't make them go away. As customers we don't know the right question to ask, and what we get as an option to select, or as a response, can seem like a foreign language. There's a similar dynamic with AI, where you need to be in tune with the way AI expects you to ask questions. Another challenge is data quality. While AI can help us with data cleansing, it can also build on dirty data to produce flawed conclusions. So AI makes it imperative that we address these challenges - we can't sidestep them.
It is easy to get obsessed by the AI itself and lose sight of what we are trying to achieve. If what we are trying to achieve is an enhanced human-to-human connection, is the AI helping that or getting in the way? Or to put it another way, what problem are you trying to solve, and how does AI help you do that? That's back to Human Centered Design.
So what?
The impact AI is having is huge, and it is rewiring the way we behave. It's not just about learning the language to use to get the best responses from it; it is also about adjusting our expectations of what we can achieve. For example, if you are working on an idea, you could present it to AI and ask it to refute it, to help you assess how strong the idea is.
We concluded by asking what excites our panellists about AI. They picked out:
· In the world of coding (though it could equally apply elsewhere), making good better
· Highlighting shortfalls in governance
· Opening up possibilities of uncovering new meaning. What if we do things differently?
· Helping us past the drudgery - showing us our best intelligence
There are of course challenges as well as opportunities, with examples given of how people are trying to use AI to cheat the system.
To conclude
So, questions that matter include:
· How can we address the underlying challenges that would cause AI to make bad worse rather than good better?
· How should AI's role in imitating human emotion be understood?
· How are we all going to engage with the standards and constraints to ensure we use AI responsibly?
· How can AI help us to be more fully human?
· How do we update our controls to account for ways that AI could be abused?
I have a couple of reflections to end on:
· The Turing test asked, 75 years ago, "Can machines imitate humans well enough to be indistinguishable?" Now that the answer is "Yes", we realise that that isn't necessarily quite what we want to achieve.
· BBC Radio 3 have recently commissioned a piece of music to be composed about the arrival in global awareness of AI. The composer, Oyvind Torvund, commented, "Instead of using AI to compose the piece I have used my own imagination." I wonder what we will think in 10 years' time about the relationship between AI and our creative imagination.
What now?
This panel discussion was the last in a series of four sessions. You can read about the other three under Publications > Blogs on www.itsmfi.org or by following these direct links:
If you would like to be part of the ongoing discussion about AI, do let us know. As you can imagine, this is a subject of wide interest, and several of our Chapters are looking at it.