Home Main Menu

Speak to the human Podcast

How do you really get people using AI well at work?

Guest: Dr Guy Champniss

16/04/26 | 43 mins

Most organisations are treating AI as a technology challenge. But the real impact depends on people: how they use it, what they use it for – and whether they engage with it at all.

In this episode, social psychologist Dr Guy Champniss shares findings from his new research into what he calls “adoption quality” – a framework for understanding why organisations are struggling to realise AI’s potential, the impact it has on how employees feel and behave, and what leaders can do to respond.

Based on a survey of more than 1,200 employees in the UK and US, the research found that only 41% are in a good position to use AI effectively. The biggest barriers aren’t technical – they’re psychological. And the most powerful barrier of all is something people rarely talk about: the threat to their professional identity.

James and Guy discuss:

  • Why AI adoption is a behaviour change challenge, not just a technology challenge
  • The factors that shape adoption quality – and why many organisations are focusing on the wrong ones
  • Psychological barriers to AI use, including the hidden role of professional identity and social belonging
  • How getting this wrong affects happiness, motivation and connection at work
  • What organisations can do to make AI use identity-enhancing rather than identity-threatening
  • How individuals can take a more critical, mindful approach to AI in their own work

Guy’s full report, Unlocking High Quality AI Adoption in the Workplace, is available to download from his website: https://www.meltwater-consulting.com/aqs-report

This is the first episode hosted by James Woodman. Episodes 1–23 were hosted by Sarah Abramson.

Transcript (AI generated)

[00:00:00] Guy Champniss: AI may not be your responsibility. It might belong to IT. It might belong to human resources, it might belong to the leadership team, whatever it may be. Its impacts are very much your responsibility, and if you do not pay attention to it, you could find yourself on the wrong side of some of these sources of psychological debt.

[00:00:20] Some of the biggest barriers to AI adoption are the things that people don't necessarily talk about. These things are particularly important. They are a real concern for employees, and they're having a sizable negative impact on how AI is generating value for organisations.

[00:00:39] James Woodman: That was Dr. Guy Champniss, our guest today.

[00:00:42] I'm James Woodman and this is the Speak to the Human Podcast. If you don't recognise my voice, that's 'cause I'm new here. Sarah Abramson hosted 23 episodes and she built something really special. Now I'm in the chair and continuing what she started. Although I'm not [00:01:00] completely new, I was Sarah's very first guest in episode one, and the subject of curiosity came up a lot in that interview.

[00:01:07] We share an interest in being professionally curious about people and organisations: how people behave, why they do the things they do, and how to use that understanding to make a positive difference. And right now, I think that's what a lot of organisations need when it comes to AI. They want adoption, effective use, risk management, and many of them see that as a technology challenge.

[00:01:31] So they make it all about the tools, the systems, the IT. But the real impact comes from how people use AI every day, and that gives us our question for this episode: if you want people to use AI at work, and to use it well, what are the human factors that make a difference? That's a big topic. We can't cover everything today, but Guy Champniss is here to give us a way in.

[00:01:57] He's a social psychologist who applies [00:02:00] behavioural science to how organisations and the people in them innovate and change. Later, we'll get on to how he's bringing that thinking to AI adoption. But first Guy, why does this question matter? We hear about a lot of organisations treating AI mainly as a technology or IT challenge.

[00:02:17] What happens when they do that?

[00:02:19] Guy Champniss: I think the major challenge there, of course, is that it's a very narrow and very convenient lens through which to look at the challenge, to look at what you're trying to do. And it's so narrow, it's incorrect.

[00:02:37] I mean, if you think about any successful technology company, it is essentially successful because it's a behaviour change company. The very obvious examples, you know, the Amazons, the Ubers, the Airbnbs, the Netflixes of this world: yes, they are positioned as technology companies, but what drives [00:03:00] their success is that they've understood the importance of using their technology to change consumer behaviour.

[00:03:09] If no one used their technology, this brilliant technology would just sit there doing nothing. I think there's a really interesting parallel here. It's absolutely stating the obvious, but at the same time it's quite profound. One of the previous US surgeons general, C. Everett Koop, made the point that drugs, even the best ones, don't work in people that don't take them.

[00:03:33] And I think that's what we're seeing currently with AI, or that's certainly beginning to emerge with AI: no matter how good these tools are, if you don't think about what drives our human motivation to use those tools, you're sort of just tossing a coin.

[00:03:54] You're just throwing darts at a dartboard in terms of whether these things are gonna work. Because when you look at human [00:04:00] behaviour, simply giving someone something that works is very rarely sufficient to get them to use it.

[00:04:08] James Woodman: You bring in a new piece of technology, and the people who own that technology, who implement it and launch it and roll it out, are the IT department.

It's about technology. So they're the ones who are deciding how this gets adopted, they're the ones who are trying to make it a success. So there are consequences from thinking about it through a technology lens, aren't there?

[00:04:30] Guy Champniss: Yes. And I would say that actually AI is, you know, an organisational change, business transformation challenge.

[00:04:40] It's not a technology challenge. If you think through the complexities, the nuances around what drives behaviour in the workplace, it is really, I would say, naive to think that simply plunking this technology in front of [00:05:00] someone is gonna be enough to have people realise, okay, fine,

I'm gonna use this. You can mandate it, you can basically coerce people into using it, and that happens a lot in organisations, as various things do. You can turn old things off, and that leads to people having to use the new stuff. But I think what we're seeing with AI is

[00:05:20] its potential is not just in that binary use it, don't use it; it's in how you use it. And so again, it really requires that far more careful, nuanced understanding of: what are my motivations to use it? Crucially, what are my motivations to avoid it? What are my motivations to use it in certain situations?

[00:05:41] All of these things are important. And honestly, IT departments are not well equipped to focus on those aspects.

[00:05:51] James Woodman: Really interesting, and we're gonna come back to those aspects of AI adoption. I want to stick with what you were saying. You were talking about motivation, but I want to ask [00:06:00] you about your motivation to think about this from the point of view of people and what they are doing with the technology.

[00:06:06] You are making a case for approaching this in that way. What interests you about that? Why do you think in this way? What's brought you to the point of believing this is the right approach?

[00:06:17] Guy Champniss: So I started off, post MBA, involved in insights work for a large agency group, and I just felt the quality of those insights

was not as good as it could be, in the sense that we were sometimes prioritising certain measures that were cleaner and presented better in front of clients over what was actually probably closer to the truth of why things were happening. That led me to go and do my PhD.

[00:06:45] 'cause I thought, actually, I wanna be part of the solution here. So I have a fundamental interest in, I suppose, the messiness of human behaviour: trying to untangle that messiness to understand, well, where's the signal [00:07:00] here as opposed to the noise? Which, by the way, I think is absolutely possible.

[00:07:03] So that drives me. But also I do think you can reduce, if that's the right word, you can distil most business challenges to a behavioural challenge. Businesses are essentially the people in them, and anything you're trying to do with the business, whether that's a pivot, entering a new market, introducing a new product, culture change, whatever it may be,

[00:07:30] at some point you bump up against the people in the organisation, and they are the ones who, consciously or subconsciously, decide whether these things work or not. I am definitely biased, but I would take the position that every business challenge, every business opportunity requires you to understand behaviour, because at some point you want to change someone's behaviour, whether it's employees, stakeholders, customers, consumers, regulators, whatever it may be. So [00:08:00] for me, I feel it's a fundamental truth around business success: you have to understand behaviour.

[00:08:07] James Woodman: I have always felt that so strongly.

There's a sense, when people talk about businesses doing things or organisations deciding things, that isn't true. Businesses can't do things; the people in them do things. The people in the organisation decide things and choose things, and then from outside there might be a perception that the business has done it, but it's all about people.

So now you're taking all this interest and experience in human behaviour and applying it to AI. You've just published a piece of research, which is all about unlocking high-quality AI adoption at work. Tell us what question you set out to answer. What were you hoping to achieve with that?

[00:08:49] Guy Champniss: So the main question was to address the human aspect of AI, because, as you were mentioning earlier on, the [00:09:00] conversation is dominated by the technology. It's dominated by the models, by their performance, and by the companies that make them.

[00:09:14] And we wanted to look with this piece of research at how much the human aspects, the human-AI relationship, were contributing to healthy, high-quality adoption, and we felt that part was missing in the conversation to this point. It's popping up in certain places in a sporadic fashion, but what we wanted to do is create a more coherent, consistent framework to say: this is the other bit of the AI equation that most organisations are not looking at, and this is why it matters.

[00:09:47] James Woodman: So give us the headlines from the research. In particular, what surprised you in what you found?

[00:09:53] Guy Champniss: Well, a couple of things surprised us. Firstly, we found, based on our measures, [00:10:00] that only about two in five employees we spoke to are in a particularly good place with AI at work. Around 40%. So only a minority of the employees we spoke to are in a particularly good position.

[00:10:14] The other thing that was very interesting was that actually some of the biggest barriers to AI adoption are the things that people don't necessarily talk about. And those are things around the erosion or the attack on your professional identity at work, and also the attack on what it means to be part of a community at work.

These are social aspects. There's some chat, some commentary, some research that looks at some of the cognitive consequences of AI, and that's quite well publicised. But there's been nothing to date, really, about the social consequences of AI at work, and we saw that these things are particularly important.

They are a real concern for employees, and they're having a sizable [00:11:00] negative impact on how AI is generating value for organisations.

[00:11:05] James Woodman: We're gonna come back to those things in more detail, and also to how people might respond, both at an organisational level and individually. It would be helpful if you could explain how you measured this,

[00:11:17] 'cause you split this adoption quality into four components. I know two of them are about what happens more at the organisational level, and two are a bit more personal and individual. I think those personal ones are particularly relevant to what we're talking about today in terms of individual human behaviours.

So talk us through all four, but particularly focus on the individual-level ones, please.

[00:11:41] Guy Champniss: Sure. So the two that are less interesting to our conversation, but that we felt were an important part of measuring this whole idea of AI adoption quality: one is what we call the means.

[00:11:53] In other words, do people have the skills to [00:12:00] use AI? Have they been trained? Do they have the knowledge and the understanding and the expertise? The second one we call machinery, and that's both hard and soft. The hard machinery is the tools deployed and easily accessible within the business.

And the soft machinery is things like the culture and the leadership of the organisation: how much do they endorse and encourage the use of these tools? The two that are probably more interesting for us, though, sit more at the personal, I should say psychological, level. The third is what we call

relevance, AI relevance, and that describes how much you think AI makes a difference to you: your company, your role, and the sector you work in. That's an interesting one, because it mirrors something in health psychology, where we know that one of the major drivers of someone taking medication is their thoughts about:

What's the likelihood of me catching the disease, and what happens if I catch it? [00:13:00] How severe will it be? I think there are some clear parallels with AI. People who think it's not really relevant to them, that it doesn't really impact them if they don't use it, have a very low AI relevance score, and we think that takes away their motivation to engage with it.

[00:13:17] The fourth one is what we call readiness. Relevance is about how much you think it's important; readiness describes how ready you are to accept the technology and where you are with it psychologically. This is a really complicated area, one that's not really talked about in the literature more broadly, but there are lots of psychological impacts of using AI, particularly in an unstructured way, and we wanted to bundle those up in a coherent framework for organisations to be able to look at this and say: well, are we measuring these things? And if we're not, we probably should be, because this gives us a really clear understanding of just how willing and motivated

our [00:14:00] employees are gonna be to use the technology. And ultimately, every business does need that: they need their employees to embrace it, to experiment with it, and to use it. This idea of AI readiness tries to quantify how likely that is to happen in a business.
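Guy's four components lend themselves to a simple mental model. The sketch below is purely illustrative: the research as discussed here does not publish a scoring formula, so the class, the 0-to-1 scales, the unweighted average, and the 0.6 threshold are all assumptions invented for this example, not the survey's actual methodology.

```python
from dataclasses import dataclass


@dataclass
class AdoptionQuality:
    """Illustrative only: the four adoption-quality components from the
    conversation, each scored here on an assumed 0.0-1.0 scale."""
    means: float      # skills, training, knowledge and expertise
    machinery: float  # hard (tools deployed) plus soft (culture, leadership)
    relevance: float  # perceived difference AI makes to you, your role, your sector
    readiness: float  # psychological willingness to accept and engage with AI

    def overall(self) -> float:
        # Unweighted mean of the four components; any real weighting is unknown.
        return (self.means + self.machinery + self.relevance + self.readiness) / 4

    def in_good_position(self, threshold: float = 0.6) -> bool:
        # The research found only ~40% of employees cleared whatever bar was
        # actually used; the threshold here is an invented stand-in.
        return self.overall() >= threshold


employee = AdoptionQuality(means=0.7, machinery=0.8, relevance=0.5, readiness=0.4)
print(round(employee.overall(), 2))  # 0.6
print(employee.in_good_position())   # True
```

Note the shape of the example: strong means and machinery can still be dragged down by weak relevance and readiness, which mirrors the imbalance Guy describes, with organisations investing in tools and training while overlooking the psychological components.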

[00:14:17] James Woodman: Tell us about that. Tell us about these psychological barriers, and particularly how they influence what people do or what they might not do, what they might avoid.

[00:14:25] Guy Champniss: Yeah. So in terms of what they may do or not do: we also measured three things, three measures of AI usage that we felt were valuable to businesses. Firstly, we looked at how often people use AI. Then we looked at what type of tasks they use it for, and that ranges from very simple email-drafting and grammar-checking tasks all the way through to more sophisticated strategic-planning tasks.

So that was our second measure. And the third one, which is inversely related to those two, was [00:15:00] how likely people are to avoid using AI even in situations where they know it'd be useful. So those are the three things to bear in mind; those are the things we're measuring in terms of AI usage.

The areas we explored in terms of readiness range from concerns over degradation in your decision-making abilities. It's well documented that the more I use it, the more cognitively lazy I get: I don't engage with the material so much, I don't own the answers as well, I don't interrogate it in some ways.

So that's what's called cognitive debt. But some of the others relate to things like, for example, a loss of autonomy, in other words, I feel like I'm losing control over my work, or a loss of competency or mastery. And that describes the fact that the more I use AI,

the worse I feel about what I do: I'm just not as skilled as I thought I was. AI can now do in 30 seconds something that was taking me a day to do. So what does that tell me about my skill [00:16:00] levels in my job? Then there's relatedness. This describes our desire to be part of a social group, and there's good evidence to show that AI is taking that away from us.

The more you interact with AI, the less likely it is that you'll spend time with coworkers, formally or informally, and that has huge implications for collaboration, for example. Then there's credibility, a loss of credibility. Again, there's research that shows that if you announce to your peers that you're using AI,

they think less of you, by and large. Interestingly, even if they're also using it, they don't think less of themselves, but they think less of you for using it. And then the last one we looked at was what we called professional social identity. In other words, does the specific use of AI dent or damage

what it means to you to be doing the job that you do? Are there certain behaviours, certain characteristics that are important to you because they define what it means to be head [00:17:00] of accounting, a senior marketeer, whatever it may be? And are there aspects of using AI that damage what it means to be that type of person?

[00:17:10] And we looked at all of those aspects to understand how much negative impact these things are having on employees, because if AI is mandated, if AI is pushed through as a technology-first solution, these negative consequences will accrue and have an impact. They'll grow very silently, but at some point something will come to the surface.

We bundle it all under the term of what we called psychological debt, and our concern is that businesses are accruing psychological debt by not looking specifically at these particular aspects of the human-AI relationship.

[00:17:53] James Woodman: It's interesting. When you mentioned credibility, you said that people [00:18:00] think less of their peers, their colleagues, if those colleagues are open about the fact that they're using AI, but

they don't think less of themselves. So: I think it's okay for me to use AI, but it's not so okay for the people around me to use it. But it also sounds as though, actually, I probably do think less of myself. I might not admit that to anyone else, but I think what you're saying in these factors is:

[00:18:27] It diminishes me in some way. I feel that personally.

[00:18:30] Guy Champniss: So that can definitely happen. For example, yes, you may feel that your competency, your mastery, is dropping as a result. The credibility piece, though, is looking at whether or not you as an individual

negatively shift your perceptions of other people. And those perceptions are that, actually, I think less of you. There will be caveats [00:19:00] around that, of course. But the research says that we think less of other people: we think they're maybe cheating, that they're becoming lazy, that they're taking shortcuts.

None of these are qualities which add to your credibility in the workplace. It's probably important to say that with these six aspects of what we call psychological debt, we're not saying they're all distinct; they could feed into each other in some way. So probably a better way to look at them, I think, is to say there are six different doors that may open onto the same room.

And actually, for certain organisations, going through one door rather than another might be a slightly cleaner, clearer way to think about: well, what do we need to do within the organisation to make sure we're not causing these negative consequences to build up within our teams?

[00:19:49] James Woodman: To focus in on the identity strand,

that feeling that your professional identity is [00:20:00] being threatened: I think you said that people can feel uncomfortable talking about that, that it's not something people open up about. But you also found it's potentially the main reason that we avoid using AI.

[00:20:11] Guy Champniss: Yes. So why might people not talk about it?

It might be that it's just a relatively alien concept. I think this idea of social identities in the workplace is a bit messy, a bit complicated, so it doesn't get talked about as broadly as it probably should, mainly because, as the theory, as the science, tells us, we switch from one identity to another quite often, which means that we may appear very inconsistent in our behaviours, and I think most research would like to show consistency of some sort.

So it's a bit of an inconvenient truth around how we operate as social creatures. So yes, maybe that explains why people don't talk about it as much. I think it's probably quite personal, quite difficult to talk about; [00:21:00] it feels quite profound at an individual level.

[00:21:05] Interestingly, in our research, the cohort that expressed it as a negative the most were consulting and professional services employees. And that mirrors some of the general chatter you see in the press around the big consulting firms and the existential threat that AI is creating for them.

[00:21:30] To a certain extent, the lawyers as well. But yes, you're right. What was very interesting is that even though they talked about it very little, or acknowledged it very little, when we look at the variations in that level of acknowledgement, we see the greatest variations in those AI behaviours we're measuring.

In other words, small movements in the sense that it's attacking my social identity correspond to the largest movements in terms of people using AI less, [00:22:00] using it for more basic tasks, and avoiding using it. So yes, in that respect it's maybe more under the surface, but it seems to have a disproportionate, outsized influence on some of those key behaviours that every organisation is keen to grow and establish within their teams.

[00:22:24] James Woodman: You've got a specific example of how that can play out with people who are experts when you give them AI, haven't you?

[00:22:32] Guy Champniss: So I think it offers a plausible explanation for this. Ironically, we were contacted by one of the large technology businesses that is one of the main architects of AI, of gen AI.

And behind all the hoopla and the headlines around how successful it was, they acknowledged that they had a problem [00:23:00] internally getting certain individuals to actually use the very technology they designed. And those individuals were the senior software engineers.

They didn't want to use the very thing they built. And the question was: is there a problem with the training programmes? Is the product itself not very good? Our view was, well, that's unlikely. We think what's more likely is that you're asking them to use technology, asking them to engage in a specific behaviour, that damages their social identity.

[00:23:30] And what we mean by that is that these senior engineers have spent potentially decades building a level of expertise, a level of nuanced, tacit understanding of how to do these things. And suddenly you're putting in front of them what their own company says is a highly democratising

technology. That's fine for people who don't have those skills, but for these individuals it was a direct attack on what it meant to have that role in the organisation, and that could well have been [00:24:00] driving their resistance to using the tech. This was completely new to the company. They said: we had just assumed, going back to your first point, that this was a problem with training or technology, not with psychology.

[00:24:12] James Woodman: Okay, so what can we do about this? I bet there are people listening who are thinking about this, specifically this point about people feeling their professional identity is being threatened, that they're being diminished in some way by the arrival of AI and by people using it. What can they do about that?

If they feel this is a problem for people in their organisation, how do you suggest they respond?

[00:24:38] Guy Champniss: So for the identity piece, to caveat all of this, I think it's very dependent on the specific situation. But if we had to pick a couple of general approaches, I think the first thing to think about is that you need to reframe the use of that technology, not as an identity damaging behaviour, but as an identity [00:25:00]

[00:25:01] And that sounds like semantics, but I think, you know, as is the case with all communications and framing, you know yes. That's what it comes down to. So for example uh, uh, uh, clinicians ha uh, one of the groups that have basically pushed back historically the most, um, about using AI in terms of, um, diagnosis, therapy, choices, et cetera.

[00:25:25] Philips has done a very interesting job there of trying to position the use of AI as taking away some of those more rudimentary, lower-order tasks for clinicians, which allows them to exercise their expertise and their judgement, gives them more room to do that, and also lets them spend more time focusing on caring for the patient, on the patient interaction.

[00:25:49] So that's an example where you're converting the use of the technology from what's initially seen as an identity-challenging, identity-damaging behaviour into one that is [00:26:00] seen to endorse or increase the distinctiveness of what it means to be an oncologist, for example.

[00:26:06] So the first thing to think about is: can you be a little bit more careful around how you frame the use of the technology? Can you identify the specific behaviours that really signal what it means to be a specialist in that group, and make sure that the use of the technology is not damaging those? To a certain extent, there's a little bit of common sense there.

[00:26:32] A second one is around building social norms. The more you can get key individuals within those social groups, other oncologists in the case of the medical example, or whatever other social group it is, to talk about and share their experiences of using the technology, the less it becomes a technology or a behaviour that is damaging the group, and the more it becomes a behaviour that is actually supporting the group.

[00:27:00] And the third one, very practically: there are lots of very specific use cases for AI at specific moments in workflows. Some of those moments are gonna have an outsized impact on what it means to be a professional in that environment. If that's the case, move it somewhere else. Try and slide the moments when you use the technology away from those very high-profile, high-impact moments. That can also help reduce the sense that this is damaging what it means to me to be a prominent, successful, aspirational member of this social group.

[00:27:36] James Woodman: Can you give me a quick example of that? That sense of sliding the moments from one place to another? What would that look like?

[00:27:43] Guy Champniss: Well, the obvious one is when you take something from being very public to being private. Even though we know from social identity theory that private behaviours do still impact what it means to be a member of the group, it certainly diminishes that impact to a certain extent.

So [00:28:00] rather than having something happen in front of peers, for example, you can take it away and have it happen privately. In other words, you slide that particular behaviour, that moment of using the technology. But it could also work the other way, in the sense that you could protect social identity by sliding what is a private or individual exchange with the technology into being a group exchange.

[00:28:22] Because by doing it in a group environment, say, for example, you're assessing the quality of the outputs of an AI exchange with your peers, what it does is create an opportunity for you to demonstrate your particular expertise. You might be a specialist in one area, and one of your peers in something else. That's actually a great identity-building behaviour rather than a damaging one, because suddenly you've got a new opportunity.

[00:28:49] Social identities work in the sense that the identity is particularly strong when I've got more opportunities to demonstrate I'm a member of this group. And so that's the collective decision-making, that collective [00:29:00] review process. That's the example of taking it from private to public: it could be a good opportunity to say, actually, by doing this, you are strengthening your membership of the group.

[00:29:08] So again, I think it really varies from case to case, but, you know, a lot of this is not particularly complicated or sophisticated. It does need to be looked at, though, and, as I say, it can yield outsized benefits through these very small sort of interventions.

[00:29:24] James Woodman: That is such an interesting point, because I think what you're talking about is switching gen AI use from being something that is effectively solitary, like having an invisible colleague who you collaborate with on a piece of work, to something that you do as part of a team of other human beings. That feels really powerful.

[00:29:46] Organisations could choose to do things there. Leaders could choose to do things there, which can happen across a business, across an organisation, but it would make a genuine difference to individuals, through seeing those things happen and being part of [00:30:00] them.

[00:30:01] Guy Champniss: I think so. I think it would switch off a lot of those sort of destructive motivations around using it, but it could also create opportunities for stronger teamwork, to build more effective teams, to increase collaboration.

[00:30:16] So I think actually, yeah, you could have some very positive spillover effects as a result of saying, right, okay, we're now using AI for these tasks and we're gonna try and build in these sort of collective reviews of what we're doing. There are some good examples of the likes of Coca-Cola and P&G doing this, certainly around innovation, but of course it doesn't need to be constrained to those particular sectors.

[00:30:38] I think any organisation of any size could do it.

[00:30:40] James Woodman: And it makes it visible too, doesn't it? It's: I can see other people doing this. I can see my boss doing this. I can see the leaders talking about this. Maybe that makes it more okay. It also maybe makes it more likely that if something isn't going okay, if there's something uncomfortable about the way we're using AI or there are risks that people haven't picked up on, [00:31:00] perhaps we're more likely to spot them if we do it openly together like that.

[00:31:04] Guy Champniss: I think that's a good example of where, with those six different sources of what we call psychological debt, that sort of action could disable or suddenly reduce lots of them at the same time. Your sense of autonomy would be protected. That sense of relatedness, of being part of a working team, would be bolstered.

[00:31:19] Your own credibility could be demonstrated. Your sense of social identity would be supported. So yes, you could see lots of knock-on effects of doing something like that, which would be beneficial in lots of ways.

[00:31:31] James Woodman: It strikes me, going back to the top level of the research, to the four components you were measuring: the means, the machinery, the relevance and the readiness scores.

[00:31:42] Although they are separated, and I understand why, they are also connected. If, within the means, I don't feel I have the skills or I don't feel confident using AI, I probably don't feel psychologically ready. If, within the machinery, [00:32:00] which includes culture and the way leaders talk about this and the position within the organisation, something is not right for me, again, I'm not gonna feel ready.

[00:32:10] I might not feel that it's relevant: if this isn't relevant to what we do here, then it's not relevant to me. So they're separate, but they do overlap, in the same way that the six different barriers you've talked about overlap within the readiness. The components of the overall adoption quality sit together too, don't they?

[00:32:30] Guy Champniss: Yes, and I think that's a benefit of it, in the sense that those four areas as the headlines, and then the specific bits within those four areas, are in a way, to a certain extent, different ways to ask the same question.

[00:32:51] And I think by asking that question differently, you may be looking at the same thing, but you see it from a different angle. You might see something behind it that you can't see from another angle. So I do think it gives a more [00:33:00] nuanced, but ultimately quite a holistic, view. And again, I think the important bit there is around this idea of what does the human infrastructure need to look like within the organisation, and what does the human-AI relationship need to be?

[00:33:15] You can't really answer that question with one question. You need to ask it in a variety of ways, and I think that's what we're trying to do with each of those four areas. It's like taking a couple of steps to your right and looking again, and then a couple of steps again and looking again, et cetera.

[00:33:30] James Woodman: That sounds like the perspective of somebody who's spent their career figuring stuff like this out and finding out how to understand it better.

[00:33:38] Guy Champniss: Well, I think the answer there is, yeah, there's no silver bullet, but there can be silver buckshot, and the buckshot comes from walking all the way around it, I think.

[00:33:44] Yeah.

[00:33:45] James Woodman: So we started this conversation by asking what are the human factors that make a difference to AI adoption? And I should say that your report goes much, much deeper than this conversation possibly can in the time that we've got. So [00:34:00] we'll include a link to that in the show notes for this episode.

[00:34:03] But if I were to ask you to simplify all of this, really boil it down as much as you can: what is the most important thing you want people to take away from this conversation?

[00:34:13] Guy Champniss: Well, I would say: do not see AI adoption as a technology challenge to be solved. See it, increasingly, as a human challenge to be solved, whatever that means for how the organisation approaches it.

[00:34:30] And I would also say: be more sophisticated, be more intelligent about trying to understand where those barriers to adoption come from. We've seen from our research that some of the things that people talk about the least potentially matter the most, and we need to get far better at measuring those things and compensating for them, because there are real concerns, or real consequences I should say, of [00:35:00] not doing these things.

[00:35:00] Not only does it lead to potentially very unhappy, very stressed, potentially very alienated employees if all this psychological debt accrues, but it'll also impact organisations, because the organisation will become more brittle.

[00:35:15] Collaboration will start to fall away. Training budgets will get ever larger and become less effective. And there's also a much bigger sort of macroeconomic piece: all these AI companies are relying on revenues coming in from licences, and those licences will only be awarded if the value of AI becomes clearer.

[00:35:37] So again, all of those factors distil all the way down to behaviour. We have to be more focused on driving that behaviour.

[00:35:46] James Woodman: Are there simple signals, warning signs, things that somebody could pay attention to where they work, where, if these things are problems or becoming more of a problem, [00:35:59] you could [00:36:00] recognise that?

[00:36:01] Guy Champniss: Yes. I think there are two ways you can spot these. You can either ask people directly, which is what we did in our research, and that does stand as a sort of company-wide survey tool, but you could also spot certain behaviours. So, for example, you might see collaboration dropping.

[00:36:16] You might also see people engaging in sort of workaround behaviours. In other words, are they trying to slightly reshape their own job descriptions to avoid those aspects of their job where the use of AI is in some way accruing psychological debt? Maybe it really impacts what it means for them from a social identity point of view.

[00:36:34] Maybe it really undermines their sense of autonomy. You know, are they increasingly looking to work with people in other parts of the organisation to try and break free of those things? So I think there are quite a few ways you could try and spot those.

[00:36:47] James Woodman: From all of this, this point about paying attention to what's happening with AI and understanding how people are responding, what is your number one piece of advice to someone listening to this?

[00:36:57] Guy Champniss: So my number one piece of advice is that [00:37:00] AI may not be your responsibility. It might belong to IT. It might belong to human resources, it might belong to the leadership team, whatever it may be. Its impacts are very much your responsibility and if you do not pay attention to it, you could find yourself on the wrong side of some of these sources of psychological debt.

[00:37:20] Now, why does that matter? Well, it matters for the organisation in terms of productivity and performance, et cetera, but probably more importantly, it matters to you, because we know that if these sources of psychological debt increase, you'll become less happy. You'll become less motivated, you will feel more coerced into your role.

[00:37:41] You will lose your sense of connectedness with your peers. These are all things that pre-date AI; we know these are really important drivers of what it means to be happy and productive in a role. If you don't pay attention to these [00:38:00] things, AI could take those things away from you.

[00:38:04] So it is incredibly important to be mindful of, to be open to, but also critical of how you interact with AI in your role.

[00:38:15] James Woodman: If someone's listening to this and they are finding AI adoption at work difficult personally, so not as a leader, whatever their role is, but just as an individual at work:

[00:38:26] what would you say to them? What would you say about responding to this sense of worry, or resisting the technology, or losing their professional identity?

[00:38:36] Guy Champniss: So I think I would say: the truth is, most of us are in the same position. Again, because the story, the narrative, is owned by the technology companies, it is presented as this bright, shiny future that everyone else, except for you and me, is currently in.

[00:38:54] The reality is, I think, that everyone is in that same position. There is very little [00:39:00] real knowledge around the impacts just yet. So the first thing I'd say is not to panic. I would also say it's very important to critically engage with the technology. That means understanding it a little bit. I think it's important for us to dispel this sort of 'technology as magic' perception that is very prevalent.

[00:39:23] Understand a little bit how it works, because actually it becomes far more mechanical, and bizarrely, extremely inefficient for what it does, and that can make us feel better. And then also, in effect, do a little bit of an audit to say, well actually, how can this be useful for me in what I do?

[00:39:42] It cannot be something which takes away your decision-making skills. It cannot be something which chips away at what it means for you to be in your role. And it is largely, I think, within your gift, within all of our gifts, to say, well actually, look at this: how can I make AI [00:40:00] work for me?

[00:40:01] How can it make my work better? I think there is a positive answer to that question for everyone, but it starts with needing to engage critically with the technology, and with not panicking, thinking we're the only ones not to have worked that out yet.

[00:40:19] James Woodman: I think that's really helpful, and I think for me, certainly, having felt personally that sense of being threatened by this, that kind of professional identity loss, or the risk of it.

[00:40:29] I think it's interesting 'cause we've talked quite a bit about how to reduce that sense: how an organisation might change the way they work or change what they do to help people feel more ready, like AI is more relevant and other psychological barriers aren't there so much. But I do feel those feelings also tell us something important.

[00:40:48] If you feel the sense that you are using AI yourself in ways that threaten what you do professionally, in ways that reduce your credibility, in ways that have [00:41:00] social consequences in the team that you work in, it feels to me that it's important to pay attention to that, not just to push through it and say, well, my employer wants me to use AI and to embrace this technology.

[00:41:12] But to say: actually, the fact I feel that is important, and we should be talking about it, and having a culture where we can be open, where other people talk about it and we respond positively. That feels like the way you ensure that people are in charge of the technology, that the technology isn't taking the lead, that we continue to keep control.

[00:41:31] That is something I wanna find out more about: how people are responding to that, how people do it. It's gonna be the focus of another of these podcasts soon. Guy, it has been brilliant speaking to you. Thank you. Thank you for doing the research, thank you for sharing the results, and thank you for giving us your time and insights today.

[00:41:50] Guy Champniss: My absolute pleasure. Thank you very much for the invitation. And I think, yes, we are at an inflexion point. As the technology, to a certain [00:42:00] extent, stabilises and matures, I think it's inevitable and important that the conversation does too, that the attention swings towards: how does this work for us, and how can we make sure that the human-AI relationship within organisations is healthy and productive? I think that conversation is just getting started.

[00:42:21] James Woodman: If you want to dig deeper, Guy's full report is on his website; we'll put the link in the show notes. Thank you for listening to Speak to the Human. I'm James Woodman, and if you've got a question you'd like Acteon to explore in a future episode, about AI, or behaviour change, or anything to do with the human experience of work,

[00:42:40] please get in touch: hello@acteoncommunication.com. See you next time.
