What are the implications of GenAI for learning and development? What’s really happening at the moment… and how do we make sense of it, while staying grounded in something that feels like human reality?
In this episode, Sarah talks with Ross Stevenson, founder and Chief Learning Strategist of Steal These Thoughts. Ross has spent nearly 20 years in L&D, and helps organisations bridge future skills and new technology adoption. He’s known for his no-nonsense approach, sharp insights and regular newsletter that unpicks technology trends in the L&D space.
Ross combines a reassuring understanding of the tech itself alongside pragmatic, human-centred thinking. Together we explore:
- How the role of L&D is evolving – and doing so differently in different contexts.
- Why most teams are still experimenting rather than embedding AI.
- How psychology, confidence and culture shape adoption of GenAI.
- Why we need to start with business problems which GenAI can help us solve, rather than starting with solutions in search of problems.
- Keeping - and elevating - human creativity and critical thinking as we develop better ways to use GenAI.
Transcript (AI generated)
[00:00:00] Sarah Abramson: Speak to the Human is a podcast that explores how we build connections with people in their professional work. It's about the human experience at work and about how to foster that connection and belonging to support people and their organisations to flourish. I'm your host, Sarah Abramson, and I'm looking forward to you joining me in hearing from our brilliant guest.
[00:00:27] My guest in this episode is Ross Stevenson, founder and chief learning strategist at Steal These Thoughts. He helps learning and development professionals to harness technology to improve performance and skills in their organisations, and is of course now working a lot in the AI space. He has a no-BS approach, helping to make technology adoption accessible and actionable.
[00:00:49] And he's a great communicator with his regular email newsletters, helping to cut through the hype surrounding AI, combining a reassuring understanding of the technology [00:01:00] itself alongside a really pragmatic, human-centred and no-nonsense style, with a sense of humour thrown in to boot. So welcome to the podcast, Ross.
[00:01:08] Ross Stevenson: Cheers. Thank you very much for that introduction. I should start getting some of these recorded and used elsewhere, 'cause that would greatly help me. When people say, what's your bio? I'm like, I, I dunno.
[00:01:18] Sarah Abramson: Lemme play you this.
[00:01:19] Ross Stevenson: Yeah.
[00:01:21] Sarah Abramson: Cool. Well, let's start by, um, if I can ask you just to Yeah. Sort of briefly tell us about the work that you do.
[00:01:27] Ross Stevenson: Yeah. So, yeah, again, it's hard to describe in some ways. I've been in L&D for, somehow, nearly 20 years. I dunno how that happens, but it happens. Um, I have been running my own show, I suppose you can call it, for about three years now. So I have a company called Steal These Thoughts, because Ross's L&D Academy was a rubbish name and I didn't really want to use that.
[00:01:49] Um, so Steal These Thoughts, it existed, as you alluded to, because I've been writing a newsletter, of course Steal These Thoughts, for about seven or eight years now. So way before it became trendy in the last [00:02:00] kind of like 36 months to have newsletters, and it's all focused on learning technology. So I've always kind of worked in this space with enterprises, startups and scale-ups around learning tech, building it, using it, making it meaningful, making it practical.
[00:02:15] And basically what I've done today is just kind of amplified what I do with the newsletter. So I have about 5,000 people that read that, for the space of learning and educational tech and how they can apply it practically, um, and hopefully get some common-sense stuff out of it. Kind of my approach is like less of all the kind of, I don't wanna say nerdy stuff, that's the wrong way to say it, but less of the technical stuff and more of the, here's how to actually use it for you in your context and what you're doing.
[00:02:41] And now I do that for clients as well. So I've been very lucky where people read my stuff, they reach out and say, hey, would you like to come work with us and help us do this stuff for technology purposes. So, uh, I do a lot of work with clients on that. And then of course, I have partnerships with
[00:02:55] different companies as well who are building products, in advisory [00:03:00] capacities and also collaborative capacities, on many kind of different things that they're doing. So what I do is very much a mixed bag, but I suppose the easiest way to kind of describe it is that I'm more, nowadays, of an analyst and researcher, kind of sharing the market movements with
[00:03:15] people in our industry who are as nerdy as I am.
[00:03:19] Sarah Abramson: Yeah. And I mean, I, I agree. I read your newsletter regularly and, uh, I'm not a technologist and it makes sense to me. Um, and I think it really is grounded in that kind of practicality and, uh, and all of that. So, yeah. It sounds like the work that you're doing is pretty broad ranging, but, um, I think fair to say that you've been anchored in learning and development.
[00:03:40] It'd be interesting to hear from you what you are seeing happening in L&D at the moment with AI adoption in particular, and how fast things are changing on the, on the technology front.
[00:03:51] Ross Stevenson: Yeah. Multimillion-dollar question. I think on the technology front, evidently, rapid change. I think we can't keep up with it, if we're quite [00:04:00] honest.
Like, people always say to me, oh, you seem to know all this stuff. I, I can barely keep up with it. It, it feels like it's changing every 25 minutes when my phone pings off and something else has happened. So I think we can definitely say, from the, the tech perspective, you know, it's completely transformed.
[00:04:15] Whether we are aware of all of it or not is another question. In terms of what L&D teams are doing, I, I think it's wide-ranging, right, and I think many different companies and teams have different views of what adoption looks like. So it's really interesting to not only collect the data that I get from my audience, where I have, as an example, this AI
[00:04:36] maturity tool where people will come on, they'll answer a few questions, and it gives me an understanding of where most professionals and teams are in terms of their maturity today. And I think as we speak now, it's had just over 500 people that have kind of answered that, and it's consistently live.
[00:04:52] And most people still sit in an exploration phase. So there's four levels, and level two is, is exploration. So if we think about that, we're kind of three years [00:05:00] removed now from when ChatGPT first came out in its version one and kind of heralded that generative AI stampede that's happened. And most teams are in exploration.
[00:05:10] But that's fine, because so many things are happening so fast, it is really difficult to understand what's meaningful and how do we practically use that to get some ROI. And the reason why I say that is because there's so many, I say quote-unquote, case studies out there where I think, well, yes, you're using AI, but is it really meaningful?
[00:05:31] And I think the kind of elephant in the room with that is it's mostly content creation. Mm-hmm. In the L&D industry, and for most industries it is at the moment, it's kind of the low-hanging fruit. It's the obvious, simplest thing to do. But I think that for the L&D industry, and maybe marketing, which shares this, maybe it's not the biggest return on investment or value, because people are already overloaded with content.
Everything's so content-centric. It's like [00:06:00] kind of elevating potentially bad practices and using a superpower to, to build even more there. So it's really interesting that some companies see that as amazing and see that as adoption and see it as, oh, this is fantastic, we are using AI and doing this.
[00:06:11] I am not in agreement with that, as you can probably sense. And I think there are other L&D teams who are really trying to solve the puzzle of, okay, how do we collaborate with these tools? And they're really clear that, actually, what that looks like is helping us do better work and helping us enhance what we're doing, not outsourcing our thinking to the tools, not outsourcing everything that we do.
[00:06:34] So it's a real kind of up-and-down landscape in terms of who's doing what, who feels like it's valuable, what's the context around that for people. I'd love to say to companies that there's one kind of framework where it's like, that's what good looks like, and that's what good looks like for everyone,
[00:06:52] but it doesn't exist. It's really not the case. Everyone has different needs, everyone has different contexts. Although, as I say, I'm not a huge fan [00:07:00] of build more content, for some people, in a one-person L&D team, that's a really, really big value proposition. For a company that's got 25 instructional designers and everything else going on,
it's not. There's, there's better ways to be using it and there's better inroads to make. So yeah, the kind of TL;DR, too long didn't read, or too long didn't listen, I should say, in that is that most people are using it, but I still think most teams are stuck with innovations that are two years old.
[00:07:31] I still see people talk about very basic things with prompting an LLM, or scratching the surface. Um, not that I think that's good or bad, it's just my kind of view on it at the moment that there's so much stuff going on. I appreciate it's, it's hard to keep up and see what is actually valuable and what is meaningful, but I think as an industry, if we look at most of the solutions, L&D-specific solutions,
[00:07:58] they're about two years [00:08:00] old still. Mm-hmm. Um, but that makes sense, because, like I say, the people in the industry haven't gone further than that. So I'm not expecting anything crazier than that to kind of exist at the moment for that audience. So, yeah, that would be, I think,
[00:08:14] Sarah Abramson: well, I guess it, it takes a while to kind of adopt.
Mm-hmm. And if you've, especially if you've invested previously in a whole way of doing things and you've built, yeah, teams around it and systems around it, it's, you know, it's not easy for organisations to move quickly, is it? But it's really interesting what you're saying there. It'd be great to get your views on whether it relates to a sort of shake-up in how we think about learning and acquiring, yeah,
[00:08:40] information, and whether this relates to a sort of different way of doing things where, you know, we, we've had content creation, we've had instructional design. But does it point to a new way of people learning, and how L&D might look in the future? What the role of L&D [00:09:00] might be, and how employees are kind of pulling information themselves and building information themselves to suit what they need?
[00:09:06] Does it, does the whole paradigm change?
[00:09:08] Ross Stevenson: The caveat is, I don't know; it's pure speculation. I'm looking at the same signs most people are looking at, I'm looking at the trends and the kind of technological changes, but I think the obvious answer for me is yes. What that actually looks like, I don't know.
[00:09:21] I'm kind of looking at what people are doing, looking at the trends and making guesses, and, I suppose, kind of pontificating. I did something the other day about what L&D could look like in 2027, and I kind of said it'd be way less production-heavy, 'cause it feels like it's really production-heavy right now.
[00:09:39] And it's more focused on enablement. And what I mean by enablement is working with stakeholders, being a performance consultant, being more of an individual that's kind of setting context in the organisation or helping people understand their context, which is kind of a huge shift as an industry overall.
We hope that most people are doing that, but I think it's probably only about five to [00:10:00] 10% that do that at the moment in time. So if, gun to my head, someone said, well, what does L&D look like in 2027? I think there is a lot more potential involvement from AI in it. Now, whether that's creation, I don't know. Maybe creation will come down, because I think what we have to kind of respect at the moment is, across all industries,
if we look at, I keep going back to marketing, but SEO is a prime example. Search engine optimisation, where, you know, Google has been the big player, like Google Search makes all the money, ad-driven, that's where the finances come from. We're actually seeing that drop now. Not huge at the moment, but what we're seeing is people go and talk to a large language model and do all their web searches through there.
[00:10:43] You're seeing the same in L&D, and I get that, 'cause why would you go to an LMS or LXP that isn't AI-enabled and doesn't have a chat interface to help you understand your context and then help you kind of [00:11:00] assimilate information, or break down that information specifically for you, when a ChatGPT, a Copilot, or a Claude or whatever can do that? You're just not, you're not gonna win that fight.
[00:11:08] Like, they're always gonna go to that LLM. Mm-hmm. So that is happening. I think that's really interesting, because you're seeing people use it personally, and, you know, I know in organisations where the battle before was, how do we stop people going to Google search and trying to find out how to do their job?
[00:11:22] It's now, well, people are going to ChatGPT to go and understand about leadership and other stuff and that. So yeah, L&D will have to change how it works, because, as I say, you're not gonna win that fight. Like, you know, a large language model is always gonna win that fight, and people are gonna go with what they know.
So, um, that's a huge shift.
[00:11:42] Sarah Abramson: What are you, um, what are you observing in terms of, um, organisational attitudes to that? Like how much organisations are trying to control what employees are doing. You know, I've heard stories, like crazy stories, about organisations saying you must not use this, and then [00:12:00] employees finding really creative ways around it, versus that kind of encouragement of experimentation, sharing information,
and kind of embracing the possibilities, even if it adds risk, because we don't quite know what people are doing yet. And, and the leaders who are setting that agenda maybe understand it a lot less than the people who are doing the experimenting. But what are you observing broadly?
[00:12:25] Ross Stevenson: Yeah. It's, it's really varied and it's varied in the, it depends who's leading that team and it also depends on. Germany, I find if they're an enterprise, a scale up or a startup, the obvious things is scale up. And startups have a bit more breathing from, they're, they're a lot more, you know, I don't wanna say less legal framework, but they're a bit more risky in terms of, yeah, you know, we'll just go and buy a bunch of tools and we'll try 'em out and see what works best.
[00:12:52] And that's kind of the attitude there. That doesn't mean I don't see that in some enterprises. I just think it really boils down to the [00:13:00] characters who lead those teams. And to your point, you were just saying there, in terms of the leaders who are championing that and who are enabling that as well, but also those who understand it,
I think there's very few at the moment, outside of, let's say, the general population's understanding of generative AI, that really understand what's going on. So it's very hard for them to kinda make any markers of what could we be doing or what should we be doing. So I think it, yeah, it's, it's really up and down, more so in the enterprise space.
[00:13:30] Of course, they're a bit more locked down because generally what you find is they only have access to one or two tools. What you still find is that people are still using the other ones on their phone. Just doing whatever. But then again, that's really no different to like, if someone said to you at work, you could only use Bing search from Microsoft, but Google's the biggest one.
[00:13:50] You're just gonna use Google on your phone. You're just gonna compare. It's gonna happen. You can't,
[00:13:54] yeah,
you can't actually stop that. I think what people should focus on more is how they can help people understand the [00:14:00] benefits of the tools they've got internally. And I don't really see that happen a lot at the moment in time.
People are chasing so many things, they don't take the time to understand: what do we actually have and what can we do with it? And then the other part of that is, you just get some that have completely blanket-banned everything, or they've locked so much down that nothing's happening. And that's a really weird space to kind of be in.
[00:14:21] There's not too much you can do with that at the moment.
[00:14:24] Sarah Abramson: It reminds me of during COVID, when some companies installed, like, tracking devices to see how much people were at their computers or using their keyboards. I was like, yeah, it is such a clear way of saying, we do not trust you to do your job and we're not gonna treat you like an adult.
You're a child and you will do what you're told. A hundred percent.
[00:14:41] Ross Stevenson: Yeah.
Sarah Abramson: Coming back to L&D, I mean, how much of a role do you think there is for L&D in developing those, um, AI skills? Whether it's prompt engineering, or whether it's confidence, or whether it's ways of sharing, um, new ideas, use cases, that kind of thing.
[00:14:57] What, what part do you think L&D can be [00:15:00] playing?
[00:15:00] Ross Stevenson: Yeah, I think they, they play a role in it with others. And the reason why I say with others is 'cause what I find generally happens is that most teams, from an L&D perspective, have just been tasked and told: you need to make the company understand AI and do all this.
[00:15:13] That's not gonna happen on their own, mostly because, you know, if we're quite open about it, L&D teams aren't tech people. Hmm. So they don't really understand how the tech works. So you need kind of this, they would call it community or coalition of people, that involves tech teams, software engineering teams, all of these people that can come together and basically say, how do we build our approach to, whatever you want to call it, AI fluency,
[00:15:39] AI literacy, and what does that look like? And then within the L&D team, it's then understanding: what does that look like from a resources standpoint? What does that look like from an experience standpoint? And what are the things that we are actually trying to enable people to do? So yes, there's the technical stuff, which, you know, predominantly you could probably buy a lot of off the shelf, or engage people to do that.[00:16:00]
[00:16:00] But then there's also the element of, how do you actually think about this? What's kind of the operating system of how you work with AI in itself? So they certainly have a role to play in that, from the enablement of the technology, but also the broader human skills element, and also the kind of psychological bit about how you now think about the way you work and what you do,
[00:16:24] now that we've got artificial intelligence kind of let loose across many, many applications and many, many areas of work. So yeah: involved, supported, but definitely as part of a larger ecosystem. But I appreciate not everyone gets that benefit.
[00:16:41] Sarah Abramson: I'm really glad that you brought up the word psychology there, because I think it's such an interesting part of this: the psychology, the confidence, the way that people feel about AI.
You know, that kind of range of attitudes that people have got at the moment. Some people are really fearful, don't like it, just instinctively think that something's [00:17:00] off, and others who are kind of embracing it. And I think, in the conversations I'm having, that kind of reflects what people are saying that they're doing: a lot of it's about confidence, as much as it's about how they're actually using AI, if at all.
And I think that touches again on what you were saying at, at the beginning of that answer, when you were talking about technologists. Mm-hmm. And L&D not being technologists, for example. And that kind of psychology of thinking, for the majority of us who haven't got a technology-based background, that this is a world that we don't understand yet, and in order to understand it, we have to somehow acquire this technology capability that feels a bit beyond us.
Do you, do you think that's true? And, like, how do we give people who don't see themselves as technologists the confidence to use [00:18:00] what's in front of them?
[00:18:01] Ross Stevenson: Yeah. I think the way I look at it is you don't need to be an expert. You just need to be savvy. And what I mean by that is kind of across the board of most tech, like we all have smartphones, we're all using different applications, but it still astounds me that most people don't understand how algorithm based platforms like LinkedIn, Facebook, and TikTok work, and their signals from that, you know, and how that decides a lot of stuff in their life and what they see and what they interact with.
[00:18:29] Even very simple things. Mobile banking, as an example, at one point there was, like, you know, outrage and uproar about moving from a traditional bricks-and-mortar model into that. But you don't even see that nowadays. And it's the same thing with AI. Like, if we look at just generative AI; AI as a whole is huge.
It's a huge topic, but the waters are muddied now, because there's been so much kind of marketing out there from product teams and marketing teams of big corporations, that people think everything that's AI now is what they see from a narrative standpoint. [00:19:00] But there's this real savvy stuff where it's like, you can spend, you know, there's many, many YouTube videos, there's many, many, what do we wanna call 'em, micro-courses, short courses,
[00:19:07] ten minutes, where you can go off and understand, okay, so large language models, which, let's say, 90% of people are using for most of this stuff nowadays, how do they actually work? So then by understanding how they work, I can know what tasks they're gonna be helpful with and what tasks they're gonna be harmful with.
[00:19:30] And I don't, you know, evidently I'm biased 'cause I'm a nerd, but I don't think that's like, you know, I didn't go to machine learning school or AI school or anything like that, but I've spent hours and I spent time with people to understand it more deeply. But on the surface, you can go there and get, okay, so that's how this works here, right?
[00:19:48] So in the future, I can go: here's my list of tasks, these are the things I won't do, because it looks like it's gonna be more harmful, and these are the things where it's actually going to excel. The reason why I say that is because I [00:20:00] think just having that knowledge opens up the world of what is actually possible and what the limitations are. And that then connects into the fact that you get people who misunderstand how the technology works, and, because of that misunderstanding,
[00:20:14] you get elements of where they start to fear it, because they think it's gonna take their job, or it's not good enough, it can't do all these things. But no one's ever said it's gonna do all these things; it's just what they see in the media versus actually understanding. So I think there's a great deal of value in taking 15, 20 minutes to understand most everyday technologies you are using, to be like, well, how does that work?
[00:20:36] And why do I get X because I do Y? And then from that, you can have a broader understanding of the systems that are around you, that you're engaging with all the time and giving your data to and, you know, that disseminate information, but then also make smarter decisions on what are you gonna use, when are you gonna use it, and why are you gonna use it.
The other thing I'd finish off with there is, it's really [00:21:00] hard, like, when some people say to me sometimes, oh, I'm not good at tech, I'm not good at digital. I'm like, it's not 1998 anymore. You can't really say that; everything is digital. Yeah. So when you're in an organisation, you can't be turning around to your boss or your team and saying, I'm just not good at tech.
I'm like, yeah, I dunno how that's gonna last here for you that long, to be honest, because we're all using smartphones, we're all using different platforms. So, as I say, it's not about being an expert, it's just being savvy. You're using all these systems, you're contributing to these systems. So it kind of makes sense to me.
Like, don't you wanna learn a little bit about how those systems work and what they're doing and how you get all of these different bits? So, um, yeah, I think, in sum, it's just that. I've always said to L&D people, you know, you do the learning, but in a digital world, tech is such a big part of everything nowadays.
[00:21:50] So the boring basics, if you wanna call it that, just having that in place will help you go far, not just in L&D, but in your career in general.
[00:21:59] Sarah Abramson: I completely [00:22:00] agree. I think that, and it comes back to that point about psychology that,
[00:22:03] Ross Stevenson: mm, I
[00:22:03] Sarah Abramson: think a lot of people, um, and I've certainly experienced this kind of feeling when I've been starting a conference room listening to somebody talking about how AI is gonna.
[00:22:13] Take over the world and the hockey stick and all of that.
[00:22:16] Ross Stevenson: Mm.
[00:22:16] Sarah Abramson: But it feels overwhelming. And, uh, you feel like you're not sure if you're gonna have a job, and where it's gonna go, and how on earth you're gonna upskill yourself to be able to deal with it. Mm. And I wonder if, because you are so right, we all of course use technology all the time, but I wonder if
[00:22:33] What we need to do is kind of just break it down a little bit into smaller chunks. Like it feels like we talk about AI as if it is this one thing,
[00:22:41] Ross Stevenson: which of
[00:22:41] Sarah Abramson: course is nonsense. Yeah. That is somehow taking over the world and is gonna change everything that we do and take our jobs away and all of this.
[00:22:48] Whereas actually the way we've experienced technology over the last 30 years, in particular with the advent of smartphones and
[00:22:56] Ross Stevenson: mm-hmm.
[00:22:56] Sarah Abramson: the internet and all of that, is that it kind of feels much more like one thing at a [00:23:00] time. And if we think back to how things were 30, 40 years ago, it would feel absolutely impossibly massive if we were gonna think about that as one step.
[00:23:10] Yeah. But we've experienced it as doing one thing at a time, figuring out one thing at a time: learning how to use a basic mobile phone, and then a smartphone, and then all the, like you say, mobile banking apps and that kind of thing. And
[00:23:23] Ross Stevenson: yeah.
[00:23:23] Sarah Abramson: I wonder if it's just a sort of mindset and confidence around
[00:23:28] chunking down to what you can cope with, experimenting with that one thing, without feeling like you have to understand it all.
[00:23:37] Ross Stevenson: Yeah.
Sarah Abramson: How, how does that reflect the way that you work? Do you find ways to, to help people think a little bit more that way?
[00:23:45] Ross Stevenson: Yeah. The, the easiest way to kind of help people think about it is to relate it to everyday things they're already using.
[00:23:52] It is funny, right? 'Cause we say the word AI, but realistically, in the zone that we are talking about now, all the hype, or not the hype particularly, nowadays, but all of the [00:24:00] fanfare, is about generative AI. So generative is what it says on the tin: it's about generating new content from existing data sets, whether that's images, videos, writing. But that's one subset of AI, right?
[00:24:11] AI in itself has existed since the late fifties, yeah, in some form. And to your point of those iterations, that's how it's gone. But the problem is, 'cause it's such a specialist background subject, the general population don't see that. They don't see, you know, there's been predictive artificial intelligence.
You know, we've had machine learning. Machine learning sits all around; a prime example is Netflix. Netflix has a recommendation system that basically says, you go watch a true crime drama, it then goes, all right, you must like true crime dramas, so here's another 50. That's machine learning in practice. And you have machine learning on games consoles, where it understands what people like, what they don't like, how they play, what they play, et cetera.
So there has actually been this incremental evolution of AI. It's just that, because it's not a front-facing technology, people haven't seen that. But because generative AI was the kind of one [00:25:00] species of AI, if you wanna call it like that, that blew up in the public eye, now it all kind of feels like this is all AI. But it's not.
[00:25:07] It's just generative. It's just we are creating stuff. But there's many things that are going on, from, you know, autonomous AI and self-driving cars as an example, even with AI agents. You know, self-driving cars in their form have existed, again, for like a decade plus, and no one's been having fanfare about that or going crazy
putting mad LinkedIn posts out about, did you see that self-driving car, and all these new releases. So it has happened; it's just happened in a strange way. Whereas, like, the phone example, which is a great example, we've all seen that and gone through it as kind of a general population, but the AI point we haven't.
So it all feels, to most people, overwhelming at the moment, where realistically, and I know it's hard for people to do this, it's: slow down, pause, take a step back, and then look at, well, what are we actually dealing with here? And what we're actually dealing with is generative, which is, like I say, all around.
That's why we hear people talk about writing emails, writing documents, creating [00:26:00] cute cat images and videos and all of this kind of stuff. It's because it's very firmly generative, which is one species of AI, and it's very, very popular at the moment, because for the general population it has the most application.
'Cause we're all creators, we're all builders. So it makes sense that's gonna be more popular than, hey, here's a self-driving car and how autonomous AI works, or here's how predictive AI works to look at weather patterns. People don't care about that, because they can't use it in their daily lives.
[00:26:29] So I think there's that element of which I get, it's hard, but it's like you have to kind of cut through the noise to actually get back to, well, what is actually happening here? Like, you know, you've got all people going mad on social media and et cetera. Yourself. You have to kind of do your own research to be like, well, what is actually going on?
[00:26:48] 'cause what is not happening here is Terminator or Skynet or Senting AI that we see in films. Like at all. Like I can't even get chat GBT after 20 iterations to barely create a decent [00:27:00] graphic to go on social half a time. So I'm not expecting it to take over the world or do anything at this moment in time, however.
[00:27:07] the perception on social media, which unfortunately can drive a lot of misinformation and disinformation from people who have no idea about the topic and have never researched it, causes this fanfare of, oh my God, I'm being left behind, or, oh, this feels so overwhelming. So it's kind of like niching down to, you know, finding people who've actually been doing and building this stuff for like 20, 30 years and actually know what they're talking about. Not the
[00:27:35] latest 23-year-old who opened up an AI agent account and decided to build a workflow to write more blog posts. That's not gonna be the way you're gonna find it, but I appreciate that that is hard, and you have to know where to go and what to look for. But what I wouldn't want people to feel is, you know, this kind of being left behind.
[00:27:56] 'cause we're all, as you say, on this evolutionary journey together. No one, [00:28:00] even the people who are creating this stuff, you know, they're doing more bits in that realm, but we're all experiencing this in real time, as in, how is it actually giving value? What's happening? We don't know. There's no one further ahead who can actually say.
[00:28:12] So yeah, there's no concrete answer to that, but my thing would be, try and do your own research and focus on the facts rather than all of the clickbait that we see online, which can just take you down several hundred rabbit holes that are just not gonna be beneficial to you in any way.
[00:28:31] Sarah Abramson: Yeah. 'cause one of the things that I wonder about, and it'd be great to get your perspective on, is whether that sort of sense of needing to keep up with things, that there's this kind of momentum you've gotta be part of, leads to a sort of, oh, we must adopt AI kind of mentality. So then you end up with a solution in search of a problem.
[00:28:49] Whereas, I mean, actually it's like any other technology: it's a tool, and we can use it in an incredibly powerful way to solve problems better than [00:29:00] we ever have before. But those are problems, and we need to identify that there's a problem and that AI is a good solution for it.
[00:29:10] Ross Stevenson: A hundred percent, yeah.
[00:29:10] Sarah Abramson: It'd be good to get your perspective on how people seem to be approaching that.
[00:29:14] Ross Stevenson: Yeah, definitely. I mean, I'd love for them to approach it that way, problem first, but they don't. That's just human nature, right? We want instant gratification, we've got FOMO, and we want to be seen as, you know, we're in this thing, we are part of this group and we're not being left out.
[00:29:28] So yeah, I preach it to people consistently, and you've probably seen the newsletter, but it's not really a default operating mode for most people. It's like, where are the tools? Who's using what? What's out there? Let's go and use that. And I spend a lot of my time trying to convince companies not to do that, and to focus on, well, what are the things that you are actually solving?
[00:29:51] That's even what I do in my own practice and my personal use of AI. I haven't got every single AI tool. I've got very few and I use very few, because I'm very much [00:30:00] looking at, what are the things that I'm trying to get help with, and what do I need for that? There may be some tools, you know, further down the line that I don't use now that I might engage with.
[00:30:10] A prime example of that is AI avatars, which have been around for ages now, like three or four years, and I never used them until about four months ago. And that was purely because a problem had arisen and I thought, actually, that would be a good solution now. But I wasn't jumping in when they first came out and everyone was making terrible robotic versions of themselves that barely move, look creepy and don't really do much,
[00:30:36] 'cause it had no tangible benefit. So the thing always is, problem first, tools later. However, and people working in the same space as me, consultants advising companies, will definitely identify with this, that's really, really difficult when senior teams are kind of really high on the Kool-Aid and they just wanna make a statement, and they wanna sign big deals and get things out there.
[00:30:59] But [00:31:00] in L&D, that's really no different. Teams are used to senior leaders going out buying big tools, whether it's HR systems, whether it's LMSs or ERPs, just pushing them out to the company and then saying, make it work. So it's not anything new in that. It's just that, you know, if we could get people to that problem-focused mindset, it would be fantastic.
[00:31:26] But of course, as I'm saying, there's a lot of cultural pull and cultural norms to fight against, which is, you know, really, really difficult.
[00:31:34] Sarah Abramson: Yeah. And so, kind of anchoring it in that L&D space that you work in, and thinking about that kind of problem-based approach, what examples are you coming across of really good ways of solving L&D-centred problems with AI?
[00:31:52] Ross Stevenson: Yeah. I think there's ways I'd like people to do stuff, and there's the overwhelming thing that I see people do. The overwhelming thing that I see people do is the [00:32:00] content stuff.
Mm.
[00:32:02] I'd love to see that change. I think, like I said before, it's kind of a default. It's easy to do, so most people do it and kind of wave the 'we've adopted AI' flag. Um, on the more innovative ways that I'm seeing people use stuff,
[00:32:15] examples are helping with the load of research and reporting in organisations. Whether that's from an assistant standpoint, using an AI agent, as an example, to interact with users across a company, to survey them, to pull data, to then collate that data, to create something out of that data, and then give the L&D team actions to take from those insights.
[00:32:40] I think that is really, really interesting. The other side is, I'm still kind of on the fence with AI avatars, but I do think they have a place if used well. What I don't think is using them well is basically just converting your PDF or PowerPoint presentation into an AI avatar to talk [00:33:00] over it.
[00:33:00] I do think there's more interesting use cases. As an example, I was at a learning experience, an online one, and an avatar was in the waiting room to greet people and talk to 'em about the experience, on what was gonna happen. And I thought that was a really interesting way of using it.
[00:33:18] And I thought, okay, that's cool. So it was like greeting you, getting you set up, getting you prepared for what's gonna happen. And then it came in at different intervals in the session, but it was also used in these interesting breakout rooms, where we'd join a breakout room and you could interact with the avatar and ask questions.
[00:33:37] It would ask you about what you were focusing on right now in the actual breakout room, and it would offer some kind of anecdotes. I thought that was a surreal experience, really interesting. Yeah. But I've not seen many people do that. I think that's a good one. And then my final one would be, I think, re-looking at the way we assess, whatever you call it, learning, right?
[00:33:55] I think for most companies and most education establishments, unfortunately in [00:34:00] my eyes, it's more focused on recall, not your reasoning. So it's all about quizzes and polls, and it's like, what do you remember after doing three hours of this experience or this online thing? I think that's just pretty poor in this day and age.
[00:34:15] Just giving people multiple-choice quizzes. Whereas actually, and kind of related, I suppose, I've seen this in the workplace once or twice, I was doing a certification for machine learning, funnily enough, with Google. And at the end of doing this online certification, you did some kind of
[00:34:31] interactivity as well. A lot of it was online, and I thought, oh great, here comes the 50-question, you know, tick-box thing that I have to do. But it wasn't. Instead, what happened is I had an AI assistant that came to work with me, and what it would do is ask me about what I'd taken away from the course.
[00:34:53] And it would ask me to articulate to it, well, what do you understand about this bit? Tell me how you would explain this to [00:35:00] someone else who hasn't done this course and doesn't understand any of the terms. What analogies would you use? Why would you use those analogies? And we ended up having a conversation for about 45 minutes,
[00:35:12] going through what I'd actually been learning and then putting it into practice immediately. It really made me think, oh my God, this feels like real pressure, 'cause I've gotta think about how I would explain this to someone else, and what analogy I would use, and why I would use that analogy.
[00:35:27] Does it make sense? And then it was assessed on that, I suppose, but it was just a really different experience where I was like, oh my God, okay, so no questions. But I had this whole thing where it was like, okay, yeah, I understand that. And then it would also say, well, here's maybe how you could improve that potentially, and here's some other ideas.
[00:35:46] And I was like, oh man, that's really smart. Obviously I expect that from Google. But I thought that was just a really interesting contrast with going through a learning experience where people are literally just, you know, pressing next until they [00:36:00] hit the multiple-choice questions. There was a whole conversational element there, which I took a lot away from.
[00:36:04] I actually think I learned a lot more from that experience because of that conversational element at the end. So those are the three I would like to see more of. I don't see a lot of that, unfortunately. I see a lot more of what I call the Amazon delivery model of one-click content creation, where it's like,
[00:36:22] create me a PowerPoint, or create me a course. I'm not really sold on those. I'm more looking at, what are the features within tools, or the standalone tools, that are able to help people think differently, do things differently, and also nurture that human reasoning. I think more of that is coming.
[00:36:44] We're seeing, you know, what I'd call proper AI coaches coming. AI tutors are starting to slowly make their way into the marketplace now. So I think over the next six months it will be really interesting to see how they land, how people take to them, [00:37:00] and some of the case studies that come out of that.
[00:37:02] Sarah Abramson: That's super interesting. I think that's such a brilliant example, that third one especially, about finding a way to make it so that it makes you work harder, that you've got to be more analytical, you've got to be questioning, you've got to have those critical thinking skills, much more than regurgitation of memorised information.
[00:37:21] And that's really powerful. That's much more helpful to us. And it makes me think that, you know, over the last few months there's been a lot in the media, in general public discourse, about a sort of loss of creativity, which I think is
[00:37:35] probably well-founded. Because if you do just use the one-click approach that you're talking about, you know, and get it to write for you and you just publish it, you are skipping a step.
[00:37:46] And if that's how we started to bring it into education, or to replace human creativity, then yeah, that is something we should be questioning. But I struggle to believe that we are gonna allow that to happen, because humans, [00:38:00] we're innately curious, we're innately creative, and I struggle to believe that we are gonna want to lose that.
[00:38:08] I think we just do think that way. We want to find things out, we want to explore and ask questions. So I wonder how it will develop over the next months and years, with that kind of harnessing of the power to make us work harder, not work less. I dunno. Or maybe you do both at the same time.
[00:38:29] Ross Stevenson: Yeah. But that's the beauty of this, isn't it? We don't know. We're seeing it evolve, and everyone's building their behaviours with it. And I fully agree with you. I mean, even Sam Altman, the CEO of OpenAI, which created ChatGPT, even he has said that, you know, human writing, human experiences are now the premium.
[00:38:50] Because what's happened is that everyone now effectively has, kind of like Microsoft Word, access to some form of intelligence on demand. So they can [00:39:00] expand some of their capabilities. It doesn't necessarily mean they're building more skills, but they can expand their own capabilities for a while.
[00:39:06] I think you're already starting to see a little bit of backlash of, well, I want more human infused in that. I want, you know, has a human reviewed this, has a human edited that? And I do agree. I think, you know, the one-click thing, definitely in L&D, it feels like it's everywhere now. It's rampant.
[00:39:24] Like, everyone's doing everything to turn that around. But we'll definitely get to a point, I think, where it'll be like, right, how do we bring more human into this? And the prime examples of that, you already see on social media and stuff, where people can tell a mile away if you've used an AI tool unedited. Like, I've got no problem if you use an AI tool to write something, but you've actually
[00:39:44] proofread it, you've reviewed it, you've added your own bits, and, you know, you can tell it's not just completely AI-generated. I think there is an issue if you just go to these tools and copy and paste whatever's there, because everyone's using these tools. Yeah. So they know the structures they're using.
[00:39:59] It's [00:40:00] so obvious. Yeah. Like, you know, I could be a hundred miles away and I could spot something that was a complete unedited ChatGPT output. And like I say, that is the issue. But I agree, I think that will change, because people will experience resistance to that. Mm-hmm. People will then look at it as, well, I know you've not done that, so I don't value that anymore.
[00:40:20] Yeah. So I'm not gonna give my time to that. And I think that's where the shift will happen. And it'll be the same with courses and all of that. When every course looks the same, feels the same, sounds the same, eventually your audience is just gonna go, well, this sucks, we need something else, and they'll vote with their feet.
[00:40:36] They'll go elsewhere, in terms of using their own platforms or creating their own experiences with large language models. And you will just see all those courses that feel like, oh great, AI made 200 courses, fall off a cliff, because no one's gonna use them. But that will come.
[00:40:54] It will take time, but in some places, you know, it probably already is happening. People are [00:41:00] already saying, look, I don't want this AI slop, as it's kind of been coined nowadays. They want something that's more human. No problem if there's AI involved in it, just don't get AI to do it all and make it so obvious.
[00:41:13] Sarah Abramson: Yeah.
[00:41:13] Ross Stevenson: It's all AI: AI-generated image, AI-generated text. I mean, I hate those personally. When I see them, when I go to conferences sometimes, I'm just embarrassed for people, 'cause I'm like, it's obvious what you've done, and there's nothing there of you, which is really unfortunate, because,
[00:41:30] as we're talking about, you shouldn't really lose that. Your humanity is fundamentally your biggest advantage in all of this. Mm-hmm. Your humanity, your personality, your ability to communicate and articulate stuff. If you lose that and you just become part of a robot farm, then you're in a bit of trouble, unfortunately.
[00:41:45] Yeah.
[00:41:46] Sarah Abramson: It's kind of boring and bland and soulless, isn't it? My colleague James just wrote a really nice article, actually, about averaging, and how there can be a place for using AI for averaging, where you just want to do something that's [00:42:00] good enough, gets the job done, maybe cuts the costs of doing it. You need to do it, and it doesn't matter.
[00:42:06] In fact, it could be an advantage that it's average. But there will always be things where you don't want it to be average. You don't want your voice to be average. You don't want your creativity to be average. You wanna cut through with something that people pay attention to, or do something differently and creatively, innovatively. And that's where I guess we should be questioning why we'd be using a particular tool
[00:42:30] if it's giving us a result that isn't that. This comes back, doesn't it, to the focus on problems: what are we trying to do, and why? And I think we should get in the habit of asking ourselves that first, before jumping into thinking we've gotta use this 'cause everyone else is.
[00:42:45] Ross Stevenson: Yeah. And that's the thing.
[00:42:46] Right? And the other part of that, which is really interesting in saying good enough, is that as a human you can take that good enough and mould it into something better. It's really interesting the way people talk about AI sometimes, 'cause it's either [00:43:00] perfection or nothing. People don't seem to be happy in saying, you know what, that helped me do 20% of this task and it made it easier for me to do the other 80%.
[00:43:11] It's like, if it doesn't do a hundred percent of it, it's rubbish and I don't want to use it. And that's a really odd thing to me, 'cause I always think that in even my own work, where, you know, I look at a lot of data, look at research, look at a lot of reports.
[00:43:24] Of course I'm using AI to help me with that, but I'm still using my brain. AI makes my work maybe 40% easier, but I've still gotta use that 60% of my time and my brain to then go over the bits that are interesting, to dig deeper, to make connections and correlations, and to find evidence for that. But yeah, I just find it strange. And maybe it goes back to the culture in general of
[00:43:49] expectation of instant gratification and the one-click thing. It's like, unless it does it all, and unless it does it to the quality that I want, then I'm not interested. And I just think that's such a [00:44:00] shame, because there's so much power in the tools, and the biggest thing of it may be that it helps you get from good enough to great.
[00:44:10] And, you know, you can be a part of that process. So yeah, it's interesting watching that happen, and we see it in all fields, like I say: L&D, sales, marketing. It's really interesting to see how people react to tools. Being happy with, maybe this is gonna be a 10% edge,
[00:44:27] just doesn't seem to be the thing now. It's like, unless it does all of it for me, I don't care. And that's a strange thing for me.
[00:44:34] Sarah Abramson: I wonder if that should kind of bring us back to thinking about psychology, and I guess culture really. It should, I think, give us confidence that we can feel optimistic about how we use these tools.
[00:44:48] Because, just as you're saying, that's saving you time, but the work couldn't be produced without Ross in it. It would be [00:45:00] bland or rubbish or not very thoughtful, or not bring the critical thinking to it. It would be AI and not Ross, right? So you're not gonna lose your job, because we still need the Ross in it.
[00:45:07] And I wonder if that eventually will kind of come out of this slightly panicky feeling about AI, and we'll think about how to use it.
[00:45:15] Ross Stevenson: I think it will, and you'll see, and again, this is obviously speculation, I haven't got a bloody crystal ball, I don't know anything. But if you think about social media, and how that created this whole different economy, the creator economy, people understood the technology and how to use it and how to build platforms and brands, whether good or bad. It's a very similar thing with AI.
[00:45:36] Quite honestly, if you just expect AI to do everything for you, and that's what you seek with it, you probably will be out of a job. It's not gonna help you. But if you seek to be much like a lot of these people, I think the industry definition is creators, on these platforms, who have sat there to understand, how do these systems work, [00:46:00] how do I get these systems infused with my personality, and then use that as a package to reach people.
[00:46:06] That's the thing. 'cause when you choose to follow certain creators, or read their work, or watch their videos, it's 'cause they've got a personality that they've infused with the technology that helps them distribute those messages. I think it's the same thing. I think what we'll see is people will rise out of this
[00:46:25] who are the ones who will say, okay, I understand these systems and tools, how do I then infuse the things that I'm already good at and pair them up with this so I can be better? And that can be very simple things. Like, maybe you are amazing at creating PowerPoint decks or whatever, but you suck at storytelling.
[00:46:44] Then maybe AI can come into that and help you, as kind of a partner or a coach, to be like, well, how can we take you from really good at visualisation and add a storytelling element onto that as well, so you can enhance it? But then you are the one who's gonna be doing the [00:47:00] storytelling in that as well. And vice versa:
you could say you suck at presentations but you're a great storyteller, and do the same. It's a very generic example, but I think it's that point: the people who will do well are the ones who understand the systems and use the tools to amplify what they've already got, amplifying their human skills. Not saying, right, I can just sit on the beach wherever now, I'm gonna outsource everything to you, and expect that I'm gonna have
[00:47:25] some kind of standing in the job market or the world, because effectively you then have no USP. What's your USP compared to everyone else if everyone starts automating everything? Which is why it's quite funny, for me anyway, to listen to a lot of these people who are like, automate your life, here's how to automate all this stuff. Like,
[00:47:43] I'm not really sure you wanna do that. I mean, you wanna get some help with stuff, but I don't think you wanna automate everything. You probably want to automate, and get support on, the backend bits that stop you [00:48:00] from using your human skills and doing the human stuff. But like I said, unfortunately, I think some of that in the cultural zeitgeist at the moment has got lost.
[00:48:02] And as you were saying before, I think that equilibrium will come back. Mm. 'cause eventually people will get over the gimmicks of, well, here's loads of AI video, or here's loads of AI imagery on its own, 'cause that'll get boring. But using that within someone's video, or an experience, or a documentary
[00:48:23] that's got human elements, is completely different. Same with music. I think it's just adding that into the things that we do, and then people will appreciate it. But those things purely on their own, you know, it's a carbon-copy factory.
[00:48:37] Sarah Abramson: Mm-hmm. I think, uh, just reflecting on a personal level, one of the things that's
[00:48:42] not really stopping me from using AI, but means I'm not using AI as much as I could be, actually just comes down to imagination. Because, you know, when you've built up habits and ways of working and particular approaches, and how you typically, [00:49:00] sometimes even just on autopilot, think about
[00:49:02] going about a particular task or job, or approaching whatever it is, you've probably got quite ingrained ways of doing that. And this requires a bit of a shake-up, constantly, if you are gonna be questioning and thinking. And it requires a bit of application of imagination to think, oh, I could be using AI for this.
[00:49:25] And that's where I think, for me, it goes beyond being a tool and becomes a way of working. Personally, I don't have any sort of fear around AI at all. I'm really intrigued by it, I'm completely open to using it, but I haven't quite got my head around the times when I could be using it. I just don't always stop and think, oh, this would be a great use of it. I suppose maybe over the next
[00:49:57] period of time, however long that is, [00:50:00] as we see each other doing things, experimenting with things, and we see use cases, that kind of thing gets easier for everybody. So, and I suppose I can get to asking you a question: are you seeing good ways of people sharing ideas and changing habits, just by communicating with each other about how they're working differently?
[00:50:21] Ross Stevenson: Yeah, definitely. That's why I try to be, as much as possible with my own work, quite open, and just be like, you know, here's something that I do. Sometimes I think no one's gonna care about that, and then they do care: oh, that's really interesting. Like, oh, I didn't realise that would be interesting to other people, but we'll share it anyway.
[00:50:37] But I see the same thing, you know? But again, going back to the point, you really have to cut through a lot of the noise and rubbish and snake-oil kind of stuff that's out there, to see who's using this in intuitive ways. And it comes in many domains. Like the small businesses
that use NotebookLM to onboard employees in their retail [00:51:00] business, or to help them keep track of trends, you know, across sales in that organisation, or that small business I should say. I see little things where I think, God, yeah, that's so interesting, what they're doing there. Or I see other bits where companies are taking
[00:51:16] big reports, or big articles or research, and putting them into different AI tools to keep a note of all of these different trends, tracking them and how things are changing, how the ways we work are changing, and how some of the ways we interact are changing. And I just think, again, that's a really smart, long-term and valuable way of using it.
[00:51:37] And it goes beyond the, still great, but very simple use cases of write me an email, or summarise this page. I think about, you know, spending the time to read an article yourself, then chucking it into an AI tool, getting that AI tool to give its own view, then giving your views, [00:52:00] and going back and forth on that.
[00:52:02] I saw someone do that many years ago, right at the beginning of ChatGPT, and thought, God, that's really interesting. And I still do that to this day. So it's little things like that, where I think it's a thought partner, to your point, kind of opening up other ways of working and thinking, playing devil's advocate.
[00:52:18] And what I really want and what I really like about a lot of this is that in ways, if you use it. Intelligently, it can definitely surprise you. And in my opinion, I think it's made me at times think a lot more critically than I would've done, perhaps on my own. Hmm. Especially if I was just doing something in my, you know, my notebook or whatnot.
[00:52:39] And it also exposes me to diverse opinions or different thought patterns that I wouldn't have had. And I think that for me, as a learning professional is gigantic. Mm-hmm. Because behavior change. Yeah. And that's really what you want. And. The capability is there. It's just getting beyond that veil of those basic things that are helpful.[00:53:00]
[00:53:00] But like I say, it's then just picking up stuff from other people who. Um, and you know, like if, if anyone wants to use it, it's probably the, the, the best hack I found for it. Like, I literally, like I'm on Reddit, finding stuff on people. Like if you go to Reddit, artificial intelligence, like subreddit, chat ccpt, people will post loads of stuff that they do where they're like, Hey, I built this thing, or, oh, this is what I did at work to do X, Y, and Z.
[00:53:22] There's like so much stuff there where you're just like, it just inspires you all the time. And some, there's, sometimes it's rubbish. It's being honest, but there's a lot of stuff there where you think, oh my god, that's really smart. Why didn't I think of that? Or there's ways that people are working with it.
[00:53:33] So I think all of this stuff is around, it's just going to look for it. And then the most important thing, of course, is doing it yourself. Like you can go and watch videos all day long or do whatever, but if you don't go and try it yourself in your actual practice, it's, it's not really kinda much used to you.
[00:53:51] So again, it'll be like, don't bother going sit there to watch a tutorial you're on using midjourney to create loads of images. If your job is not a graphic designer [00:54:00] or a web designer or something that's not really gonna help you, it might find it fun, but it's not actually gonna help you. But you know, if you're an accountant and you're working with Microsoft 3 6 5, and actually they've now got AI inside Excel, that's something that's really interesting, probably important to you to help you on your workflow.
[00:54:16] So it's just making those really kind of like clear decisions on what do I say no to so I can focus on the stuff that's actually. Gonna help me. Yeah. In some that, I mean there's, there is inspiration and case studies everywhere. I think you just, you have to look for them in places. Maybe you won't always go and look, and I don't always think social media is the best place in that.
[00:54:37] 'Cause every five minutes someone wants to tell you they've got a new AI tool that's gonna kill everyone, or they've got a new process that ends the process they spoke about the day before. But things like, you know, I would say Reddit, communities on Slack and other places, are probably the best place.
[00:54:53] 'cause people are just randomly sharing stuff going, oh, I did this and it worked. You're like, man, that's, that's really smart. I'm gonna steal that and [00:55:00] try that as well and see how it goes.
[00:55:01] Sarah Abramson: Yeah, I mean, from everything you were saying, it really stands out that there are these key skills and ways of thinking emerging that are around being open-minded, being outward-looking for ideas and input, and being willing to experiment.
[00:55:17] And I think those kind of key capabilities seem to be, mm-hmm, really, really crucial. And, uh, yeah, I mean, those things are great, aren't they? And I think if we can get comfortable with them...
[00:55:28] Ross Stevenson: mm, a hundred percent. Yeah.
[00:55:29] Sarah Abramson: This has been so interesting. I mean, I, I think there's obviously a ton more we could talk about.
[00:55:35] Um, but I'm really grateful for your time. Um, it's been, it's been great to talk all this through with you. Um, I always ask our podcast guests the same last question,
[00:55:45] Ross Stevenson: Mm.
[00:55:45] Sarah Abramson: Which is speaking to you as a human.
[00:55:48] Ross Stevenson: Mm. What's,
[00:55:48] Sarah Abramson: what's exciting you at the moment? I mean, that might be technology related or might be something completely different.
[00:55:53] What, what are you focused on? What you are looking forward to and motivated about? In or outta work? [00:56:00]
[00:56:00] Ross Stevenson: Um, that's a very deep question. I'm not even sure what the answer would be.
[00:56:06] Sarah Abramson: You can answer it superficially if you prefer. That's absolutely fine.
[00:56:08] Ross Stevenson: No, I can't. It sounds awful, but I kind of live day by day really, and just see things as they come.
[00:56:12] But um,
[00:56:13] yeah.
[00:56:16] What am I looking forward to? Oh my God, that's got me stumped now. Um, I mean, as a human, it's cheesy, it's just most things. It's health and happiness and all of that cringey stuff that people always say. I mean, that's the main thing that I focus on and optimize for, really.
[00:56:33] And then, if we tie that into AI, I'd love to see more innovations from an AI perspective that are less about how, you know, some tool can write loads of posts on LinkedIn for someone, and more about medical care and advancing stuff there, and, you know, doing stuff in that kind of field really.
[00:56:52] I'm more, uh, looking forward to that. Yeah. But for me in general, as a human, it's just, um, yeah, just trying to be healthy and [00:57:00] happy. As cringey as that sounds, I can't think of anything more profound than that.
[00:57:03] Sarah Abramson: No, it's not cringey at all. It's true. And it's fundamental, isn't it? However clever technology gets, we're
[00:57:11] all human, so it matters, a hundred percent. Thank you so much, Ross. This has honestly been such a pleasure and, uh, mind-expanding, but in a way that doesn't feel overwhelming. It feels like things that we can do and think about. So I'm really grateful to you. Um, thanks for helping us think it through.
[00:57:30] Where do people find you?
[00:57:31] Ross Stevenson: Yeah, the easiest thing to do is, um, LinkedIn. I'm pretty easy to find on there; I'm pretty active. So just Ross Stevenson on LinkedIn. I'll send you a link, but I'm sure you'll share it anyway. And then stealthesethoughts.com. That is where I keep all of my work, so articles, videos, all of that good stuff that people do nowadays.
[00:57:51] You can find it on stealthesethoughts.com. Um, we've alluded to it: there's a newsletter. I'm not gonna force you to sign up to it, but if you wanna sign up and get some, I should say, [00:58:00] mostly sarcastic takes and jibes on stuff and technology, then by all means, um, go and do that. 'Cause I'd love to have you as part of that, and to
[00:58:08] learn about some of the things that I'm doing, as we've discussed today. But, um, yeah, that's where I hang out.
[00:58:12] Sarah Abramson: That's fantastic. We'll share all of that in the show notes. Cool. And, um, thank you so much for joining us today.
[00:58:18] Ross Stevenson: Thank you for having me.
[00:58:19] Sarah Abramson: Uh, hope everyone, uh, who's been listening has really enjoyed this and got as much outta this episode as I have.
[00:58:25] Uh, please do like, subscribe and share the episode, and come back for future episodes too. Uh, bye for now.