How AI will change JOBS and HIRING & how to PROTECT your work and your FUTURE job.

Speakers
Christopher Lind, Dave Erickson, Botond Seres

Dave Erickson 0:03
Trying to figure out how AI will affect your current and future job prospects? Is your head on the chopping block or are you ready for a bright new futuristic job? On this ScreamingBox Podcast, we are going to examine how AI is going to affect people's jobs as well as their motivation and emotional state. Please like our podcast and subscribe to our channel to get notified when the next podcast is released.

Afraid that AI will take away your job? Does that affect how you behave around your boss? What should you do? Welcome to the ScreamingBox technology and business rundown podcast. In this podcast, I, Dave Erickson, and my knowledgeable co-host Botond Seres are going to divide and conquer which work AI should do and what should be left to the humans, with Christopher Lind, founder of learningsharks.com. On Learning Sharks, you will find the latest videos and discussions on developing skills, technology, navigating careers, corporate culture and finding harmony between personal and professional life. In addition to Learning Sharks, Chris is currently Vice President, Chief Learning Officer at ChenMed, where he is leading their healthcare enterprise learning strategy to improve the skills, performance and well-being of their employees. Prior to that, he was head of global digital learning at GE Healthcare, where he led the organization accountable for the digital transformation of learning and talent development for the global commercial and marketing functions. Well, Christopher, your background certainly indicates that you have a lot of HR and digital learning experience. Today, we are wondering a lot about how AI is going to affect people's jobs, as well as their motivation and emotional state. Coming from the HR side, how do you see AI affecting today's job market?

Christopher Lind 2:07
So it's been interesting coming into 2024, because 2023 was very much a watch-and-wait mode. The technology exploded; anybody who was alive in 2023 feels like, holy moly, the tech was moving really quick. But a lot of companies weren't ready to take action on it yet; they weren't really sure, they were kind of experimenting, seeing what was possible. I think 2024 is when we're going to start to see companies making more decisions. I was involved in some research at the end of last year that found roughly 45% of senior leaders are anticipating disruption to jobs in their organizations in 2024. So it's a pretty sizable chunk of change that I think we're going to see, and I don't think it's really going to slow down from here on out. But I think we're gonna make some mistakes along the way, if we're not careful.

Dave Erickson 3:05
What!? Humans make mistakes?! I can't believe it!!

Christopher Lind 3:07
I know, we have a track record of never doing that. So this will be a first in history where we make some mistakes as we try and move really quick.

Botond Seres 3:16
Yeah, so I mean, I think the original idea behind the end goal of AI, which is an AGI, is to stop making mistakes and just act in a way that makes sense for everybody. But I don't suppose that's gonna happen in our lifetimes. It may, though, who knows? Anyways, Christopher, in your role as Chief Learning Officer at ChenMed, how do you incorporate AI into the enterprise learning strategy to ensure employees are equipped with the skills needed for the evolving workforce?

Christopher Lind 3:54
So on the employee side, we really have followed similar approaches to other organizations where, you know, this last year was about trying to help introduce people to it, because, I don't know what your experience has been, but it seems like there's a lot of polarity. Either people are terrified of it and they're sticking their head in the sand, staying clear away from it, or they're completely going gangbusters with it and really using it poorly. So last year was just trying to help introduce people to this idea of what it is, and again, AI is a big umbrella term, as you both know. So generative AI was really kind of the introduction for folks: Hey, how do you use this thing, and when is it appropriate? When is it not? Even just some of the basics of what kind of information should you be putting into this? I mean, that's a big concern for a lot of information security officers, who were going, We can't have corporate intel plopped into these, you know, online platforms. So some of it was just really more awareness. Yeah, it's a terrible idea.

Botond Seres 4:59
That's so much.

Christopher Lind 5:00
So dangerous, so dangerous. So really, last year was a lot more around awareness of what this is, things like that. You know, as we move into the next year, in healthcare specifically, we're really trying to get our arms around it in a number of different ways. I mean, I'm integrating it with my teams around process efficiencies. But then we're also looking at it from a healthcare perspective: how can it optimize and automate some things, and where should we not use it for that? Like, where should that human patient care really stay top of mind?

Botond Seres 5:35
I mean, you probably know much more about this than I do, but I heard rumors that generative AI is actually pretty good at, like, chemistry, supposedly.

Christopher Lind 5:47
Chemistry as in like, the actual chemistry piece or like interpersonal chemistry?

Botond Seres 5:53
As in the science chemistry. Yeah. So it's like, pretty decent compared to other fields.

Christopher Lind 6:03
Yeah, there's certain areas I've toyed around with it, because I'm involved in a lot of different advisory boards. Chemistry, I don't know that I've messed with that one too much. I wouldn't ask it to do a lot of math; that'd be an area where I would be like, maybe don't ask it to do your corporate financing. Probably not the best on that one. (Yeah, not great.) Yeah, no, but I know even at GE we were using artificial intelligence, and I don't even know if, technically, we would have called it generative AI at that point. But it was getting really good at reading images to be able to detect diseases and things like that, which, again, in healthcare, to a physician, can be perceived as very threatening. So again, helping people understand where this fits, because if you tell someone who's a radiologist, who spent X number of years in school learning to read scans, Hey, we got this new MRI machine and basically it's going to do your job for you. If you frame it that way, you just watch people freak out, and there's a lot of that.

Dave Erickson 7:08
That might be a subject we can get into for sure. I have a marketing client who's in healthcare. They've developed AI that basically takes a lot of patient data and uses that to predict outcomes and predict treatments. And, you know, I think that AI needs to be kind of broken into segments of application, right? So there's generative AI, which is basically going to be developing a lot of content for people. And yes, people in the marketing industry, or people who have jobs related to content, are going to feel threatened by it, or they're going to feel excited by, Hey, I can do a lot more. Right? Yep. And I think those are kind of the two polarities of AI that people feel: either, Hey, it's going to take away my job, or, Hey, I could do a lot more. Right. And it kind of depends on the person; it's the glass-half-full or half-empty thing. If the person is very optimistic, they're gonna see it as, Oh, this will help increase my productivity. If they're very pessimistic, they see it as a threat: I'm gonna lose my job. So you've got generative AI, which is dealing with content; you've got machine learning and intuitive AI, which is basically analyzing data and figuring out outcomes; and then you have AI tools, which are tools specifically built for, you know, doing specific things. So I know in the HR industry there are a lot of HR people who are feeling threatened in some ways, and others who are feeling optimistic: Hey, there are these HR tools that can write job descriptions and read resumes and make finding the right people for the right job much faster and more productive. And then again, there are HR people who see that as a complete threat: I will have no job. Right. So I'm kind of curious, how are larger organizations, or even SMBs, looking at these tools right now? How do they see it for themselves?

Christopher Lind 9:17
So what's interesting about it is, going back to what you said, AI means a lot of different things. And so people will say, Well, is AI coming for our jobs? And you're like, Well, I mean, I guess, I don't know, what do you mean when you say that? A lot of the time I spend internally and externally is helping people get clear on, Well, what are you really trying to solve for here? Because a lot of times people aren't super clear on that. They're like, Well, will AI improve our hiring? And you're like, Well, when you say improve your hiring, what do you mean? Do you mean improve the quality of hire? Are you talking about efficiency? Are you talking about communication with potential candidates? Are you talking about internal mobility? And a lot of times that's met with, Well, I don't know, we were just going and looking for an HR tool to help us improve hiring. And that's, I think, some of the bumps in the road that we're going to run into in 2024. I did a solo cast on this a couple of weeks ago, because there's a big boom. It's causing all sorts of controversy right now, because there are organizations that are now implementing AI to sniff out candidates who might have used AI in their job application or on their resume so that they can eliminate them, and I'm going, What? Why would you do that? How is this improving your process? So that's where I really think, when you look at HR, a lot of the challenge organizations are facing right now is they actually don't know much about how work gets done in their organization; they really don't know. And so they know they need to improve things, but they haven't necessarily taken the time to dig into it and go,
Okay, but how does that happen today? And honestly, one of the biggest risks I see for people is, if you throw AI at a broken process, it's just gonna break it a lot faster and make a lot more mistakes a whole lot quicker than before. And I think that's one of the dangers we have to watch out for, because, again, I saw this: a company had implemented this AI to sniff out AI-using candidates, and it actually was eliminating some of the top candidates, because they actually were really good at writing. And so the AI thought, Well, this is too good to be a human, so this can't be good. And they actually had to go back and re-engineer the whole thing. So I think that's where 2024 is going to be a little bumpy.

Dave Erickson 11:56
We had a guest on, one or two podcasts ago, and he was talking about how schools and institutions are using AI. He was shocked because they were using AI to grade papers, and we found it quite funny that a lot of the students were using AI to write the papers. So you kind of had this AI loop of AI writing the papers, and then AI grading the papers that the AI had written. I think there's gonna be a lot more of that.

Christopher Lind 12:27
Well, and the interesting thing on that one specifically, because I have a lot of friends in higher education institutions, and there are a lot of people going, Well, now the kids are using artificial intelligence. But what's interesting is, in some ways, it's actually improving their skills, because they aren't using it the way they're perceived to be abusing it. I mean, I'm sure there are some, just like there were kids who bought term papers online; it's been happening forever. But most students aren't just going, Write me a 10-page paper on Napoleon, and then it spits out the paper. They're using it to help in their brainstorming, in the crafting, and all this. And I'm like, they're actually generating better products. And if you're really bothered by the fact they're using it, why don't you have them do something different to demonstrate their knowledge and skill, instead of worrying about the paper? Because really, was the learning in them sitting behind Microsoft Word for however many hours? Is that really what you cared about? So it's really forcing a lot of rethinking of what actually matters when we think about work, when we think about education, when we think about a lot of things.

Botond Seres 13:40
Yeah, absolutely. I mean, one of the main things is, my experience with generative AI is that most people expect it to just take over the job completely. The way I see it, having used it for the past few months, it's more like the faster, better replacement for search engines, right? Because it has immediate access to pretty much the entire internet, right? So I don't have to go to Google, craft the perfect search phrase, and go through each hit one by one for hours. I can just ask my question, and it spits out a summary of the first page. Then I say, Even though this is not exactly what I'm looking for, I'm looking for something more like this, and boom, there it is. It saves so much time. I just told it, Hey, I want to implement this thing in the project.

Christopher Lind 14:46
I like to say, though, the analogy I use is, it's like a really brilliant intern. So if you had a really brilliant intern, you might be like, Hey, like you said, can you find me the latest on these things and all this other stuff? And they would craft it in a really meaningful way. And you'd be like, Oh, that's great, not quite what I was thinking, we need to redo this type of thing. But you'd never tell an intern, Put together my presentation for a board meeting, not look at the thing, and then just show up to the meeting and go, Oh no, that's not at all what I was planning. You wouldn't do that. And if you did, it might not go too well for you.

Botond Seres 15:25
The people who would are the people who are afraid that AI would replace them.

Christopher Lind 15:30
Exactly, and I think that's a really good point. If that's what you're doing today, then yes. But is generative AI a bigger threat than someone who is really thinking critically and making decisions and participating in the work? Well, then no. Then again, it's that brilliant intern that can help you with a lot of stuff.

Dave Erickson 15:51
And it's becoming pretty obvious. You can see the people who do that, because of the content that AI writes, right? They just publish it straight without editing. You read it, and you just say, Oh God, this is just AI; there are all these mistakes.

Botond Seres 16:05
They think they're being so sneaky about it, too.

Christopher Lind 16:08
Oh, I know, the emojis. Emojis are in there and all, and you're like, Who writes like this? But it is funny, because going back to early 2023, a lot more people were fooled by that. You could just say, Write a new white paper on this, and it would spit it out. But now, to your point, people's awareness, the recognition of this, we're adapting along with it. And I've seen it; you see an email from somebody and you go, You didn't write that, I know you didn't write that. Or you see a presentation and you're like, That's not at all how a human being would actually put that together. So, like I said, I like to think of it as a brilliant intern.

Dave Erickson 16:52
Yeah. And as Botond said, it can really help with productivity. But I think that's where a lot of the fear is coming from. Because even though they're expressing it as, Well, AI is going to take away my specific job, what I think the real fear people have is that it will make the world more productive. Ten years from now, when AI has been integrated into many things, the work that took a billion people to do could be done by half a billion people. Right? But then the question comes along: what happens to that half a billion people who basically weren't needed, because the people who are guiding the AI are making it so productive? Obviously, there's still another 6 billion people who are doing jobs that AI can't do, such as growing food and making things and other things. But you know, there is a large portion of our global population that is doing work that AI can make, quote, more productive. And if it made it twice as productive, you're not going to need as many people. So I guess the question for the future is, what is going to happen to the world when it has all this productivity that AI can offer?

Christopher Lind 18:15
So what's interesting, and you hit on this earlier, and this is where what I do is so important for people, because it's really about skills, and helping people develop new skills for different things that they weren't doing before. And what's interesting right now is, in some ways, people are going through a bit of a professional existential identity crisis, because their professional identity has been so attached to doing whatever it was: I'm the PowerPoint person, or I'm the Excel guru, or I'm the this and that. Which is interesting, because at the same time, a lot of these tasks are things that, if you ask people, Do you like doing it?, they hate. You know, I have to write all these emails, or I have to put together these presentations. Well, let AI do it, and it's like, Wait, what? No, no, it's taking my job. And you're like, But you hate doing that, and now it can do it for you. But I think what I'm finding is a lot of people are struggling to go, Right, but what do I do next? Like, what are the things I do next? And I think that's where there's a lot of uncertainty right now, because companies are still figuring out, Well, yeah, I guess, what do we have them do next? But it goes back to something you said earlier, Dave, which is one of the things I encourage people all the time to think more about. A lot of times they're just looking at, How can AI help us get more efficient at what we do today? And that's great, and that's fine. But what not enough people are doing is asking the question, What are we not doing today, because of capacity constraints, because of innovation constraints, because of these other things, that we could be doing? If we had, say, twice the workforce that we have right now, let's say we could double our capacity, what would we do that we're not doing? That's one of the things that a lot of people aren't doing, and I think there's a lot of opportunity there.
So I even think of my own personal journey; there's a lot of stuff. I mean, I've got seven kids, a busy job, I do all this other stuff. There are things, coming into 2024, where I was able to go, You know, there's something I've always wanted to do that I just didn't have capacity for, because of all these other things, that I'm now able to do. And I think, as more people start to think about those things, of, Man, what are those things where you go, I really wish I had time to do X? Well, what if you could now? And I think that's the mindset: if all the stuff we're doing today could be done by half as many people, instead of just going, Well, that other half is screwed, saying, Well, what's this new stuff? What are these new innovations, or what are these areas of untapped potential that we have just kind of left on the table?

Dave Erickson 20:59
I like to remind people, and maybe you can give your own take on this: contrary to how it kind of appears, AI can't think. And that kind of seems to be the opportunity, right? If you're gonna go and look at your own personal career, AI can't think, so there's still gonna need to be people who think and guide AI. That may be where a lot of the job opportunities of the future are?

Christopher Lind 21:33
Well, I think that's a big part of it. And to your point, AI does not have context. I was with my kids watching, you know, some AI fails the other day, just because it's always interesting to see this, and with the self-driving cars, somebody figured out that if you just put a road cone on the hood, they shut down, because they're trained, Don't hit a road cone. And just, boop, you put a road cone on the hood and they can't function. A person would look at that and go, Oh, I have context. That's a road cone, yes, but it's on my hood. We can contextualize all this. AI can't; it can't make sense of all those things. And to your point, that's where humans can really lean in and go, Well, yes, statistically this is true, but with these other contextual factors, no, that's actually not the right path. I think the other one, though, and one of the things that is consistent, is intimacy; it comes from interpersonal relationships, and intimacy may be a weird professional word. But people like to interact with people, unless it's a purely transactional activity. Like, if I just need to go get something done, I'm fine with working with AI. But I don't want to hear a diagnosis that I got cancer from an AI bot. I want another person to sit down with me, and be there and support me, and all these other things. And I think that's something that, interestingly, is a huge opportunity area in skills. Going back to my healthcare background, with physicians, a lot of the focus was, Hey, I know you used to focus all your time on getting as many patients through the door as possible. Now it's really about that patient interaction.
And how do you connect with them on a human level? Some of them are like, I hate dealing with people, and I'm like, Well, then you might not want to be a doctor in the future. Because that human touchpoint is going to be paramount, and your ability to do that really well is going to be really important. I think about that for people management, too. A lot of people managers are like, I don't have time to deal with my people; I'm too busy taking care of all these other things. And it's like, Well, in an AI world, you may be doing nothing but spending time caring for, watching over, guiding, shaping and molding the futures of the people who are underneath you. So if you hate doing that, then again, an AI manager will probably do a better job than you, because you aren't leaning into that. And that's another area; when people ask me, Where should you invest in your skills?, I'm like, for every $10, I'd put $9 of them on your interpersonal skills, because those are going to be a key differentiator for humans in an AI world.

Botond Seres 24:45
And I couldn't agree more. However, I do think there is a place for hard skills as well in the future. Because, for the moment, we know that generative AI can't do math to save its life, just as an example. And I think there's gonna be a shift: on the one side, we're gonna have people who are all soft skills, and then we're gonna have people who are hard skills, and in the middle, we're gonna have AI. Yeah,

Christopher Lind 25:14
I think you're spot on with that. And what's interesting about it, though, on the hard skills, where I'm diving into that, is you have to be able to quickly adapt. So you almost have to understand the underlying principles underneath those hard skills. As an example, like coding: you may not be coding in Python or whatever, because, well, the language may change, or it may be faster to do this. But understanding the hard skill of computer science, and how logic and programming work, that's a good hard skill to invest in. Because if you know how to do that well, the programming language may change, the way we interact with AI may change, but going back to what you said before, Dave, the decision making, the thinking about, No, this is the critical decision, the path we need to take, that's where it's going to matter a lot, versus the actual typing of the code into whatever interface. It's like, Well, no, we'll have AI that's doing that. But you're going to need that person who has the hard skill to assess, to be able to anticipate what might go wrong with this: where might we want to take these other things into account? So I do agree that it's not like hard skills are going completely away. It's a great point.

Botond Seres 26:32
Do you think writing is kind of gonna go away, like, as a hard skill? I mean, writing, yeah. I do think editing is going to be huge; we already have so much AI-generated content. But most of the writers who can write at, like, superhuman speeds, writing 500-600 pages every few months, it's absolutely astonishing that a human person is able to do that. But I think that profession is going to become much, much more accessible, thanks to AI, because it can generate all the stuff no one cares about. Let's be honest, in every book there is so much filler; it's madness. Even in the better ones, the most successful books, let's say Harry Potter, I would say it's 80% filler, just random moments in the day-to-day lives of wizards. And that's the kind of stuff that AI is going to be able to write. And someone, an editor, just has to sort of massage all of that together. Same thing with programming, I suppose.

Christopher Lind 27:48
What's interesting on the writing is, I think the role of the writer, I could see it changing. Because that person who can type out 500-600 pages in a couple months, that person is wildly creative, and their skills have had to keep up with how quickly they can write. And now that's no longer a barrier. But even with the Harry Potter one, I know some people who love the narrative and all this character development, and other people, it sounds like you are kind of more like me, where it's like, Get to the point. What happened? How did it play out? And what's interesting is, before, you could never ask J.K. Rowling, Can you write a TL;DR version of the Harry Potter books? Because, you know, my kids just want to know what's up, because they're friends with these people. They don't have time or interest in this, but they need to know kind of what's going on. And before, that would have been, Well, no, there's no real way to do that at scale. Whereas now it's like, Well, actually, now there is. You could be like, We could create a long version and a short version, or we could create a version with this or that. It's definitely going to change what it means.

Dave Erickson 29:00
I am going to push back a little bit, because I've been a writer for most of my life, and I look at what AI does, and I have a marketing company that does content writing. And we use AI to do the content writing. And I can tell you that AI is much better at writing outlines and summaries and, as Botond said, collecting up data. But the best stuff that is written is when a human is involved in reorganizing it, yes. But the other thing is that even when I tell AI to be funny, AI is very bad at being funny. (And the context is terrible.) Right? And so writers need to write that humor. Humor is going to be a human kind of written profession. The other aspect of it is, you know, generative AI is just taking all the words that have already been written by humans and kind of recombining them. Yep. Right? And it's good at doing that. And humans do that all the time. Yeah. Plagiarism; it's a form of plagiarism. But why rewrite something? Second, the English language has very specific grammar structures and spelling rules. So it's not an unlimited combination, right? It has its limits on how you say something and what words can be used. So there are all these structural things. But I believe that the best use of AI for writing is that there is a human writer who will be able to take the output and turn it into something that's actually worth reading.

Christopher Lind 30:40
I 100% agree with you on that.

Dave Erickson 30:44
I've done experiments, and I have found that only half of what AI puts out is actually worth reading. The other half isn't, and it takes a human to shape that other half into something worth reading. So I believe that, and I think that's kind of the theme we're talking about: AI is a supporting tool. You really need a person to think about the output, massage the output, apply the output, and make that output a little bit more human.

Christopher Lind 31:23
Well, and when you think about it on that one, just as an example: I have a GPT that, literally just as an experiment, I trained on every piece of content I've ever written, every transcript of every conversation I've ever had. So I'm like, let's see how good it is at being me. And I've asked it, Write something like me. And to your point, I look at it and go, I would never say that; I would never write it that way. You know, it's close, like 80%. I gave it some prompts, some guidance, and it's close. But it still feels very artificial. And I think, to that point, we've never in human history had to deconstruct our activities to that level of specificity. Because it was always just, I need to write something. You didn't know that 80% of that was just robotic, mechanical reconfiguring of words into a, you know, coherent paragraph, and which part of that was actually the creative spark of, Well, this is my emotion, my context, my feeling. We've never had to do that. And now all of a sudden, we're like, Oh, we actually have to deconstruct things that we just hadn't had to before. This is where it feels like we're in change fatigue. It happened when the pandemic hit, and all of a sudden everyone worked from home, and companies went, Oh, what part of the work actually is proximity-focused? Because companies never had to think about it. They're just like, Well, people are here, so that's where the work happens. And then they weren't here, and they suddenly had to go, Okay, which parts do we need to bring people back for, and which ones do we not? And companies are still making mistakes with that, going, Hey, we need everybody back in the office. And people are going, For what? And they can't really articulate it.

Dave Erickson 33:35
And their motivations have changed, right? Some don't want to go back, and, you know, there are a bunch of people who do. My wife is a very good example. She loves the social interaction of being at work with a bunch of people and talking to people. She wasn't very happy working from home, right? She liked the productivity of it; she liked that she could focus a lot on her work, and that she could have a nice lifestyle that was comfortable and flexible. But after a year, she was like, You know, I really do want to get back to work. And there are people like me who, you know, had an office and people working for me and all that, and I'm much happier working from home. Yeah, right. It's just different things for different people, right?

Christopher Lind 34:20
And I think to that point, though, we've had to deconstruct that and now start to reconstruct it in a different way, and I think the same is going to happen with artificial intelligence. We've never really had to think about the work before. It just happened, and people did it, and so we just kind of thought, well, that's how it works. And now it's like, hmm, artificial intelligence can do some of it, but what parts can it do? What parts does it do well, and what parts does it not do well? And we're having to reconstruct some of these things that, honestly, I don't think people have ever thought about.

Dave Erickson 35:00
I mean, in the HR industry, and I'm just going to ask this as a question: I assume a lot of HR professionals and C-level people are starting to have to really think about, well, what do our job titles really mean? And these job descriptions that we're trying to find people for, are they going to change, or are they going to stay the same? What do you think is happening in that sector?

Christopher Lind 35:28
So on that one, job descriptions are a great example of where this is really coming to a head. Because as companies are going, all right, we're hiring people into this role; before, it was just, go write a generic job description, find somebody who's been doing that for 20 years and we'll hire them into it. And now it's, well, what does that person really do anymore? And again, it's forcing the deconstruction of the work, where HR professionals are having to have conversations with business leaders, going, what do you need in this person, though? Well, we need a software engineer. Right, but what are you going to have that person doing? Because just because someone's a software engineer doesn't necessarily tell us how well they can do certain things. And so it is a challenge, and companies are having to really rethink this. And again, it's very uncomfortable, because for a long time it was just pretty straightforward and easy. And it was starting to get disrupted before the AI boom hit, because of the skills crisis and kind of a gray tsunami: people were leaving the workforce, new people were coming in. It was the perfect storm. But it's a challenge.

Botond Seres 36:47
There is definitely a reconstruction, or deconstruction, of the engineering field. That's been going on for the past few months, I would say; it's a pretty recent development, since everybody really started to greenlight AI and figure out what it can and can't do. And yeah, there are going to be massive parts of the job that it's no longer necessary to be great at. The most amazing thing I saw is AI just automatically finishing my code. It's a new feature; I can't name any names, but I was just starting to write something, hit space, and the tool completed like 10 lines. I didn't even notice at first; I started reading it, looked up and went, huh, it's done. So yeah, it's going to be a revolution in how we write things. Like, there are very specific tools that are made to solve very specific problems. We have AutoMapper; the whole point of AutoMapper is that we don't have to write the mappings ourselves. And code analysis is going to be so much better, because that's the big thing: AutoMapper completely breaks static code analysis. But if AI writes the mappings for us, who cares? Just hit enter, it's gonna write them for you. And I think there is going to be a change in architecture as well.
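(Editor's note for readers outside .NET: AutoMapper is a C#/.NET library that copies data between objects so developers don't hand-write field-by-field mappings. As a rough, hypothetical sketch in Python of the mechanical mapping boilerplate Botond is describing — the names and shapes here are purely illustrative, not from the episode:)

```python
from dataclasses import dataclass, fields

@dataclass
class User:            # internal domain model
    id: int
    name: str
    email: str
    password_hash: str

@dataclass
class UserDto:         # trimmed-down shape exposed to callers
    id: int
    name: str
    email: str

def map_to(source, target_cls):
    # Copy every field the target declares from the source object --
    # the repetitive glue code that mapping libraries (or, lately,
    # AI autocomplete) generate so nobody writes it by hand.
    return target_cls(**{f.name: getattr(source, f.name) for f in fields(target_cls)})

user = User(1, "Ada", "ada@example.com", "hash123")
dto = map_to(user, UserDto)   # UserDto(id=1, name='Ada', email='ada@example.com')
```

The point isn't the helper itself; it's that this kind of field-copying is exactly the mechanical work Botond says tools, and now AI completion, take off a developer's plate.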

Christopher Lind 38:42
Well, and I think on that it's interesting, because for a lot of people it feels very new, but it's also not new. I mean, jobs have changed and adapted and gone away before. But holy crap, in a matter of months it went from this to this, and I think that's where people are catching up. One of the things I'm being very cautious about, because I've seen some catastrophic mistakes already, is that there's a disconnect, sometimes a chasm, between the senior leaders who are making the decisions and the people who are actually doing the work. And that chasm becomes very dangerous in the AI space. Because, you know, I've talked to some companies where now you can hire AI employees; why hire a person when you can put in some dials of what you want and get an AI person at a fraction of the cost? And I've seen people make this mistake, because on paper, as a senior decision maker, you go, hey, we can cut our budget by cutting this department in half and hiring a bunch of AI employees. And they don't realize the implications: okay, there's some efficiency to be gained, but if we just wipe this out, we're going to make some major, bad decisions along the way. And I think that's where I'm trying to be kind of the shouting voice in the wilderness of, hey, if it sounds too good to be true, it is. So be very careful with this. Because there are a lot of promises, and AI is writing a lot of checks, and they're going to bounce. And that can be very catastrophic as a senior leader, where you go, hey, I've got this really great idea, it's going to free up a bunch of cash, it's going to improve our product optimization, and all of a sudden you go, oops, we just missed a product development timeline because that whole product team went out the door, and now we took an X-billion-dollar hit.
I mean, it's, people just need to be very thoughtful and pay attention to the details on some of this.

Botond Seres 41:00
You bring up a great point there, Christopher. The thing is, as you say, so many people are thinking, hey, I can replace an employee with AI. Instead, what they could think is, hey, I can give this poor, overloaded, overworked, overstressed, underpaid employee an assistant for 10 bucks a month. Right? That's the whole idea behind GPT, I feel: to give people the help they need to do their jobs, not to replace them outright.

Christopher Lind 41:36
Exactly. But you have to understand the work. So going back to your example, Botond: okay, it finished 10 lines of code for you. If you're a senior decision maker and you don't understand the complexity of coding and the implications of that, it'd be very easy to go, hey, couldn't we just let AI write all the code for us and be done with it? But you know full well what would happen if AI, unchecked, wrote all the code and somebody just said, here, hit publish, I'm sure it'll be fine. You might not feel that pain instantly, but you are going to feel it. And it's going to be 10 times worse, because now you don't even have the person who understood, hey, this is what I created, this is what I wrote; oh, that broke, okay, I have the context to go back and figure out how to reverse engineer and fix it. It's just, the black box made a change, and I don't know what it is. And these are some of the cautionary tales I'm trying to warn people about as we go into 2024. Because it does sound really great, but it's not without its risks. And I'm fully with you, Botond, on the approach of, rather than just thinking, how do we wipe out certain people's jobs, why don't you think more of, hey, we already have these people; how do we make them better? How do we make them more efficient? How do we make them more satisfied? Because with engagement being at, what, I think 20% of people actually engaged in their jobs is a pretty average statistic, wouldn't you rather have a highly engaged, eager, happy, satisfied workforce? Hands down, you're going to perform better as a company; the data supports it.

Dave Erickson 43:24
Yeah, I think this motivation aspect is also important. And I don't know if companies are paying attention; I don't hear or see a lot of articles in the tech space or HR space talking about how they're going to use AI to keep their employees motivated. Right? AI is obviously a demotivating factor for people who fear that their job will be replaced. But I also think there is real potential to use AI to increase motivation. And maybe I'm wrong in this, but I think the key to that is going to be some form of education. Right?

Christopher Lind 44:14
I 100% agree, and I think it's education not only around understanding what AI can do, but the bigger one, and this is the motivational piece, is helping people understand what you are going to do now with AI as a partner alongside you. Because I've been doing this for quite a while; people think AI just came out in 2023, and it didn't. It's been changing things for quite some time. But time after time after time, when I've worked with organizations and with employees and said, here's where AI fits, here's what the future looks like for you, honestly, the future ahead is actually really bright for people. Nine times out of 10, I'm like, isn't this the stuff you say you actually really enjoy doing? Isn't this the stuff that gets you out of bed, the reason why you got into the field? You're going to be able to spend more time doing those things, and less time over here doing the robotic, mechanical work. I mean, every once in a while you have the person who's like, well, I prefer to just fill out Excel spreadsheets, and you're like, okay, there's always going to be the exception to the rule. But for the vast majority of people, a lot of the fear and uncertainty and disengagement comes because they just can't quite paint a picture of what that future looks like. And when you can paint the picture, and you can create the roadmap, which is about education and skill development, suddenly people start going, hey, this is pretty cool. Even listening to you, Botond: the fact that it finished those 10 lines of code, instead of you having to type out what you already knew was coming, means it's done. Now you can move on to the next part, solve the next challenge, work through the next problem, instead of taking the time to sit and fill out what you know you need to fill out just to cross t's and dot i's.

Dave Erickson 46:16
If you were to advise a company, SMB or enterprise, and you're talking to the leadership, the C-suite, what advice would you give them on how they should look at changing the way their company hires and manages people in the AI age?

Christopher Lind 46:36
A lot of the advice I'm giving people right now is, if you're not talking about this, you should be, because one of the biggest threats right now, especially in the AI age, is that we're about to walk into a trust gap, a bit of a trust crisis. Because if you look at AI, even when it's creating content, people are wondering, is this video even a real person? What is real anymore? And so they're thinking all these things. So for a lot of senior leaders, one of your best talent strategies is to be proactive about talking about, hey, here's where we're going, and we want to invite you into that. Building that culture of trust and transparency goes back to what I said before: people managers and senior leaders, your success in the AI world is going to be about how well you connect with people, and connect the dots between the work people are doing and the strategy and vision of the organization. If you're doing that well, one, your people internally are already going to be better connected, but it also has an effect on your external brand and the way you're hiring people. As for the hiring piece, this isn't something the C-suite necessarily needs to do themselves, but in some ways they need to participate: get to know the work that needs to happen in your organization and be more focused on outcomes than on the activity itself. And I think that goes back to your hiring practice. If you're hiring people, think about what you need them to accomplish, and less about how they're going to go about accomplishing it, because that's probably going to change by the time they get here. You might be like, well, we did it this way last month, but this month we implemented this new thing. But if you're really clear on, here's what we're trying to accomplish as an organization, here's what we need to accomplish in this function,
that not only makes it much more durable against the uncertainty of the future, but as you're bringing new talent in, it's a lot easier to know, is this person going to be successful at doing this? And it's a two-way contract: here's what we're doing, can you help contribute to this, yes or no? Versus getting spun up in the weeds. And again, I think that's all part of that deconstructing of work that every organization, small, medium and big enterprise, is going to have to go through. And that's probably the best hiring and talent strategy anybody can adopt right now.
Dave Erickson 49:07
Maybe you can tell us a little bit about Learning Sharks and what you're doing with it.

Christopher Lind 49:11
Yeah. So, you know, it's interesting; if people look up what I'm doing, they're like, you do a lot of different things. I've got my day job, but a lot of what I'm doing right now is just trying to help people make smart decisions about what's happening with the future. I even rebranded my podcast to Future Focused, because my goal is to help companies stay 10 steps ahead of things. Everybody right now is trying to figure out, what do we do with this when we don't know what next year might bring? And so for me, the advisory work I do and the content I create are really all about helping people make sense of the uncertainty around them right now. That plays out in a lot of different veins.

Botond Seres 50:05
Christopher, what do you think is the future of AI?

Christopher Lind 50:10
I don't know that I can even say what I think it is at this point, because of how dynamic it is. And anybody who's promising, this is what the future of AI will be, I honestly think it's foolish to say, here's where it's for sure gonna go. Because, to be quite honest, its limit is our imagination, and I don't think we've even imagined yet what it will be. There's a lot still to unpack. I'm looking at, where are we going with robotics? Where are we going with medical science? Where are we going with some of these things? It's going to be really interesting, some of the boundaries that we're going to break, and some of them make me go, should we break them? Is it wise for us? How far is too far? So at this point, if you were to ask me what I think is going to happen this year: I think we're going to see some major job disruption short term, maybe some job loss, but I think less job loss and more disruption to the way work happens. Right now there's just so much uncertainty that a lot of people are in a holding pattern.

Dave Erickson 51:33
There is a human question here, and it's a question that was actually asked back in the industrial age. I think now it needs to be asked again. Is it better to bring in all this AI and displace jobs, or is it better to not have it and not be as productive, so that people keep their jobs? That's a question society has to answer, and society is really bad at answering large questions like that, especially a capitalistic society that's based on profit margin. But it's a question people are asking in the back of their minds when it comes to how fast AI is moving: is it better to just slow it down? Because, yes, I agree with you; I think there's going to be more job disruption than job displacement. But with that disruption, people are going to have to learn to do new things, and they're not very good at that. It takes a lot of motivation; it takes a lot of work for people to change. I mean, I forced myself to do it. I'm working with AI tools that, technically, I don't need to be using, but I'm doing it to learn, to become better at what I do and to try to apply this. But people in general don't necessarily motivate themselves to do that. Kids do.

Christopher Lind 53:05
Yeah. And one of the things everyone would be wise to think long and hard about right now is that I don't know that we're putting enough emphasis on the moral and ethical implications of some of the things we're doing with AI, and really asking, at what cost are we pursuing these things? Are we really thinking carefully about that? In terms of the future of AI, my personal viewpoint is that there is something unique and distinct about human beings, so this idea that a machine will ever be synonymous with a human, I just don't believe it can be. But I think it's going to get to a point where it's so difficult to tell the difference that a lot of people will struggle to differentiate. And that's where I think we need to be careful. We don't have to go down this rabbit hole, because I've done several podcasts where I've talked about transhumanism: what does that all mean, and where do we go from here? It's a very interesting topic. I grew up in a funeral home, and people chase immortality relentlessly, the desire to live forever. And sometimes, when you care so much about something, you don't think about the consequences of the decisions you're making, or the implications of them. When you start going into that space, well, be careful, because you might get what you thought you wanted, and then when you get it, you go, oh, wait a minute, this is actually not what I wanted, but now you've crossed a barrier you can't come back from. This used to be the stuff of science fiction. You'd see it in movies and it was like, oh, in 2200. I'm like, nah, it's not going to be 2200 before we have to start evaluating the moral and ethical implications of some of these things. We're on the brink right now.
And a lot of people are just so busy on the train of what's next that we're not asking, what are the consequences of what we're doing right now? I don't think anybody thought putting a like button on Facebook would lead to one of the biggest depression crises of all time. So what other decisions are we making where we haven't really thought about what's around the corner if we actually are successful at some of these things?

Dave Erickson 55:35
Well, we definitely have an interesting podcast topic to invite you back for.
Christopher Lind 55:41
There you go. We could do a whole one on that topic, for sure.

Dave Erickson 55:45
Yeah. All right. Well, Chris, thank you so much.

Christopher Lind 55:49
Botond, you've got this very soothing, relaxing voice; I could probably put you on repeat and just listen to you read a book. You could record yourself reading Harry Potter and I would listen to it. No, this was fun. Thank you, I enjoyed the conversation. And yeah, it's an interesting time, to say the least. I've got seven kids, and I'm like, man, the world you're going to be professionals in; the oldest one is 12, and I don't really know what it's gonna be like in 10 years.

Botond Seres 56:33
You think you're gonna pay for seven ChatGPT subscriptions? Let's see in the future.

Dave Erickson 56:42
Yeah, my daughter is 11, and my wife and I are really thinking, what career is she gonna pick? What can she get into that has a future she'll do well in? Right. And you've got to be thinking about that for all of your kids now. We're fortunate, we only have to pay for one college education; you have to pay for seven?

Christopher Lind 57:05
We'll have to get creative on that one, for sure. Who knows, though. The thing is, on that topic, even what's going to happen with the college degree is another big one that's up in the air right now: what really is a degree? Is that the accurate measure? A lot of companies are going through this right now, especially in the tech industry. For tech companies, getting a bachelor's or a master's in some ways is like, you're better off doing a bootcamp or something like that. So it's going to be interesting to see how that plays out in other arenas as well.

Dave Erickson 57:44
Yeah, and e-learning is going to go through a dramatic change, and AI is going to play a huge role in it. We had some guests on our podcast, especially talking about learning English, who really showed the impact AI can have in a positive way.

Christopher Lind 58:02
Oh, adaptive learning and personalized learning. I mean, I wish I could go to school now versus when I did, when it was just, everybody, you're all going to do the same thing, because that's what you have to do to make it work. Now it can understand where you are, adapt to your unique needs and help you: you're struggling in math? Well, it'll personalize your lessons. It's a different world than it used to be.

Dave Erickson 58:33
So, Christopher, thank you so much for this great discussion on how AI is affecting the job market and working people. Well, good listeners. That's about all the time we have for this episode today. But before we go, we want you to think about this important question.

Botond Seres 58:48
How are you going to use AI to hire people? For our listeners, please subscribe and click notifications to join us for our next ScreamingBox technology and business rundown podcast. Until then, try to figure out what type of work you want in the near future.

Dave Erickson 59:08
Thank you very much for taking this journey with us. Join us for our next exciting exploration of technology and business in the first week of every month. Please help us by subscribing, liking and following us on whichever platform you're listening to or watching us on. We hope you enjoyed this podcast and please let us know any subjects or topics you'd like us to discuss in our next podcast by leaving a message for us in the comment sections or sending us a Twitter DM. Till next month. Please stay happy and healthy.

Creators and Guests

Botond Seres
Host
Botond Seres
ScreamingBox developer extraordinaire.
Dave Erickson
Host
Dave Erickson
Dave Erickson has 30 years of very diverse business experience covering marketing, sales, branding, licensing, publishing, software development, contract electronics manufacturing, PR, social media, advertising, SEO, SEM, and international business. A serial entrepreneur, he has started and owned businesses in the USA and Europe, as well as doing extensive business in Asia, and even finding time to serve on the board of directors for the Association of Internet Professionals. Prior to ScreamingBox, he was a primary partner in building the Fatal1ty gaming brand and licensing program; and ran an internet marketing company he founded in 2002, whose clients include Gunthy-Ranker, Qualcomm, Goldline, and Tigertext.