Featured image: Building a Competitive Edge with AI

Building a Competitive Edge with AI

Andreas Welsch, Chief AI Strategist

 "One of the ways that I see humans are creating a problem for AI is that these large models... rely at the end of the day on data that humans have created and even that humans have curated."

In this engaging episode of the Innovation Rockstars podcast, we converse with Andreas Welsch, Chief AI Strategist at Intelligence Briefing and Adjunct Lecturer at West Chester University of Pennsylvania. Together with Andreas, we explore the nuanced dynamics between humans and generative AI, emphasizing the challenges posed by human-generated data on AI's performance. Andreas sheds light on strategic AI applications in business, highlighting the necessity of aligning AI initiatives with organizational goals. The discussion extends to the future role of AI in the workplace and the concept of AI agents in business negotiations, culminating with Andreas imparting valuable career advice for aspiring professionals in the AI domain. Ready for a mind-expanding journey? Tune in to this episode and fuel your curiosity about the AI landscape's future!

Below you will find the full transcript for the episode.

Chris: This is the Innovation Rockstars podcast, where we share unfiltered conversations about all things innovation with leading minds who are shaping the future. My name is Chris Mühlroth, and with me today is Andreas Welsch. Andreas is Chief AI Strategist at Intelligence Briefing. He's also an adjunct lecturer at West Chester University of Pennsylvania and, if he has some time left, VP and Head of AI Marketing at SAP. You're also a frequent speaker and guest on various occasions, online, offline, you name it. Andreas, thank you so much for joining me on this episode.

Andreas: Wonderful. Thank you so much for having me. Really excited about the opportunity.

AI's Human Echo: Navigating Data and Bias

Chris: Same, same. Andreas, one of your public statements that has received increased attention is that generative AI, Gen AI, has a growing problem, and that is us, humans. Tell me, what's the problem with us?

Andreas: That's an awesome question. Look, I like to think about the different angles around generative AI and what it means for us in our business lives and in our personal lives, and I'm glad we're able to have this conversation in that frame as well. So, one of the ways that I see humans creating a problem, or becoming a problem, for AI is that the large models underneath these new generations of software, these new generative AI tools like ChatGPT, Google Gemini, Microsoft's Copilot, and so on, rely, at the end of the day, on data that humans have created, and even that humans have curated, because there's so much data that any of us create on a daily basis. Multiply that by any number of people you like who are online, for example, and you get so many different signal points that the information you get might not always be what you want or what you expect. We know that companies like OpenAI and others have scraped the web for data to train these models, so the data doesn't only come from the pretty and rosy places of the internet, right? There are many other corners as well, whether it's Twitter or Reddit or some other forums, where maybe there are opinions and values shared that might not be the majority's opinions. So we do need to have humans, and these companies do have humans in place, who tweak the information that goes into the models and maybe do some content filtering and reviewing. That's very labor-intensive, but also, looking at it from a mental health perspective, it's very taxing. If there's information that you see, whether it's written or in images, you need to decide: is this okay? Is that an issue? Do we need to do something? Do we need to block it? So it's very taxing. But then also, on the other hand, if you think of things like image generation and image classification, think of insurance cases, right?
Many insurance companies today, at least in the US, take a picture of the damage on your car, and then they have a machine learning model, an AI model, in the background that makes a decision or recommendation: hey, is this something we need to give to an adjuster, so somebody needs to come out and assess this? Or does it look like it's just a scratch, and we can write you a check, and you go to your local body shop and have it repaired? So there are people doing task work who are reviewing these images; they're drawing bounding boxes around where the damage is in that image. But because this is task work, again, it's more anonymous, right? You might not even know who you work for, or what the overall goal and objective is of why you're being asked to do this. You can also lack a sense of purpose. Through coverage in the media at the end of last year, I think there were a number of stories where it was found that task workers are also using generative AI in their work to be faster and more effective at doing this. But that's exactly the problem, because that's the reason why we need humans in the first place. After all, these models are not perfect, so we need somebody to review this. If humans then again use other generative AI tools to do that review, we're reintroducing the same kinds of errors, mistakes, and faults. So that's why I think there's this vicious cycle, and in that sense, humans can be a problem.

"One of the ways that I see humans are creating a problem for AI is that these large models... rely at the end of the day on data that humans have created and even that humans have curated."

Chris: That's interesting. Just recently, and when I say recently, I mean probably two or three months ago, there was a group on X, formerly Twitter, that shared some of their successes in, quote unquote, hacking OpenAI's ChatGPT, because what they ended up with was a set of instructions that OpenAI gave to ChatGPT to behave in a certain way. And, you know, I want to go back to the very first thing you said, which is the data that we produce as humans, which is obviously one of the foundational things that these large language models, like GPT-4 and its other versions, rely on and are trained on. So on the one hand, you know, it's different types of data. Some data you don't want in there because it might not be ethical or maybe even legal. But at the same time, if you sense-check the data, maybe even fact-check the data, you also open the door to potential restriction or maybe even censorship. Because again, who is testing and supervising those who are actually writing those prompts, who are giving those instructions to the AI? Is that another part of the problem that we're probably going to run into when we introduce human review and manual review of how these systems are trained and what data is being used?

Andreas: I think that's a really interesting angle that you bring up. I think there's certainly some potential there, right? We talk a lot about bias when we talk about AI, whether it's the bias that each of us has that manifests itself in the data we create that becomes part of these models, or the somewhat biased lens with which and through which people look at the information and make decisions: is that allowed? Is it not? So I think it's a multifaceted problem, if you will, where bias is introduced in the raw material, the raw data, and then also in the curation. But even last week, right, we saw in the news that companies are trying to correct for some of these biases that you see in the data, just because of the larger population of data and data points available. When you use image generation tools, for example, I tried this the other day, and I said: create a portrait of a business leader. Well, the results I got from my image generator were male, Caucasian, and of a certain age. We all know that there are business leaders who are not male, who are not Caucasian, who are younger. But because there are so many data points, and that seems to be the common denominator, if you will, or the dominant example, that's why we see these results. If you had a larger portion of diverse leadership in the data, you would definitely get different results. So companies are also trying to compensate for that, trying to introduce more diversity by creating more diverse results. But the problem is, when you apply that to any task, any image generation, when you look at things like historical information, then you end up with diversity where historically there wasn't any, and that's a problem, right? That's when you read about it in the news: how come they did this, and why are they presenting something that's factually inaccurate?
Well, on the other hand, if they don't, then there's an outcry of: why do we only see white male business leaders in their mid-50s when I ask for one as an image to be created? So I think it's a fine balance. But I think it also shows where the industry is, where the problem is, and where the challenges are in terms of capturing that data.

The AI Evolution: Hype or Hope?

Chris: It's very interesting. And of course, with the speed things are moving right now, kind of everybody is talking about AI. And you just said it, right: there is also now news coverage in all the different outlets. If we were to zoom out just a little bit from the current discussion, from what's happening and what's developing on a weekly or even monthly basis: where are we, from your perspective, in the grander scheme of AI development? Do you think it's overhyped right now? Do you think we're just at the beginnings? Actually, probably we are, technologically. But, you know, there is a saying that once your taxi driver talks to you about a certain thing, be it a stock option or even Bitcoin, maybe there is an argument that this is a little bit overhyped right now. So where are we in all of this discussion on AI and technological disruption?

Andreas: So I'm super excited about it, because before ChatGPT came out at the end of 2022, the media and analysts were talking about this next AI winter: well, actually, we didn't see all the promises that the previous generation of machine learning was supposed to deliver and fulfill, and maybe it's not that real, and maybe it's not that easy and simple to do. And then ChatGPT came along and made AI, generative AI, and especially the results that you can get, so much more accessible, right? I can have it here on my phone with me at any time, as long as I have an internet connection. I can ask it to do something for me, and I get the results. And I don't have to be a Ph.D. or a graduate anymore. I don't have to be trained in statistics and optimization to take advantage of AI. It's become a lot easier and a lot simpler to use. I think that's huge. And that's also a big part of why we're seeing taxi drivers having these conversations with us and being more informed. That's the part that makes me super excited: that we're finally able to have those conversations. In a business context, you're able to have that conversation with your C-level, even your senior manager or higher-level manager, because they've all tried it, or they've been asked: well, what are you doing with AI? What's your AI strategy? So if you're in technology, if you're working on AI, it's opening up so many doors and opportunities that were closed or hard to open before. Now, is it overhyped? I think that's a great question. There's a lot of excitement in the industry, there's a lot of buzz. On the one hand, you have the venture capitalists pouring money into these new companies and startups. On the other hand, they are producing impressive results that can unlock even more of these promises, even in a short period of time. So I think in the last year we have seen a lot of experimentation, a lot of piloting: what is generative AI? What can you do with it?
What are the problems with it? What do we need to know about it? How can we use it? What I see this year, at least in the business world, is that companies are starting to implement it, to roll it out, to take advantage of it. For one thing, it's relatively easy to get started. It's usually text generation, summarization, or translation, some of the low-hanging fruit. And there's so much text in the enterprise that it makes sense to go after that. But then I think business leaders are also asked: well, how do you use it? When do you use it? Gone are the days when we're stuck in a proof-of-concept pilot mode for months, right? How fast can we come up with an idea that delivers value? How quickly can we build it? And how fast can we get it into the hands of our users or customers, again, safely, so they can take advantage of it?

"I don't have to be a Ph.D. or a graduate anymore. I don't have to be trained in statistics and optimization to take advantage of AI. It's become a lot easier and a lot simpler to use. "

AI Integration in Business: From Concept to Reality

Chris: A good friend of mine works for a large investment firm in Europe. They also fund young companies and startups. And they have made a kind of little internal game out of it. Whenever they get pitches from young companies, startups, ventures, whatever, they keep track of how many times AI was mentioned in those pitch decks. They sort them in descending order by the number of mentions, take the top one, take it out of the discussion, and make fun of it. So there are just these signals flying around. But of course, I share the excitement. And, you know, in the earlier days, when I was doing my Ph.D. research, we had access to early GPT models, command lines, and all that stuff. And it was super exciting, but the layer of accessibility was just missing, as you said. And of course, those models were certainly not as powerful as they are today. And the next ones will be much more powerful, of course. But I agree, it's that level of accessibility that has made it possible for a large audience. But why should companies care? How can AI give companies and businesses a competitive advantage?

Andreas: So maybe I should turn the question back to you: what is the part of your day that you enjoy the most? Is it writing proposals? Is it answering emails? Any of those? Absolutely not, right? So again, talking about low-hanging fruit, these are some exciting opportunities. Think about it on the commerce or sales side: how do you create product descriptions that are really, really good, given that you already have a lot of information in your business systems? Or, let's say in HR, you see this every day in your LinkedIn feed or social media feeds: hey, how can you write a good resume using generative AI? And on the other hand, how do you identify good candidates by processing those resumes with generative AI? I think there are just a lot of opportunities today, because business is so text-heavy, even document-heavy. I think it was Everest Group, a few months or a year ago, that said that about 80 to 90% of the data in a company is stored in documents. Typically, somebody has to open a document, read it, or copy and paste something. Now, if you can process that with generative AI and get to the gist of it in a couple of seconds, that's perfect. You don't have to read it. You don't have to summarize it. Meetings, meeting minutes: what were the things you discussed, what are the action items again? Or maybe you came in late: can you summarize that for me? On the productivity side alone, there are already so many opportunities. And then, like I said, whether it's HR or product descriptions, and definitely sales and marketing, where there's a lot of copy written that goes on a website or into a document. If you can enrich the prompt that you or I would put into ChatGPT and give it a lot more context about your business, about the products and the services that you offer, you can get a really good result in seconds.
I think that's a big opportunity, especially to get to that good-enough result, the good-enough draft that you can work from.

Chris: So what is a good place to start? I mean, we see a lot of people jumping into the solution space these days, right? They start with an ideation workshop. They meet virtually, hybrid, or on-site, whatever it is, do a workshop, and obviously solicit a lot of ideas from language models as well. They sort through those ideas and try to identify low-hanging fruit for use cases, for text-based analytics, or even just, literally, you know, ChatGPT or custom GPTs. Is that a good place to start? Should companies start earlier or later in the process? What is considered a good place to start?

Andreas: So from my experience, the best place to start is to figure out what the business problem is that we're trying to solve. There are so many possibilities with ChatGPT and similar tools in an enterprise. But the question is: where do we actually have a need? And where does this technology fit in? Otherwise, you end up with use cases or scenarios where we create happier employees, which is nice, and it's aspirational, but it may not be a business KPI or something you can charge more for or monetize. We must have happier employees so that work becomes easier and more seamless and all of that; that's great. But I think there are more tangible business metrics and KPIs that you should be looking at to determine whether there's something we should be pursuing, and how we think a particular scenario is actually going to help us achieve or improve that KPI. And I think that's part of the challenge, right? Where do you start? How much knowledge do you need to have about the capabilities and limitations of the technology? And how much knowledge do you need about your business and your business processes? Ideally, you bring those two together. When you bring technologists and business experts together, that's when you have the right combination, because everybody has their unique angle: hey, here's a problem that we run into every week, every month; and here's, you know, a toolbox of capabilities and solutions. When those come together to determine the one, two, three things that we should be looking at, it's usually quite fruitful.

"The best place to start is to figure out what is the business problem that we're trying to solve... Otherwise you end up with use cases or scenarios where we create happier employees, which is nice, and it's aspirational, but it may not be a business KPI or something you can charge more for or monetize. "

Chris: Obviously, starting with a problem is very powerful. I agree. It would probably also mean that you have to have some kind of knowledge that it could potentially be solved with this technology. So you have to at least have some awareness of what this thing is capable of and what the limitations are today. Because, you know, it's not the magic button for everything. It is very powerful, but it certainly, I guess, requires some prior knowledge. What can it do? What is not possible? What can you even put into it? Like videos, for example: yes, but maybe not with ChatGPT or, you know, certain other models. So you have to know the limits, I guess. Yeah.

Andreas: Yeah, I think so. Definitely. So there's a little bit of leveling, maybe a little bit of empowerment or education, on the one hand. By the way, when I do sessions in person or even virtually, I ask the audience how many have used ChatGPT and similar tools before. I now see at least three-quarters or more raise their hands and say, yeah, I've used it. So that tells me that people have some understanding, and it's lowered the barrier to entry. We don't have to explain that, hey, these things are built on data, and they learn, and you can improve them, and all that kind of stuff. It's still important to know, but people have already seen it. They already know what they can do with it, with just a text box and a prompt, and that's pretty powerful, right? So imagine if you had that available in your day-to-day work, in situations where you're creating a lot of text, where you're spending a lot of time reviewing things or creating things, and maybe they're not, you know, exactly perfect, so you need to spend more time creating them yourself, or where you would have an intern helping you gather information, summarize information, that kind of thing. And I think people can understand those examples a lot better because there's a higher likelihood they have used those tools in their personal lives as well.

Chris: Yeah, yeah. And how do you get to those ideas that are, you know, a combination of desirable, feasible, viable, and maybe later on even scalable, all at the same time? Is there some kind of process that you can follow? Starting with a problem, sure, but how do you get to those ideas that make sense?

Andreas: I think that's the million-dollar question. That's the one you have to get right, because otherwise you spend a lot of time going in a direction that may not turn out to be as valuable or as impactful as you initially thought. Desirability, feasibility, viability, and scalability are all critical. Desirability means: is there a stakeholder in the business who agrees that this is a problem, right? Maybe the subject-matter expert thinks it's a problem, maybe the IT team thinks it's a problem, and the data team thinks it's a problem. But unless they all agree that yes, this is a problem, and it's worth solving, you're going to run into problems. Maybe there's a lack of buy-in down the line. Maybe there's a lack of budget or funding. Maybe there's a change in leadership or something in the organization, and suddenly it's not as important. But if you have that alignment up front, that this is really something we want to pursue, want to solve, and need to solve, there's a higher likelihood that you're going to have that runway. Feasibility, I think, comes down to: do we need data? Do we have data? How good is our data? That question, fortunately or unfortunately, is not going to go away. If you want good, tangible, specific results that are aligned with your business, your process, and the problem you are trying to solve, you still need data to augment those models, to augment those prompts. They're excellent at creating the language that you and I would create and use and speak, but they're not very good at, for example, specialty chemistry or some other really narrow domain knowledge, or the travel schedule or travel policy of your company. Viability, I think, comes back to: can we solve this with the resources that we have, in the time frame that we have? Do we need to throw more money at it?
And how much money do we need to throw at the problem? And even if we throw all that money at it, is it still worth solving? And then, I think, where a lot of projects fail is the scalability aspect at the end. It worked great in the lab, it worked in a pilot, but then you need to roll it out globally to many different countries, regions, languages, whatever it might be. So you have to keep that in mind even at the beginning, when you're thinking about desirability and whether this is something we want and something we can do. Thinking along those lines will help you get closer to solving that million-dollar question.

"I think if you have that alignment up front, and this is really something that we want to pursue, and we want to solve, and we need to solve, there's a higher likelihood that you're going to have that runway." 

Chris: And are they all equally important at the same time? Because one could argue, you know, we'll care about scalability later, when we roll it out to the business, for example, or to, you know, the global customer base or whatever it is. But you can also make the argument and say: what if you build something now, but it's not going to scale for whatever reason, because there's not enough data, or there's too much data, or it's just computationally not possible right now? Then why would you start this in the first place? So the question is: are they equally important at the same time?

Andreas: They are equally important, because ideally you are setting this up for success, right? You want to see: how can I roll this out in a way that has the largest business impact possible? Maybe, again, scaling, to your point, means it's on a smaller scale for now. But if you just think about how do I do this on my laptop, on my machine, that type of scale, I think that doesn't work. You need to think about how to architect it for a certain number of people, of users, of regions, of locations, already from the beginning, so that it gets a lot easier getting there. It doesn't mean that you need to have all the resources in place, all the instances reserved, and start paying for them. But at least keep in mind: how far do we want to take it? And what do we need to do now to get there? I feel like if you don't do that, you're always running the risk of ending up with an interim solution. And interim solutions, as many of us know if we've been in them, usually stay for a long, long time, long after the people who built them are gone and have moved on. We have these little solutions and applications that nobody really knows anymore how they work. And everybody is afraid to turn them off, because something disastrous might happen to the process or to the business. So that's why I think designing with scalability in mind is really, really important. And again, all four are important to me, equally important.

Chris: In your work, and I follow your work and your publications quite frequently, you've spoken quite a bit about AI centers of excellence. Can you tell me what are those centers of excellence for AI?

Andreas: So think about a company, maybe a leader, who knows: hey, I need to do something with AI, but I'm not sure exactly where to start, how to roll this out in my business, and what I need to do. AI centers of excellence are an organizational setup or construct where you bring together resources from across the company, or maybe hire them fresh into the company, to help your company get more savvy on this topic. So on the one hand, what is this technology all about? For AI, you might have some data scientists, you might have some developers who are familiar with these technologies. It also means: what are the tools, what are the guidelines we need to put in place to do our first pilots and our first proofs of concept? Then, what are the processes? How do we generate ideas? How do we identify good use cases and good scenarios that we want to pursue? And how do we help our business understand what the potential is? So basically a lot of the things that we cover in this episode. You have this as a nucleus for incubating a topic, for example artificial intelligence, where there's a lot of uncertainty. So it's about defining policies, standards, and technologies; helping the business understand where and how you can use the technology; building some pilots and proofs of concept; helping to get them into production; and then scaling this topic of AI, or any other technology topic, into the organization, so that over time your business functions, and the teams that support those business functions, will be able to do the work that this nucleus did in the beginning.

"So it's about defining policies, standards, and technologies, helping the business understand where and how you can use the technology, and then scaling this topic of AI into the organization." 

Chris: And how do the inputs get to those centers of excellence? Are they thinking of use cases by themselves? Are they interviewing the business? Is there some liaison to different business groups and business teams? How does it work? How do they get their ideas that are potentially desirable, viable, and so on, and then work on them?

Andreas: That's a great question. I think that's one of the key roles, among many, that the center of excellence plays, because it's building the relationship with those different business functions and business stakeholders. And what I've seen work really, really well is setting up things like a community, or a multiplier community, where you invite individual subject-matter experts from different parts of the business into more of a virtual project or team setting, if you will, and then, on the one hand, help them understand: hey, what is AI? What's generative AI? What can you do with it? Where do you have problems in your day-to-day? And then also use them as innovation outposts, if you will, or listening outposts: they can bring ideas to you, and you can vet those ideas with them and see, again, is that desirable, feasible, viable, scalable, for example. So it's really about building that relationship, and at the same time building trust. And what's in it for those influencers and champions is that they can be seen within their business unit as an innovator, as somebody who's leading, somebody who's at the forefront of things. So: hey, if you want to know about AI, Bob's your guy, right? He is always at the forefront of these things. You should talk to Bob, and I know he's well-connected within the organization, so even if he doesn't know the answer to our question, he at least knows the people to reach out to. That's great, right? And if you're Bob and you have that network, it's great, because your friends and your colleagues look up to you, and they know that you're knowledgeable and well-connected in this area and that you can help them. So I think there's a mutual win-win in all of this.

Chris: That sounds very exciting for Bob, and certainly for the organization, because that's how you can build awareness across the organization. Now, in the playing field of corporate innovation, we've seen a similar thing. We've seen innovation labs rising a lot in recent years. You know, teams that work on new concepts, whatever they are, before handing them over to the business. And they work similarly: either they receive challenges from the business and try to find solutions, or they come up with ideas themselves. Oftentimes it's a mix, to a certain degree one, and to a certain degree the other. But the major problem with these was always, and still is, that there was a gap between the solutions they created, the pilots, concepts, whatever, and the business strategy, for one, and also the outcomes needed by the business. So there was sometimes a misalignment with certain business problems if there was no communication. Or they created promising concepts, but then nobody from the business wanted to pick them up: there's no budget, there's no time, it's not as relevant, there's the not-invented-here syndrome, all these things that you face, specifically in large companies. Do you feel we face the same risks with AI centers of excellence? And how could we overcome them?

Andreas: I think there are certainly similar risks, right? Even if you are this nucleus, part of the data and AI team or IT team, and you're supposed to work with all these different business functions, the challenge is that unless you open yourself up, unless you do build these connections to your multipliers, and unless you build that community, you're still working in a silo. You're still coming up with your own ideas of what you think the problem is and how it should be solved. And you're most likely going to solve it with technology in mind and for technology's sake, not for the business's sake. So I think these relationships, these stakeholder connections, are critical. Because then you're adding value. You're part of this organization. You're part of this journey. You're not this team that looks at innovations five years, ten years out in the future that has no connection to what we're doing today. And again, yes, it's not invented here, so why should I care about this and adopt this? It's a nice idea, right? It's an innovation, but we have business metrics, we have goals, and we have problems to solve today. If you can make it tangible and show how the work that you are doing today, with today's technologies and today's business stakeholders, delivers tangible business value, today, ideally, or tomorrow, then I think you have a seat at the table, you've earned the right to sit at the table. And those setups are usually much, much more successful and fruitful than some lofty innovation that might or might not materialize, or some future scenario. So tangible: using tangible technology, solving tangible problems today.

Chris: And are those centers of excellence maybe also a good place to prepare businesses for the integration of AI in ways we haven't seen yet? I mean, yes, we should work in the now and deal with the challenges that we have right now. And yes, technology is moving so quickly that things we deemed impossible might be possible next year or even in the next half year. But what about things where businesses want to prepare for the integration of AI in ways they haven't seen yet? Is there also some experimental side to these AI centers of excellence that might be explored by those teams?

"You cannot just focus on the here and now. You have to build an innovation funnel and think about what are the things that we are going to start working on to build towards this future vision of our company, of our product, of our service, and what are the steps that we can take today to get to that vision." 

Andreas: Yeah, I think that's an excellent point. So think about more mid-sized, larger organizations, right, that have different divisions, different business units, maybe a larger data, AI, or IT organization. That's where CoEs can add value, by the way: where you have a critical mass of teams and issues to support. From there, I think that also gives you the flexibility to work with, for example, your HR organization, which is probably larger than the one or two people you'd find in smaller organizations. With a larger HR organization, you can define learning and development programs: what's a curriculum that we want to curate and roll out to different parts of the organization to train our team members and colleagues? How can you use AI to think about other opportunities beyond the here and now? I think that's an opportunity that sometimes gets missed. And the other thing is this: you cannot just focus on the here and now. You have to build an innovation funnel and think about what are the things that we are going to start working on to build towards this future vision of our company, of our product, of our service, and what are the steps that we can take today to get to that vision. So you have to have a somewhat balanced portfolio, right? Not just things that you can ship in the next few months, but also building that pipeline of exploration and tying that to your company vision.

The Future Workforce: AI Agents and Human Collaboration

Chris: Now, adding to that, there is indeed a broad discussion about how Gen AI is shaping the future of work, skills, and capabilities. So from your experience, talking about those learning and development pathways or roadmaps, what do you think are some of the skills needed in the future for Gen AI, beyond prompt engineering?

Andreas: I think there are three key skills that are universal. They apply to us as humans and our human relationships as much as they will apply to generative AI, and even more so. I think the first one is learning to ask better questions. That's a critical skill in business overall. We all learn in school or college not to ask questions that can only be answered with yes or no. So what are those questions when you want to find out something? How do you keep them open-ended, but not so open-ended that you can't get a specific answer? I think the other thing is to give better instructions. If you're in a leadership role today, or you want to be in a leadership role, you're going to learn that you have to give good instructions. Clear instructions that are tangible, that are easy to understand. You have to put some thought into how you formulate, how you articulate those instructions. Whether I'm working with a person or I'm working with an AI assistant or co-pilot or AI system, if you will, giving better instructions is the key to getting a good result a lot faster. Otherwise, I'm going to spend time repeating and checking and rephrasing things, time that I could have just as easily spent writing something myself or doing something myself. And then I think the third one is really learning to have more of that Socratic mindset and approach, learning to have a back-and-forth conversation, even with the AI. If you want to learn something, if you're curious, you should ask follow-up questions, right? Not just one question, but: why do you think that's the case? What are other perspectives that I should consider, or that someone should consider? It's much like the conversation that we're having in this episode. So I think those are three critical skills: asking better questions, giving better instructions, and learning to have more of a back-and-forth dialog, even with your AI.

"You have to give clear instructions, that are tangible, that are easy to understand. Whether I'm working with a person or I'm working with an AI assistant... Otherwise, I'm going to spend time repeating and checking and rephrasing things, time that I could have just as easily spent writing something myself." 
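The Socratic, multi-turn pattern Andreas describes can be sketched in a few lines of Python. This is a hypothetical illustration, not any specific product's API: `call_model` is a placeholder standing in for whatever chat service you use. The point is simply that each follow-up question is sent together with the full conversation history, so the model can build on its own previous answers.

```python
# A minimal sketch of the back-and-forth dialog pattern: each follow-up
# question is sent along with the entire conversation so far.

def call_model(messages):
    # Hypothetical stand-in for a real chat-style AI API call.
    # A real implementation would send `messages` to a service
    # and return the model's reply.
    return f"[reply to: {messages[-1]['content']}]"

def socratic_dialog(questions):
    """Ask a sequence of questions, carrying history forward each turn."""
    messages = []
    for question in questions:
        messages.append({"role": "user", "content": question})
        answer = call_model(messages)  # full history, not just the last turn
        messages.append({"role": "assistant", "content": answer})
    return messages

history = socratic_dialog([
    "What are common reasons AI pilots fail to reach production?",
    "Why do you think that's the case?",
    "What other perspectives should I consider?",
])
# history now alternates user/assistant turns: six messages in total.
```

The design choice worth noticing is that `messages` grows across turns rather than being reset; dropping the history is exactly what reduces a Socratic dialog back to isolated one-shot prompts.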

Chris: Which is, as you said, a skill that's probably critical not only for interacting with an AI but certainly for interacting with humans, in personal life, in business life, wherever. And now on a larger scale: if we think of AI as a skill or a capability in a company, it's a set of technologies, right? It's not one. But I think we can think of certain levels of maturity. So, you know, you start somewhere, then maybe you start augmenting some of your processes with AI. And then there's probably the other end of the spectrum, where things are completely autonomously driven by AI. And obviously, we're not there yet in most places. But if you think about that trajectory and those maturity levels, what do you think a few years from now, and I don't want to talk about the timeline and the crystal ball, because that's not something we can foresee, but if we're getting closer to an AI-enhanced, then AI-powered, and ultimately maybe even AI-driven future for companies, if that ever happens, what would that look like for companies and also for the people who work in them?

Andreas: So I'm glad you're taking the timeline out of that perspective, because I agree with you. It's really, really hard, right? It's anybody's guess how fast we're going to get there, or where and when we're going to hit some bumps in the road. But again, the thing that gets me really, really excited when I think about the future of business is that, yes, there will be tasks that we will be able to delegate even more to applications that we use daily as business users, and sometimes even in our personal lives, tasks that these tools can do for us. Maybe we're still in control; we've reviewed something, we say, yeah, that's good enough. Out of a set of choices and options that AI gives us, we can choose the best one. We might even get a recommendation: hey, you usually pick number one or number two here, do you want to do that again? But then going forward, there's this whole discussion and notion of agents and agentic AI, where we've already seen some first glimpses at the beginning of last year, and then with OpenAI releasing the GPTs, the agents, or somewhat agent-like components that you can build yourself in ChatGPT. I think there's a huge business potential as well: agents that you can give a task or a goal to, and within defined boundaries, they can determine the next steps they need to take. Think about booking a trip, for example, a business trip. Yes, today there are a lot of tools and platforms, and they're pretty well integrated, and they use APIs to exchange information between the different providers. So in one booking, you can get your hotel and your car and maybe something else, your train ticket or your Uber. Now, if you can delegate that task to an AI, the AI can figure out: how do I technically, in the background, make that API call to the different providers, or to Uber? And how do I set those boundaries? Because I know that you usually like to travel business class if it's a flight of a certain length, and you usually choose a mid-size car, and so on. So making those decisions for you, or even before we get to that fully autonomous phase, and whether we even want to get there is a whole other question, helping you with these tasks that you would have to do yourself today, and having more autonomy as we delegate to software, as we delegate to AI, I think that's a huge potential. Another thing that I'm excited and curious about is a concept that's been around for a long time, I think a couple of decades: multi-agent systems, where a lot of these agents interact and interface with each other. Think about things like procurement, for example. You want to buy a certain amount of material. So maybe in the future, you can have an agent that represents your company and knows your procurement policy. On the other hand, you might have an agent that represents your supplier, with their policies on how they negotiate, how much leeway they have, how much discount they can give, and when they can ship. So I could see scenarios where, at least in the beginning, there's some negotiation between systems, knowing the policies and preferences of those organizations. Ideally, you know, we still want to have a human in the loop reviewing that. But over time, who knows how far out, maybe even some of that negotiation can happen fully autonomously between these agents, if we're all confident that they're acting in good faith, that they're optimizing for the right variables, and so on.
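The procurement scenario Andreas sketches can be illustrated with a toy model. Everything below is hypothetical, a sketch rather than any real negotiation system: each agent knows only its own policy (an opening price, a hard limit it must not cross, and how much it concedes per round), the two converge through alternating concessions, and the outcome is still flagged for human review rather than being binding.

```python
# Toy sketch of two agents negotiating within policy boundaries,
# with a human-in-the-loop review step at the end.

from dataclasses import dataclass

@dataclass
class Policy:
    opening_price: float   # where the agent starts
    limit_price: float     # the boundary it must not cross
    concession: float      # how much it moves per round

def negotiate(buyer: Policy, seller: Policy, max_rounds: int = 20):
    """Alternate concessions until the offers cross or rounds run out."""
    buyer_offer, seller_offer = buyer.opening_price, seller.opening_price
    for _ in range(max_rounds):
        if buyer_offer >= seller_offer:          # offers crossed: deal
            price = (buyer_offer + seller_offer) / 2
            return {"price": price, "status": "pending human review"}
        # Each side concedes a step, but never past its own limit.
        buyer_offer = min(buyer_offer + buyer.concession, buyer.limit_price)
        seller_offer = max(seller_offer - seller.concession, seller.limit_price)
    return {"price": None, "status": "no deal"}

deal = negotiate(
    buyer=Policy(opening_price=80, limit_price=100, concession=5),
    seller=Policy(opening_price=120, limit_price=95, concession=5),
)
# deal -> {"price": 100.0, "status": "pending human review"}
```

Note that neither agent ever sees the other's limit, only the offers exchanged, and that if the two limits never overlap, the loop correctly ends in no deal rather than a policy violation.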

Chris: You know, one of my least favorite activities is travel booking, and I need to travel quite a lot for business purposes. So I would totally buy into an AI agent that would just do the job for me: find the best rates, the best connections, and the best time-optimized, or sometimes even stress-optimized, route, depending on the distance. That would be a nice thing to have. If it existed, I would buy it. Anyway, Andreas, this conversation has been incredibly insightful so far. But before we leave, we have a new tradition on this podcast: the previous guest leaves a question for the next guest, without knowing who my next guest is or what we're going to talk about. So my previous guest left a question for you. And the question goes like this: What's one piece of advice you would give your younger self at the start of your career, knowing what you know now?

Andreas: That's awesome. I love that question. I just had a session last night with a group of students at the university where I teach, and that was one of the questions as well: if you're a young professional, what would be the advice you'd give? I think: don't be afraid to get started and to take risks. Don't be afraid to ask for things, but also don't be afraid to put in the work. That combination, I think, would be my advice. It certainly takes some time to build your expertise and build your career. That's where managing expectations and putting in the work come in. But also, again, the part about not being afraid to get started. Now with AI, I see a lot of concerns, especially: if I'm entering the job market, isn't AI able to do the things that I can do fresh out of college? And my challenge yesterday was to say, well, think about where it helps you do more, where it helps you accomplish things that you would otherwise spend a lot of time on, right? Writing proposals, writing emails, things that we might not enjoy as much. If that can be taken out of your day, and it's good enough, and it's factually correct, perfect, right? It's a huge help. So there's a lot of opportunity in that as well, and seeing that opportunity, capturing it, chasing it, I think that's super exciting at this time if you're just starting out in the business.

Chris: So it's lots of courage and also risk-taking. Yeah, I understand. Well, thank you, Andreas. Again, this has been fantastic. And that's it already for this episode. Thanks again for being my guest and sharing your experience with the audience.

Andreas: Wonderful. Thank you so much for having me. This is awesome.

Chris: All right. And that's it. So thanks for listening and see you in the next episode. Take care and bye-bye. 

About the authors

Dr. Christian Mühlroth is the host of the Innovation Rockstars podcast and CEO of ITONICS. Andreas Welsch is Chief AI Strategist at Intelligence Briefing and Adjunct Lecturer at West Chester University of Pennsylvania.

The Innovation Rockstars podcast is a production of ITONICS, provider of the world’s leading Operating System for Innovation. Do you also have an inspiring story to tell about innovation, foresight, strategy, or growth? Then shoot us a note!