Subscribe Stay Updated with Insight On
Subscribe to our podcast today to get automatic notifications for new episodes. You can find Insight On on Amazon Music, Apple Podcasts, Spotify and YouTube.
By  Insight Editor / 11 Mar 2026 / Topics: Artificial Intelligence (AI) , Research & Innovation Hubs , Generative AI , Digital transformation
Leaders everywhere are asking the same questions about AI. Where is the real value, how do we measure it, and what has to change inside the organization to see meaningful outcomes?
Insight Chief Data Officer Rob Green talks with Ethan Mollick, author of Co-Intelligence, about what leaders get wrong about AI adoption in the enterprise.
Drawing on research and real-world examples, Ethan explains how AI acts as a powerful performance multiplier for individuals, often filling critical skill gaps and enabling people to deliver more, and better, work. He emphasizes that the moderating factor is no longer individual ability or even AI capability — it is organizational structure, policy, and the way leaders choose to approach AI.
The conversation explores the jagged frontier of AI capabilities and why experimentation at the edge is essential. Ethan walks through the co-intelligence concept, including cyborg and centaur models of human–AI collaboration, and shares how emerging agentic systems are beginning to change how teams tackle complex work.
They also dig into ROI, security, and governance. Ethan challenges the idea that ROI should be the first and only lens for AI, arguing that the bigger opportunity is transformation — rethinking workflows, roles, and outcomes. He offers practical ways leaders can support safe experimentation, refresh outdated guidelines, and build a lab, a crowd, and visible experiments that move AI from isolated pilots to enterprise-wide impact.
If you liked this episode, share it with a colleague.
Have a topic you’d like us to discuss or question you want answered? Drop us a line at jillian.viner@insight.com
— Ethan Mollick, Author, Co-Intelligence
Audio transcript:
Ethan Mollick:
I mean, so the most extreme version of this, which has been floating around in sort of AI circles for a while, is when will we get the first one person, billion dollar company, right? Mm-hmm . Yeah. But we don't have to go that extreme to say, okay, well, we already know AI improves performance for individuals, but it also fills in skill gaps for them. Whatever you're worst set, the AI is probably better at that than you are. And so, as a result, those people are held back by skill gaps. You know, they're good at some stuff, not good at others. Um, productivity gains, right? I can now basically an infinite army of, of interns to manage. And as a result of that, you get very large improvements in individual ability. Again, the moderating factor is organizational structure. It's not individual ability or even AI ability at this point.
Jillian Viner:
If you're making technology decisions that impact people, budgets and outcomes, you're in the right place. Welcome to Insight On. We are delighted to bring you a really special episode today with a really incredible guest, Ethan Molik. He is the author of CO and one of the leading voices in AI insights, chief Digital Officer, Rob Green sits down with Ethan to get a reset on how we're thinking about AI and make sure that we're not going the old direction because these things change and we need to change how we're approaching it. The conversation is insightful and honestly, a little bit spicy. No matter where you are in your AI adoption journey, I guarantee you're gonna walk away with something really helpful. And if you are in an organization that's maybe not moving fast enough with ai, or you still have to convince and get some skeptics, this episode is going to give you the ammunition you need to see progress. All right, grab your coffee and let's go.
Rob Green:
Welcome to Insight on, I'm Rob Green Insight's Chief Digital Officer, and I'm thrilled to be joined today by Ethan Molik. Ethan, welcome to Insight on, uh, you could be part of our, uh, our podcast today. Um, I thought I'd just lead off you, you wrote a great book called Co Intelligence, and I, I love the title of the book in particular because it frames ai, as you know, Microsoft will love that I say this, but a copilot, um, as opposed to being, uh, a massive disruptor that's gonna destroy jobs. You kind of talk about the practical application of AI and how it can help people and business units, enterprises be, be more productive generally. So how'd you come up with the title and can you talk just a little bit about sort of the implications of co intelligence?
Ethan:
Sure. I mean, so the way we work with AI has evolved. I mean, the book came out two years ago, which is infinite time in AI world, but, um, the, the methodology still works for a lot of kinds of work. There are new ways to work with AI as well, but the idea was that you basically have, there's two kinds of ways of working with ai. Either there's a cyborg or a centor. So a cyborg is blending their work with ai. So you go to AI to do something, you go back to human work and a cent divides up the work, like the half person, half horse, where you do some things with AI and some things not. And I think that, that those models are helpful because they kind of clarify how do I wanna approach AI in this situation? Yeah,
Rob:
Makes sense. And, um, you also kind of explore how AI helps individuals augment their performance, uh, improve their productivity, improve their output, and I think that's another aspect of kind of co intelligence. The AI is really helping people do things faster, better, smarter. Um, could you just talk about some of the practical use cases you've seen where that's played out?
Ethan:
Sure. And I wanna focus, it's not just productivity. I think productivity gains become a focus, and in some ways you end up just doing more of what you're already doing, right? Yeah. So part of what is interesting about AI is that it is transformative. Transformative necessarily doesn't necessarily mean replacement, right? It means it does change and transform things. So, uh, for example, I've got some research and a lot of my colleagues do showing that AI is really good at idea generation. Yeah. So if you ever need to generate ideas, have the AI generate a hundred ideas after you've generated your own. So you have your own a lot of thoughts first. Uh, it's obviously very good at sort of the usual editing writing feedback, but it's actually quite good at strategic advice, thinking through hard problems. Obviously coding, I means it's a general purpose technology, so it does many different things. Yeah,
Rob:
Yeah. No, great point, great point. Um, you also, um, I I think you coined this term the jagged frontier, and it really encourages, um, experimentation at the edge, uh, with AI capabilities. Um, a lot of enterprises though, a lot of, I, you know, I spend time chatting with CFOs, CIOs, CISOs, um, they're looking for ROI, they're looking how to protect proprietary data. Um, uh, they're concerned about potential liability. So how would you advise sort of C-level executives on how to lean into practical experimentation at that jagged, uh, frontier versus, you know, sitting on the sidelines? So
Ethan:
There's a lot going on there. Yeah. I mean, one thing is you, we don't know the shape of the frontier. You don't know what AI is, good or bad, until you use it. I mean, open AI has no idea what this value of AI is for mid-sized Midwestern insurance company, right? Or Yeah. You know, or a, you know, or a car dealership or anything else. I mean, no one's thought about this. So there's opportunities to find use cases. There's also probably problems, things it's not good at. The only way to know is to explore. I think that there is too much worry right now about some of the sort of information security things. I mean, they're very val, it's worth worrying about, but the idea that the AI is gonna leak your data or your private secrets is less of a concern than it was.
Ethan:
All of these systems really have secured themselves a lot more than they were. They're still kind of cloud-like risks, but no longer the AI will train on your data without you knowing it and send it to someone else. Um, for enterprise systems, there are a lot more esoteric security risks appearing as a result of ai. So, you know, there, it's, it's actually quite good at hacking on its own. It's great at social engineering, IT expo, uh, it itself can be manipulated through things like prompt engineering, so, um, or prompt injection. So there's a lot of new risks, but I think the privacy concern, or that we shouldn't use AI because it might leak our secrets, um, is, is generally something that a lot of companies have figured out ways to address.
Rob:
Have you seen, um, practical use cases where, um, measurement of ROI, you know, practical metrics that really help, um, enterprises measure the impact of ai?
Ethan:
Yeah, I mean, so on the ROI front, I actually don't think this has been as much of a problem in the last few months. Um, my, my colleagues at Wharton have a LONGITUDAL survey they've been doing of, uh, companies 85% report positive ROI from their ai, generative AI efforts already. Yeah, right? I talk to companies all the time doing this. I, like eight months ago, it was a very different situation. There was a lot of uncertainty. I do worry though, the ROI focus as the first thing to do is a mistake, because this is fundamentally about transformation, about an opportunity to do different things in a different way. And if you're just trying to focus on ROI is the only outcome, you're not doing r and d work, right? So there's going to be some expensive, you know, expenses. There's gonna be some failures If you're only just trying to be like, how do we write more code? But you don't think about how we're building an organization around writing more code. I think you end up in trouble.
Rob:
Yeah. Yeah. Um, either I thought there's some parallels. I've been in the industry a a long time. I'll date myself here a little bit, but, you know, I, I've been in the industry since not everyone had a desktop computer. And so, you know, you think back to the advent of desktop computing and then, um, you know, connecting computers and you get distributed computing, they move in the internet and cloud and so forth. And at each step of the, uh, technology transformation, um, there have been, you know, meaningful productivity gains that have been driven from that, but they've not necessarily been obvious at the time. What do you think are some of the things that, you know, aren't so obvious today about how AI is going to impact how work is done as, as we look forward?
Ethan:
I mean, I think almost everything. Um, yeah, so I mean, like, let's, let's take something that we know from the research that AI has a big impact. So even before Claude Code and all the new coding apps came out, we, uh, there was a nice study of Cursor, which is another coding helping tool. When that came out, it increased the amount of code that people were writing by about 38% and did not increase error rates. Okay. Yeah. So that, and that's just like right off the bat, right? I think we have multipliers on that right now. That's fine. Except that the way we do coding tends to be built around like agile development of developing other two thousands, right? And it's based on a whole bunch of assumptions about what work people can do, how you divide up the work, how you do a two week sprint that no longer make any sense.
Ethan:
Mm. So the issue that I have is that unless you're rethinking, how do we redo coding as a process, what does it mean when people could produce a hundred times more code? Yeah. You end up being stuck, right? So it's not just about a multiplier effect on productivity. This has not gonna affect everyone in the organization. If I could produce an infinite number of PowerPoints for you, that's not actually what you want in a company. You don't want a hundred more PowerPoints, right? That's not a productivity gain. Right? Right. What you wanna do is think about what was the goal of this? Can I accomplish in a new way with ai? So part of the, what I think the exciting thing, but also a trap for companies is let's do more of what we do without thinking any about it in a different way.
Rob:
Um, I guess just leaning into that a little bit further, um, and thinking about, and putting it in sort of a technology, uh, context, um, for companies that are going through, excuse me, through enterprise transformations, um, perhaps like migrating to new, new platforms, um, new ERPs. How should companies be thinking about, um, sort of those enterprise platforms and the implications for AI on the enterprise platforms?
Ethan:
It's a really interesting question, and one that's being hotly debated right now. I talked to a lot of CTOs of, of, you know, of Fortune 100 companies who are thinking about how do I start using AI to replace some of the functions that used to be enterprise functions. Even if you're not thinking about that, you start to think about how do my agents get access to these tools when it's not just a human, but I want my agent systems to be able to use it. I think people are thinking hard about what the boundaries are. When do you wanna, you know, assign a task to, you know, outsource it to a vendor, versus when do you want to think about how do I bring in the right consultants and people to help me do this internally? Yeah. And I think that it's a lot of flux right now. Yeah.
Rob:
Um, and leaning into sort of the agent, uh, capabilities, have you seen practical examples of agents, agent to agent, uh, um, you know, systems that are, um, emerging that are really impactful?
Ethan:
Tons of them. Yeah. I mean, I think there's lots of use cases where I've been seeing people get value from agents. I don't know how many I'm allowed to talk about. So it's always hard to, hard to do. I mean, coding agents are an obvious thing, right? Like if you, if you, one of the big shifts that's happened recently is if you talk to the, uh, you know, the, the team at Open AI that's coding, they're not touching code anymore, right? Most of the time it's, they're just instructing agents to do the writing. Same thing at, at Anthropic. All of the CLO codes stuff is being written a hundred percent by Claude code at this point. Yeah. So, and I, when you talk to like senior developers, they're getting these same sort of benefits. It depends on what the company policy is about, whether you gain from that or not. Yeah. So, I mean, and seeing agentic systems being used in internal tools a lot, um, a lot of simplifying of like what used to be very complex processes or bureaucratic process with ai. Uh, really lots of change happening all over. Yeah.
Rob:
And, you know, data to some extent is the fuel for ai. Um, and you, you touched on sort of some of the security, you know, concerns that maybe are abating, but how would you advise, um, you know, data scientists, data engineers, uh, those that are really responsible for data, uh, transformation, um, to think about transforming their data to make it more useful for ai?
Ethan:
So, not to upset anyone, but I actually think that the emphasis on data is a little bit of a hangover from old school ai, which means AI from four years ago, which is every bit as valuable as it was, right? Yeah. Old school machine learning for customer insights for, you know, for, for, uh, for finance, for anything else, every, every bit as valuable as you did, right? Predictive models, random forest, whatever you're build, like, they're great. Yeah. Keep doing those things. And those need clean data. Large language models operate a little differently. Your model should be a person doing something, not a machine that's learning from patterns. So accessibility is very different. If a human can use a tool, often the AI can use it. If you have a knowledge base that's searchable humans, the AI can do a search of that knowledge base. Yeah. Yeah. The idea that you need really well formatted data before you can do anything with LLMs just doesn't make sense. It's a hangover of a previous era of, of machine learning that's everybody's important, but is a different meaning right now.
Rob:
That's interesting. So, you know, the implication there is like, you can infuse the, the structured data from an enterprise, but also the unstructured data and the data that is sort of in the, the domain of everyday use to help build agentic capabilities. That's sort of how to think about it.
Ethan:
Yeah. So I mean, some of the most important data is not in any data set, right? Right. It turns out that one of the great things we can do with ai, especially with things like skills, which are sort of these prompt libraries, is you can distill that expertise and have the AI act like an expert, right? That's not written down anywhere, but what's our process for handling this problem? If you explain that really well to a modern frontier model, it often can execute that process autonomously using tools on its own to do stuff. Like, I'm able to ask the AI just for a fun example, to, you know, build a game from scratch, and I don't have to do anything else. It will figure out how to create the graphics, post the game play, test it itself without me having to do anything else, right.
Ethan:
Or in the more serious way. I've had my students launching startups in two or three days, and they're, you know, without ever having coded before, without touching code because they could just give the AI instructions to go do this kind of work. Yeah. Yeah. So I think people overemphasize the amount of that. It depends on that kind of structured data. Now, in the space of sort of data analysis, AI does a lot of things that are useful. One of the big things is taking unstructured data and structuring it, right? You can have it read through, you know, stuff, helping you organize data, have you write code, but it's a very different thing than using LMS data to day. Yeah,
Rob:
Yeah. It makes sense. Uh, I'm gonna pivot the conversation just a little bit and, uh, go back, to go back to your book. One of the points you make in your book is that, um, AI can help, uh, people, individuals, teams, business units, potentially enterprises kind of punch above their weight. Could you elaborate a little bit on that?
Ethan:
I mean, so the most extreme version of this, which has been floating around in sort of AI circles for a while, is when will we get the first one person billion dollar company, right? Mm-hmm . Yeah. But we don't have to go that extreme to say, okay, well, we already know AI improves performance for individuals, but it also fills in skill gaps for them. Whatever you're worst at, the AI is probably better at that than you are. Yeah. And so, as a result, most people are held back by skill gaps, you know, they're good at some stuff, not good at others, um, productivity gains, right? I can now basically an infinite army of, of interns to manage. And as a result of that, you get very large improvements in individual ability. Again, the moderating factor is organizational structure. It's not individual ability or even AI ability at this point.
Rob:
Yeah. And, you know, their implications there for managing, um, agents as employees effectively going forward. Um, and there's, you know, some conversation about, and I think Satya Nadella talked about this recently, about you're gonna have sort of an, an infinite army of agents that you can deploy in your enterprise. And so how should we be thinking about agents as employees?
Ethan:
So I think agents as employees is a little less useful thinking of, you know, agents as operating in the organizational structure as a whole. Right? So, I mean, agentic work is now a real thing. Like this is not, this was something we were making up and talking about, and there's a thousand definitions of agents, and you'll still see that hangover, by the way. Yeah. Lots of companies, like an agent is a prompt that's connected to another prompt by deterministic system. That's not what agents are emerging. You can see in Claude code and the coding agents how this works. There are agents that independently plan what they're going to do, get the skills and information they need to execute it, execute on their own, and then return with a, with a product for you. Yeah. So delegating stuff to them, delegating things to subagents becomes really important. That's something you could do right now. Like, this isn't a science fictional future of like, you'll imagine a team of agents. I typically have five or six different agentic tools running at a time when I'm doing other things, building software for me and doing research and all kinds of other tools.
Rob:
Yeah. Yeah. No, really good point. Um, another thing you talk about is sort of the, the weight calculation that, that sometimes it's, you know, smarter to, to hold off and kinda let the next model iteration solve a problem. How do you kind of balance leaning into what's available today versus, Hey, let's wait and see what emerges. You know, the field is moving so fast. Um, just in the last few weeks, cloud code Opus 4.5 came out and it seems like it's a quantum leap forward in code development capability. So how do you, how do you think about sort of, let's take advantage of what's available today versus let's tap the brakes a little bit, wait and see what emerges over the next 6, 8, 10 weeks? So
Ethan:
I don't think it's either or. Yeah. What I think you need to do is you, is have people in your organization, I, I usually say you need three things to make your organization succeed with ai. You need leadership. We've been talking about that. How do you build the organization around this? How do you transform things? You need crowd, which are people with access to these tools. We've also discussed that. But the third piece is a lab. You need people who are doing 24 7 work with ai. Mm. So it may not be that the tools are all available to do what you need right now, but unless you have somebody actually trying to use these tools, you don't know that. Right? Right. So the issue is that there may be a weight calculation, you might decide, like I, I feel like a lot of people who leapt on, um, talk to our documents, rag style solutions, ended up with things that are less powerful than if they'd waited six months and built on the latest set of tools to do things.
Ethan:
And that's okay. Right. Learning happens. I think what you need to do is be close enough to the ground for some people in your organization to have a sense of like, yeah, yeah, yeah. We're not, we should wait to adopt it for another six months because this is, but like, that doesn't mean you're sitting back. Right. I think it is hard to catch up if you're completely out of the game. Yeah. Because there's a skill set that you developed that is consistent. If you've used Claude since Claude, you know, one actually Claude, you know, 4.5 opus feels like that, like you'll know how, what I know what it's gonna be better at or worse at than chatt BT because I know the model's personality.
Rob:
Right. Right. I mean, it's a little bit building the muscle, right. You've gotta really engage and, and learn and stay at that jagged frontier,
Ethan:
And you need the experiments on the jagged frontier. Like I, there are things I know that AI can't do that I come back to every time a new model comes out and one by one, those things just disappear.
Rob:
Yeah. Yeah. Yeah. Good point. On that note, you know, given how fast things are moving, you know, what's something that you believed strongly 18 months ago that, um, you know, about how organizations should adopt, adopt ai, but has kind of been revised over the last 18 months?
Ethan:
So I think that a lot of the emphasis was before was like building prompt libraries, uh, and, you know, have like, let's have a set of prompts that we've tested and validated that people can use. I just don't think that that's where the future is heading right now. Yeah. Um, I think GPTs are gonna be on their way out. It's very clear that the right approach are, because the problem with those models have always been, I need to find the prompt I need to use. Right? Or I need to assign someone to use this, or I need to build an automated workflow where this prompts always fires. The AI is now better prompting itself. So the idea of skills, which are basically prompts that the AI can pick up whenever it needs to. So if you ask Claude to create an Excel document for you, it will read the Excel skill. You can just try this right now and learn how to do Excel. Right? It's like, you know, in the matrix, like I know kung fu it's the exact same thing, right? And so that changes how things operate if the AI can pick which skills to use when you don't have to have the same human judgment. Right. Which challenges the co intelligence paradigm a little bit as well.
Rob:
Yeah. Yeah. Yeah. Very interesting. Yeah. It's intering, you know, it's kind with, I think if you rewind 18, 24 months ago, the notion was we're probably gonna have roles that are like prompt engineers. We're gonna be hiring prompt engineers, and that seems to have fade. Yeah.
Ethan:
We, we've been testing, so every technique like prompt engineering, a lot of it was like this art of making a prompt quote and you'd say things like, you know, please and thank you, or I'll, you know, or, you know, if you don't do this, I'll get in trouble. And actually, all those things used to work. Even things like chain of thought prompting, which are key to ai. Um, and we've been testing those at the, at the gene of AI lab at Wharton. None of them work anymore. Interesting. All the AI come, you know, like all those prompt techniques don't matter. Yeah.
Rob:
But, uh, you made an interesting assertion. You kinda just dropped it in there, that GPTs are on their way out. Can you elaborate a little bit on that?
Ethan:
Sure. So GPTs are open AIS attempt to sort of build a library of prompts you can use, right? They're like prompts with a little bit of tools or other, other additions. And the problem with the GPT libraries, first of all, it was never really fully built out by open ai. It was sort of partially there. And, but even in the other implementations that you've seen by lots of other companies, the issue is they're prompt libraries. And a prompt library requires you to have an expert who's good enough to be able to use no one to use the prompt and evaluate the output. They don't work for agentic systems because that's only a co intelli. It's like I pick the prompt or the GBT, I use it, I evaluate the output, then I use another one. An agentic tool just goes and does hours of tasks without asking you. Yeah. So GPTs don't make sense in that kind of environment, because I'm not triggering a single prompt where I work back and forth with the ai, I'm sending off the AI to do a job, and that requires skills and harnesses and, um, and, you know, maybe MCP servers, a lot of other little tools you can use. Yeah.
Rob:
But what you're not saying is it obviates the need for all the infrastructure that's being built out. All the billions of dollars are being poured into building data centers and, and the infrastructure support, ai,
Ethan:
Tps are very specific type of approach to working with ai. Yeah. Um, and open AI is terrible naming things, so it's easy to get confused. But GBTs, I mean, they, they have a GPT store, which is a lot of companies were adopting this idea, which is, is like we create custom prompts inside OpenAI. Yeah. And, you know, Moderna had hundreds of these, for example, from one company that's talked about publicly. Uh, you know, these sorts of prompt library based agents are not as important. I don't see any sign yet that there's an overbuild out of data centers, uh, at this point, or an expectation. You know, nobody knows what happens in with finance in the future. Yeah. Can't say whether there's a bubble or not. Uh, I haven't seen side of it, but there might be. But I think it's important to differentiate the difference between a financial bubble. Like maybe people are overspending versus a technical or use bubble. There's no doubt that the AI is gonna be absolutely transformative. Who cares in some ways, in the big picture about Yeah. Which company is the winner or loser, no offense to many partners here, but like Yeah. The, uh, but you know, for all of us though, this technology will march on. Yeah.
Rob:
Um, so it, it feels like we're moving out of the hype cycle phase of AI and into sort of the practical application. We're seeing lots of capabilities emerge, lots of startups, but also lots of incumbents that are introducing capabilities that are, that are driving productivity improvements, that are driving meaningful innovation. Uh, but for those companies, organizations that have been maybe sitting on the sidelines a little bit, how would you advise them to start to lean in and get started?
Ethan:
Ironically, by the way, one of the things I had GPT 5.2 Pro do was analyze all the data in terms of the hype cycle's not real. It's only happens like 8% of cases that things go through that Garner Heights hype cycle. Um, uh, so, you know, I think it's important to realize like, there isn't really a sitting this out. This is a big change, right? Yeah. And like, it's sort of like, you know, sitting out the internet or sending out steam power, like it's going to affect you one way or another. It's hard to imagine this doesn't have an impact on what you do. Yeah. And I, I think that there's a difference between saying we need to build a deep IT solution right now, and we need to start experimenting and figuring out what this thing could do or not do. And I think that a lot of the barriers to why people don't start experimenting are like half formed rumors.
Ethan:
Like, I'm always surprised at how much random LinkedIn rumors end up creating belief systems. Like there was a study out of MIT, uh, I, I'm, uh, for those listening, I'm doing, uh, I'm doing scare quotes in my hands, uh, that said, 95% of AI projects fail. That wasn't a study that was an interview of 52 people by one guy at a conference who made a judgment call about it. Yet it moved the market. Right? Because why, because information source are quite bad on this. Like, you have to get your hands on this, or you won't know what it does. Like, it's completely fine for you to say, Hey, you know, Ethan, I've listened to you. We've tried this. It's not working in our field. We've given an honest effort. Great. You've learned something valuable. Come back and then check whether it works for you. Yeah. I don't think that's gonna be the case for most people, though. Yeah.
Rob:
You know, if you think about back to the, the cloud and how the cloud emerged, it really seems to me it kinda started at the developer level, uh, or the departmental level got proved out and then sort of moved to the business unit level and ultimately became an enterprise, uh, level computing paradigm. Is that how AI plays out, do you think?
Ethan:
I I think that the thing about AI is good AI users are everywhere in the organization. They're often the technical people are the worst AI users. Yeah. Because they want them to be using a deterministic system. So they use like, you know, lms like a fizzy fuzzy processor to do some like light level, you know, like, oh, it'll do some categorizing for us, or it'll help sort things and do NLP. That's not what this stuff does. Yeah. So when you actually look at how this is being used in breakthrough ways in marketing, by creative people in hr, in strategy work, and, and you know, in, uh, in mar in all kinds of like different fields, you'll see that that innovation's happening everywhere. And when I talk to companies, it's often completely different pockets of the organization that are being transformed. Sometimes it's legal, sometimes, like you, you'd be surprised at how many things happen usually requires a leader who's willing to adopt a risk or accept some risk for doing this. Yeah. And who's willing to experiment and model that experimentation. Once they do that, then things start to change. Yeah.
Rob:
Um, we've touched on the, the risk or the security concerns a little bit, but, um, just talk a little bit more about the risk taker. Who is that, uh, you know, what are the, the risks that they should contemplate as they start to lean into ai?
Ethan:
So we talked a little bit about all the rumors that kind of go around, right? So one of the points about the rumors is, look, a lot of the rumors about the what a, where the dangers are of AI are completely wrong, right? So you could absolutely get an enterprise license with tons of companies that will have all of the data protections that you wanna have your data won't be trained on. You're taking the same risk as any other cloud application to work with, right? Right. Yet companies are still like really worried about this. Um, even though that's a solvable problem, partially 'cause legal departments have read these sort of horror stories that are often not true about LLM stealing data. Right? So there are concerns, but they're not, I mean, I could tell you if like JP Morgan and and Novartis are both figuring out ways to use chat, GPD you like, it's companies could figure this out.
Ethan:
Right? I think that the, the, the, the kind of wider set of issues though, is there is a lot of uncertainty here. Like, AI makes mistakes. We don't know how to integrate into systems. Something could go wrong, maybe things go right, but so somebody in the organization needs to be the one who says, look, I take responsibility for the fact that we are taking a risk here, that we are experimenting. Because if you don't do that, the risk is diffused in the organization. Nothing happens. Yeah. I, where I see transformation happening the most is where sea level people make a decision that like, this is important. Like I was talking to, uh, Nikolai Tangan, who, uh, runs a Norwegian sovereign wealth fund, the biggest pool of money on the planet. He has a, like basically a hundred percent of AI adoption in the company. And the way he did that was he overcame the risk managers and said, listen, this is important that we use it every meeting. He asked, how are people using ai? And he pushed use of it. And I've seen that happen in company after company. Walmart had the same sort of thing where c c-level kind of is like, we gotta start using these tools. This is important to transform things. So I worry when everyone sort of sits out and lets the AI committee form into 2023 make decisions, you're kind of in trouble.
Rob:
Yeah. One of the things we did at Insight very early on, we were very fortunate to have early access to, um, chat GPTs models a few years back, but we also recognized that like providing some guidance given the potential risk was really important. And so we had developed, um, AI usage policies and guidelines that we published across the enterprise. Do you see much of that happening?
Ethan:
So the problem is those have also come to become very bureaucratic in organizations Yeah. And very anchored in what AI used to do. Yeah. So like, when you're like, and also they become terrifying. So I mean, when you, when you survey people, 50% of Americans say they use AI at work, right. And they report, by the way, a three times productivity gain on tasks they use AI for. And you talk to companies like, we don't see that. And because they'd be insane to show you they're using ai, you have a policy that says you get fired if you use ai. Wrong. Right. What's wrong? Who knows? There's a giant policy about this. Yeah. Right. And like, you know, so why would you ever, like, so I, I worry a lot that those policies made a lot of sense. You know, when you first put them out, the question is are they being revisited?
Ethan:
Are they being revisited in a way that for sure actually creates huge areas where people can explore and carefully ring fences? The areas where you don't use AI as opposed to of one, which is let's, you know, here's all the ways you can be punished if AI goes wrong. Yeah, sure. I mean, the truth is, there has not, you know, I, I, you know, feel reluctant to say this because who knows what happens in the news. But given that there is a billion people using these systems, I have yet to hear of a single major corporate incident where it wasn't clearly someone's fault that they used AI wrong. Right? Yeah. Like, they turned in a hundred percent AI report, you know, instead of, instead of writing it by hand, that's their problem. I haven't seen a case yet where an AI hallucination should, that that was, you know, that should have never happened, caused a major issue for a company. Yeah. So you have to balance the risks. Yeah.
Rob:
Yeah. Really, really good point. There's nothing in the news that's been, uh, sensational about AI doing nefarious things that led to bad outcomes for, and
Ethan:
I, look, there's all kinds of stuff about what's the social impact on people. And obviously people are like, are turning in bad AI output all the time as their own output, but then they're responsible, the lawyer who turns in, um, a, a, you know, AI citations that are wrong without checking them, it's very clear who's at fault in those issues. Yeah. So like, I don't think that changes that much about how organizations work. Right. Errors are still your responsibility, right? You're still taking risks one way or another with tools. It, it's, I feel like people make this more complicated than it needs to be.
Rob:
Yeah. Yeah. You really good point. And one of the things we recognize in publishing AI guidelines was we had to make sure they stayed current. And so we revisit them every couple of months.
Ethan:
That's great. And a lot of organizations don't do that, right? They, they produce something and now every AI use case has to go in front of an AI use case committee that meets once a month. And this, you know, and the, and there's a huge backlog. And, you know, I mean, and that's just a recipe for like shadow AI use inside your organization.
Rob:
Yeah. One, and one of the things we've found, and we've got, you know, we, in, in our own internal environment, we've got, you know, thousands of agents operating. Um, but the security tools have really come a long way as well to kind of keep an eye on what's going on inside the enterprise so that it, um, mitigates some of the risk of experimentation with, with agenta capabilities.
Ethan:
Right. And that's part of what you have to do, is like, we have to redesign organizations around the risk cases and the upsides of the downsides, but like a risk first when there's a time of great change, A risk first mentality is probably a bad idea, right? Like, and I mean, it's a similar problem when people ask me all the time, well, what's, what's the use case I should copy? Because that's how it worked. You know, like it was value in being a medium speed follower in cloud, right? Let every other people sort out how this all works, right? And we'll, we'll get there when it's a well sorted solution. And I know who to pay. The problem with AI is because it is, no one can tell you how to use it in your field, and no one will tell you, right? You have to wait a long time for a software developer to tell you how, you know, how AI can help your dental, your, you know, dental chain or whatever, because there's nobody doing this work. And by the way, the people who are building the software don't have any special knowledge or ability to use AI systems either, right? I mean, in fact, when you talk to the AI companies, they're often building tools on their own AI products where they have no control of what the large language model does. Like the research labs are putting out their own lms, and, you know, so you have to be experimenting to know what these things can do.
Rob:
Yeah. Yeah. Great point. Um, Ethan, as we, um, start to wrap up here, I guess one, one last, uh, open-ended question. It's a little bit hard to answer given the rate at which things are changing. But, uh, as we sit here today and you look forward, what are, what are one or two things that get you excited about what you see on the frontier for ai?
Ethan:
So I think there's a lot of things happening. I'm surprised, um, that there hasn't been a sort of slowdown. I sort of thought maybe we'd hit some more barriers, but we haven't. Um, so I think that what I'm really interested in, in acceleration of scientific research, how do we do that? And that has all kinds of typical problems that you have in AI space. Like what does it mean to accelerate stuff? Who's gonna read it if we accelerate it, right? Like, how do we deal with publishing more research? How do we know if it's right or not? How do we absorb it? Like there's a thousand problems that are really interesting, but have to be solved if we wanna make this a real thing? So that, that's really interesting. And I'm really excited. I'm a professor of entrepreneurship, right? I'm a former entrepreneur myself. Um, my, uh, you know, I, I co-founded a a, we didn't call it SaaS at that point, 'cause that didn't exist as a name, but, uh, invented the paywall. I still feel a little bad about that . Um, and, uh, in the, in the late nineties. Um, and so as, uh, as a result of all of that, like, I'm excited about entrepreneurship too. Like entrepreneurship is all about the fact that you're bad at almost everything, but you're good at one thing and you have to hope the one thing that you're good at is better than 99 things you're bad at. Yeah. It's great that AI can fill some of those gaps. So that's exciting as well. Yeah.
Rob:
That's great. Well, Ethan, this has been a, a delightful conversation. Thank you so much for joining us at Insight On Thank you for
Ethan:
Having me.
Speaker 4:
Thanks for listening to this episode of Insight on. If today's conversation sparked an idea or raised the challenge you're facing, head to insight.com. You'll find the resources, case studies, and real world solutions to help you lead with clarity. If you found this episode to be helpful, be sure to follow insight on, leave a review and share it with a colleague. It's how we grow the conversation and help more leaders make better tech decisions. Discover more@insight.com. The views and opinions expressed in this podcast are of those of the hosts and the guests, and do not necessarily reflect on the official policy or position of insight or its affiliates. This content is for informational purposes only, should not be considered as professional or legal advice.
Subscribe to our podcast today to get automatic notifications for new episodes. You can find Insight On on Amazon Music, Apple Podcasts, Spotify and YouTube.