Co-Intelligence: An AI Masterclass with Ethan Mollick (2024)

Full Transcript

Note: Transcripts are generated by machine and lightly edited by humans. They may contain errors.

(00:02)
Ethan Mollick: The idea is you don’t know what AI is good for or bad for inside your job or your industry. Nobody knows. I think a lot of people think there’s a secret instruction manual out there. There is not.

(00:17)
Darius Teter: Welcome to Grit & Growth from Stanford Graduate School of Business, the podcast where Africa and Asia’s intrepid entrepreneurs share their trials and triumphs with insights from Stanford faculty and global experts on how to tackle challenges and grow your business. I’m your host, Darius Teter, the executive director of Stanford Seed.

It seems like no matter where you turn these days, artificial intelligence is the topic on everyone’s lips. It’s in the news daily, and the pace of development is unprecedented. From total doom to utopian optimism, everyone has a hot take, and it’s clear that a broad transformation is underway. But what does it actually mean today? And for the purposes of this podcast, what might this revolution mean for your business? Today I’m joined by a visionary thinker, someone who’s deeply tapped into and working intensely on the intersection of artificial intelligence, business, society, and education.

Ethan Mollick: I’m Ethan Mollick, a professor of innovation and entrepreneurship at the Wharton School. I’m also the author of Co-Intelligence, which is a new book out that’s sort of a guide to thinking and working with AI.

(01:28)
Darius Teter: I’ve been following Ethan’s blogs for a long time and he is the leading thinker and doer on the applications of AI to business. In his new book, Co-Intelligence: Living and Working with AI, he prompts us to experiment and discover how AI can revolutionize our own work and our lives. So we’re going to peel back the layers of hype and fear to discuss how AI may redefine how we work, learn, and solve problems. And for you as an entrepreneur, we’ll ask: Can it be the breakthrough you’ve been waiting for, or perhaps the challenge you need to prepare for? Let’s find out together. I’m very interested in the practical applications and what it particularly means for people who are trying to start a business or build a business. So what I did is, I went back and I read your book. Was it The Unicorn’s Shadow?

(02:22)
Ethan Mollick: Yes, my entrepreneurship book a while ago, yes.

(02:25)
Darius Teter: Yeah. So the question that came to me is: that was four years ago. In what ways would the advice you gave in that book change now that we have a co-intelligence that we can work with? And maybe I would start with the whole idea of ideation. What’s changed?

(02:44)
Ethan Mollick: I mean, the idea-generation side of AI is what hits most people first, and that’s a pretty profound set of changes. I’ve taught entrepreneurship courses for over a decade, 15 years. A lot of my students’ ideas have gotten funding, so I’ve seen lots of ideas, and at least for starting ideas, the AI does a better job than most of my students. Now, the very best ideas people generate on their own, but it can help them come up with ideas. And we know this because we’ve actually done head-to-head comparisons with humans, judged by willingness to pay, and AI out-innovates people in most cases. So, for example, if you just have a conversation with GPT-4 and you’re like, “Hey, I’m trying to come up with a business idea. Ask me questions, help me come up with something,” you’re going to come up with something pretty solid, right? It’s not going to be bad, at least as a starting point. So especially if you feel stuck, you’re like, “Hey, listen, I’m a small business owner in Kenya who has a bunch of needs. Let’s talk through it.” We already know from early results that this helps. AI advice is useful for entrepreneurs. Ideation is just one piece of the puzzle, though; in fact it’s much more like having a co-founder that isn’t really a human. That’s the way I’d be thinking about it.

(03:51)
Darius Teter: It’s interesting that you mentioned Kenya. Just for fun, I went into GPT-4 and I set up this hypothesis where I’m running a series of small clinics, rural health clinics, across Kenya. I asked GPT-4 to help me come up with a customer journey map, and we went through a series of back-and-forths around what the customer personas would be, what their emotions would be when they enter the clinic, how they would learn about the clinic, the classic customer journey map. And it did remarkably well. But I wasn’t clear, and this is my own lack of experience, on how to get it to be more critical and analytical of the things I put forth. And that’s something I was very interested in also in your book. You talk about something that we do at Stanford called a pre-mortem, something we teach to MBAs. “Your product failed. Why did that happen?” Could we use these tools to adopt a critical, analytical persona to evaluate our ideas and our strategies up front?

(04:47)
Ethan Mollick: I already make my students use a GPT that does pre-mortems for them. We’ve already done that. I make them stress-test their ideas. By the way, on the Kenya thing, I want to point out this is not abstract at this point. So if you look — for example, there’s a nice study by Nicholas Otis at Berkeley and a bunch of other people where they did a controlled experiment in Kenya and the entrepreneurs got advice from GPT-4. The bottom entrepreneurs did worse, for reasons we can kind of go into; it’s interesting. But the top entrepreneurs got 18 percent improvement in their profitability — 18 percent. There’s almost nothing — I mean, as somebody who helps advise — nothing we do gets you 18 percent improvements. There’s not an intervention that does that. That’s insane. But that’s a robust result. That feels like an emergency to me, to think about how we can use this positively. So while we’re talking abstractly here, yes, there are tons of tools. I absolutely would recommend a pre-mortem. I would recommend interviewing the AI and asking it to pretend to be a customer. I would recommend asking it for advice. The low-hanging fruit is clearly there.
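
To make the pre-mortem concrete: here’s a minimal sketch of such a prompt using the OpenAI Python SDK. The model name and wording are illustrative assumptions, not the actual GPT Mollick’s students use.

```python
# A pre-mortem prompt sketch. Assumes the OpenAI Python SDK (v1.x) and
# an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

PREMORTEM_SYSTEM = (
    "You are a skeptical strategy advisor running a pre-mortem. "
    "Assume the venture described by the user has already failed. "
    "Work backwards: list the most plausible reasons for the failure, "
    "then ask follow-up questions that stress-test the plan."
)

plan = "A chain of small rural health clinics across Kenya."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": PREMORTEM_SYSTEM},
        {"role": "user", "content": plan},
    ],
)
print(response.choices[0].message.content)
```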

(05:46)
Darius Teter: As we navigate the complex intersections of AI with our lives and work, the challenge lies in identifying where and how it can be used most effectively. It’s a challenge because there’s no manual, and there’s no manual because — and this shocked me to learn — even the companies and people behind the most advanced generative AI models don’t know how they work. Let me repeat that. Google, OpenAI, and Anthropic can’t explain, beyond general principles, how their models come up with the answers they give. We just don’t know yet. So that’s where disciplined experimentation comes in. To truly understand AI’s potential in our workplace, Ethan explains the importance of getting in there and figuring it out for yourself. In his book, Ethan has laid out four essential rules to guide us. These rules aren’t just theoretical; they’re practical steps we can all take to make the most of this transformative technology. The first rule is simple yet profound: always invite AI to the table. This means embracing AI as an active participant in your work, leveraging its capabilities to enhance creativity, efficiency, and innovation.

(06:56)
Ethan Mollick: I don’t have easy answers. We don’t know how good AI is going to get. We don’t know what it’s going to replace. We don’t know how long it’ll take systems to change. You don’t know what AI is good for or bad for inside your job or your industry. Nobody knows. I think a lot of people think there’s a secret instruction manual out there. There is not. OpenAI doesn’t have any secret knowledge you don’t have. They don’t know anything you don’t. They have never thought about how this could be useful to help Stanford Seed do its work. They haven’t thought about how to help a small business owner in Sierra Leone do a better job running their business; it has never even occurred to them. No one’s tested it, and nobody can tell you how good it is at your particular tasks. We just don’t know. And so part of what you need to do is figure this out yourself, and the only way to figure it out is disciplined experimentation, and the only way to do disciplined experimentation is just to use it a lot, for everything you possibly can.

(07:47)
Darius Teter: So I guess the question is: Are more interesting and new jobs being created faster than less interesting old jobs are being destroyed?

(07:54)
Ethan Mollick: It’s too early to know, but the hope would be yes, right? Historically that has been the case, or at least higher-paying, better jobs. We don’t know the answers on job loss. It’s not a conversation that’s actually even that useful to have, because we don’t know what’s going to happen. It’ll be different. Jobs are bundles of tasks. We do many things at a job. As a professor, I teach, I do research, I go on podcasts like this one, and I love all of that. I also fill out expense reports and grant paperwork, and I don’t like doing those things. If the AI does those things, it takes a big part of my job away, but my job shifts and becomes better.

(08:22)
Darius Teter: I think that’s a perfect segue to the second rule for co-intelligence, which is to be the human in the loop. Now, my most basic understanding of that comes from something one of our business leaders said: AI makes good easy, but great is still really hard. And part of that is because of the obvious limitations at the frontier of what these models can do. So what is the role of the human in the loop?

(08:47)
Ethan Mollick: So the AI is better than a lot of people at a lot of jobs, but not at their whole job. And whatever you’re best at, you’re almost certainly better at it than the AI is. So part of the question is: What do I do well and want to double down on, and how do I figure out how to hand off other parts of my job to the AI as a result? If AI keeps getting better, how do I double down on what I’m good at and build expertise in that, so I stay ahead of where AI is? How do I think about what I want my future and my job to be? I think that’s a powerful way to think about problems like this.

(09:18)
Darius Teter: So we’re not all just going to become super lazy?

(09:23)
Ethan Mollick: I think about the fact that all the evidence we have is that people have been cheating in school all the time, and we’ve just ignored it because we haven’t had to pay attention to it. And so people are always going to be lazy, because we’re optimizers. There are things we care about and are intrinsically motivated for, and there are things you have to extrinsically motivate people to do by giving them a reward, and this will shift the boundaries between those things, like so many other technologies have.

(09:47)
Darius Teter: There’s a whole section in your book about the importance of expertise, how expertise is acquired, and the fact that there are no shortcuts there; staying au courant in terms of your level of expertise is important. But at the same time, you also identify that there is kind of a playing-field leveler here around skills: the worst performers get the biggest boost from using this co-intelligence, and the greatest performers get only a marginal boost. So in that respect, it’s kind of a skill leveler.

(10:16)
Ethan Mollick: So the early results are skill leveling, but those are the early results from AI because it basically does the work at the 80th percentile. So if you were below that, it does enough work that you get up to the 80th percentile. We don’t yet know, as people get better at using these systems, whether they’ll boost everybody up to the 99th percentile, whether the best performers will get a 10 times boost, or whether everyone gets an equal boost. We don’t know any of those answers yet. So it’s early days on some of those questions.

(10:42)
Darius Teter: Okay. Third rule for co-intelligence, and I love this quote: “Working with AI is easiest if you think of it like an alien person rather than a human-built machine.”

(10:51)
Ethan Mollick: Well, they’re trained on human language and they’re refined on human language, and it turns out that they respond best to human speech. There’s some early evidence that coders are actually worse at using AI because they think it works in a rational kind of way, and it doesn’t. But if you’re used to working with people, you can start to get a sense of what’s going on, where its mind is at, even though there’s no mind. It works like a thinking person. So practically, talking to it and giving it tasks like you would a person often gets you where you need to go.

(11:21)
Darius Teter: Say a bit more about treating it as a person. Does that include giving it a persona?

(11:25)
Ethan Mollick: AIs often need context to operate in; otherwise they produce very generic results. So a persona is an easy way to give it context. “You are an expert marketing manager working out of Delhi, India, focusing on technology ventures that work with the U.S.” will put it in a different head space than if you say “you’re a marketer,” or if you don’t give it any instructions at all. So it’s a nice beginning to give it that kind of context.

(11:52)
Darius Teter: It’s important to understand the fundamental nature of how these systems work. Unlike traditional software, which follows a set of deterministic rules, generative AI works on probabilities. This means that the responses it provides are based on a broad spectrum of possible answers rather than a single fixed outcome. Think of AI like a jar of marbles. Each color represents a different possible answer. So when you ask a very general question, it’s like reaching in and grabbing a handful of the most common colors. But to get a specific color, you really need to provide more detailed context guiding the AI to the right part of the jar where the more relevant or useful responses will be found.

(12:33)
Ethan Mollick: So think of the answers it can give as a massive probability space, like a cluster of points. The AI gives you stuff from the center of that cluster every time, the mean, kind of average sort of answer. Your goal is to force it to pull from a different part of the probability space, one that is much better suited to your needs, right? It’s sort of like a Google search: you don’t want to search for “ways to improve business.” You want to search for “ways to improve business India,” whatever your keywords are, and you’ll get results that are not just Wikipedia. The same kind of thing happens here, not technically, but roughly.
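
To make this concrete: a minimal sketch of steering the model with a persona and context, assuming the OpenAI Python SDK; the prompts and model name are illustrative.

```python
# Steering the "probability space" with context. Assumes the OpenAI
# Python SDK; prompts and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(system: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

question = "Suggest three ways to improve my business."

# Generic persona: answers come from the center of the distribution.
generic = ask("You are a marketer.", question)

# Specific persona: pulls from a more relevant region of the space.
specific = ask(
    "You are an expert marketing manager based in Delhi, focused on "
    "Indian technology ventures that sell into the U.S. market.",
    question,
)

print(generic, specific, sep="\n\n---\n\n")
```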

(13:07)
Darius Teter: Ethan’s fourth rule touches on just how rapidly this technology is improving: assume this is the worst AI you will ever use. Just think about that for a second. The AI we’re working with today, as impressive as it is, will be nothing compared to what’s coming in the future, whether that’s a year from now or next week.

(13:26)
Ethan Mollick: I mean, we’re early days still. There’s a lot of stuff still being built, and I think people over-index on the present. Startups especially, weirdly, don’t seem to be adjusting to technological change quickly; they seem to be betting on solving today’s problems with today’s AI, implementing RAG (retrieval-augmented generation) in LangChain and all that. Why is that a bet for the future?
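
For readers who haven’t met the term: RAG retrieves relevant documents and stuffs them into the prompt before asking the model to answer. A minimal sketch of the pattern, assuming the OpenAI Python SDK; the documents and model names are illustrative.

```python
# A minimal retrieval-augmented-generation (RAG) sketch. Assumes the
# OpenAI Python SDK and numpy; documents and models are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Our refund policy allows returns within 30 days.",
    "Clinic hours are 8am to 5pm, Monday through Saturday.",
    "We accept mobile payments via M-Pesa.",
]

def embed(texts):
    resp = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question: str) -> str:
    q = embed([question])[0]
    # Retrieve: dot product equals cosine similarity here, because
    # OpenAI embeddings are unit-normalized.
    best = docs[int(np.argmax(doc_vecs @ q))]
    # Augment: stuff the retrieved context into the prompt.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using this context: {best}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("When is the clinic open?"))
```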

(13:47)
Darius Teter: Just a few weeks ago, I had a fascinating conversation with Steve Ciesinski, a lecturer here at Stanford who’s focused on the challenge of scaling international businesses. We talked about how in the recent past small businesses were at a huge disadvantage compared to large firms because they couldn’t access the same high tech tools affordably. Fast forward to today and thanks to cloud-based software as a service, even a startup with modest revenue can access tools that were once exclusive to Fortune 500 companies. This shift has been a massive leveler by reducing costs while enhancing efficiency in all parts of these small businesses. So I wondered: What might AI mean for SMEs who make the investment in leveraging these tools?

(14:28)
Ethan Mollick: I often point out when I give a talk to a Goldman Sachs, I’m like, “The AI available to every kid in Mozambique is better than the AI you’re using internally in your company.” There isn’t a better model than GPT-4, maybe Claude 3, whatever, and that’s publicly available. Companies do not have secretly better models. They’re all broadly available. They’re available for free through Microsoft for many people, otherwise for relatively small amounts of money compared to other kinds of business software. And there’s no advantage: Goldman Sachs does not have an advantage in how to use AI over you. So this is a really unique time. How often do we have a complete — it reminds me of the lead the Philippines and East Africa had when mobile phones came out. They were able to jump a whole level of technology, from wired to mobile, and that’s why a lot of the ideas of how to do mobile payments came out of experiments in the Philippines and East Africa, with people using phones to move money. We’re in that same boat right now, and organizations around the world should notice; I think U.S. companies are often sleeping on the level of profundity of the change that just happened. If AI is the future, it’s universal.

(15:30)
Darius Teter: I’m curious, what are some of the other opportunities that can address inequality, particularly in terms of access to growth opportunities? I’m thinking here, fintech is obviously one of them, but what about democratizing education, access to health, even energy, all of the core challenges in a lot of these emerging markets. Where do you see the potential here?

(15:52)
Ethan Mollick: I think it’s broad-based, right? Part of the issue is that a place like Silicon Valley thrives in part because you have a diverse ecosystem of mentors you can reach out to, and you can hire the right people for your job. I’ve been doing a lot of work on startups for a long time, and the number one thing that holds people back is co-founders and employees: narrow gaps in expertise get in people’s way, right? “I don’t know how to do this thing, so I give up.” The AI can work as a mentor in that piece. It can work as a confidant, an advisor. I feel like the equalizing of the entrepreneurial process is itself an interesting thing to be thinking about.

(16:30)
Darius Teter: Say a bit more about what it means for the startup entrepreneur.

(16:34)
Ethan Mollick: I mean, integrate AI into everything you do. You have a co-founder, a first-pass legal document reader. Look, you’re making trade-offs all the time. The fact that the AI is pretty accurate still beats most advice you’re going to get. I still think about how, from a very U.S.-centric point of view, as a co-founder of my company I was in charge of payroll. I had no idea at that point, this was sort of the early days of the internet, that you could pay someone to do your payroll for you for a couple cents per paycheck, and I would spend hours in Excel doing taxes for each payroll, which was an insane thing to do. That alone would’ve been valuable. Having someone to bounce ideas off of, help create the marketing material. Entrepreneurs are asked to be jacks of all trades. They’re not equally good at everything. It is a strange situation that we have a tool that’s going to be at the 80th percentile of everything. By the way, it also simulates customers really well. It builds websites. This is what we’ve all been waiting for.

(18:02)
Darius Teter: As you said, there is no manual. An inordinate number of people end up looking for your prompts. Just to be super honest with you, even I have found your prompts and shared them around the building here. I have a startup in Mozambique. I have a startup in Tanzania. I want to find shortcuts and ways to have this sort of co-founder, advisor, co-intelligence on all the tasks you just described. Where do I start?

(18:25)
Ethan Mollick: I think you start by treating it like a person, right? Less looking for magical prompts, more literally just, “Hey, I have a problem, help me out with it.” You start by interacting with it like a person. You get your 10 hours in, you start to be pretty good with this thing. And then I think this is where we need communities. This is where we need places like Seed. Where are our libraries of prompts that help you out?

(18:44)
Darius Teter: Say a bit more about the 10 hours. I read that, but I want our listeners to hear what you mean.

(18:49)
Ethan Mollick: I mean, there are a lot of reasons people stop using AI. It’s weird. It freaks them out. It gives them bad answers initially. It doesn’t feel that profound. You need to push through. There is a point of expertise with this where you start to get what it does and what it doesn’t, where you’ll get a clichéd result and where you might get something interesting. You have to be doing stuff. And so my 10 hours is my loose rule of thumb for how much time you have to spend using these systems to get it. It’s not a precise number; I haven’t measured it. But informally it means you’ve pushed through those initial couple hours of resistance, you’ve found use cases, and you’re trying things out. And again, as an entrepreneur, why are you not asking it to write marketing material for you? “Read my letter as one of my customers would.”

(19:28)
Let’s practice a negotiation beforehand. Literally treat it like a co-founder. Ask the questions you would ask one: “What do you think about the choice I’m making here? Give me the pluses and minuses.” And then you start to realize, oh, every time I ask, it seems to give very U.S. answers. It doesn’t realize I’m running a much smaller business in X country. And then you’re like, okay, this is pretty good, but it’s answering in too formal a language. Let me teach it: don’t be that formal in the future. And I start putting that in my prompt. You have to go through a process. It is surprisingly good at providing a first pass at marketing material, work I wouldn’t have done as well myself. We stopped using outside marketers because the AI just did the marketing work for us better. I have to write proposals. If I have to write 10 proposals a week or whatever, then it’s worth spending the time to figure out how to do that with AI.
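
The iterative process Ethan describes maps naturally onto a chat loop that keeps the conversation history and folds lessons learned into standing instructions. A minimal sketch, assuming the OpenAI Python SDK; the instructions shown are illustrative.

```python
# Keep conversation history and grow a standing instruction set over
# time. Assumes the OpenAI Python SDK; instructions are illustrative.
from openai import OpenAI

client = OpenAI()

# Standing instructions, accumulated as you learn the model's habits.
custom_instructions = [
    "I run a small business in Tanzania, not the U.S.; localize advice.",
    "Don't be formal; write the way I talk to customers.",
]

history = [{"role": "system", "content": " ".join(custom_instructions)}]

def chat(message: str) -> str:
    history.append({"role": "user", "content": message})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Draft a first pass at a flyer for our new delivery service."))
print(chat("Good, but shorter, and mention we take mobile payments."))
```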

(20:15)
Darius Teter: In today’s digital workplace, employers and employees are struggling to establish policies for the use of AI, from vague guidelines that hint at dire consequences for misuse to the complete absence of any formal policies. Everybody, including Stanford University, is still figuring it out. And those that do are going to have a real advantage, because it’s not just you as an entrepreneur who needs to put in your 10 hours of experimentation. Every member of your team should be doing the same, embracing AI to unlock its full potential, because the reality is many of your employees are probably already doing that, perhaps even secretly, testing its capabilities and figuring out how it can enhance their work. So encouraging open exploration and integration of AI across your organization can transform these isolated experiments into powerful collective advancements. Ethan explains why this hands-on approach is crucial and how it can drive innovation in your business.

(21:14)
Ethan Mollick: I mean, there are tons of reasons why someone would not be willing to disclose AI use. First of all, most companies have unclear policies. Almost every company policy I read either isn’t clear at all, or it’s “you can use it, but you might get fired if you use it wrong.” What’s wrong? Irresponsible, bad use is wrong and will get you fired. But it’s often even more vague than that, right? It’s almost like: you can use it responsibly if you disclose your full use, but if you don’t disclose your full use, you could be punished and fired, and there’s no clear policy about what any of that means. That’s level one. Also, by the way, people don’t even have access; that’s actually even prior to that, call it level zero. Then the second reason people don’t disclose: if you are using it, people think you’re amazing. Reddit is full of people saying,

(21:54)
“People think I’m a wizard at work,” because they’re using AI, it’s doing a lot of their work, and people love the results. Why are you going to tell people that you’re using it, then? Maybe people devalue your work afterwards. The third level is that if I show you I’m using AI and I’m already doing less work than before, why would I want anyone to know that I’m doing less work, so that I just have to do more work? Even if I’m compensating for that, maybe they realize they don’t need as many people at my level and they fire me or they fire my friends. Or maybe I’m just working on a startup on the side. So for all those reasons, people are hiding their AI use, especially because you can just use it on your phone super easily without ever talking to another human about it.

(22:37)
And so people are using it secretly everywhere. So you need the incentives to change. I think the first thing is getting people access to a frontier model, and then it’s about setting up education, reasonable policies, and an incentive structure. So you have to do it organizationally. But the first thing I tell CEOs to do is just play with the system enough. They need to put their 10 hours in. They cannot depend on direct reports, and they can’t expect that hiring a consultant will solve all their problems. It used to be that, okay, McKinsey or Ernst & Young knew everything. They don’t know anything anymore. Nobody has a playbook. They might be able to help you with transitions and other sets of stuff, but they don’t know how to use AI for your systems. So you have to build the systems to make that all happen.

(23:17)
Darius Teter: Well, on that note, I want to talk a little bit about the terms you use, “centaurs and cyborgs,” to define two approaches to problem-solving with a co-intelligence. Help me understand the difference between those two models of interaction with an AI.

(23:32)
Ethan Mollick: Sure. We differentiate between centaurs and cyborgs. Centaurs, like the mythical half person, half horse, divide up their work: “I don’t want to do the writing, I want to do the coding. You do the marketing writing; I do the coding.” The more effective way is the cyborg, where you’re integrating your work with the AI. It’s like: “I want to finish this sentence, give me a way to finish it. Read over this email and check it for me from the persona of my three favorite customers.” You’re throwing stuff out to the AI, you’re working with it, you’re doing it interactively, and that’s usually a more powerful model. Again, experience is what gets you there.

(24:05)
Darius Teter: And in the cyborg model, because your own professional expertise matters, you’re in some sense also the fact-checker of the model’s output, if “fact checker” is even the right word.

(24:18)
Ethan Mollick: Yes, right. Part of the issue is that, as best you can, you have to be thinking about where it’s going; you’ll get a sense of whether it’s heading in the right direction or the wrong direction. People don’t always fact-check as they should. I wouldn’t use it for an area where you need six sigmas of accuracy, right? You’re not going to use it for that. But you will find the use cases by working with it.

(24:38)
Darius Teter: So link that back to what you describe as the jagged frontier of model capability.

(24:45)
Ethan Mollick: So the issue is we don’t know in advance what AI is going to be good or bad at. We call this a jagged frontier. We talk about foreign languages, but even in English: if you ask it to write a regular 25-word sentence, it will often fail, because it doesn’t see words the way we do. It sees tokens. But if you ask it to write a sonnet, a very difficult form of poetry, it does a great sonnet. How do you deal with a system that writes a great sonnet but not a good 25-word sentence? That’s the jagged frontier. So you have to learn this for yourself. But by the way, it becomes a source of advantage. You learn the AI is really good at understanding local business conditions in northern India but bad in southern India. You understand that it’s really good at giving this kind of advice but not that kind. It will all look the same. And if you don’t know that, you will go through what’s called falling asleep at the wheel: you’ll stop paying attention to the details because the AI will seem good enough. So going in with a skeptical eye and experimenting is what teaches you the shape of the frontier.
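
The sonnet-versus-sentence asymmetry comes from tokenization, and it’s easy to see for yourself. A minimal illustration using OpenAI’s tiktoken library; the sample sentence is arbitrary.

```python
# Why a "25-word sentence" is hard: the model sees tokens, not words.
# Uses tiktoken, OpenAI's open-source tokenizer library.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4

sentence = "Entrepreneurship in Mozambique is accelerating remarkably."
tokens = enc.encode(sentence)

print(f"{len(sentence.split())} words -> {len(tokens)} tokens")
# Token boundaries rarely line up with word boundaries:
print([enc.decode([t]) for t in tokens])
```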

(25:40)
Darius Teter: The “jagged frontier” came from a study that Ethan did with the Boston Consulting Group titled “Navigating the Jagged Technological Frontier.” Where previous industrial revolutions upended menial labor or enhanced everyone’s productivity — think, for example, of steam power or electricity or the advent of the personal computer — AI will upend the most sophisticated professions, and in a fraction of the time.

(26:07)
Ethan Mollick: So we did a study at BCG where we developed 18 realistic business tasks. They were like: come up with ideas for a product in the shoe industry; segment the market for which customers might like which ideas; design a focus group that would figure out that information for you; write an email inviting people to the focus group; write an email about your findings to the CEO; come up with a series of marketing slogans. So creative ideation, analysis of the market in terms of user demand, marketing and rollout strategies. Very, very Business 101 kinds of things, but things that the consultants agreed are very realistic for people to do; they actually use tasks like these as part of their real assessments. And so we gave some people access to GPT-4, the plain-vanilla GPT-4 available to every kid in Mozambique, the one from last year, no special fine-tuning, nothing else. The others got access to just their normal brains. The GPT-4 group had a 40 percent improvement in quality without any training or anything else — 40 percent. We ran 108 different tests and regressions on quality, at the individual-question level and across categories of questions. We had human assessors, we had GPT-4 assessors, every quality measure: which one is better, how would you rate the quality of this answer? There was not a single measure of the 108 where the AI-enhanced human didn’t win. So …
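
The study combined human graders with GPT-4 graders. A minimal sketch of what LLM-assisted grading can look like, assuming the OpenAI Python SDK; the rubric and model name are illustrative, not the study’s actual protocol.

```python
# An LLM-as-grader sketch. Assumes the OpenAI Python SDK; the rubric
# and model name are illustrative, not the BCG study's protocol.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Rate the following answer to a business task on a 1-10 scale for "
    "quality (creativity, feasibility, persuasiveness). "
    "Reply with the number only."
)

def grade(task: str, answer: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Task: {task}\n\nAnswer: {answer}"},
        ],
    )
    return resp.choices[0].message.content

print(grade("Propose a marketing slogan for a new shoe.", "Walk on air."))
```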

(27:22)
Darius Teter: … 40 percent improvement, which is huge. Unbelievable.

(27:25)
Ethan Mollick: Yeah, I mean, when steam power was put into a factory in the early 1800s, it improved performance by 18 to 22 percent. These are big numbers that we don’t know what to do with, right? And I should also say: 25 percent faster, 12.5 percent more work done, and without optimization. Those are giant numbers. That is a transformative set of numbers, and it’s early days.

(27:50)
Darius Teter: Given this potential, how should entrepreneurs think about AI in their own growth strategies?

(27:56)
Ethan Mollick: So my advice for startups has shifted from “build something large companies want” to “destroy large companies.” I spoke to a venture capitalist at MIT yesterday, and he was saying a phenomenon he’s been seeing is people saying, “We’re never going to grow past 20 people. That’s our goal. We’ll spend your money on marketing, we’ll spend your money on other stuff, but we are not going to have more than 20 staff members.” I mean, Sam Altman — people kind of assume a lot of what he’s saying is exaggeration. But I think there’s something to his idea that the next billion-dollar company will be a 10-person company. It’s entirely possible. So I would be building for that kind of future. And by the way, this works anywhere in the world. What can you do with 10,000 interns at scale? That becomes an interesting question to ask.

(28:36)
Darius Teter: Without getting super dark here, I wanted to talk a little bit about the chapter in your book called “Aligning the Alien.” It brought to mind James Barrat’s book, Our Final Invention, which asks: if you have a recursively self-improving AI system that attains AGI, and then maybe even attains superintelligence, what would it tell us? Would it let us know that it had gotten there, and could we even imagine the strategies it might employ to keep us from unplugging it? The reason I bring this up under your chapter on alignment is because, at the end of the day, it’s up to us to figure out how we want these systems to evolve and whether we want them to have any guardrails. But who is “us”?

(29:19)
Ethan Mollick: I don’t know whether AGI is possible, let alone whether we go on to ASI, but people are already going to use this improperly, right? We are already seeing horrible nonconsensual images. I can create a video of anybody saying anything I want after a couple of seconds. No matter what’s happening inside governments, they are not building better LLMs than Meta is releasing for free into the world, because there are only a few companies with the right kind of compute to do this, and we know who they are, right? There is this feeling that technology is something that happens to you. And I think the thing here is that organizations get to decide how to use these systems, and the cat is out of the bag in terms of GPT-4-class models being available everywhere, free to download. Llama 3 will get there; that’s already done. So government regulation could help, but I almost wonder whether the Internet is going to remain the same kind of place, or whether we’ll just stop answering. It’s going to be all Discords in the future, with people you know are human. We’ll adjust to that kind of world if we need to.

(30:18)
Darius Teter: So our own agency is really key here.

(30:20)
Ethan Mollick: I think that’s the absolute key: we get to make decisions, and if we just view this as a technology that happens to us, we’re in trouble.

(30:27)
Darius Teter: Should we be worried that Sam Altman is building a doomsday bunker?

(30:31)
Ethan Mollick: I wouldn’t be as worried about the doomsday bunker, because Silicon Valley people are Silicon Valley people. What I would take very seriously is that he, and OpenAI itself, have dedicated themselves to building artificial general intelligence in the next few years, and they think they can do it. So I would be less worried about doomsday and AI murdering us all than about what happens if development keeps growing at the pace it is. And I think that’s a legitimate question to be thinking about and asking.

(30:58)
Darius Teter: Ethan, this was super, super interesting. Thank you so much for giving us some of your time.

(31:04)
Ethan Mollick: Thanks for having me. They were great questions.

(31:09)
Darius Teter: In my discussions with Ethan, I’m struck by the realization that AI isn’t just about the leaps we’re seeing in technology. It’s about how we as individuals, businesses, and society choose to integrate these tools into our everyday lives. It’s about envisioning a future where AI is as ubiquitous and as essential as electricity, where it serves not just as a tool but as a teammate, enhancing our capabilities and transforming our own potential. As this technology continues to evolve, the importance of staying informed and proactive cannot be overstated, particularly for entrepreneurs and business leaders. But I think it’s also important for us as humans, because the future of AI is not predetermined. It is shaped by the choices we make today. What worries me personally is that some of the most important choices about how AI will impact our lives will be made by a small number of people with enormous power and resources. AI needs compute and data, and that means money. Amazon, Google, Apple, Microsoft, Meta, and a few others have most of that. So to what extent does humanity feature in their goals? I would add to Ethan’s advice that we also need to be active citizens, for surely there is a role for regulation and oversight of something that could so easily go from being a potential public good to an extreme public bad.

(32:31)
I’d like to thank Ethan Mollick for sharing his perspectives and advice, and I encourage all of you to subscribe to his Substack blog, to read his LinkedIn posts, and to buy his new book, Co-Intelligence: Living and Working with AI. On this particular topic, I’ve learned more from Ethan than from any other expert, and I especially love that he’s always sharing his experiments. Erika Amoako-Agyei and VeAnne Virgin researched and developed content for this episode. Kendra Gladych is our production coordinator, and our executive producer is Tiffany Steeves, with writing and production from Nathan Tower and sound design and mixing by Ben Crannell at Lower Street Media. I’m Darius Teter. This has been Grit & Growth. Thank you for joining us.
