Reid Hoffman on Building AI and Other Tech More Responsibly
A conversation with the venture capitalist, serial entrepreneur, and artificial intelligence enthusiast.
As a founding board member of PayPal, cofounder of LinkedIn, and a partner at the Silicon Valley VC firm Greylock, Reid Hoffman has long been at the forefront of the U.S. tech industry, from the early days of social media to the launch of new artificial intelligence tools like ChatGPT. He acknowledges that technologists are often better at seeing the benefits of their products and services than they are at predicting the problems they might create. But he says that he and his peers are working harder than ever to understand and monitor the downstream effects of technological advancements and to minimize risks by adapting as they go. He speaks about the future of AI, what he looks for in entrepreneurs, and his hopes for the future. Hoffman is the host of the podcast Masters of Scale as well as the new show Possible.
ALISON BEARD: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Alison Beard.
When any new technology comes along, people, especially those in the tech industry itself, tend to get really excited about all the good it's going to bring. Social media connects people around the world, crypto democratizes finance, generative AI supercharges productivity, and so on. The evangelist crowd is loud and proud.
But as we've seen over the past decade, the potential downsides of the latest tech innovations don't always get as much attention. Yes, you'll see some skeptics warning about unintended consequences and negative externalities. But it doesn't seem like industry insiders, the people building and deploying these new tools and the leaders overseeing that work, are thinking all that hard about what challenges they might inadvertently create.
Our guest today is an unabashed techno optimist. He really does believe in the power of technology to improve our lives. But he also knows how important it is for tech companies to think more carefully and responsibly about the problems they’re trying to solve, and the products and services they’re putting out into the world.
Reid Hoffman is a founding board member of PayPal, a cofounder of LinkedIn, a partner at the venture capital firm Greylock, and a director of several companies, including Microsoft, though he recently stepped down as a board member of OpenAI. He's also a podcaster, hosting Masters of Scale and the new show Possible. Reid, welcome.
REID HOFFMAN: Great to be here.
ALISON BEARD: Okay. First off, how do you define responsible or ethical technology?
REID HOFFMAN: So one of the illusions that is sometimes promulgated is that technology is essentially value neutral. And that doesn't mean that it embodies values in kind of a simple way, like I believe in democracy, or I believe in some other form of human organization, or kind of the various values debates we're having within the US and other countries.
I think that the question is you say, well, how does this affect the human condition? What does it mean for different individuals? Are there bias issues? Are there things where it creates some kind of bad social impact? And you have to ask these questions. And obviously one of the challenges when you’re dealing with things of scale is it’s never all good, like 100% everything.
What you have to do is make it on balance really good, and then try to make sure that you're not disadvantaging groups that don't have power or a voice. So for example you say, well, cars. Cars are generally speaking very good. They enable transportation, enable mobility, enable people to live in different areas. On the other hand, of course, in the U.S. we have 40,000 deaths per year in driving, and then of course climate and all the rest. So you have some kind of challenges, and you try to shape it so that on balance it's very good, and you're dynamically improving as you learn and refine.
ALISON BEARD: As someone who has been a leader in the tech industry for a really long time, what is your honest assessment of the job that you all are doing in considering not just the upsides but also the downsides? And then trying to mitigate those risks, whether that’s social media a decade ago or generative AI today?
REID HOFFMAN: Well, it’s a little bit hard to talk about the entire tech industry, because there’s some people I think who are doing pretty good jobs, and I think there’s some people who are doing pretty awful jobs.
So the story of social media as you said in the intro, is when it opens with blogs and social networks and all the rest, it’s like, oh, we’re giving voice to the people who didn’t otherwise have voices. And people who might be a minority of some sort somewhere in the world, whether it might be kind of sexual orientation or might be religious or might be a racial minority. They can discover their voice and they can connect with other people, and isn’t that awesome? And of course it is and continues to be. But then you say, well, now it becomes where everyone’s there. And then all of the issues that become part of why we have government, why we have regulation, and how we make society work together, those then come in place in full.
And for example, one of the classic things that I've been debating as long as I've been on basically television (I think I did a 1996 Firing Line episode on this) is freedom of speech. People say, whoa, we don't regulate freedom of speech. And it's like, well, of course we do. We have truth in advertising. We have issues around hate speech or violence; there are all kinds of ways we regulate speech. Many people say, "Well, my freedom of speech allows me to do false advertising and to sell drugs that are harmful for lots of money." You're like, "Well, that we don't allow as a society."
And I think that’s what the tech industry is still coming up to speed on in terms of what is our definition of truth in collective discussion, and how do we navigate that? Now, when it gets to AI, which is obviously the thing I’ve been spending a ton of time on in the last number of years, I think the tech industry has learned from the social media side to pay more attention here.
So the questions around, well, is it biased or might there be unintended consequences in jobs or misinformation? The way that ethics starts is by asking the questions and checking as you’re building. You’re not going to get it perfect. You’re not going to launch something at scale and get it perfect. But if you’re asking the questions and you’re measuring and you’re improving, then you’ll eventually get to a very good place.
ALISON BEARD: So it seems like you're saying that technologists today and leaders of tech companies have maybe learned from the era of social media. And that the famous move fast and break things era is over?
REID HOFFMAN: What I’d say is, again, I’m the author of Blitzscaling. I’m a definite move fast person. The question is what things do you break? You break your servers. Fine, no problem. You break society. No, that’s a problem. And what I’d say is some tech leaders, Satya Nadella, Sam Altman, are well on the learning curve. I think it’d be a fool’s statement to say, “Oh, we’ve learned. We’re good.” It’s like, no, part of what you’re doing is we’re exploring this new stuff and we’re building these new things and you can’t predict all of it. You’re learning as you go and you’re fixing.
ALISON BEARD: What factors are they considering when making business decisions? What does the general public maybe not see or hear about what’s going on behind the scenes both in the VC community and within the companies themselves?
REID HOFFMAN: So I’d say that every technology company that I’m a part of, that my partners at Greylock are part of, are all at least asking the questions and doing it as part of how they develop the technology. And the questions can range from are we being responsible stewards of data and people’s trust? Might there be groups that are being disadvantaged by this technology that is a structural bad thing, for example, racial disadvantage. Whereas you say, well, we’re disadvantaging criminals. Okay, that’s fine, fraudsters. As best we can are we red teaming and thinking about blind spots or things that could go wrong? Are we thinking about what happens when this gets to scale? Do we have a good theory about why this will be net really positive and how we can remediate or diminish harms? I think all of those questions in every tech company that I’m part of are central. And we go out and learn. We hire people and ask for what are the other things we should be thinking on doing here?
ALISON BEARD: Yeah. And bigger picture: is the industry (and I'm sorry for keeping saying the industry as a whole, but most of the people you know) now focusing on more important problems than perhaps it once did? There's the famous Peter Thiel quote, "We wanted flying cars. They gave us 140 characters." And now it's more like: we want climate change solutions, but we're getting a chatbot that can write stanzas like Shakespeare.
REID HOFFMAN: Well, yes and complicated. So for example, flying cars, I’m on the board of Joby. We are working on flying cars. And that’s to redefine space in a climate change way that will help with gridlock and pollution and a bunch of other things and be accessible. On the other hand, the natural pattern of these things is to try to figure out what’s the easiest work to do that’s most valuable. And so that’s why people tend to do a lot of software.
And I tend to think that actually chatbots can be really valuable. They can be valuable for anything that ranges from give me some good information, to help me solve this problem, to any number of things that could play into human life. But on the other hand of course, solving hard problems like climate change, ocean de-acidification, other kinds of things are super important and people are working on those. They’re just harder because it’s a lot more expensive with the economic rewards being much more challenging. One of the things that I try to give thinking to and advice to, is how do we create an incentive system that also goes after the hard problems more?
ALISON BEARD: Yeah, absolutely. And you used the word valuable. So let me press on that a little bit. By valuable do you mean valuable to society, valuable to investors? Where is that purpose, profits, trade off or balance falling for your community now?
REID HOFFMAN: Well, in an ideal system you align them, so that the high functioning of business means the product that you're offering to the customers is really good for the wellbeing of the customers and society and the stakeholders involved. There are of course places where that gets misaligned, and it's not only within the tech industry. This is one of the challenges we tend to have with making industries work. And look, in all of society, there's a whole bunch of people who are doing things only for money or only for profits. That's part of how we design the alignment of society that goes all the way back to Adam Smith.
But the question is also that people will say, I want to hold my head up with my friends and my community and say I’m doing a really good thing. All the people I hang out with are focused on how is it that we’re also making the world and society better with what we’re doing? And so for example, that’s one of the questions we ask at Greylock when we’re investing, is to make sure that we are positive on those vectors and that we have to do so within the context of a strong business. But if you’re asking the question and intentionally trying to do that, then that’s at least half the game.
ALISON BEARD: And so as a VC at Greylock, what are you looking for, seeking out both in business ideas, business models and founders right now?
REID HOFFMAN: Part of the thing that's a delight about venture investing is that while you may have a very active theory of the game (so I've been doing generative AI for the last few years, co-founded a company called Inflection with Mustafa Suleyman, and we have Adept and Cresta and Snorkel and all these other companies at Greylock, so we have a very active thesis on artificial intelligence and have had for five plus years), we're also being surprised by the amazing things brought to us. So just to kind of illustrate what I think the quality of being surprised is: when Brian Chesky and Nate and Joe brought Airbnb to me, I hadn't really been thinking about a marketplace for space. A question of how you can not just travel to a place to see a monument, but experience local culture, and enable people to transform their own economic outcomes by being able to afford their house or their space.
And yet, that's just software, and that brings all that together. So for me, in addition to AI, I also tend to look at networks that redefine our social space. It's part of the reason I created LinkedIn with my co-founders, and it's behind various other investments at Greylock, including for example Roblox, which is, okay, you've got developers building entertainment and educational things that generally speaking mostly appeal to kids, but a whole range of experiences. We're looking to be surprised. And the question we ask is: are the customers net really benefited by this? And is the community and society that they're in broadly also benefited? And does it have a very strong business that will transform industries? And if we see all that, and we see an entrepreneur that we think is high integrity, one we would be delighted to be in business with our entire lives, then we get really excited and join forces.
ALISON BEARD: Yeah, that high integrity piece: finding people, founding teams, who are absolutely trying to scale and run with their ideas and make a change, but then also will take that moment to step back and ask the questions about ethical construction, deployment, et cetera. How do you evaluate for that?
REID HOFFMAN: Well, it's not a simple formula. But one of the things we do pretty rigorously is reference checking. You haven't completed your reference checks until you've found a negative reference check on everybody in the world. So for example, if someone was reference checking me in depth, what they would find is, oh my gosh, he's a really great creative problem solver, but he's not particularly good at making the trains run on time. And obviously when you're asking the integrity question, you're asking how much do you actually in fact walk the walk, not just talk the talk. How much, when you're in positions of stress, do you make decisions that say, no, no, that would be the easy decision, but it takes risks with other people's wellbeing; let's take the hard decision. Do you honor your commitments? And therefore when you're saying, "Hey, we're going to have a commitment to make sure that we are tracking how we impact society, and we're going to have dashboards on it, and we're going to be improving them year by year," will you be doing that?
ALISON BEARD: Talk a little bit about the role that the tech world, the VC world, an industry that is still very much dominated by rich white men, has to play in increasing inclusivity and also decreasing socioeconomic inequality.
REID HOFFMAN: One of the things that I’ve been saying for maybe a decade plus now, anytime you look at a problem you go, that’s important to solve, you go, if you’re not part of the solution then you’re part of the problem. So you need to be saying, how am I as an individual and also of course as a firm and everything else, investing in trying to solve this problem? How am I putting in sweat and blood into trying to make this happen?
And so, relative to diversity and inclusion, it's making sure that you have a regular workflow and process by which you're trying to recruit, you're trying to meet entrepreneurs. We do things at Greylock like have a set of office hours that's only for underrepresented minority entrepreneurs. In any recruiting process, we make sure that we are interviewing disproportionately large numbers of underrepresented minorities, including, unfortunately in venture, women, which is like, well, aren't they half the population? You're like, yes. And doing everything you can.
And so for example, we've helped stand up new venture firms, because they come to us and say, "Hey, we think that having a venture firm that's entirely focused on funding women entrepreneurs might be a good way of doing it." Great, we'll help you. And so you have to do all that kind of stuff. And would I want the progress to be 10x faster than it is going? Absolutely. And if someone figures out a way to make that happen, we'll help, we'll support.
On the economic gaps, it's always a little tricky because it's dynamic over time. For example, one of the things that I do, which is the same in my philanthropy as in my investing, is you find an amazing entrepreneur. In this case it's Byron Auguste, who says, "Look, there's all of this massive growth in tech jobs and the tech industry. And we want to make sure that it works for communities of color, works for women, works for other minority groups. Let's go make sure that a whole bunch of these people have pathways into tech jobs and make that happen." And so they at least can begin to bring their families in, understand kind of what the tech opportunities are, have their communities begin to be able to benefit from participating in these industries.
But by the way, when you're growing a new company, the new company makes the executives and the founders the most money, and then makes the next group of people the next most money, and so on; that's the way it works. So it doesn't necessarily immediately redistribute economics, but you're trying to get everyone participating. And then you're trying to make sure that the next generation of founders has the diversity that we have in society.
ALISON BEARD: You mentioned other industry leaders that you respect and admire who you think are modeling good leadership not only on the, I’m running a great business but also on the, I’m working to improve society front. But the poster boys for the tech industry, Elon Musk, Jeff Bezos, Mark Zuckerberg, they definitely aren’t perceived that way no matter how much money they might give to charity or how many rockets they might launch into space.
Do you get the sense that the good guys, as Kara Swisher might call you, are getting as many accolades as the people who still sort of cling to that move fast and break things ethos?
REID HOFFMAN: Well, I myself have argued with Mark Zuckerberg about kind of freedom of speech issues and other things. But for example, one of the things I do with him is the CZI Biohub, where he is trying to cure infectious disease for people all around the world and putting a lot of money into that. And because he is such the poster boy for other, intelligent criticisms, he doesn't get as much credit for all this other amazing stuff he does. And so I just kind of feel it's important to make that gesture.
ALISON BEARD: Yeah, and there’s no question that a lot of the people who make a lot of money then do a lot of good. I guess it’s just trying to marry the two is what we’re talking about.
REID HOFFMAN: Yeah. Well, that's important to do too. But for example, there is a differentiation between people who go, all of my economics is for my own self-glorification, and people who go, look, I'm making a bunch of economics and I'm also doing a bunch of things, caring for a bunch of communities, that have nothing to do with my self-glorification. And I say that in part because it's too easy to get on the criticism bandwagon, and I just think it's important to note. Now, I'd say that the folks who are perhaps not beating these drums as extremely tend to have fewer... I think the word you used was accolades. I think it's because the principal way that you get acolytes is by defining something pretty extreme and beating that drum.
And then people who think that you're the messiah for beating the drum in that direction come follow you. If you're kind of measured and saying, like the things I've been saying here, look, a net benefit is the goal. I think you do have to move fast. I think you have to build things quickly. I think you will break things, including things that you don't want to break, in doing it. I think you have to do it with care and attention. But I think if you don't do it with speed, then the people who do it with speed, who don't care about what the impact is, set the rules. So I tend to think you get less, call it media coverage, for the people who are trying to be thoughtful than for the people who are being extreme.
ALISON BEARD: Do you think that Silicon Valley still sort of leads the world in terms of what the tech industry is thinking about? Or do you see sort of different ecosystems developing their own ethos around purpose and profits?
REID HOFFMAN: Well, I'd say the two areas in the world that are the most tech leading are Silicon Valley and a set of cities in China, mostly along the coast. I try as much as I possibly can to help create other tech innovation centers in other areas of the world. I was just in Italy, France, and the UK, high principled democracies that kind of have a really good concept of what human rights should be and so forth. I try to help as much as possible in facilitating the creation of entrepreneurial bases and tech industries. But I do think Silicon Valley, along with China (we learn a whole bunch of stuff from China), continues to be the kind of driving drumbeat. And it's one of the reasons why I think it's a very good thing that the discourse is changing. Like, I'm at dinner parties in Silicon Valley where part of the discussion is, well, now that tech is continuing to have larger and larger impact, what is the way that we make sure that we're doing the right thing?
ALISON BEARD: Let’s talk about China. Are those questions being asked over there also?
REID HOFFMAN: Well, not being a native Chinese speaker and not having been there for a few years, I would say: with any group of people, if you've got a million people you've got a distribution of smart people, you've got a distribution of ethical people, you've got a whole bunch of different things. I would say that their environment is more tuned at the moment, as it were, to the rise of China and the success of the business, and somewhat less to, for example, what this means for disadvantaged minorities within society. In China, I don't think you have any discussion in the tech companies of what it means for the Uyghurs, or what it means for other kinds of things. I think people are people; I'm not saying anything about the quality of the people. I just think it's the environment that they're operating in.
ALISON BEARD: Yeah. Are there opportunities for more collaboration, interaction, knowledge sharing?
REID HOFFMAN: So for example, one of the things I've been highly focused on, along with the OpenAI and Microsoft folks, is AI safety: making sure that when you build these new very large, very capable systems, the net impact is very good and there are no really bad impacts. And you say, okay, well, how do we make sure that the work that we're doing, even though we've put in a whole bunch of work and energy and cost and hiring (I think there are hundreds of people at Microsoft who work on AI safety), how do we essentially just distribute it for free? How do we offer it to everybody, including our competitors and so forth in China, in order to try to get to good places? Because that's part of being intentional and good people.
ALISON BEARD: So I do want to turn to your new show. It's very interesting, your sort of addition of the show Possible to your Masters of Scale franchise, because one is sort of the founders, the entrepreneurs, the leaders of companies who made it big, basically. And then Possible seems to feature people behind the scenes working on those really difficult problems you mentioned earlier: democratizing higher education through technology, nuclear fusion to help solve some of our climate issues. So talk about why you wanted to launch the show and focus on those people as opposed to the famous corporate leaders.
REID HOFFMAN: So one of the things that I see a lot in the U.S., and see in some places in the rest of the world, is what is referred to as techlash, which is more negativity and uncertainty about what technology is bringing versus the positive sides. And I believe, as a hypothesis, but very strongly and one I can argue for, that whatever scale of problem you're trying to solve, whether it's climate change, whether it's economic justice, whether it's criminal justice, other things, 30 to 80% of the solution is technology. What I mean by that is technology changes the scope of what's possible. It changes cost curves. It changes what you might be able to pull off with the resources that we have.
We can help solve these problems with technology. And it isn't that technology is the only solution; it's part of the solution. It's also how we organize ourselves as a society, what we value, what we invest in versus other kinds of things. But technology is an essential part of making that scale solution work. And so we want to go to essentially the leaders, the innovators, the imaginers of what the world could be in this really good new way, and talk to them, and share that sense of: here is where we should row towards. And I think we can, for example, solve these really big problems, climate change, other things, this way. And oh my gosh, we could build a world that's so much better than the one we have today. Let's get to it.
ALISON BEARD: Does the new generation of founders seem excited about that, even if it means their big payday might be two decades in the future as opposed to becoming a unicorn within five years?
REID HOFFMAN: Well, again, I think some are. And more are than before. It won't be all. Some people will still be creating... I try not to throw entrepreneurs under the bus, but there are various things where I go, well, that's not a particularly great thing to create.
ALISON BEARD: Delivering liquor to your front door or something along those lines.
REID HOFFMAN: Whatever the thing might be. I guess the one I most often pick on is Juul. Creating electric cigarettes or vape things, I think, is net not positive. But go and have the imagination that through entrepreneurship, through technology, through invention, you could solve these things. And there's a huge number of very talented people in the world, and we just want more of them working on these problems, and thinking about the fact that they could make a difference by creating a technology, a business, a project that could focus on this and make it work. And that's the dialogue we're hoping to increase: applying our imagination to how we create the future.
ALISON BEARD: Yeah. We haven’t yet talked about the role of government in innovation and in regulation. So some of the greatest technologies, GPS for one, stemmed from government investment initially. So do we need more of that is part A of this question. And part B is, where do you stand on regulation for emerging technologies like generative AI? Should there have been more regulation on social media, et cetera?
REID HOFFMAN: One of the things, when people ask, for example, what do I believe that most Silicon Valley or a lot of Silicon Valley people don't believe? It's actually in fact that government is absolutely essential. It helps create a lot of things; it helped create not just rule of law in a society and a healthy functioning economy, but also baseline investment in universities and technologies. And so I'm a big believer in those. I also think that regulation can be an important part of that. One of the challenges of regulation is that the baseline conception of how most people tend to think of regulation is ask for permission, not forgiveness; it tells you to continue to do things the way you've done them in the past. And you're kind of locked in, with very slow change from that. And it tends to be done by people who don't necessarily understand what the innovation clock looks like.
And so the principle that I usually articulate here is: when is bad regulation better than no regulation? And by the way, the answer is not "never"; that's not a rhetorical question. Because for example, when you get to the financial system, with the absolute necessity that the financial system continue to run, you say, well, actually in fact, bad regulation is better than no regulation to make sure that the banking system doesn't break and other kinds of things. That's because it's just too critical otherwise.
Now, when you get to a lot of technology, you're enshrining the past. The problem is, if the actual solution is technology in the future, then a regulation that particularly slows you down or anchors you to the past will be potentially more damaging to humanity than not regulating, in various ways. So do you then say no regulation at all? Of course not.
But what you do is start by defining what are the outcomes that you’re looking for, and can you set those outcomes to essentially the innovators, the companies, the other things to say more of these outcomes and less of these outcomes. And we’d like to see a dashboard. We’d like to see it tracked by your auditors.
And so for example, say we'd like to see less violence on video. Do you say, well, I'm going to have a regulation that says you must have a five-minute delay between uploading a video and the broadcast of it? Okay, well, that may not actually solve your violence-on-video problem, because terrorists or whoever else might trick or hack the system, and your regulation really just created a whole bunch of process that didn't do anything. Whereas what if you said to companies: okay, I recognize you can't get to zero, because again, these are large scale systems. But let's say for the first 100 views it's a $1,000 fine, for the next 1,000 views it's a $10,000 fine, and for every view after that, it's a $100,000 fine. You figure out how not to show murders on video. And that's what I mean by defining outcomes and then having the innovation meet them. And that's the kind of pattern that I think we need to apply when it gets to technology.
ALISON BEARD: So what advice do you give to people who are early in their tech careers right now? What are some of the pitfalls to watch out for and how can they become great more responsible builders of technology?
REID HOFFMAN: I think everybody needs to think about their own life path with the tool set of an entrepreneur. It doesn't mean they have to be an entrepreneur. I think another thing is to realize, going all the way back to the beginning of our discussion, that the creation of technology can itself be a great good if you're asking the right questions.
I think even on questions where you say, well, obviously, take an area that's fraught with a whole bunch of things: genetic modification and genetic engineering. Oh, that could be really bad, obviously, but of course it could be really good, getting rid of genetic diseases that just cause suffering. So if you're asking the right questions, and you're doing it the right way, and you're thinking about how to shape it the right way, you can have a scale impact in the world that leaves humanity much better because of your effort. So ask the right questions and help create the future.
ALISON BEARD: Yeah. Well, I’m glad to hear that many more people are doing that now. Reid, thank you so much for being on the show.
REID HOFFMAN: My pleasure. Thank you.
ALISON BEARD: That’s Reid Hoffman, entrepreneur, investor, and podcaster with the new show Possible.
And we have more episodes and more podcasts to help you manage your team, your organization, and your career, including an upcoming IdeaCast bonus series about how artificial intelligence will change work. Find them at hbr.org/podcasts or search HBR on Apple Podcasts, Spotify, or wherever you listen.
This episode was produced by Mary Dooe. We get technical help from Rob Eckhardt. Our audio product manager is Ian Fox. And Hannah Bates is our audio production assistant. Thanks for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Alison Beard.