Astro Teller, director of the moonshot factory at Alphabet known simply as X, explains how he is a “culture engineer” and how he systematizes innovation by creating a work environment where employees are encouraged to be audacious. He says they are given the freedom to work on projects that inspire them and that they want to own – whether they fail or succeed.
Transcript of Teller talking about systematizing innovation
– I’m a culture engineer. The thing that excites me the most is not making stratospheric balloons, or self-driving cars, or working on contact lenses, or on UAVs that can deliver packages. The thing that excites me the most is trying to systematize innovation. When I was young, I used to think that systematizing innovation might be some combination of things you could get if you just went through all the business books and picked out the smartest things from each of them.
Hire the smartest people, fail fast, and this, and that, and the other thing. It’s kind of true; it just turns out that if you do that, you don’t actually get much innovation. What excites me is: what would it take to actually get a group of people to do the things it says in those business books that you guys have all read? If this is the set of things they should do, and this is the set of things they actually do, then for any of you who’ve been in business before, you know how big a gulf there is. Wherever you worked, I promise you, that gulf existed. There’s a reason there’s such a big gap between the things that you want people to do and the things they actually spend their time doing.
It’s because this is the lip service that you’re giving, but these are the paths of least resistance, emotionally. People don’t care what you said they should do. They’re going to follow the paths of emotional least resistance. Culture engineering is the process of trying to get the paths of least resistance to actually line up with the things you want people to do. So I’m gonna give you a few examples. The first one is, let me unpack the audacious goals a little bit more. Here’s how most companies do something sort of like audacious goals. Have you guys heard of OKRs? Objectives and key results? This is the way objectives and key results actually work in a business. You report to me, and you’re gonna be held accountable by me, because I’m the manager. So we’re gonna start this weird haggling situation.
We’re gonna figure out what your OKRs are. You’re gonna try to haggle them as low as you can. Sandbag, sandbag, sandbag. Because you know I’m gonna hold you accountable for whatever it is that we decide on. So then I can feel you pulling them down, so I’m gonna pull them up, up, up, up, up. I wanna haggle, haggle, haggle: you can do more, you can do more, you’re sandbagging. And we end up in this place in the middle, where now you feel like you don’t really own that OKR. It’s higher than what you were saying, and you were making all these arguments about why that’s an unreasonable number, or metric, for me to hold you to. It’s either the wrong metric or it’s too big, relative to what you think you can actually accomplish. And I feel bad too, because I feel like it’s two-thirds or half of what I was actually trying to talk you into.
So now we have this thing that neither of us believes in, and this is the OKR. This is a stick, it’s a weapon. And my management plan for the entire quarter or year, is I’m going to beat you with the OKR stick. You’re not doing it enough, you’re not doing it enough. This is 21st century management somehow. It doesn’t work very well. This is not how to get people to be innovative. You cannot get them to do the things you really want them to do, especially if your lip service includes things like creativity, and failing fast, and being transparent, and a lot of emotionally hard things, while you’re beating them with the OKR stick. Crazy idea, what if instead, you just got to pick what you were gonna do? Let’s call it your audacious goal for the quarter.
It’s your goal, I’m not gonna haggle with you about it.
You pick it. You get up in front of all of us, once a quarter, and say I’m gonna try to get this thing done, and I know that I’m almost certain not to get it done, but I’m proud of the fact that I’m going to try to do something that sounds so crazy hard, so unlikely. The goal is to have it be something that you can accomplish about one-tenth of the time. If you’re positive you’re not gonna accomplish it, that’s not very interesting, you’re not really gonna try. On the other hand, if you’re confident you’re gonna do it, it’s not audacious, by definition. So you want it to be in that sort of 10% range. And, you’re going to end up getting held accountable by yourself, because you picked it, and by the whole community because you want everyone to be proud of you.
Now, I can be your coach and mentor, instead of having to beat you with the OKR stick. So, at X, we have audacious goals. And every quarter, every team gets up and says here is what our audacious goal for the quarter was, here’s how we did against it, and here’s what we’re gonna try to do for the next quarter.
And some teams don’t do it some quarters, and that’s actually OK, too. They don’t look as audacious when they don’t do it, but that’s fair game, because you need to be crisp about what you’re gonna do if you’re gonna try to do it.
– We’re doing premortems. A premortem is nothing other than trying to talk about the learning moment of a failure before we actually have the failure. We’re so eager to learn from our failures, we don’t want to wait till the failure happens to learn from it. It introduces a little bit of one of those time-machine-movie questions: if you address the risk and then the failure doesn’t happen, maybe it never would’ve happened in the first place. So you don’t get a good control experiment for these things. But actually say to everybody in the organization: let’s talk about what’s wrong with us, not in a SurveyMonkey kind of way, but let’s really talk about it. Tell me what you think is the biggest risk for our organization overall, or for Project Loon, or for the self-driving car project. Tell me why we’re gonna fail. When we fail three years from now, what will that be, in your opinion? Write it down, put it up there with your name on it, which is a little bit scary, because some people can feel thrown under the bus when you actually call out these Achilles’ heels that you see, or that you think might be there. Then have a mechanism, which we do, so you can vote these risks up or down. Even if you didn’t write one down yourself, you can say, yeah, I agree with these and not with those; no, I don’t think those are problems.
That causes the stuff that’s probably the biggest risk to rise to the surface, and then there are commenting mechanisms so people can actually discuss it. But if you get thrown under the bus, if you say Project Loon is gonna have some problem, and you say what it is, and you work on Project Loon, and then people go after you about it, and our culture isn’t one that rewards you for doing that, that’s the last time you’re gonna do it. So making a mechanism like that isn’t actually the hard part. I think it’s a good thing, and it’s working for us. But the hard part is relentlessly and repeatedly chasing down those moments where it’s not working. Someone needs a hug if he said something brave on that site. I mean a physical hug, an actual hug, or a high-five, or whatever. Then if he actually gets a hard time from someone for having written that down, what are we all gonna do to defend him? Not just because he’s right; he’s probably wrong. We don’t know.
There’s a lot of smart people on the Loon project. I’m sure they’ve already thought of it, but thank you for saying that whether or not you’re right.
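The mechanism Teller describes (named risks, up/down votes, the most widely shared concerns surfacing first) can be sketched in a few lines. This is a minimal illustration of the idea, not X's actual tool; all class and field names here are my own invention.

```python
# Hypothetical sketch of a premortem risk board: people post named risks,
# others vote them up or down, and the biggest shared risks rise to the top.
from dataclasses import dataclass, field


@dataclass
class Risk:
    author: str                       # risks carry a name, which takes courage
    description: str
    votes: int = 0
    comments: list = field(default_factory=list)   # discussion threads


class PremortemBoard:
    def __init__(self):
        self.risks = []

    def post(self, author, description):
        risk = Risk(author, description)
        self.risks.append(risk)
        return risk

    def vote(self, risk, up=True):
        # Even people who didn't post a risk can agree or disagree with it.
        risk.votes += 1 if up else -1

    def top_risks(self, n=3):
        # The concerns most people share surface first for discussion.
        return sorted(self.risks, key=lambda r: r.votes, reverse=True)[:n]


board = PremortemBoard()
loon = board.post("alice", "Balloon navigation drifts out of coverage area")
reg = board.post("bob", "Regulatory approval stalls in key markets")
board.vote(loon)
board.vote(loon)
board.vote(reg)
most_shared = board.top_risks(2)
```

The point of the design, as the talk stresses, is not the software: the voting and commenting are easy, and the hard part is the culture around the person whose name is on the risk.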
– So we have a team called the Rapid Eval Team. The Rapid Eval Team is supposed to take ideas from me, from you guys, literally from the founders at Alphabet, from anywhere they can get their hands on an idea. It doesn’t matter where the idea comes from. There’s an academic at Berkeley, or Stanford, or Johns Hopkins? Great. Every place is a legitimate place for great ideas to come from. How can we figure out, as fast as possible, that that’s a bad idea? That is absolutely and explicitly the question. It sounds like that’s not gonna work, like you could just say everything’s a bad idea.
But if you set the tone the way I’ve just described, people are actually interested in coming up with a real reason why it’s a bad idea. You can’t destroy the positivity that comes from saying crazy ideas. If you say to me, “Hey, ridiculous idea: do you think we could get the power that’s embodied in an avalanche somehow gathered? Maybe that’s a way to generate energy.” The correct answer, no matter what she said, is “That’s an awesome idea.” She has to feel good about the level of creativity of her idea. I mean, if she said something where there are actually 100 companies already doing it, and you purchased something from one of them yesterday, then maybe that’s not an awesome idea. But assuming it’s really outside the box, the correct first answer, the only acceptable first answer, is, “Wow, it’s beautiful the way your brain works.” Then immediately, “That’s so great. How are we gonna figure out that that’s a bad idea? That that’s not gonna work?” So she just got a little check mark with me, with her peers, for having said something that was really interesting, that was innovative, that was different from what we were thinking before. And immediately she also gets another check mark if she can show the intellectual rigor for why it’s a bad idea. Well, okay, I guess we could try to generate avalanches.
And how much energy is in an avalanche? It’s good, it’s not great. Okay, well, maybe we can move the thing that’s gonna turn all that potential and kinetic energy into stored energy. Maybe we’ll move it around so we can catch the avalanches as they fall. No, that’s not really gonna work. It won’t take us but five minutes to sort out that there’s probably no practical way to do that. Good, awesome: we’ve figured out rigorously, not just in our gut, that it’s not gonna work, and we can move on. Because the rate-limiting step to innovation is not finding smart people. You’re all plenty smart enough. It is not being creative. How many people here in this room think that you’re highly creative? Good.
The other half of you are wrong. (audience laughs) You’re all highly creative. How many of you think you were creative when you were six? Who wasn’t creative when they were six years old? I mean, you don’t have a six-year-old if you think you weren’t creative when you were six. We just get it beaten out of us by society. I promise you, you were creative when you were six years old. We all were. We’ve just forgotten how, because the context isn’t inspiring us, isn’t allowing us. It’s literally blocking us. But that’s not the problem. The problem is how to get a huge number of ideas on the table, and then weed through them effectively. Which is not about process; it’s about creating an environment where people feel like they can be rewarded, in emotional ways and financial ways, for doing that.

A tiny fraction of these ideas then pass through to our sort of second-stage booster rocket, which we call the foundry. In the first stage, most of the de-risking that we do is on the technical front: building prototypes, verifying that it’s not some isomorphism to a perpetual motion machine. You’d be surprised. Probably one in 100 of the ideas we get literally is an isomorphism to a perpetual motion machine. Once something gets to the foundry, maybe 20 or 30% of the work is still very technical. But a lot more of the work then gets applied to questions like: what is the ecosystem like? And the regulatory environment? How much would we have to invest versus how sizeable a business would this be? How much good would this really do for the world? If we didn’t do this, would the world end up with that benefit anyway, for some other reason, or not? We work through all those questions, again, for the purpose of killing the project, even in the foundry, which is supposed to only receive things that have been heavily weeded. The goal is to have more than half of those projects be killed. When you hit more than half, you’re clearly in a mode where the people in the foundry understand, even though they can be very passionate about the projects they’re working on, that there’s less than a 50% chance of pulling it off. So they can take pride in ending projects for the right reasons. I mean, eventually, you know, for the self-driving cars, I’m pretty sure cars are gonna drive themselves, since cars are already driving themselves. For Project Loon, we have a lot of balloons up in the air. They’re already doing LTE to the ground. People are actually receiving phone calls. So we know it’s possible. Kind of the ship has sailed, pun intended a little bit, on some of that stuff. But for a long time, the pressure is not, how can we make this work? It’s, how can we discover as fast as possible that this is not gonna work? So that we can get on to doing something else.
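The two-stage funnel Teller describes (Rapid Eval trying to kill ideas quickly, the foundry killing more than half of what it receives) can be sketched as a toy simulation. The pass rates here are illustrative numbers of my own; the talk only says that a tiny fraction survive Rapid Eval and that more than half of foundry projects should be killed.

```python
# A hedged sketch of the X idea funnel with made-up stage pass rates.
import random


def run_funnel(n_ideas, rapid_eval_pass=0.05, foundry_pass=0.4, seed=0):
    """Return (ideas reaching the foundry, ideas graduating) under assumed rates.

    rapid_eval_pass: assumed fraction surviving Rapid Eval (a "tiny fraction").
    foundry_pass: assumed fraction surviving the foundry (less than half,
    matching the goal of killing more than half of foundry projects).
    """
    rng = random.Random(seed)  # seeded so runs are reproducible
    to_foundry = sum(rng.random() < rapid_eval_pass for _ in range(n_ideas))
    graduated = sum(rng.random() < foundry_pass for _ in range(to_foundry))
    return to_foundry, graduated


to_foundry, graduated = run_funnel(1000)
```

With rates like these, a thousand raw ideas yield only a handful of projects, which is the point: each stage's job is to kill ideas rigorously and cheaply so the few survivors deserve the investment.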
Brian Wang is a futurist thought leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 science news blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.