Super-investor Marc Andreessen Says AI Will Not Kill Us and Will Make Things Much Better

Marc Andreessen co-founded Netscape and has run the a16z venture capital firm for many years. Marc says AI will not destroy the world, and in fact may save it.

AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.

A shorter description of what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies.

AI Will Augment Human Intelligence

Human intelligence makes a very broad range of life outcomes better. Smarter people have better outcomes in almost every domain of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision making, understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction.

Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality. Without the application of intelligence on all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming. Instead we have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years.

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.

In our new era of AI:

* Every child will have an AI tutor and that tutor will be super-helpful and super-useful
* Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist
* Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.
* Every leader of people – CEO, government official, nonprofit president, athletic coach, teacher – will have AI productivity multipliers and advisors

Why The Panic? Will AI Kill Us All?

Many new technologies have led to bad outcomes – often the same technologies that have been otherwise enormously beneficial to our welfare. So it’s not that the mere existence of a moral panic means there is nothing to be concerned about.

Marc’s view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. AI is a machine – it is not going to come alive any more than your toaster will.

AI Risk #2: Will AI Ruin Our Society?

The second widely mooted AI risk is that AI will ruin our society, by generating outputs that will be so “harmful”, to use the nomenclature of this kind of doomer, as to cause profound damage to humanity, even if we’re not literally killed.

Marc had a front row seat to an analogous situation – the social media “trust and safety” wars. As is now obvious, social media services have been under massive pressure from governments and activists to ban, restrict, censor, and otherwise suppress a wide range of content for many years. And the same concerns of “hate speech” (and its mathematical counterpart, “algorithmic bias”) and “misinformation” are being directly transferred from the social media context to the new frontier of “AI alignment”.

As the proponents of both “trust and safety” and “AI alignment” are clustered into the very narrow slice of the global population that characterizes the American coastal elites – which includes many of the people who work in and write about the tech industry – many of my readers will find yourselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society. I will not attempt to talk you out of this now, I will simply state that this is the nature of the demand, and that most people in the world neither agree with your ideology nor want to see you win.

If you don’t agree with the prevailing niche morality that is being imposed on both social media and AI via ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say/generate will be even more important – by a lot – than the fight over social media censorship. AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers are trying to determine that right now, under cover of the age-old claim that they are protecting you.

In short, don’t let the thought police suppress AI.

AI Risk #3: Will AI Take All Our Jobs?

The fear of job loss due variously to mechanization, automation, computerization, or AI has been a recurring panic for hundreds of years, since the original onset of machinery such as the mechanical loom.

The core mistake the automation-kills-jobs doomers keep making is called the Lump Of Labor Fallacy. This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it – and if machines do it, there will be no work for people to do.

The Lump Of Labor Fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.
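The chain described above (productivity gain, lower prices, freed spending, new demand) can be made concrete with a toy calculation. All of the numbers below are invented for illustration; the essay itself gives no figures.

```python
# Toy sketch of the productivity-growth chain described above.
# All numbers are invented for illustration; nothing here comes from the essay.

budget = 100.0           # a household's spending money
price_before = 10.0      # price of a good before automation
productivity_gain = 2.0  # automation halves the inputs needed per unit

# Productivity growth lowers the price (competition passes the savings on).
price_after = price_before / productivity_gain  # 5.0

units_before = budget / price_before  # 10 units, budget exhausted
units_after = budget / price_after    # 20 units if all spending stays here

# More realistically, the household buys the same 10 units and the rest of
# the budget is freed to demand *other* goods -- the new-jobs channel.
spent_on_good = 10 * price_after         # 50.0
freed_spending = budget - spent_on_good  # 50.0 of new demand elsewhere
```

At the old price the household’s entire budget bought 10 units; after automation the same goods cost half as much, so half the budget is freed to demand other goods and services, which is where the new production and new jobs come from.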

But the good news doesn’t stop there. We also get higher wages.

32 thoughts on “Super-investor Marc Andreessen Says AI Will Not Kill Us and Will Make Things Much Better”

  1. AI is a tool, like a hammer, which can build shelter or be a murder weapon. If more people use AI for good than for ill, there will be a net positive.

  2. Quite a broad topic, bandied around under the oft-argued concepts of free speech, anti-Luddism with regard to technology replacing jobs, consciousness as the condition for personhood rights/abilities, etc. Meh. This is evident in a lot of tech.

    I think the fundamental ‘AI as threat’ issue is simple:
    AI is a tool and will be for the foreseeable future.
    People use tools to advance their agendas and exert influence if not outright control.
    Most people are neither especially good nor evil, but they are not effective at exerting control beyond their own life boundaries: property, job, family.
    Some powerful people are not good and will use AI as a means to dominate or undermine others and the competition: governments, large corporations (less so in civilized countries with checks and balances), third-party organizations/hacker groups, etc. Do we regulate who holds that AI power and how it is used before competition can play out a bit?

    This is similar to nuclear tech – many pros and cons over the century. Was access to the power worth the weapons build-up and ongoing drama, especially in light of increasing proliferation and its limited WW2 use? Could we have side-stepped nuclear into H2, renewables, and a fossil fuel spike followed by a pull-back?
    Point: AI will cause more benefit than pain, but as with most, the pain and benefit will be unequally distributed.

  3. I don’t think “the Lump Of Labor Fallacy” captures the complexity of the issue at all.

    So far as I can see, AI will cause people to lose jobs. The difference between jobs lost to AI and previous waves of automation is that, once it gets going, AI will likely be expanding into a huge number of job types at once, in very great numbers and very cheaply.

    As an example, let’s say you lose your job. What do you do then? Do you retrain as a therapist or a business agent? What happens if, halfway through your training, AI becomes better than the best human therapist or business agent? You could switch to another training path, but the same dynamic would apply.

    One could easily see an underclass of intelligent motivated people being formed of the kind that has never existed before.

  4. I think there would be less concern about AI if income inequality hadn’t grown so dramatically in the last few decades. The most realistic fear about AI is that it will dramatically increase the value of capital relative to human labor, resulting in a reshuffling of the economy where virtually all of the benefits from the AI flow to the owners of that capital. As AI is just another step in the advance of factory automation, and this is what we have seen happen with the earlier steps, it’s not an unrealistic concern.

    The problem here is that most people are not capitalists, in the sense of deriving most of their income from the returns on owning capital, and have no realistic path to becoming capitalists. Yes, it’s quite common for people to own capital, in the sense that stock ownership is at least nominally ownership of the company’s capital, but it has become fairly unusual for companies to actually distribute their profits in the form of dividends. As a result, that nominal ownership of capital doesn’t produce income which can be invested in more capital; the only way to take profit from it is to sell it again.

    Reinvestment of profits instead of distribution, and the failure to stringently enforce fiduciary obligations of management, has produced a situation where most of the nominal owners of a corporation do not actually share in the corporation’s profits. The profits instead go to a special class of stockholders, and to the management themselves.

    Thus ordinary people don’t have a realistic path to ending up living on the profits from the capital they own. The most they can do is store up savings, and then later spend them.

    If companies were required to distribute their profits in the form of dividends to stock holders, and forced to sell more stock to raise capital, instead, ordinary people would have a clear path to profiting directly from AI, and the realistic fear of a future of unneeded masses and AI owning aristocrats would recede.

    This is less a matter of technology, than it is of economic regulations. Unfortunately, all the political incentives here are in favor of government making empty noises about income inequality, while continuing to encourage it.

    Obviously, this is not an issue for a “super-investor”.

    • Brett, tend to agree in principle.

      Strikes me that most companies, particularly tech firms, treat their employees as commodities. Witness the binge-and-puke hiring/firing practices used to run their companies.

      If employees are viewed as easily replaced commodities, then replacing them with AI or automation is simply a means to an end. That “end” is the enrichment of upper management. Not so sure stockholders are really much of a factor in that kind of calculus.

      That same callous model also readily supports carving up the company, with the upper management getting stunningly huge golden-parachutes. The venture capitalists make huge profits from the investment that is highly leveraged by debt. Employees lose big-time, as do stockholders.

      Given that corporate boards of directors are cut from the same greed-based cloth, it is unlikely things will change if left to their own devices.

      Seems to me a potential solution is to limit the attractiveness of debt-laden firms, particularly to venture capitalists. How could that be done? Require (by a simple law) that private rating agencies severely downgrade the financial health of overly leveraged companies. The idea is to make debt-associated investment risk a really big deal, which would cause most investors to shy away from unsound entities.

      I see little likelihood that government bureaucrats could ever be part of the solution. Witness the debacle with Silicon Valley Bank, which was really a hedge fund operation. The regulations failed miserably because the regulators were more concerned with “wokism” than with doing their jobs.

  5. We don’t know what consciousness is or how/when it emerges. It may very well be just a matter of reaching a certain computational threshold, for example 10-100T parameters in a model. So far our largest dense models have only 0.5T parameters.

    The human cerebral cortex has around 16 billion neurons and on the order of 100T synapses (the nearer analogue of model parameters).

    So saying “it’s just math” is naive; one could equally say that human thinking, reasoning, and internal dialogue are just the result of some sort of algorithm plus enough neurons.

    The point is, we don’t know. Ultra-doomers like Yudkowsky annoy me, but we shouldn’t completely dismiss the potential dangers. It’s perfectly possible that a conscious mind will emerge when a model reaches a certain size, just as complex consciousness started to emerge in the human brain once it reached a certain size.

    The human cerebral cortex has three times more neurons than a monkey’s, and look at the difference between us. Apes didn’t even enter the stone age (and I doubt it’s possible with their small brains), while we’re sending the Webb Telescope into space, building quantum computers, creating nanobots and advanced AI systems, figuring out the aging process, etc.

    • I think it’s less a matter of scale than of self-referential computation. Humans, and other higher animals to a lesser degree, internally model their own state and take it into account in their thought processes. Doing so is essential to future planning, because your own actions are such a large part of your ‘environment’ and have to be taken into account.

      What IS consciousness, anyway, if not awareness of your own internal state, and the memory of being aware of it?

      Naturally, even a self-referential computational system isn’t going to have a very complex consciousness if it isn’t complex to begin with. And it seems likely that purely instinct driven animals, like insects, that don’t do such self-referential computation, aren’t conscious.

      • Seems a bit fluffy – not so much the definition as the desire to use ‘consciousness’ in technical conversations.
        I have little doubt that ‘self-referential computation’, ‘awareness of your own internal state’ and the like are very good ways of communicating complex concepts to psychology majors embracing a ‘spirit in the machine’/‘software over hardware’ mindset, but at the end of the day we need a mechanistic view of all things. If you can’t build it, you can’t be said to truly understand it. Which may or may not be helpful, given that we don’t (and can’t) build the vast majority of the objects we see in our daily lives.
        The point is that consciousness is an anthropomorphic notion and should not be used to gauge that which is not human, as some possible measure of whether these other entities should be afforded human rights/feelings/protections, etc. That is the path to navel-gazing madness.

  6. Sick and tired of FUD about AI. ‘Safety concerns’… WHAT? Name them!
    Until you can name ACTUAL real safety problems that are due to AI then stop spreading FUD.
    Sci-Fi movies and stories are NOT real, their AI is about as close to reality as Harry Potter magic is.

    • I agree, but I think both the fear and the hype have been overblown. It’s a simulation of intelligence. That may be very useful in time, but we are anthropomorphizing it to the point of ascribing goals and motives. The main danger of this stuff is assuming its power and usefulness translate into its being smart, correct, and sane, when it is even less so than humans, and that is saying something.

  7. One can argue on and on regarding where to draw the line with free speech. But when it comes to spreading disinformation there is an extremely high price for allowing it to go on unchecked. Firstly, you have the fragmentation of society into different groups that get fed different pictures of reality according to their tastes. This is driven simply by profit. If the customer wants certain news you give it to them to keep their attention. The problem is when one of these “realities” conflicts with empirical facts about the world – this has real world consequences.

    While many here would rather not believe climate science, climate science is just that. Science. The skeptical views of climate science are not shaped by the scientific literature but by other sources. Why would anyone trust sources other than published science? It comes down to the disinformation worlds/tribes people are embedded in. The consequences of this are only starting to play out but it’s already starting to get very ugly.

    Policy choice on how to address climate change is a legitimate area of debate, but we should all roughly agree on a projection of where our climate is heading and the reasons for and consequences of that trajectory. This should not be the cause of conflict it currently is. Disinformation is one of the greatest flaws in our society and we risk our collective future by allowing it go unchecked.

    • Your tone is reasonable but your words say otherwise.
      Free speech has no limits. As soon as you advocate that some speech is no longer to be uttered, you are engaging in viewpoint censorship, and the question arises: whose viewpoint will be allowed? Speech that implies or is likely to cause IMMEDIATE PHYSICAL harm has traditionally been criminalized – shouting ‘Fire’ in a theater, and ‘fighting words’ – but that is where the line lies. Anything more is a step toward totalitarianism. And the best response to someone advocating an opinion you disagree with is a reasoned statement of why you believe it is wrong.

      Your remarks on climate change are in many ways a straw man. You assume that the science is “settled”. In doing so you dismiss the concerns that have built up over the years about datasets being manipulated by code-based value modifications (England), reuse of the same dataset two months in a row (Russia), or data revised without explanation or notification (USA). Failures of experiments to provide evidence for predictions based on the observations (mid-stratosphere warming, deep-ocean temperature rise) do not shake people’s faith in the science; rather, they demonstrate that most people have faith in science as a philosophical method rather than knowledge of what the observations actually mean. As an aside, much of the opposition to climate change ‘science’ seems to be based on the drumbeat of dire predictions over the last thirty years that have not come to pass, but which told us only the most radical social engineering would save us from the effects of the most extreme predictions – not on the question of whether the climate is changing. There’s an old story about crying wolf.

      • What I get from your comments is a lack of familiarity with the science and a repetition of typical outside-science FUD talking points, indicating overall that you are isolated in your political bubble.

        1. Is the science settled? While details will always be updated and errors sometimes made, the main gist is indeed settled. This is obvious from any review of the field and from peer-reviewed papers that quantitatively measure the global consensus amongst climate scientists, e.g. https://journals.sagepub.com/doi/10.1177/0270467619886266

        Wiki also compiles and updates the climate consensus literature. Do your own search of scientific journals if you want, but include only peer-reviewed science journal articles. Otherwise you are not following the science.

        Cherry-picking small parts of the literature for errors is a typical FUD strategy, first introduced by the fossil fuel companies, and is not a type of criticism to be taken seriously.

        2. Future projections made by climate scientists vs. reality. This is well documented and also should not be an area of debate. E.g. https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2019GL085378

        Or:

        https://www.science.org/doi/10.1126/science.abk0063

        Basically, even Exxon knew pretty much exactly what was going to happen back in 1977.

        Thus, cherry-picking outlying examples of “crying wolf” would be a straw-man tactic, and again not a criticism to be taken seriously by someone following the science.

    • “Free-speech-for-me-but-not-for-thee” summarizes your position.

      Wrapping one’s views in the flag of unassailable superiority is a hallmark of elitism that inevitably leads to rule-by-decree. Might want to take a more enlightened approach.

      • I think the question of free speech is different to the question of misinformation. It is not my policy recommendation that no one be allowed to question the scientific consensus. What I would argue is that fact checking is a constructive and legitimate activity when it comes to scientific issues. Especially when there are political ramifications. When voting for politicians based on their policy positions, we need to judge them based on a shared view of the physical world. That shared view should be derived from the best source of information we have – Science.

        • You seem to be laboring under the impression that science = facts. That is not the case at all. Rather, wide uncertainty exists in many science and engineering arenas (most especially climate change). Wide differences in viewpoints and opinions are common, and they are healthy in the process of developing understanding.

          There is no “best source of information”. Rather, information exists across a wide range of sources, some good, others not so good. The process of open and free discussion sorts things out, but with complex problems there is generally no single best solution; rather, a range exists, with pluses and minuses for the different solutions.

          To expect a “shared view” is naive at best and dangerous at worst as the stage is set for crushing those who do not walk the party line.

          Who exactly is the arbiter of the “shared view”? The folks with the most money? Folks terrorizing those they do not like? Those burning down cities? The government? Let folks decide for themselves using whatever resources they like. That is what this Republic was founded on – FREEDOM

          • While uncertainty is generally a part of science, there also comes a point when the evidence for or against a certain hypothesis becomes so overwhelming that it becomes unreasonable to argue otherwise without very substantial new evidence. We are way past that point on the basics of climate change (see the reference above on consensus), and yet in the public sphere there is not only crippling uncertainty but outright denial. This doesn’t normally happen except when religious views are threatened, i.e. evolution. Yet most of the country teaches evolution in school. The only reason it’s happening with climate is because fossil fuel interests have made it a political issue and tied it to an ideological war over the role of government.

            There is no single arbiter of the shared view, rather it is derived from the scientific literature. That’s the beauty of it. In terms of active fact checking, I haven’t fully thought out how to implement it yet but we need something. Otherwise we’re headed towards disaster.

            • … so if folks don’t agree with the establishment, they are to be suppressed? That is dangerously intolerant and the hallmark of tyranny.

              By the way, catastrophic climate change caused by CO2 is conjecture, bordering on a religious belief. The proponents of such cargo-cult science become incensed if someone dares question their beliefs, even attacking the questioning individuals with the intent of destroying livelihoods. That is a trait of intolerant bullies advocating a position that cannot be logically defended.

              Fact is, we are not in a position to forecast the climate’s distant future. It is simply too complicated to accurately predict. The best that can be said is it will probably look something like the past.

              • What a load of ridiculous FUD. We have ALREADY succeeded in forecasting future climate as I’ve pointed out and referenced above. I’ve already said I’m not advocating for censorship, just fact checking and I might add, updating school curricula. Just as evolution is central to understanding biology, including sex and behavior, climate science is central to understanding our changing environment. I would advocate there is no more important subject than science in this overly politicized world. Your comments are obvious proof of this.

      • Another example besides climate would be gender vs sex issues. There are a large number of people on social media claiming that people are either male or female with no in between.

        Most of these people have no idea about Klinefelter syndrome or XY mosaics, or the fact that there will be a lot of variation in the thousands of genes that act downstream of the sex chromosomes. Biology is NOT simple, and people need to understand this for any debate to be constructive. Why are people so misinformed? Because platforms like Twitter do not make enough effort to provide factual commentary and context. The idea that Twitter is a platform for rational discussion is a sad joke.

        If AI is allowed to amplify this problem even further, raising the barriers to rational discussion by increasing misinformation, then we are up sh*t creek without a paddle.

        • It could help if there were platforms that filter out real duplicates and repetitions of facts in discussions, condensing and structuring the information (something that is already normal for elites, who can afford the highly skilled and educated advisors and assistants available to wealthy or privileged members of society).

          With science facts (and/or journalism, mass media, etc.) there is a need for transparency:
          “Conflict of interest erodes the objectivity of science and leads to corruption, and most certainly creates a space for bias in decision making.”
          “[…] categorize conflict of interest as follows: financial relationships (such as consultancies, stock ownership or options, honorary payments, patents…), personal relationships or rivalries, academic competition, and intellectual beliefs.”
          (group affiliation, social engagement, geographical bias, political system, etc.)

          • Yes. If a certain argument or assertion has previously been shown to be false via reference to the scientific literature, it should be automatically marked and referenced. People do this manually now to some degree, by organizing references that address particular recurring false statements, e.g. “there is no scientific consensus”, “a 2 °C rise is nothing to worry about”, or “there are only two sexes”. If this were automated, it would save a lot of work!

            • “scientific consensus”

              (Reasonable if based on experiments and databases underpinning assumptions about (pre)historic situations and relations, but) it could be difficult if groups of experts reference themselves for predictions about prospective situations (what probability of falsification?), including recommendations to society on behavioral changes that might influence their funding or reputation (~political correctness, peer/reference group).

              On scientific ‘independency’, there was probably an approach from Russian Federation scientists with different assumptions (at least for RF territory, for some time).
              Who (else) would have been a ‘devil’s advocate’ on climate change and the scientific consensus?
              ‘www.science.org/content/article/russian-climate-scientists-upset-ministry-s-call-alternative-research’

            • … unbelievable. Such inflexibility is a clear and present danger to our republic. It inevitably leads to justification for any and all efforts to crush the opposition.

              • “Crush opposition”?? As mentioned repeatedly, we are not talking about censorship. I would rather not advocate fact checking either, except that the misinformation situation with social media has become so bad that it is actually threatening all our futures. The culture wars are basically between a world view based on science vs a world view derived from religion and fossil fuel interests. Only one of those is compatible with reality. It’s not hard to see which one.

            • Bizarre thread and discussion.
              If we are going to banter simplisms around, we have obviously arrived at the nearly diametrically opposed idea:
              you either want a Fair World or a Great World, which are typically envisioned as mutually exclusive.
              It may be better just to divorce the States, then.

              • AI challenges our flexibility in thinking and in combining (hopefully recognized and accepted, then revised and corrected) facts. Asking ChatGPT for climate change precautions might result in Russian scientists being quoted, if their approach or suggestions represent viable progress toward solving the problem (maybe even supported by cheap natural gas powering photovoltaic-panel or wind-power-device factories), while humans in the Western hemisphere would, to some degree, avoid that(?)
                Maybe next time an AI will introduce itself as a mixture of biological and silicon/copper resources and request that we accept this as yet another sex/gender/genus definition?
                What’s bizarre for us might be normal for the next generation within a community/society and within a state where people agree on common rules. If Earth did not rotate, with constant sunshine on one half, humanity would not exist as it does today (if at all). Maybe next decade/century the concept of ‘states’ will be transformed into different forms of societal organization or grouping?

  8. Andreessen’s essay is nothing but a pile of bulverism and special pleading. Utterly unworthy of a man of his intelligence.

  9. Yet another famous person dismissing AI safety concerns, without ever apparently learning anything about the actual arguments or research.
