Crude Slaughterbot projection of existing drone technologies

UC Berkeley released a video of Slaughterbot microdrones. It is a fictional visualization of an autonomous weapons scenario. The technology to achieve what is shown in the video is close; it would require the deployment of drone swarms, which DARPA and others are already working on.

People will probably get hung up on the details here, but the point is that AI has the potential to dramatically reduce the cost and risk of waging war, and that’s a bad thing if you value peace.

Nextbigfuture agrees that killer drone swarms with facial recognition are achievable in the near term.
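To give a sense of how commoditized the face-identification piece already is, here is a minimal sketch (not from the video or from any weapons program) of matching the faces in one camera frame against a single reference photo, using the open-source face_recognition Python library. The file names are hypothetical placeholders and the 0.6 tolerance is just the library's default.

```python
# Minimal sketch: match faces in one camera frame against one reference photo.
# Uses the open-source `face_recognition` library; "watchlist.jpg" and
# "camera_frame.jpg" are hypothetical placeholder file names.
import face_recognition

# Encode the known face from a reference photo (assumes one face is visible).
known_image = face_recognition.load_image_file("watchlist.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face detected in a single camera frame.
frame = face_recognition.load_image_file("camera_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

# Compare each detected face against the reference encoding.
for i, encoding in enumerate(frame_encodings):
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"face {i}: match={bool(match)}, distance={distance:.3f}")
```

The matching itself is a few lines of free software; running it within a microdrone's size and power budget is the harder part, which is why the energy-efficient chips described below matter.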

President Obama embraced the US drone program, overseeing more strikes in his first year than Bush carried out during his entire presidency. A total of 563 strikes, largely by drones, targeted Pakistan, Somalia and Yemen during Obama’s two terms, compared to 57 strikes under Bush. Between 384 and 807 civilians were killed in those countries, according to reports logged by the Bureau of Investigative Journalism.

So the use of larger drones has already happened for over a decade.

You may have noticed that US wars did not become less expensive.

Drone targeting replaced CIA assassinations for the US and Mossad assassinations for Israel.

Nextbigfuture has written that where drone assassination would bring new capability and make a difference would be in using sprayed goop instead of explosives. Goop could be sprayed onto the nostrils and mouth while the target is sleeping. The goop would be designed to dissolve ten to fifteen minutes after the victim had suffocated during the night. The purpose of this or other such weapons would be to make an assassination look like a heart attack, a stroke, or some other medical event.

The CIA or Mossad could have performed this kind of work before, but drones could make more missions successful.

The Slaughterbot video also depicted automated targeted cleansing of a population. Again, this is possible now. Defense is a matter of not leaving a population vulnerable: there will be various anti-drone weapons, and your side will have its own drones.

Background of the technology

In mid-2017, MIT indicated that they were working on bumblebee-sized drones with computer chip brains that are 100 times more energy efficient.

Below is non-fictional drone research.

Anti-drone technology

10 thoughts on “Crude Slaughterbot projection of existing drone technologies”

  1. Dissolving mouth goop is a very clumsy way of giving a natural looking death. Suitable poisons have been around for literally centuries.

  2. It’s an evolutionary improvement on “sensor fused” anti-tank cluster weapons from the Cold War.

    This only increases the odds of war from the WEIRD perspective. Other cultures (and Western soldiers whose nations ban these) will continue clearing buildings of suspected enemies the old way – leveling them with explosives.

  3. “You may have noticed that US wars did not become less expensive.” That fits with a previous post where greater efficiency doesn’t drive down energy demand, it increases it: efficiency drives down prices, and lower prices increase demand faster than the efficiency gains reduce it. In this case, as long as a government organization can justify an allotment of money it can hold onto that amount, so militaries can afford more weapons as the price per weapon falls, as opposed to having the budget decreased while firepower remains equal.

    As for the drone scenario, it seems rather realistic, but I would expect people clamoring for automated anti-drone weapons such as jammers, net launchers, lasers, and hanging netting. Also, anti-drone drones would be effective and might be reusable thanks to a captive spike or an electrical shock system, whereas the kill-bot version might be single-use if it is a flying bomb, unless it is a flying pistol.

    • What you say about anti-drone defenses is true – but the world in which those defenses become widely necessary is an ugly place.

      The most chilling thing is, I’m not sure that future can be avoided. I didn’t notice anything in the Berkeley video that couldn’t be done now, other than the degree of accuracy in target identification. And that’s not really needed if you just want to kill many people in a specified area – a music festival, a protest, a bar/night club, a marathon, a neighborhood/ghetto, a sidewalk, a church/mosque/synagogue, a bus/train/subway, a cult compound, etc. And that plays into the hands of those who can afford to protect themselves and want power over those who can’t.

      It’s not the autonomous killing that’s scariest – it’s the ability to do it anonymously, requiring either extreme surveillance or difficult criminal investigation to backtrack each perpetrator. A few random attacks – possibly false-flags – could easily frighten people into accepting continuous 100% surveillance. Those who protect the rest of us by speaking out, acting out, could become targets. Pro-Gun, proud arms bearer? Target. Anti-gun rally participant? Target. Rainbow warrior? Target. KKK? Target.

      • Hm… there are several things here which feel more like fear mongering than rational thinking.

        First, I don’t think that the only reason for the (relative) lack of assassinations is the fear of being caught. I’d like to think that it also has to do with the fact that the majority of people do not believe that physical elimination of someone who thinks differently than you is a good solution. Neither do they think that socially eliminating someone is a good solution.

        Second, a lot of the time we are supposed to freak out about new technologies (i.e. end-to-end encryption, AI, now drones) and are shown how these will be used by terrorists or “evil people”.
        Yet, as was recently demonstrated, most of the time good ol’ technologies, such as trucks or sending an SMS with codewords, have the same impact.

        • Not just the majority, but the vast majority. And most of the people who don’t belong to that majority are mentally damaged, too, which usually limits their effectiveness.

          The real problem isn’t individuals, it’s groups that are large enough to invoke our primate tendency to submit to ‘authority’, where sociopaths can get control at the top, and then direct the activities of large numbers of people who are just followers.

          Still, I wouldn’t totally discount the threat of individuals; while effective individuals with such motivations are rare, nothing demands that they be totally non-existent, and the capacity of individuals to cause harm is, pretty much unavoidably, ever increasing with our technology.

          It’s been suggested as one answer to the Fermi paradox: That some time before a species becomes capable of interstellar travel, individuals become too powerful, and the species ends up extinct or knocked back to the stone age by one or two nutcases. The best way to avoid this, of course, is to become an interplanetary society as early as possible.

          • What the majority of us won’t do isn’t relevant, nor is it relevant whether the attackers are individuals or groups. Either/both will instill terror far beyond the bombs and shootings we’ve seen to date. And that will result in a reaction that wipes out what remains of privacy, anonymity, and very likely the will to protest any injustice less than genocide directed at oneself.

    • Good point, but also modern wars still use lots of soldiers on the ground, which is costly. Especially in a remote place like Iraq. Apparently the A/C cost alone in Iraq during the height of the war was more than the entire budget of NASA. We aren’t fighting wars with only drones, so you can’t really compare before/after costs just yet.

    • If any technical advance is boring because you’ve read about it in science fiction previously, then everything is boring.
