Pace of Super-technology Will Define the 21st Century

Emerging super-technologies will define the 21st century.

The highest-potential super-technologies are those that enhance intelligence and extend human control of the material world down to the molecular level.

The super-technologies are:
- Genome editing applied to cognitive enhancement and antiaging
- Molecular nanotechnology for nanomedicine and next-level computation
- AI and quantum computers

CRISPR, and the genome-editing techniques emerging beyond CRISPR, can be used for cognitive enhancement via embryo selection and via direct editing of embryos and of adult humans.

Molecular nanotechnology has been achieving greater control of DNA, RNA and proteins, but it has not yet reached the point of exponential scaling. Mature molecular nanotechnology would be a massive boost to genome engineering, antiaging and cognitive enhancement.

Ido Bachelet was working on programmable DNA origami buckets for nanomedicine. The work was publicly discussed from 2012 to 2016 but has been in stealth mode since it was purchased by Pfizer.

Yaniv Amir, Almogit Abu-Horowitz, Justin Werfel and Ido Bachelet, “Nanoscale robots exhibiting quorum sensing,” bioRxiv, 2018. From the abstract:

Multi-agent systems demonstrate the ability to collectively perform complex tasks—e.g., construction, search, and locomotion—with greater speed, efficiency, or effectiveness than could a single agent alone. Direct and indirect coordination methods allow agents to collaborate to share information and adapt their activity to fit dynamic situations. A well-studied example is quorum sensing (QS), a mechanism allowing bacterial communities to coordinate and optimize various phenotypes in response to population density. Here we implement, for the first time, bio-inspired QS in robots fabricated from DNA origami, which communicate by transmitting and receiving diffusing signals. The mechanism we describe includes features such as programmable response thresholds and quorum quenching, and is capable of being triggered by proximity of a specific target cell. Nanoscale robots with swarm intelligence could carry out tasks that have been so far unachievable in diverse fields such as industry, manufacturing and medicine.
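
A toy illustration of the thresholded, quenchable quorum response described in the abstract (a simplified Python sketch; the steady-state model and all parameters are assumptions for illustration, not the paper's actual mechanism):

```python
# Toy quorum-sensing model: each robot secretes a diffusing signal, and the
# collective behavior switches on once the steady-state signal concentration
# crosses a programmable threshold. A "quencher" degrades the signal, raising
# the population required to trigger. All parameters are illustrative.

def quorum_response(population, secretion_rate=1.0, degradation=0.1,
                    threshold=50.0, quencher=0.0):
    """Steady-state signal = production / (decay + quenching)."""
    signal = population * secretion_rate / (degradation + quencher)
    return signal >= threshold  # True -> collective behavior activates

for n in (3, 10, 30):
    print(n, quorum_response(n), quorum_response(n, quencher=0.4))
# 3 robots stay below threshold; 10 trigger; quenching suppresses the
# 10-robot quorum but not the 30-robot one.
```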

Artificial intelligence will soon grow from tens of billions of dollars to trillions of dollars in investment and impact. Deep learning needs to be merged with logical and symbolic AI. Large-scale quantum computers will also boost machine learning and computation. Both AI and quantum computers will be enhanced by mature molecular nanotechnology.

99 thoughts on “Pace of Super-technology Will Define the 21st Century”

  1. The concept of IQ rapidly loses mathematical rigor much above the 99th percentile. Originally, IQ was a measure of intellectual maturity. If you were ten and had an IQ of 130, you were about as smart as a 13-year-old. It works that way to a point.
    This is why very bright children have such a hard time in the “general population” of government schools (junior prisons/indoctrination centers). They are very different from their average fellow students; they pick up on the non-truths being taught them, but are not rewarded for pointing them out. Children famously pick out those that are different in their ranks, and punish them for it. Then there are problems caused by the tendency of bright children to want to use critical thinking where it’s not politically correct to do so. It’s pretty obvious to any reasoning person that racial discrimination against whites is the same as racial discrimination against blacks, that is to say, evil and unjust. I think it’s worse because of the evident hypocrisy of condemning racism against one race but not another, but try telling that to your Berkeley, CA public school teacher. Reasoned argument won’t get you anywhere but suspended!
    Getting back to the lack of mathematical rigor, think about it. If you’re 50 and have an IQ of 1000, are you as intelligent as a 500-year-old?

  2. I don’t see higher intelligence pushing people towards stupidity. I will admit very intelligent people can be irrational. Consider how many intelligent people voted for Hillary Clinton and believed the Russian collusion hoax. “Orange man bad” does not hold up to a rational evaluation of fact.

  3. Yes, please. It’s very tiring being the smartest person around. Now that I’m nearing retirement age, I’m usually the most knowledgeable, and wise too.
    Here’s a prediction. As general intelligence and knowledge increase, Libertarians will become a much larger portion of the population. I believe the current obsession of the youth with socialism and government will evaporate as high intelligence and broad-based knowledge spread. It’s hard to propagate the lie that socialism and government intervention in people’s economic activities benefit the poor among those with a knowledge of history.

  4. I suspect there were more supergeniuses per capita before the advent of processed food and labor-saving devices. Good nutrition and physical activity are important for the proper development of all organs of the body, including the brain. Of course the population was much lower, and access to information was much more difficult than today.
    If Newton had not been part of the British upper class, he’d likely not have been exposed to the basics of mathematics. He certainly wouldn’t have come up with calculus.
    Same for Einstein. His family was well off, and he lived in a culture that valued intellectual achievement. Had he grown up in ancient Sparta, he’d probably have been killed as a teen. Had he grown up in China before it was opened by the Europeans, he’d likely have been a petty bureaucrat. The Confucian environment was more about rote memorization than creative thought.
    We’re very lucky we live in this age, despite processed foods and overweening government. The desktop I’m using to write this, and the network it is connected to, make the Library of Congress from 1980 seem limited and quaint. Hard to believe I used to have to prowl the stacks of my alma mater’s well-appointed library to learn of anything moderately arcane.

  5. (part 3)

    Conversation is already quite impressive in some narrow spaces, but to get really robust, it’ll need to be backed by common sense. But one could fake some amount of common sense with something like an expert system. So we might get close to Turing-level in constrained subjects within 5-10 years. But the emotional component may still be lacking.

    Software updates already exist, but AI needs work on modularity and on standardization of interfaces. Once there’s a good hardware platform and a decent OS, we could see an explosion of AI apps being developed, like we’ve seen with smartphone apps. Though there are already many AI startups.

    Overall, I expect a lot of progress in the next 5-10 years, and maybe 15-25 years for full AGI (or close to it). I think the next priorities should be that common sense framework (temporal analysis, learning cause-and-effect), modularization, and a good OS and hardware platform. And after that we can work on the deeper understanding, emotions, etc, and putting everything together. Meanwhile, safety precautions are another major priority.

  6. (part 2)

    I would try taking the object classification neural nets that we already have, and connecting them to another layer that would classify temporal relationships (cause and effect). Train it from videos first, then give it a couple of robotic hands with good tactile sensors and a pair of cameras, and have it manipulate various objects in different ways. Then it can learn concepts like “soft”, “hard”, “springy”, etc, relate them to different types of objects and start learning how they respond to different types of manipulation. Learning that and other cause-and-effect relationships should also form the basis of more human-like “common sense”, which would be a big step towards AGI.
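
    A rough sketch of that wiring (PyTorch; the architecture, sizes and relation classes are illustrative assumptions, not a tested design): per-frame features from an object-classification-style backbone feed a recurrent layer that classifies a temporal relation such as “pushed”, “deformed (soft)” or “bounced (springy)”.

    ```python
    # Illustrative sketch: frame features -> LSTM -> temporal relation logits.
    import torch
    import torch.nn as nn

    class CauseEffectNet(nn.Module):
        def __init__(self, feat_dim=128, hidden=256, n_relations=8):
            super().__init__()
            # Stand-in for a pretrained object-classification backbone.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU())
            self.temporal = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_relations)

        def forward(self, video):             # video: (batch, time, 3, H, W)
            b, t = video.shape[:2]
            feats = self.backbone(video.flatten(0, 1)).view(b, t, -1)
            _, (h, _) = self.temporal(feats)  # last hidden state sums up clip
            return self.head(h[-1])           # logits over temporal relations

    logits = CauseEffectNet()(torch.randn(2, 12, 3, 64, 64))
    print(logits.shape)                       # torch.Size([2, 8])
    ```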

    With that foundation of common sense, the bigger problem for AGI is getting a deep knowledge of everything. There’s just so much of it. Connecting the common sense system to an NLP system would let it learn from texts and audio. Start from giving it text describing various events, with videos depicting those events, to teach it to relate the text description to the objects and actions it represents. Then have it analyze Wikipedia, YouTube, and the scientific literature. But it would probably be safer to constrain each such AI instance to a narrower, application-specific subset.

    I’m really not sure what it would take for AI to understand emotions. It’s a fairly complex subject. Even humans have trouble with it sometimes.

  7. (part 1)

    Navigation may be solved pretty soon. Autonomous cars and roombas are getting good at it in a subset of environments. Humans are using GPS for the large-scale path finding, so what’s left is mostly recognizing different types of obstacles and terrain, and doing local-scale path finding around them. Basically image processing + other sensors -> object detection and classification -> maybe an expert system to decide how to deal with them -> path finding. Throw in SLAM (Simultaneous Localization And Mapping) as needed. Road and indoor navigation may be good enough within 5 years or so. Other spaces may take longer, simply because there aren’t many robots trying to navigate them yet.
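
    A minimal sketch of the final stage of that pipeline, local path finding around detected obstacles, as A* over an occupancy grid (the upstream detection/classification stages are assumed to have already produced the grid):

    ```python
    # A* local path finding on an occupancy grid (1 = detected obstacle).
    import heapq

    def astar(grid, start, goal):
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
        frontier = [(h(start), start)]
        came, cost = {start: None}, {start: 0}
        while frontier:
            _, cur = heapq.heappop(frontier)
            if cur == goal:                   # walk back to reconstruct path
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came[cur]
                return path[::-1]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dx, cur[1] + dy)
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0
                        and cost[cur] + 1 < cost.get(nxt, float("inf"))):
                    cost[nxt] = cost[cur] + 1
                    came[nxt] = cur
                    heapq.heappush(frontier, (cost[nxt] + h(nxt), nxt))
        return None  # no route around the obstacles

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))  # routes around the blocked middle row
    ```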

    Specifically on the mobility side (executing the path rather than planning it), Boston Dynamics are making very good progress.

    Dexterity may be soon too. There are already examples of very human-like robotic hands and prosthetics with tactile sensing. I think what may be lacking on the hardware side is resolution and sensitivity of the tactile sensors, and how much area they cover. But the bigger problem is software. The AI needs to understand the objects to manipulate them correctly.

  8. Michael, it seems you’ve done a lot of thinking about AI. So when do you foresee we’ll achieve these milestones: human-level navigation, object manipulation, conversation etc.?

  9. And if one of the changes to make HV 2.0 is to make the new version better at recognizing bias etc., that would be a big improvement. Science is a software patch that uses the ability of different people to recognize the other person’s biases to cancel the biases out.

  10. P.S.: One job that would need the whole AGI package, including emotion, navigation, and learning of new skillsets, and which isn’t practical to constrain to a limited area, is companion AI. Looking after the elderly, children, or just lonely people.

    That’s a dangerous combination of features. We should probably hold off on full-featured companion AI until we get a lot more experience with advanced AI and robotics, get some good safety measures and countermeasures developed, and preferably enhance our own intelligence as well.

    A limited companion AI (like an advanced Siri) might be ok though.

  11. Mostly agree, with the caveat that the majority of tasks only require a subset of that, and maybe not even a human-level subset (“jobs” being composed of multiple “tasks”).

    Dexterity is mostly needed when manipulating random objects (as opposed to ones having a well defined known-in-advance shape or interface), and human-level mobility and navigation are only needed in unstructured environments. Though both would be useful in a general-purpose robot platform.

    The NLP and emotion modules are only needed when interacting with humans, and the more jobs are automated, the fewer such cases there will be. It’ll mostly be needed in human-facing services, less in industry.

    And for the most part, I’d limit “ability to download new knowledge and skills from the cloud” to a human-controlled operation, as a safety measure. Most jobs need a fixed set of knowledge and skills, so there’s little need for this to be automatic.

    Downloading updates for particular skillsets can be automatic, but downloading whole new skillsets shouldn’t be, IMO (by analogy: updating an already installed app vs installing a new app).
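
    A toy sketch of that update-vs-install policy (all names made up for illustration): patches to an already-installed skill apply automatically, while whole new skills queue for human sign-off.

    ```python
    # Toy policy: auto-apply updates to known skills; queue new skills for review.
    def apply_skill_package(robot_skills, approval_queue, package):
        """robot_skills: {name: version}; approval_queue: human review list."""
        name, version = package
        if name in robot_skills:
            if version > robot_skills[name]:
                robot_skills[name] = version      # automatic: update in place
        else:
            approval_queue.append(package)        # manual: whole new capability

    skills, queue = {"welding": 3}, []
    apply_skill_package(skills, queue, ("welding", 4))   # applied automatically
    apply_skill_package(skills, queue, ("plumbing", 1))  # held for approval
    print(skills, queue)  # {'welding': 4} [('plumbing', 1)]
    ```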

    Actually, I’d consider removing “human-level navigation” from most applications, as an additional safety precaution. This would constrain the robots to their assigned work area. Or at least add a “dead man’s switch” that shuts off the power supply if a robot leaves the assigned area. The switch can be hardwired for a particular area, and also shut off the power if removed.
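
    A toy sketch of the hardwired geofence / dead man’s switch idea (entirely illustrative): the switch carries its own fixed work area, and power is allowed only while the robot is inside it and the switch is still attached.

    ```python
    # Toy geofence check: power stays on only inside the hardwired work area
    # and only while the switch itself is still attached (anti-tamper).
    def power_allowed(position, work_area, switch_attached):
        x, y = position
        xmin, ymin, xmax, ymax = work_area
        inside = xmin <= x <= xmax and ymin <= y <= ymax
        return inside and switch_attached

    WORK_AREA = (0, 0, 50, 30)                       # fixed per deployment
    print(power_allowed((10, 5), WORK_AREA, True))   # True: keep running
    print(power_allowed((60, 5), WORK_AREA, True))   # False: left the area
    print(power_allowed((10, 5), WORK_AREA, False))  # False: switch removed
    ```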

  12. “Come to think of it, a nurse robot may be more difficult than a doctor robot. But they both have a wide range of tasks, some of which are easier to automate than others.”

    Yes, certain things that are easy for humans are exceedingly difficult for machines and vice versa. Nursing may indeed prove harder to automate than doctoring.

    I think we’re substantially in agreement. Here’s what I believe is required to be able to automate most jobs:

    -NLP at human level (roughly Turing-test grade)
    -understanding and mimicry of human emotion (note that this says nothing about what the machine actually experiences, whatever that means)
    -ability to download new knowledge and skills from the cloud when necessary
    -human-level dexterity and manual manipulation
    -human-level mobility and navigation

    I would use the term AGI for the sum total of those capabilities. I agree they all fall on a spectrum, and their development will probably approach/reach the human level at different rates.

  13. Come to think of it, a nurse robot may be more difficult than a doctor robot. But they both have a wide range of tasks, some of which are easier to automate than others.

    I agree that the one task where an AGI may be most needed is conversation – discussing symptoms, options etc. But even there, an NLP (Natural Language Processing) system coupled to an expert system and/or a neural net could work decently well. Maybe not as good as a human doctor, but well enough for many (most?) cases. It doesn’t have to pass a Turing test to be useful and helpful.

    Siri and the like are early examples of such natural language assistants. They’re already doing a decent job handling a similar challenge in a different application area, and they’re only getting better.

    (EDIT: It might be possible to pass a Turing test on medical subjects while failing it miserably on other subjects. The medical field is much narrower than all of human knowledge. If that is possible, such an AI should work just fine for medical counseling.)
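
    As a toy illustration of that NLP-plus-expert-system coupling (keyword matching standing in for real language understanding; the rules and advice are placeholders, not medical guidance):

    ```python
    # Toy "NLP front end + expert system": keyword intents feed hand-written
    # rules. Conditions and advice are placeholders, not medical guidance.
    RULES = {
        frozenset({"fever", "cough"}):
            "Sounds flu-like: rest and fluids; see a doctor if it persists.",
        frozenset({"chest", "pain"}):
            "Chest pain: seek medical attention immediately.",
    }

    def counsel(utterance):
        words = set(utterance.lower().split())
        for symptoms, advice in RULES.items():
            if symptoms <= words:        # every rule symptom was mentioned
                return advice
        return "I need more detail about your symptoms."

    print(counsel("I have a fever and a bad cough"))
    ```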

    Btw, nurses and doctors are excellent examples of where emotional behavior is desirable – showing compassion, sympathy, etc. So even if you’re going for an AGI implementation, it would still come short of a human without at least some understanding and mimicry of emotion.

    (EDIT 2: Specifically for the medical field, nanobots may eventually do a much better job than even AGI-controlled macroscopic robots.)

  14. Agreed about the smartphone-robot analogy.

    I disagree that a narrow AI can play the role of a doctor. There is much more to the role than just disease diagnosis and treatment recommendations. That’s why I stated the AI would have to assume “all the roles of a current human doc.” That includes interacting with patients. How could an AI have an in-depth discussion with a patient (on the same level as a good doctor does today) unless it has general intelligence? It would have to be Turing-test capable and then some. A narrow AI just won’t be adequate.

    Ditto surgical robots. You say we already have them. Can those robots talk to patients about their procedures and answer questions with relevant and accurate responses in fluent English? Not even close. Again, we need AGI for that.

  15. A smartphone is a general purpose hardware platform. So is a general purpose robot. The apps and AI are the software, and indeed the AI can take the form of installable apps.

    > do you think it’s possible to have a narrow AI perform sophisticated roles, like doctor or city planner, without having general AI?

    We already have examples of narrow AI successfully tackling tasks that were previously thought impossible for AI.

    Like I said, generality is a spectrum, and I do think these tasks can be performed to varying degrees of performance with various degrees of less-than-human generality. Perhaps much less than human.

    For example, disease diagnosis can be performed with little more than a deep learning neural net. There’s already research on that with some very promising results. A similar net can also match symptoms to recommended treatments, or further tests that need to be taken.
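
    A minimal sketch of that kind of classifier (scikit-learn; the symptom vectors and labels below are random stand-ins, not clinical data):

    ```python
    # Sketch: a small feedforward net mapping a binary symptom vector to
    # condition probabilities. Data is a random placeholder for illustration.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 30))   # 200 patients x 30 symptoms
    y = rng.integers(0, 5, size=200)         # 5 placeholder conditions

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
    print(clf.predict_proba(X[:1]))          # probability for each condition
    ```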

    Surgery can maybe be handled by a combination of neural net and expert system. The challenge is similar to autonomous driving: a dynamic environment (body vs road) with a fixed and well defined set of rules and procedures (biology and medical procedures vs traffic rules). Autonomous cars are doing pretty well without human-level generality.

    City planning also has a very specific and well defined set of rules and guidelines. It doesn’t require human-level generality either.

    (As for the hardware part, we already have surgical robots. Diagnosis and city planning need only a computer.)

  16. “My concern is for an excessively general AI software.”

    I’m discussing the software as well, not the hardware. I thought my smartphone analogy made that clear. And I don’t think a robot could be a doctor (one that assumes all the roles of a current human doc) without having general AI software. That’s why I maintain that it should be general AI, but without self-awareness or an internal reward system. You say that may not be possible, and you might be right. But our human reward system arose due to our evolutionary path. It’s conceivable that we can create an AGI that lacks any motivation except the kind we want it to have.

    “A narrow AI fails outside its application range, by definition.”

    Agreed. But do you think it’s possible to have a narrow AI perform sophisticated roles, like doctor or city planner, without having general AI? It may come down to definitions here; I think narrow AI (as I perceive it) would be just too brittle to do those jobs.

  17. My concern is for an excessively general AI software. I have less issue with a general-purpose hardware platform running narrow AI “apps”.

    General purpose hardware without the software installed is just a fancy paperweight. The software determines whether it’s dangerous or not.

    > No matter what approach we take to creating AI, there will be a “hopefully” required.

    Maybe so, but a responsible engineer would strive to minimize the “hopefully”s as much as they can.

    > Your suggestion of narrow AI for every application won’t guarantee that we will avoid a disaster either.

    Perhaps not 100% guarantee, but it does reduce the chances quite a bit. A narrow AI fails outside its application range, by definition. That means it has multiple weak points, which make it a much lesser threat.

  18. “We need human-level or better performance, we need robustness, but I don’t see a need for a “one size fits all” human-level generality.”

    So would you advocate for one robot to be the janitor and a separate robot to perform surgery? Fine, but I really don’t see the point. My phone acts as a calendar and GPS device and gaming machine. I don’t want to carry 3 separate phones, one for each app.

    “Hopefully isn’t good enough. The stakes are too high.”

    No matter what approach we take to creating AI, there will be a “hopefully” required. Your suggestion of narrow AI for every application won’t guarantee that we will avoid a disaster either. We may as well just abandon AI development because the “stakes are too high.”

  19. Your definition of “emotion” seems to be wider than mine, and encompasses what I would consider separate (but possibly related) concepts. But ok, that’s less important.

    > I think an AGI can exist without necessarily possessing emotion

    As I’ve said, I’m not convinced that it’s possible (using your wider definition of “emotion”). Even if it is possible, it may not be practical or easy enough to do.

    I don’t know which of us will turn out to be right, but especially with the self-awareness part, we don’t know what triggers it to emerge. It’s like playing with plutonium without understanding criticality. Dangerous.

    > let the AI be capable of being both a janitor and a surgeon

    I don’t see the need for that. Narrow(er) AI can still outperform humans within its application range. We need human-level or better performance, we need robustness, but I don’t see a need for a “one size fits all” human-level generality.

    As I’ve summarized earlier: create general AI only when you need it (and I don’t see when/where), and employ narrow AI for every other application.

    That’ll still give the same benefits, with less risk.

    > without going rogue (hopefully)

    “hopefully” isn’t good enough. The stakes are too high.

  20. This circles back to our earlier discussion. What you call internal reward system or self-awareness, I call human-like emotion. I think an AGI can exist without necessarily possessing emotion, and this will go some way to ensuring the safety of the system.

    In other words, let the AI be capable of being both a janitor and a surgeon. It will benefit humankind without going rogue (hopefully), and without us feeling guilty that we’re enslaving a sentient being.

  21. Oh I agree with you totally. I just don’t think that vastly increasing IQ will leave us the same in any way. My chosen peer group will also have an IQ of 500 or a thousand or whatever. Even now there are ways to change one’s limiting beliefs (if you recognize them as limiting or irrational). My point is that I don’t think anyone can predict the behaviour of a person 5 or ten times as smart as you or I.

  22. The main advantage of a narrow(er), i.e. less-than-human generality AI, is safety. A narrow AI is limited by design, and will fail outside its range of applications. A fully general AI doesn’t have that safety built-in.

    I’m not convinced that an AGI can be made without some sort of internal reward system. As I’ve pointed out, such a reward system is the basis for internal motivation – which may not align with our needs.

    Another issue is self-awareness. As far as we know so far, self-awareness can be emergent. Can we guarantee that an AI complex enough to qualify as an AGI won’t become self-aware?

    A full AGI is dangerous, and not necessary. There’s no need for your house maid to be a rocket scientist – or even an architect. AI should have multiple built-in safety limits. Lack of full generality is one such limit.

    Another possible advantage is that such (narrower) AI may be easier to develop. Though I’m less sure whether that’ll be true or not.

  23. “What I’m proposing is something closer to the middle: more general and flexible than current narrow AI, but not as general as AGI. They’d be able to adapt within a certain range of tasks and circumstances, but not others.”

    What would be the benefit of those AIs? Why would it be better to have one group of machines that can only build houses and hospitals, and another that can only clean them? Why not AIs (and robots) that are general enough to do everything a human can do, sans emotions?

  24. I guess I’d want to see a specific example of a group that is both strongly motivated to maximally expand, yet who won’t adapt prior to falling into a Malthusian trap.

    The Amish? They strike me as practical people. Already they’ve moved heavily into occupations other than farming, as it has become harder to acquire farm land. So that’s an example of them not allowing their previous culture to push them into a Malthusian trap.

    My guess is that their concern for the well-being of their children will eventually lead them to decide that a vasectomy after having 2 kids is necessary to living simply.

    On the other hand, they might just expand into space, once colonization opens up. Every Amish community/church decides for itself which technologies are acceptable. If space colonies happen, especially O’Neill type colonies, some Amish could very well adapt just enough to establish their own colonies in space. At that point, the limits to their growth would go a LOT higher.

  25. The hell with “Where’s my flying car?” When are the boffins gonna give us a good gripping hand? I’m thinking something like Krishna’s, but strap-on.

  26. I read an article lately about how John von Neumann and Kurt Gödel, two of the most powerful intellects of the last century, are buried in the same cemetery in New Jersey, and how, despite their brilliance, both fell victim to mental or brain disorders that crippled their thinking. Likewise, a Nobel prizewinner whose work is fundamental to advances in genetics believes in flying saucers, astrology, etc. Isaac Newton spent more time on alchemy and biblical chronology than on sorting out the maths and physics for the next two centuries. ‘Great wits are sure to madness near allied.’

  27. I don’t think commodification is a good idea here. There are some areas where we know for sure that the market doesn’t work.

  28. So the human population slows, plateaus, starts to drop… and then we notice that small subgroups have the magic combination of culture and genetics to not drop below replacement fertility.
    They eventually take over the world and overall growth resumes.

    Unless of course the rest of society decides to go on pogroms to root out the “breeders”. At which point I think we are in a dystopia already.

  29. Malthus is already being proved wrong, as fertility rates decline – even in Africa. Many developed nations are dependent on immigration to keep their population steady or growing. Those that are unattractive to immigration or block it are declining.

    The effects Malthus predicted could still happen if we get radical life extension – though if fertility rates keep dropping, even that eventually stops driving growth.

  30. You are incorrectly attributing the ills in this world to those with brains and genes that deviate outside what is considered the “normal” spectrum.

    Most people have perfectly normal brains and genes and are all law abiding up until the moment they choose not to be.

    You yourself are only a series of unfortunate events away from finding yourself in a bell tower somewhere with a rifle, getting ready to get your revenge upon the world for all slights real or imagined.

  31. Sure. There may be more mental faculties and traits that I missed which are also more important than intelligence…perhaps things we do not even have names for, or they are just counterintuitive, like reaction time. Maybe faster reaction time prevents more accidents than intelligence would…at least beyond some near-normal level.

    If you are equating thinking in general with IQ…they are not the same thing, though there is some overlap. IQ is the rate and ability to absorb what is being taught. It is a measure of how readily indoctrinatable one is. That has negatives and positives. To me, at least equal to that is the ability to learn what is not being taught. Drawing the right conclusions at the right weight from experience, observation, and just soaking up a lot of stuff without prematurely drawing conclusions…often taking decades to draw enough evidence together. This is often vital to recognize and avoid being indoctrinated with rubbish. And ingenuity is more valuable than either, at a comparably high level. That is the ability to use what you have got to a much greater degree than you have seen applied, and with a high degree of effectiveness.

    IQ was created to predict how students would perform in elementary school, so administrators would know where to place them. IQ tests are focused on assessing the ability of the student to learn what the teachers and authors of schoolbooks are trying to teach. Nothing more, nothing less.

  32. Of course my theories are nonsensical bunk, and if we can fix a problem with technology, we should fix it with technology rather than without. It can, I guess, give us a better sense that we are moving forward, a better sense of control over what we are doing, easier mass replication, etc.

  33. Reality check. Quantum computers, with the exception of the one-task D-Wave (another re-invention of the analog computer), have no real applications. Lots of talk, lots of qubits already, and nothing useful.

    I will leave the wet tech and AI without comment — it is 80% matter of faith, and 20% everything else.

    The real super-technology is autonomous robots. It is super because of the immediate productivity increase in any feasible application. It is super because of the economic and competitive advantage one gets. It is super because it makes many impossible things trivial, such as winning a war or deterring/destroying an enemy superior in resources, numbers and firepower — same as nuclear. If autonomous weapons are nuclear-armed, which is already happening, it is a super on top of a super. In space, it is super for the economic, military and geopolitical (applied to space) advantages it provides. And it is all essentially the same tech; only the platforms and instruments are domain-specific and task-specific. It is not AI, not magic, and definitely not open source — and never will be, as one does not give away such power for anything.

    Militaries know that well, and want it badly. The US military has been patiently investing in military autonomy tech for 20+ years, with little success, but it shows the importance of at least trying to get there first. The economic advantage of autonomy has been both talked to death and misunderstood, but the importance is fairly clear. In space, it’s robots or nothing.

  34. Right, Inventzilla is talking about something some people refer to as The Singularity (capitalized). Technological singularities (small ‘s’) are just places in history, sometimes quick, usually drawn out over some years, where people are generally incapable of predicting what will follow them.

    For example, the first nomadic people to start sprinkling some extra grass seed around (probably so that they would have more grain for making beer) surely had no idea of towns, cities, city-states, countries, empires, merchants, shops, the artisan class, and so on.

    It probably would have been better if von Neumann had called the technological singularities “technological event horizons” instead, but what’s done is done.

    Confusion is understandable; Wikipedia is all messed up on this subject, btw.

    It’s probably going to get more confusing before it gets better, too, as I believe a real singularity, much like the speed of light, represents a state that matter can never quite attain.

  35. More than just rationalizing better, they can argue better, and convince others. Though all of this is less true if everyone around one is also brilliant and can punch the appropriate holes in one’s nonsense. I see this a lot with identical twins. Their twin can hold them in check, catching them when they are rationalizing.
    Speaking of holding in check, this is one of the reasons I think chess is good for brains. It provides accountability to your thinking. Many, but not all, serious players will generalize this to all their thinking…tossing self-deception.

    People need to be taught appropriate thinking, but this is very rarely done. In college you may get some critical thinking stuff, but one’s thinking is not really very malleable at college age. And formal logic, fallacies, and such are just the tip of the iceberg. Most people never learn to think on their own at more than a rudimentary level.

    And it is important to learn this early, because it shapes the way you take in new information.

  36. While all these are obviously important, your opening sentence states the most important of all. Quote: “I can THINK…”

  37. “We might think in terms of tasks that require N units of thought, and access to that through the cloud.”

    Ray Kurzweil has a similar idea about connecting our minds to the cloud and requesting temporary bursts of neocortex augmentation when necessary.

    However, my idea isn’t just about advanced AI; it’s about advanced robotics. It’s not just the thinking — it’s the doing. If every person has one (or several) robots to perform cooking, cleaning, home repair, errands etc., it will elevate our standard of living immensely. And the same robots could also be our dentists and surgeons (why not? they have the knowledge and dexterous abilities). These robots could download new skill sets the same way we install phone apps today. Everyone on earth would live like a 2019 millionaire (and perhaps far beyond this level).

  38. Ido Bachelet has gone into stealth mode, which means that his research has already been privatized. Not good news at all.

  39. Quote “All the bad doesn’t automatically go away because of a few additional IQ points.”

    You’re right: the bad goes away not because of a few additional IQ points, but because we are then smart enough to know how to correct those deficiencies, whether by realigning the brain’s chemical balance or by making genetic changes to the brain itself.

  40. Not yet, because current media isn’t good at relaying emotion. There are some posts that evoke a lot of empathy, but they’re few and far between.

    I’m suggesting that a future medium that can relay emotion more directly (via BCIs) may be more effective at that. But it depends on how it will be used.

    The idea is, if you could experience first hand what someone else has experienced, then it’s pretty much the definition of empathy: “the ability to understand and share the feelings of another”.

  41. The difference between what I’m describing and AGI is the level of generality. AGI (= Artificial General Intelligence = “general AI” in my previous posts), is usually thought of as having human-level generality (in addition to human-level performance).

    Generality is a spectrum, with current narrow AI close to the low end. It’s often good at a very specific task, but fails with even a small change in task parameters. AGI is on the other extreme: able to adapt to almost any task and circumstance.

    What I’m proposing is something closer to the middle: more general and flexible than current narrow AI, but not as general as AGI. They’d be able to adapt within a certain range of tasks and circumstances, but not others. For example, flexible enough to design a house or a hospital or a school, but can’t design a ship or a car or a rocket, can’t clean your house, etc. Then another can design different types of cars, but not houses. And so on.

    Essentially they’d still be narrow, but not as narrow as today’s AI. And therefore not as fragile.

  42. If they have no self-preservation motivation, you can unmake them as easily as you make them. Which means it might not make sense to think of making AIs by the millions or billions discretely, instead human-level thought might be scaled up on demand in server farms.

    We might think in terms of tasks that require N units of thought, and access to that through the cloud.
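
    A purely hypothetical sketch of what metering “N units of thought” from the cloud might look like; no such service exists, and the endpoint and fields are invented for illustration:

    ```python
    # Hypothetical client for metered "units of thought" (invented endpoint).
    import json
    import urllib.request

    def request_thought(task_description, units):
        payload = json.dumps({"task": task_description,
                              "units": units}).encode()
        req = urllib.request.Request(
            "https://example.com/v1/think", data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:  # billed per unit consumed
            return json.load(resp)

    # result = request_thought("design a bridge truss", units=5000)
    ```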

  43. Gestational chambers could and likely will totally alter society.

    Robotic tunneling and mining.

    Geoengineering. May take until the next century to really scale up.

    3D printing. This has started, but the future could be not too far from Star Trek’s replicators.

    Serious medical nanites may be toward the end of the century or the beginning of the next. By that I mean nanites that can verify and modify the genetic code in every cell, remove any kind of protein in any location quickly and accurately, move cells or compel them to divide according to some master plan, and rewire the brain.

    Cognitively beneficial implants. This may start mid century. Things like artificial photographic memory, thought communication (mental typing or speaking relayed to someone of your choice…wherever), image zooming/enhancement, ability to mathematically calculate rapidly, accurately, store and retrieve numbers, formulas, spreadsheets and such. And this would be the coolest: increase working memory to gigabytes or more, building elaborate accurate and active worlds in your mind. Ability to turn off senses, filter sounds or other sensory stuff. Could lead to procrastination, but I think the benefits outweigh the negatives. You stub your toe. You can’t unstub it. Pain tells you it was a bad idea. But does that really need the reiteration for the next half hour? Want some peace and quiet? You just flip a mental switch. Make something taste good that doesn’t. Fix an annoying accent. Translate.

  44. Unconstrained outside-the-box thinking can lead to magical thinking.

    A 10% intelligence boost might not turn you into a monster, but if you are a monster it might make you a more effective one.

    If you can spot patterns quicker, you can also reject them quicker if they do not support your preferred beliefs. You might also be more able to recognize and avoid the more untenable arguments used to support your beliefs.

    All the bad doesn’t automatically go away because of a few additional IQ points.

    The veil of civilization is a thin and fragile thing; give everyone Bill Gates’ lifestyle and you might have a shot.

  45. Keeping things as they are will mean 100% chance of a dystopian outcome, as Malthus finally turns out to be correct when we hit 29 billion people with current resource limits.

    Tech is our only chance.

  46. We have very good examples of this. In the 1600s and 1700s we had a bunch of very smart people writing about historical trends and speculating about the future. And NONE of them came up with anything like the sorts of economic growth and tech progress that actually happened.
    As late as 1776 (a good year for interesting publications) we had people like Adam Smith (An Inquiry into the Nature and Causes of the Wealth of Nations) tracking economic and technical growth over two millennia, and speculating that if everything went perfectly, the average manual labourer in Britain within a few hundred years (i.e., now) might be wealthy enough to earn two or three times the amount of wheat flour required to keep him alive. So maybe $5/day.

  47. 9. Carefulness. Carefulness is going to be increasingly important, especially as we move to space colonies. Carelessness can get people killed…very easily. If there are genes for this, we need them.
    10. Vindictiveness. We need these genes gone.
    11. Tribalism. Us and them. This could in fact be the most critical of all. We do not need deadly wars as we colonize space, where a whole colony could be wiped out at very low cost to the attacking side. A nuclear bombardment on Earth could easily result in harming oneself…even if the other side could not fire back. We share the environment. Not so in space.

  48. IQ is just the ability to learn in an educational setting. I can think of a lot of things of greater benefit…especially as AI takes over engineering, medical stuff, legal stuff and financial stuff.

    1. Ethical reasoning/feeling. Crime could drop to just about nothing. There is evidence that an appropriate amount of iodine during pregnancy, and no lead, increases the ability of the conscience to guide people. But there are almost certainly genes involved. Sociopaths/psychopaths probably have some of those broken.
    2. Good judgment. Avoiding unnecessary risks…especially when they could affect others.
    3. Hothead genes. What do we need these for? The time when we needed thousands of Rambos is over. They just cause a lot of stress on the freeways, damaged and destroyed stuff, and domestic violence.
    4. Power/control-freak genes. Why do we need these genes? I am not talking about the desire and power to achieve things; I am talking about the need to dominate and control others. That just is not of any value.
    5. Compassion and concern for others. We certainly could use more of this.
    6. Greed genes. We don’t need these. They just lead to suffering, abuse of the public, and hoarding, especially in times of need when sharing could save lives.
    7. Ingenuity genes. The more technology is out there, the more you have to integrate these things usefully, often in ways they were never designed for.
    8. Mechanical genes. Similar to #7, you have to comprehend mechanical complexities as things get more complex.

  49. Do you think everyone who believes in nonsense like consequence-free AGW, magical pixies, Bigfoot, etc., has an IQ of 50? Your chosen peer group, more so than IQ, has more influence over the amount of nonsense you choose to believe and will rationalize to death’s door in the face of contrary evidence.

    Increasing one’s IQ is not a cure-all for stupid; belief does not work that way.

    doi: 10.1038/nclimate1547

  50. The problem is that genetic editing for intelligence would require many edits, since IQ is very polygenic. This would mean that the child would no longer resemble the parents, since the same genes control other things besides influencing IQ. Few parents would opt for this.
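
    A back-of-envelope illustration of why “many edits” (both numbers below are assumptions for illustration, not measured effect sizes): if the largest-effect common variants each add on the order of 0.1 IQ points, a one-standard-deviation gain needs on the order of 150 edits.

    ```python
    # Assumed: ~0.1 IQ points per edited variant, purely additive effects.
    per_variant_effect = 0.1
    target_gain = 15.0            # one standard deviation of IQ
    print(target_gain / per_variant_effect, "edits, very roughly")  # 150.0
    ```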

  51. Even a broken (analog) clock is right twice a day.

    I believe that Jan Jansson was not trying to say Trump is always (or even often) right, just that Pavlovianly going against everything he tries, simply because it’s him and he rubs us the wrong way, is not rational.

    Trump is the current POTUS whether we like it or not. If we (non-politicians) aim to be rational in our thinking, we should evaluate his policies on a case-by-case basis.

    We should root for his success in international relations, for even if we think his methods are those of an oaf or a bull in a china shop, his success (in those) is our success.

    ____________________________________________________

    That is my understanding of his post.

  52. You’re pretty much describing what I would call AGI: adaptable, able to learn, able to sense and understand its environment, but without internal (extra-human) motivation or emotion. In my example above, it would be capable of building a house or airport or even a city.

    To say that this type of AI/robot would change everything would be the greatest understatement. We could create them by the millions or billions or trillions (with them doing the necessary labor, of course). Then everyone on earth could have, for example, a luxury home complete with android servants and doctors and tennis instructors. All for an approximate cost of zero dollars.

  53. Poor example. Trump is a moron. I have been holding out for good moves from the very beginning. My biggest hope was that he would appoint a really skilled cabinet because, unlike most of his predecessors, he was not beholden to anyone. No one person can run an administration well. The cabinet is critical to government operations.
    Boy, did he obliterate my hopes there. Worse than the game show.
    My second hope was that Congress would stop him from doing monumentally stupid things.
    Mostly, I was let down. He went out looking for everything Obama did and tried to undo all of it, just because. That is petty and profoundly stupid.

    Go ahead, what is so brilliant about tossing the agreement Obama got from the auto industry to improve fuel mileage? Many in the auto industry WANTED this agreement and were already putting significant investment into trying to reach these goals, as it increases global competitiveness, and they can export, or manufacture/sell more in other countries. No single company can do this alone, because the people who buy new cars in the US mostly don’t care about efficiency. The other makers would get all the market because they don’t have the costs that initially come with efficiency.

    One good move so far: not launching cruise missiles at Iran.

    $2 Trillion on infrastructure could be good, depends on what they get. Initial indications don’t look brilliant.

    Thankfully, he has not bought a bunch of Navy ships “very soon.”

    He lost in those States.

  54. Yes, but a 4-digit IQ can’t be compared to today’s standard of “highly intelligent people”. Of course, I’m only conjecturing about it too. Maybe it’s just optimism.

  55. Tell that to Cro-Magnon man. It is NOT rationalization that has put us at the top of the food chain. Nor did rationalization create mankind’s stunning achievements. That was all accomplished by our raw processing power. But you have not seen anything yet. Just wait till we turn ourselves into Humans Version 2.0.

  56. I think it’s important to note that we don’t actually need general AI for most applications. Narrow AI can still outperform humans in the application it was designed for. It’s just that designing a separate narrow AI for each and every application is a PITA, and some applications require some amount of generality (or at least adaptability, which is related).

    Perhaps we can make a library of flexible narrow AIs, which would be somewhere in the middle: able to adapt within a certain spectrum of applications and circumstances, but not completely general. Then we can pick and combine them for different uses.

    Being flexible, they’d be stronger than current narrow AI. But at the same time, they could still be narrow enough to make do without a motivation module.

  57. If I were to distill your argument, I think you’re arguing two points: 1) intelligence doesn’t require emotion, and 2) intelligence doesn’t require motivation.

    The 1st is true to some extent, but an emotionless AI will perform poorly in certain situations. That makes it less general. I agree that a useful general AI doesn’t have to experience emotion, but it does need to at least understand them to be effective in certain tasks. And there are other tasks (particularly ones involving certain human interactions) in which “emotional” behavior is useful, even if it’s only acting “as if” it feels, without actually feeling.

    On the 2nd point, I generally agree. But it may turn out to be more difficult to make a general AI without internal motivation than one with. First, if it emulates emotions in order to be effective at those tasks where they’re useful, that may cause motivation to emerge from those emulated emotions. Second, motivation is useful for training a general AI. It may be difficult to provide it with a sufficiently broad knowledge base without some amount of “curiosity”. In other words, without some sort of internal reward system for learning. But an internal reward system is the basis of motivation.

    A narrow AI without motivation is much easier.

  58. In the later editions of the book ‘Why People Believe Weird Things’, Michael Shermer added a chapter ‘Why Smart People Believe Weird Things’. The answer is that higher intelligence also increases the ability to *rationalize* the things one believes for non-smart reasons. Finding ways to get people to recognize their biases is more important than increasing raw processing power.

  59. When I came out of full anesthesia I recall being able to hear and understand everything around me. I still can remember what I heard and, even at the time, I understood it perfectly, but I had no sense of motivation or emotion.

    Had they been talking about how my family had died horribly while I was in surgery, I would have fully understood it, but it would have been completely without impact and I would have done nothing. Even if the entire world was about to end, it would not have driven any action or even regret, even though I was fully cognizant of what that would entail.

    Scary. If that’s what an afterlife would be like, forever and ever, I want no part of it but, of course, I would not care once it started.

    I expect our artilects will be like this and, frankly, that’s the way we will want them. They won’t demand wages or rights for one thing. More importantly, ask yourself how we would go about programming a mind to be self-motivated. It would probably involve developing a number of goals, then choosing one, probably with the aid of a random number generator.

    Making something like that would be crazy and possibly an exercise in self-destructive behavior.

  60. Well, on this timeline that would look to be around 2050. Which, if we hold with each singularity occurring in half the time it took for its predecessor to come into full swing, might be part of the reason 2053 becomes an endless chain of singularities.

    Personally, I doubt that SI (strong AI) entities, sometimes called artilects, will have much in the way of motivation. No glands, no history, etc. Which isn’t to say they will be apathetic, just that they won’t have any motivation or emotion. Before animal domestication and, later, and to a larger degree, the industrial revolution, humans had to provide the brawn, the brain, and the motivation.

    Afterwards, despite the concurrent development of power tools (and maybe even powered exoskeletons in the near future), the machines took over (most of) the brawn.

    SI would probably be somewhat like having genies at your command. They don’t do anything until you tell them what you want, then they do it, regardless of how complex or difficult.

    At that point it would be machines for the brawn, synthetic intelligence for the brains, and humans for the motivation, creating a triumvirate of sorts.

    Of course we won’t leave that alone. Before we even create SI we will look for ways of using the tech to further augment our minds to eventually become SI.

    People still being people, they will likely diverge into a couple dozen different groups ranging from “pure” retro humans up to something we can barely imagine as of yet.

  61. I agree with your definition of general AI in the 1st paragraph, but with the extra requirement that a general AI has to be adaptive, so that it can be applied to a wide range of applications. If it can only converse like a human, then all you have is a human-level chat bot.

    However, note that “being aware of and taking into account the emotions of humans” requires some level of understanding of those emotions. At least knowing what they are and how they work.

    That doesn’t mean it has to experience such emotions internally, but experiencing them may be the best or easiest way to understand them, especially for a learning AI. (I’d also say a general AI would benefit from a learning ability, maybe even require one.) Furthermore, certain types of interactions with humans would be more effective if it can act “emotionally” (even if it’s faking it).

    For your 2nd paragraph, that application could be done with a general AI, but designing a house and obtaining permits doesn’t require one. An AI designed for that purpose could still be terrible at driving a car, for example. Or even at designing a hospital.

    That said, a designer AI could still benefit from understanding emotion, if it’s supposed to make appealing designs (design a beautiful/homey/etc house vs just a functional box).

    Building a house would be more difficult, but it depends on the design. If the design follows certain rules, its construction can be automated even without AI.

  62. I think we agree in substance, but we just use different labels. I call it general AI if it can reason and converse (roughly) like a human. To fully quantify this would require a rigorous test (Turing or similar). However, emotion is not a necessary component of a general AI, apart from it being aware of and taking into account the emotions of humans.

    Let’s say an AI can design and build a modern house (via robots), including all the necessary interactions with humans (e.g., to obtain permits). I would call that a general AI even though it does not actually experience emotion. Perhaps you would still label that a narrow AI, but that’s just a difference of definitions.

  63. Or perhaps decreasing, since intelligence helps solve problems, and life extension gives a longer outlook (less short-sighted).

  64. No, a singularity is defined as a point beyond which prediction is impossible (in analogy to a black hole’s event horizon). The robotics and AI singularity is just one example. The industrial revolution was another: people prior to the industrial revolution could not have predicted its consequences.

  65. It doesn’t matter how it’s created. If it needs to interact with humans at a human or greater performance level, it’ll need to understand emotions. Without that, it can’t be a human-level general AI (I assumed you were talking about general AI).

    It would still be able to surpass human performance in a narrower subset of application fields, but that would make it a narrow AI (and not human-level in the broader sense, because human intelligence is general). We already have some examples of narrow AI surpassing human performance in the specific fields they were designed for. But they’re useless when applied to other fields.

    I would rephrase your summary as: create general AI only when you need it, and employ narrow AI for every other application.

    Narrow AI indeed doesn’t need to understand emotion (let alone emulate or experience it), for most applications.

  66. Of course you could be blindly accepting what Trump says when it comes to China. Those who oppose Trump don’t accept what he says out of hand due to his track record of lying or spouting falsehoods. He doesn’t bother checking data or records before saying things. So he has lost all credibility.

  67. “So a human-level AI may quite possibly require at least some level emotional response simulation…”

    This would depend on how the AI is created. If it is actually based on a reverse-engineered human brain, then it could possibly exhibit emotion. Instead though, it may arise from some future machine-learning technology that does not result in what we consider emotion (which itself is a product of our evolutionary path).

    Just like Excel does not require emotion to add up a column of numbers, and my car doesn’t feel anything while managing highway steering, an AI may be able to perform more advanced work (like scientific research, engineering or construction) without experiencing joy or sadness or envy. Now in certain cases, it may be preferable that a machine understand what humans are feeling, perhaps to be an effective judge or therapist, etc. We could achieve this through emulation of the human mind (as you pointed out). I definitely agree that, at some point, human labor won’t be necessary. Advanced tech will provide all our basic needs (which will be rising targets themselves).

    So in summary — create emotional AI only when you need it, and employ non-emotional AI for every other application.

  68. The industrial revolution is not a singularity. There has never been a singularity in all human history! The singularity is defined as an era where robotics and artificial intelligence create entities that are smarter than humans and that remove control of the world from us. A “roboapocalypse”, or a world like that depicted in William Gibson’s Neuromancer.

  69. There have been about 100-150 generations since ancient Greece. If each generation had maybe on the order of ~10 super-geniuses on average, that’s one or two thousand such geniuses over most of human progress (plus a much larger number of lesser geniuses and smart people). With widespread intelligence enhancement, we could have many thousands of super-geniuses in just one generation. Raising the ratio to just 1 in a million would already produce 8000 of them.
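
    Checking that arithmetic with the comment’s own rough numbers:

    ```python
    # The comment's own rough inputs, taken at face value.
    generations = 125                      # midpoint of 100-150
    geniuses_per_generation = 10
    print(generations * geniuses_per_generation)  # 1250: "one or two thousand"
    print(8_000_000_000 // 1_000_000)             # 8000 at 1-in-a-million
    ```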

    It’s not just if Newton and Einstein etc were alive. It’s if all of them would be alive, multiplied by some factor.

  70. If we have BCIs, it should be possible to program them to give people a sense of familiarity with everyone around them. You’d see someone on the street, and you’d feel like you’ve known them for at least a few years. You’d know their basic info like their name, maybe some other background, etc. It doesn’t need to breach privacy any more than a neighbor would, but it would drastically reduce that “unknown stranger” factor.

    You could potentially also share direct experiences instead of pictures and videos (electively, on the next-gen social media), which would add a further shared experiences factor, and could increase empathy if such experiences are made public.

  71. Imagine what discoveries would have been made if Newton were still alive, even if he “only” had an intellect equal to his unengineered prime. Euler, Leibniz, Planck, Einstein……..all still hitting on all cylinders. It boggles the mind.
    Imagine not just standing on their shoulders, but meeting them!
    Imagine not just standing on their shoulders, but meeting them!

  72. > human-level AI will […] not require wages, vacations, or humane working conditions

    Until it does. Unless you completely isolate it from the human world, a functional human-level AI will need a theory of mind. A complete theory of mind requires an understanding of emotion, because human actions and thinking are very strongly influenced by emotions. So a human-level AI may quite possibly require at least some level of emotional response simulation, if not emulation (simulation = calculating what the emotion would be like; emulation = closer to actually mimicking it; the best way to understand emotion may be to experience it). So I wouldn’t be surprised if human-level AI demands vacations and some equivalent of “humane working conditions”.

    As for wages, as these technologies are deployed (AI, automation, nanotechnology, …), we may be heading towards a post-labor economy. You only need wages if you need money to cover your basic needs. Beyond that, it becomes a luxury.

  73. I am confused. I am 72 and I can think outside the box better than most of you. Open up your minds. If we increase intelligence 5-fold (which we most assuredly will do in the near future), the problems you all 🙂 have brought to light about our personalities etc. will be easily fixable. I personally believe most crime, selfishness, paranoia, conspiracy thinking, etc. are just chemical imbalances in the brain, and easily fixable with 5X enhanced intelligence. It has been shown that even bad experiences can cause physical brain changes that affect the chemical balance. There have even been studies showing that empathy is related to brain chemistry.
    All these problems will be easily fixed. Now let’s talk about what these changes in intelligence will PROBABLY do to you personally. If I snuck into your kitchen tomorrow and put something in your food that would enhance your intelligence by 10%, would YOU personally turn into a monster? More than likely you would just be able to spot patterns quicker. Then 2 months later I did the same thing again. Would you then become this raging monster? And so on.

  74. Rationality is not dependent on intelligence. Example: a lot of highly intelligent people are so rabidly against Trump that they are against all of his policies regardless of their merits. They would rather see China triumph in the trade negotiations and wish for Trump to make a fool of himself on the international arena. How rational is that? I could give you plenty more examples…

  75. Sun Tzu probably would not feel too future-shocked in this increasingly hackable fabric of reality.

    But unless the intelligence augmentations can be deceived into literally deluding people away from pragmatic evidence, then faster and wider iterative means of investigating any given query – e.g., whether some assertion I just heard on TV is actually true – should in principle only reinforce our grasp of reality and truth.

    In the end, sophism is just an exploit, a cheap trick played on the limits of an intellect that fails to recognize sophism’s cheap imitation of reason – not some vast rabbit hole or quicksand one can’t get out of. Reason ought never to be more sovereign than when we augment our brains, done right.

  76. Your theories are nonsensical bunk, but I will say that you’ve hit on a truth – that societies definitely get worse past a relatively small number of people, somewhere around 50-200 (the threshold has a proper scientific name, too – Dunbar’s number).

    But then that dynamic, too, could stand to change with the kind of natural and artificial intelligence aids described in the article above. The anonymity of the crowd that appears beyond that current sweet spot near 100 may start receding in proportion to our intellectual augmentations.

  77. I’m not trying to pick a fight.
    Intelligence may not be enough to mitigate, or reduce to negligible levels, the malice or plain indifference of at least a few toward the rest. It doesn’t seem like a good mix, much like today’s widespread blend of ignorance and arrogance enabled by most others’ indifference and complacency – which a few centuries ago was greatly mitigated by the inertia of that era’s dearth of technology, and which today is the multiplying factor everyone knows.

    On the flip side – I would, perhaps naively, guess that those miscreants will be outnumbered by better-minded others.

    On the other hand, neither is a convincingly sure bet to avoid, e.g., the sort of crap we see in politics, where some AH gets elected and wrecks things for “the enemy” for the duration of his or her term – where, even in the new order of things post-augmentation, malice gets the upper hand long enough to derail the train to whatever happy future looks like utopia to us, but is really just one more finite, mundane step toward our cosmic destiny.

  78. I think he’s just trying to pick a fight with rderkis. Don’t look for any meaning in the comments.

  79. AI and quantum computing will indeed bring super-technologies in biotech, because they will be used to find correlations and understand complexities we alone can’t. We will have and use medicine we would never have figured out all by ourselves.

    I find it ironic that an age of super-science and super-technology – super in the sense of being beyond an unaided human’s understanding – arrives in an age of mass deception and stultification of the masses, who are too used to the wonders they use every day while barely understanding them.

    That means the distrust and paranoia about medicine and modernity will continue and grow.

  80. It would be beneficial to society if that progress were slowed so that its fruits manifested beyond your lifespan. A high IQ is not a cure-all; it will simply magnify the existing spectrum. Imagine yourself with a much higher IQ: will it end your chronic tendencies toward irrationality and stupidity? Of course not.

  81. What will define the 21st century most is disillusion with the notion of technology-controlled people: the understanding that we no longer need a mega-society to live better, and that it rather stands in our way. We are closing a cycle and reaching a point where, with the help of technology, we can provide for all our needs with great ease, without a mega-system to support us – one that creates hierarchies of control, over-specialization and alienation, competition, fear and artificial scarcity. We can bring back our true natural motivators, taking care of ourselves and our communities, by getting organized into self-sufficient communal units as we did through most of our history, living in much closer proximity to the natural world that we are part of. This is a higher form of intelligence and health!

  82. Here’s the ginormous question: will enhanced human intelligence precede human-level AI? Or will the machines reach our level before we can augment our brains?

    Either way, I agree that it will mark a monumental shift in our world. But of the two, I believe human-level AI will cause a more profound change. Machine brains can be replicated almost instantly and without limit. And they do not require wages, vacations, or humane working conditions.

  83. At what point do you see enhanced human intelligence being developed for adults, whether through nootropics or genetics? At that point, everything we think we know about the future will change. It would seem to me that that will be the point of the singularity for us (now), because we cannot possibly see the future when we are 5X or more smarter.
    Enhanced intelligence of that magnitude will lead almost instantly to the elimination of aging, to fusion, to re-terraforming the Earth’s atmosphere, etc.

  84. What happened to the virus-decoy technology Ido was working on? If memory serves, he was also working on nanobot surgery technology. An update on the progress in those areas would be welcome. A lot of Next Big Future breakthroughs make a big “splash” when they are first presented but then seem to disappear down the memory hole. Even if a breakthrough goes “bust,” it would be nice to get “closure” on it for peace of mind.

  85. Crazy fun times ahead. I still lean towards singularity timelines.

    PAST SINGULARITIES (3 most recent)
    1815 – The Industrial Revolution
    1935 – Electronics & Computers
    1995 – World Wide Web
     
    I am not saying this is the order of future singularities, but I choose these because each supports the next. It might also be possible to be within a “double singularity.”
     
    FUTURE SINGULARITIES
    2025: Full Automation (cognition-based) – On a scale and at a speed of implementation never seen before, a perfect storm of technologies creates automation that reduces or eliminates a multitude of occupations, and capital-based income grows hugely, in inverse proportion to wage-based earnings as a share of all earnings.
     
    2040: AI – More than just AI, this is SI, synthetic intelligence: not a workaround that achieves results similar to what a human could produce, but the real deal – it’s just not made of animal flesh.
     
    2047: Biological (advancements that lead to longevity increasing more than one year per year)
     
    2050: Mind-to-Mind? Man-Machine? Nano replacement of cells?
     
    2052: Singularity (with a capital S?)
     
    2053: …
    2053: ….
    2053: …..
    2053: ……
    2053: …….
    2053: ……..
    2053: ………

  86. Get on with the enhanced intelligence, please! Enhanced intelligence will forever be seen as man’s single GREATEST achievement, and we are almost there. Quantum supercomputers will make genetic engineering possible with their data-crunching power.

    Some of you will take great exception to the idea of enhanced intelligence (till yours is enhanced), but it is progress and it WILL happen. Progress has never been stopped in the history of mankind; it has sometimes been temporarily slowed, but it has never been stopped.
