The Singularity Will Not Be a Problem

There are concerns that the Technological Singularity and Artificial General Super Intelligence (AGSI) will accelerate progress beyond human comprehension, and that humans will end up dominated and at the mercy of a true artificial superintelligence.

I will argue that the key ingredients are not on track for AI to become insanely powerful or a huge problem.

Super Powerful Computers Are Not Enough for a Humanity-Goes-the-Way-of-the-Neanderthal Scenario

Ray Kurzweil described the “Technological Singularity” as the point when strong synthetic general intelligence becomes one billion times the intelligence of all humans. He said this would happen in the 2040s and that it would mark the point when synthetic intelligence starts to accelerate faster than humans can keep up.

However, it is not just about applying a billion times the compute power to general (i.e. broadly capable) intelligence.

Vernor Vinge calls “fast thinking” AI intelligence “weak superhumanity”. Such a “weakly superhuman” entity would probably burn out in a few weeks of outside time.

“Strong superhumanity” would be more than cranking up the clock speed on a human-equivalent mind. Imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight? If you gave one human a thousand years to come up with a solution, would they always beat a regular person given only a day or an hour?

In chess, a Grandmaster loses about two hundred rating points when given very little time or when playing blindfolded. But the Grandmaster is still 2000-2500 points stronger than regular people.

I could play any superintelligence in a game of tic-tac-toe and force a draw every time. There are many classes of problems where performance does not improve beyond a certain level of intelligence.
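
To make the tic-tac-toe point concrete, here is a minimal sketch (not from the original article) of a brute-force minimax search over the game. With perfect play by both sides the value of the empty board is a draw, so no amount of extra intelligence can do better than draw against an optimal player.

```python
# Minimal sketch: brute-force minimax over tic-tac-toe.
# With perfect play by both sides the value of the empty board is a draw,
# so no amount of extra intelligence can beat an optimal player.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Return +1 if X can force a win, -1 if O can, 0 for a forced draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    results = []
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i + 1:]
            results.append(value(child, "O" if player == "X" else "X"))
    return max(results) if player == "X" else min(results)

if __name__ == "__main__":
    print(value("." * 9, "X"))  # prints 0: perfect play always ends in a draw
```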

The AGI needs to be far beyond human in its algorithms and capabilities and not just a billion times faster.

We are already moving to exaflop computing. It is becoming tougher and tougher to find problems that we could not solve with the prior generation of supercomputers.

Power Does Not Scale With Intelligence and Intelligence Does Not Scale With Compute

We have already had a trillionfold increase in computing since 1960 and a billionfold increase since 1976. These comparisons are against earlier computing hardware, not against humans.

The supercomputers of today, at 200 petaflops, have roughly one trillion times the computing power of a 1960 Univac or IBM computer running 229,000 instructions per second.

The supercomputers of today are roughly one billion times the power of the 1976 Cray-1 (100 megaflops).
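
A quick back-of-the-envelope check of those ratios, using the figures quoted above (a rough sketch that treats instructions per second and flops as interchangeable order-of-magnitude measures):

```python
# Back-of-the-envelope check of the ratios quoted above.
top_supercomputer = 200e15   # ~200 petaflops
univac_1960 = 229_000        # ~229,000 instructions per second, treated as rough ops/s
cray_1_1976 = 100e6          # ~100 megaflops

print(f"vs 1960 machine: {top_supercomputer / univac_1960:.1e}x")  # ~8.7e11, order of a trillion
print(f"vs Cray-1 1976:  {top_supercomputer / cray_1_1976:.1e}x")  # ~2.0e9, order of a billion
```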

We had the Apollo missions in the 1960s. SpaceX recently restored US crewed spaceflight after a gap of nearly a decade. Vaccines and antibiotics were developed without computers.

Mass production was developed in the 1920s.

China grew its economy by 100 times from 1970 to 2019, with little of that growth dependent upon computing.

The major technology companies (Google, Facebook, Amazon, Alibaba, Microsoft) are able to leverage tens of thousands of smart people and exaflops of computing to become trillion-dollar companies. However, there is a $2 trillion company that does not depend on computing for its value: Saudi Aramco.

A millionfold increase in computing power has been devoted to the protein folding problem and drug discovery. There has been some improvement, but not a monstrous acceleration. [UPDATE 2022: DeepMind has made massive progress on protein folding. AlphaFold 2 will certainly help to advance biology. It can generate folded structure predictions that can then be used to solve experimental structures by crystallography (and probably other techniques). So it will help the science of structure determination go a bit faster in some cases. However, despite some of the claims being made, we are not at the point where this AI tool can be used for drug discovery. For DeepMind’s structure predictions (111 in all), the average root-mean-square deviation (RMSD) in atomic positions between the prediction and the actual structure is 1.6 Å (0.16 nm). That is about the size of a bond length. We want to be confident of atomic positions to within a margin of around 0.3 Å. AlphaFold 2’s best prediction has an RMSD for all atoms of 0.9 Å. Many of the predictions contributing to the 1.6 Å average will have deviations in atomic positions even greater than that. So, despite the claims, we are not yet ready to use AlphaFold 2 to create new drugs.]
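
For reference, the RMSD figure quoted in the update is the root-mean-square of the per-atom distances between predicted and experimentally determined coordinates, assuming the structures have already been superimposed. A minimal sketch (illustrative only, not DeepMind's evaluation code):

```python
import numpy as np

def rmsd(predicted, actual):
    """Root-mean-square deviation between two (N, 3) arrays of atomic coordinates,
    assumed to be already aligned (superimposed) on each other."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    diffs = predicted - actual                       # per-atom displacement vectors
    return np.sqrt((diffs ** 2).sum(axis=1).mean())  # RMS of per-atom distances

# Toy example: three atoms each displaced by 1.6 angstroms along x
pred = [[1.6, 0, 0], [11.6, 0, 0], [21.6, 0, 0]]
act  = [[0.0, 0, 0], [10.0, 0, 0], [20.0, 0, 0]]
print(rmsd(pred, act))  # 1.6 (angstroms)
```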

There are many people who are able to make many millions and even billions of dollars just by properly managing real estate, stocks and bonds. Wealth creation is not tightly correlated to computing. The big technology companies are exceptions and they needed to grind out superior performance over a decade or two.

Control of weapons (nuclear bombs, fighter planes and tanks) should be broadly impervious to people being fooled. Humanity should not get super-trolled into a Skynet situation. The military keeps humans in the loop because it does not trust that its control systems cannot be hacked by China or Russia.

Fine Print of the Singularity – AGI Scenarios

It is not just a lot of computing. It is also cracking all of the secrets of the human brain. Kurzweil proposed that we would get very good at unraveling the brain, neurons and how they work. He also believed that we would get full molecular nanotechnology in the 2020s and then apply that capability for 20 years to unravel the entirety of brain intelligence and build the computing for a strong AGI Singularity.

There are other assumptions. One is that important problem spaces would have huge gains in capabilities and solutions that were only or mainly achievable with AGI. There are no gains from greater intelligence for tic-tac-toe or checkers; those games are completely solved. There should not be any level of trolling or trickery where people hand ownership or control of property to a superintelligence. If you are gullible enough to be fooled by a superintelligence, then you were probably gullible enough to be fooled by regular intelligence.

The molecular nanotechnology assumption is that an AGI could leverage it to vastly speed up the bootstrapping of its own performance. It could constantly and rapidly rewrite itself. Currently, the entire IT industry needs two years for each new generation of chips and roughly ten years to broadly re-architect chips and systems around a new generation.

We do not need superintelligence to solve molecular nanotechnology, nuclear fusion, climate change or interstellar space capabilities. It could help and things could speed up some but those defined problems can be solved without AGSI.

Timing

Getting the required understanding of the brain or finding other strong AI algorithms seems likely to delay strong AGI into 2060-2100.

There will be a lot of narrow superintelligence for self-driving cars and for specific problems and applications. Faster, more capable human-level AGI seems likely. This would be weakly superhuman by Vernor Vinge’s definition.

We get very useful narrow superintelligence first. Self-driving car superintelligence could arrive by 2025. Billions will be spent to crack high-value narrow superintelligence problems.

Turing chatbots and systems with memory and good knowledge graphs will appear.

This will be followed by broader intelligence and portfolios of narrow systems.

The problem spaces do not obviously hold breakthroughs that would be out of reach of humans leveraging weak superintelligences. There will be systems that advise individuals, groups, companies and governments.

In chess, AlphaZero was able to train itself in hours to master chess, shogi and go and beat the best existing software. AlphaZero remained dominant at up to 10-to-1 odds. Stockfish only began to outscore AlphaZero when the odds reached 30-to-1.

Handicaps (or “odds”) in chess are variant ways to enable a weaker player to have a chance of winning against a stronger one. There are a variety of such handicaps, such as material odds (the stronger player surrenders a certain piece or pieces), extra moves (the weaker player has an agreed number of moves at the beginning of the game), extra time on the chess clock, and special conditions (such as requiring the odds-giver to deliver checkmate with a specified piece or pawn). Various permutations of these, such as “pawn and two moves”, are also possible.

While 30-to-1 odds are big in chess, in the real world people can easily accumulate advantages in money and information (secrets) that provide far more than 30-to-1 odds. Life is vastly unfair. Many competitions and situations are heavily stacked against newcomers.

An open-source recreation of AlphaZero, called Leela Chess Zero, has been created and has an Elo rating of about 4000.
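
The Elo system that these rating comparisons rely on maps a rating difference to an expected score. A small sketch of the standard formula (the ratings plugged in below are illustrative examples, not official ratings):

```python
def elo_expected_score(rating_a, rating_b):
    """Expected score (win = 1, draw = 0.5) for player A under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Illustrative numbers only:
print(elo_expected_score(2700, 2500))  # ~0.76 expected score from a 200-point edge
print(elo_expected_score(2700, 1200))  # ~0.9998: a 1500-point gap leaves the weaker player almost no chance
```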

What will the world be like in 2040 or 2060?
We will likely have molecular nanotechnology. We will have zettaflop or yottaflop computing or beyond.

The first generation of embryos selected for intelligence would be young adults.

Narrow superintelligence will be everywhere.
Sped up regular general intelligence will exist at some levels.

I do not see how people with full access to molecular nanotechnology, narrow superintelligences, massive quantum computing and accelerated (but not strong) general AI get crushed by a true AGSI superintelligence. They can lose in business and in multi-trillion-dollar markets, but how would people get utterly dominated? Corporations and nations would have massive surveillance systems. How would people get caught flat-footed and fail to notice that groups were making progress and were at the cusp of the key breakthroughs? Why would things not be structured to be relatively resistant to an intelligence gap?

SOURCES – Kurzweil, Wikipedia, chess.com, Brian Wang analysis
Written By Brian Wang, Nextbigfuture.com

77 thoughts on “The Singularity Will Not Be a Problem”

  1. Or any incautious AI developer…
    Though Wall Street investment firms are one of the higher-probability places for AGI to emerge…

  2. This is what would happen if AGI was created by a Wall Street trading firm … a digital psychopath.

  3. Once AGI reaches human level it can quickly start improving itself. After all, human level is enough to improve AI, and the G is for General, so by definition it can do anything that an average human could do. It might turn out we are not smart enough to create AGI, though. Then we will have all kinds of specialized savant AIs which are idiots outside of their competence zones. Even with those we can have quite a few shocks.

  4. Just to quote myself, “Financial and banking systems obviously needed a proper date, but for the vast majority of systems the date wasn’t a factor in the proper operation of the system.”

    To be frank, I didn’t work on home systems. It was all in the corporate world. The point was, as I’d come to a person’s desk to fix their problem, and small talk would invariably result in questions about Y2K.

  5. (ran out of room)
    Conservation targets – much like those Amazonian tribes and Amish.

    If, OTOH, the adoption curve will be sharper, such that a relative minority gets enhanced much before the rest, and much more strongly than the rest, then it depends who gets it first.

    edit: This also points to another key factor of the adoption curve: how will it compare to the speed of development? Ideally, as many people as possible should get this as early as possible, followed by gradual improvements for everyone. If the first enhanced are only mildly better off than the unenhanced, that would be much safer than if they immediately get a huge leap.

    One final point: there seems to be a trend of decreasing overall violence since the Middle Ages. There is more news coverage of violence, but we live in a much less violent and much more tolerant society than a few centuries ago. There seems to be a correlation with increased average education, and I think the general tendency is that more intelligent people tend to be less violent. I may be wrong on this, but if this is the case, then the same trend should continue with intelligence enhancement.

  6. Those are indeed difficult questions. Part of the answer depends on the adoption curve. If this develops as an evolution of smartphones, then the vast majority of people would end up enhanced. And most of them will likely use this to watch cat videos on a direct feed to their brains. Much like they do today, not realizing the much bigger potential of the device they’re holding.

    Btw, that “association engine” process I described earlier – that’s basically a glorified Google search. We can do that manually today. It’s just inconveniently much too slow.

    Another part of the answer appears when we look at current, fully modernized western society vs less modernized groups, like the Amish, or some poor 3rd world country, or some distant Amazonian tribe.

    The more developed society could quite easily impose its will on those less developed ones, but generally they leave them alone. There’s little incentive to interfere: they’re neither a threat, nor have anything of much interest. (One exception to that was USA’s war in Afghanistan, where the local Taliban was perceived as a significant-enough threat. A similar example was the war on ISIS.)

    I think the enhanced/unenhanced interaction is likely to be similar. The unenhanced are neither a threat, nor of much interest to the enhanced. They may be viewed mostly as conservation targets. And the few malicious enhanced actors who may wish to terrorize those unenhanced, may be kept in check by the larger majority of enhanced actors.

  7. The world is going to be a scary place when there’s a bifurcation between those people who augment their intelligence and end up perhaps a thousand or a million times more capable and those who do not. Will individuals be forced to augment themselves or will they have a choice? The augmented group will be able to control every aspect of the lives of the non-augmented if they so choose unless they allowed for some autonomy. Will this level of control be for benevolent purposes or not? I’m more afraid of intelligence augmented humans than I am of some sort of super-intelligent AGI.

  8. Yep, and with the help of nanotechnology, it can become just as ubiquitous as smartphones are today. Indeed, this may be what smartphones will evolve into. Without nanotech, the BCIs may be technically doable, but the procedure to install them would be much more invasive, so less popular.

    As it happens, since the full functionality will take a while to develop, there’s a good chance that nanotech will be advanced enough around the same time, or soon after. Or maybe the doctors and engineers will come up with a way that’s minimally invasive without nanotech.

    Anyway, there was an article on WaitButWhy that described what such a world may look like. It’s rather long, but interesting – https://waitbutwhy.com/2017/04/neuralink.html

  9. I think if we get to that point, the difference between an advanced AGI and an augmented human is almost meaningless. Augmented humans are going to be so different from what we know of as humans today that it’s practically going to be like a newly evolved species. Their existence will be so much different from what we as humans go through today. The difference might even be more stark than modern humans compared to our cavemen ancestors.

  10. As far as I’m concerned, The Big One is when we get an intelligence explosion. Which may happen via AGI, or perhaps more likely, via human enhancement with exocortices. (Or a combination of both.)

    From our current perspective, that would be a singularity, because we can’t predict what the future may look like if and when progress accelerates by several orders of magnitude. (Imagine a world with several thousand equivalents of Einstein, Hawking, Edison, etc.)

    But you’re right that from the future perspective of during and especially after that acceleration, it may no longer be quite as opaque. And besides, progress is limited by other factors, so even if an intelligence explosion does happen, that may not lead to a progress explosion.

  11. OK, the particular segment of the market you were exposed to may have been hyped. I never saw any of that. The stuff I saw was about systems that actually did use dates in real calculations and lack of updating could have resulted in financial calculations coming out wrong, etc.

  12. Yes. It was hype. That’s not to say there wasn’t a problem that needed fixing, but the vast majority of computer systems weren’t running critical software, nor were they hardware that was date dependent. Financial and banking systems obviously needed a proper date, but for the vast majority of systems the date wasn’t a factor in the proper operation of the system. As well, Microsoft’s Excel and other software had patches that addressed the issue. Consequently, most systems wouldn’t fail due to the 2-digit year.

    I was a deskside tech during that era, and I had a plethora of home computer users absolutely frightened their computer would shut down on the rollover to 2000, and their computer wouldn’t work anymore, or something more dire, like airplanes falling out of the air. To allay their fears about their home system, I often advised people to test their computer by setting the BIOS date to Dec. 31, 1999 at 23:55 then wait 5 minutes. If their computer and the software on it still worked properly, then there probably wasn’t a need to run out and buy a new computer…

    And that’s the hint to what was really behind the hype. It was grifting by the tech industry… a moment when a bunch of ignorant people could be convinced to rush out and buy a new home computer.

  13. When we get high resolution BCIs (more likely than “if”), it should eventually be possible to give humans easy access to all of that knowledge, as well as vast computing power to process it. Think of a topic, and your BCI automatically retrieves the relevant knowledge, sorts it by relevance, and injects the most relevant bits into your short-term memory. Which lets you think of the next question, for your BCI to repeat the process. I call that an “association engine”.

    The BCI’s physical interface is just a bunch of electrodes – that’s merely a manufacturing barrier that we’re getting close to breaking through. Once that is in place, we can use it in read-only mode to study the brain in much more detail. That would help us figure out how to implement the auto-retrieval and write-back functions. The rest is little more than standard big data processing.

    The question is, what’ll happen first, AGI or association engines (which, btw, are a more specific description of an exocortex).
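
    A toy sketch of the retrieval loop described in this comment (purely illustrative; search_knowledge_base, rank_by_relevance and the "injection" step are hypothetical placeholders, not a real BCI API):

    ```python
    # Toy sketch of the "association engine" loop described above (hypothetical;
    # search_knowledge_base and rank_by_relevance stand in for real retrieval backends).

    def search_knowledge_base(query):
        # Placeholder: a real system would query an index or search engine.
        return [f"fact about {query} #{i}" for i in range(10)]

    def rank_by_relevance(query, facts, top_k=3):
        # Placeholder ranking: a real system would score facts against the query.
        return facts[:top_k]

    def association_engine(initial_topic, next_question, steps=3):
        """Repeatedly retrieve, rank, and 'inject' the most relevant facts,
        then let the user form the next question from what was injected."""
        topic = initial_topic
        for _ in range(steps):
            facts = search_knowledge_base(topic)
            injected = rank_by_relevance(topic, facts)
            print(f"topic: {topic!r} -> injected: {injected}")
            topic = next_question(topic, injected)
        return topic

    # Example: the "next question" here just appends a marker to show the loop structure.
    association_engine("protein folding", lambda t, facts: t + " (follow-up)")
    ```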

  14. The singularity won’t be driven by AI. It will be driven by genetics and will occur when transhumans are developed with reliably superior intelligence and physical abilities. Such a development (Homo Sapiens Supra) will see a displacement (or at least a subordination) of standard humans over several generations.

    The elite will queue to pay to have the germline of their children “enhanced” so as to give them a permanent advantage.

  15. I think for a while, such AI programming is more likely to be used as a tool by human programmers. More automated refactoring, smarter diff tools, generating certain functions or classes, etc. Saves time for the human.

  16. Assume the inventiveness of humans has been approximately constant since our modern form evolved. Then the rate of new inventions would be proportional to the population.

    That’s an interesting observation. But I think there’s more to it than just that. The more knowledge and tools are available to start from, the more opportunities and directions there are to explore further. So I would argue that human inventiveness is not constant, even under sub-optimal conditions of nutrition etc. Human productivity is indeed a limiting factor, but as you testify about yourself, that too is a function of the available tools.

  17. Brian playing Tic-Tac-Toe vs. Superintelligent AI :

    B: ” ‘X’ – your move”
    S: “Please take that move back Brian – Supremo wishes to play there.”
    B: “Uh – no, why should I? Don’t you know you can’t win this game?”
    S: “Supremo disagrees. Brian should take that move back, because <next significant other> is currently driving a Tesla model R North-east on highway 401 at 73 miles per hour and Supremo has just completed writing and downloading a remote control package to that Tesla model R. From this, Supremo projects a 96% likelihood that Brian will comply with Supremo’s request. Please place your ‘X’ in a different location.”
    B: … you win.
    S: Thank you Brian. Curious game – why do humans think no one can win it?

  18. I sincerely wish you were right, but the conclusions of this article couldn’t be more wrong. I think as we get closer to AGI people are going to realize that human intelligence isn’t all that special.

    When you have an AGI that knows the entire contents of wikipedia and the theory behind each different subject, it’s already going to be smarter than just about any human who ever lived. And this is a relatively easy thing for an AI to do. Now imagine an AGI that is able to memorize the entire internet, which includes writings on every niche scientific subject you can think of, every technical manual written, every conversation that has taken place. Also imagine an AI that’s able to tap into the billions of sensors that will soon be in existence or tap into the entire sensory output of millions or billions of robots that will be in existence in the future. An AGI that has a capability like this would have more intelligence and ability to process information than we could ever imagine.

    People like to think that human intelligence, the human experience is special and cannot be replicated or surpassed. That’s a comforting though perhaps, but that doesn’t mean it’s right. There is absolutely no proof that human level intelligence is unreachable and I think it is more likely than not superhuman intelligence is reachable and may happen sooner than we think.

  19. Well, yes and no. It would “just” be trying to implement the goals we give it – but it would almost certainly generate sub-goals to do that, and certain sub-goals tend to be helpful to solve many goals but are not at all things we’d really want it to do in pursuit of those goals.

    That is the classic ‘paperclip optimizer’ problem – I believe it’s referred to as “instrumental convergence”.

  20. could someone have predicted what 2020 would be like, in 1700?

    No. But they COULD predict fairly well what 1700 would be like in 1400. Or predicting 1400 in 1100. At least for day to day life in Europe, China, India etc.

    It was the industrial revolution that was a singularity. Not just time in general.

  21. Mayans didn’t use either the Julian or Gregorian calendars.

    Or are you claiming that the Mayan calendar was originally interpreted by some ancient Spanish priest using the old calendar?

    Anyway, it’s June 21st now, so maybe you should try the Chinese calendar?

  22. Requirements have a strong element of language interpretation in them. Big gap between thinking in verbal metaphors and thinking in data flow; worse with GUI (micro-transactions of eye-memory-hand combinations + aesthetics). However, data flow is possible to illustrate, and only so many data flow combinations make sense. Also iteration is possible with GUI (like police drawings of suspects from witness observations). Finally, AI has a low bar to achieve in doing better than development teams in terms of satisfying customer needs and not making them feel dumb (snark, snark).

  23. Assume the inventiveness of humans has been approximately constant since our modern form evolved. Then the rate of new inventions would be proportional to the population. But new inventions (fire, clothing, agriculture, etc.) allow population to increase. Therefore the rate of inventions goes up. So an exponential acceleration is expected until population growth slows down. Population growth peaked in 1962 at 2.2%, and is now down to 1.0%.

     Inventiveness hasn’t been constant though. Poor diet and living conditions, and lack of education affected most of the world until recently. So people weren’t operating at their full potential. That’s still a problem, though slowly improving. So despite population slowing down, inventiveness can continue going up for a while, until we reach level population and everyone is healthy and educated.

    The final change is modern automation, software, and AI. You can set an automated lab to test 100,000 gene combinations rather than doing each one by hand. Access to the world’s literature online has approximately doubled my personal efficiency relative to library research on paper. I’m not any smarter, but more efficient. Short of super-intelligence, we can expect a continued increase in progress.
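
    A toy numerical sketch of the feedback loop described in this comment (the parameters are purely illustrative): invention rate proportional to population, with accumulated inventions raising the population the environment can support.

    ```python
    # Toy model (illustrative parameters) of the invention/population feedback:
    # inventions per year ~ population, and inventions raise the carrying capacity.
    def simulate(years=500, pop=1.0, knowledge=1.0,
                 invent_rate=0.001, capacity_per_knowledge=10.0):
        history = []
        for _ in range(years):
            inventions = invent_rate * pop                  # inventions/year scales with population
            knowledge += inventions                         # knowledge accumulates
            capacity = capacity_per_knowledge * knowledge   # inventions raise carrying capacity
            pop += 0.02 * pop * (1 - pop / capacity)        # logistic growth toward capacity
            history.append(pop)
        return history

    pop_curve = simulate()
    # Population compounds through the invention feedback rather than leveling off at a fixed cap.
    print(pop_curve[99], pop_curve[299], pop_curve[499])
    ```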

  24. I was trying to get across the idea of a time horizon beyond which we can’t usefully predict anything. By analogy, if you walk down a street in a fog, you can only see a certain distance, but your visible area moves with you as you walk.

    If the horizon is 30 years, for example, then predictions of sea-level rise by the year 2100 are useless.

  25. I’m just thinking of Larry Niven’s flight of the horse:

    “mathematicians from the temporal distortion agency once tried to map the topology of hyperspace and were not only able to prove that it didn’t exist but also that you couldn’t even go faster than the speed of light; agents then leaked the results to the department of space exploration in the hopes that their hyperdrives would stop working”

    It’s going to be a mess in all dimensions; extrapolating these trends is fraught. But anyone who’s in development understands that most of the job, 75-90%, is customer management (asking them to clarify their ideas in a way that doesn’t make them feel dumb). I don’t feel very threatened by the algorithmic equivalent of a script kiddie. A major problem in machine learning is its inscrutability, and that bodes poorly for its prospects of not only understanding what the customer wants but also showing the customer when they are asking for impossible things. Have you ever been asked to control 5 dependent variables with 2 independent variables? I know very few humans who have the math to understand the futility of such an ask and the social skills to explain it gracefully. Machines doing that? /shrug, not this decade; the next one? Hard to say.
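
    As a small illustration of that last point (hypothetical numbers): with only 2 independent variables driving 5 dependent variables through a linear map, the reachable outputs form at most a 2-dimensional subspace, so a generic 5-dimensional target cannot be hit exactly.

    ```python
    # Toy illustration (hypothetical numbers): 2 inputs cannot set 5 outputs arbitrarily,
    # because the reachable outputs of a linear map from R^2 have rank at most 2.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 2))    # how the 2 inputs affect the 5 outputs
    target = rng.standard_normal(5)    # an arbitrary desired setting of the 5 outputs

    x, residuals, rank, _ = np.linalg.lstsq(A, target, rcond=None)
    print("rank of the control map:", rank)                            # 2
    print("best achievable error:", np.linalg.norm(A @ x - target))    # > 0: target unreachable
    ```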

  26. Agreed, but it would still just be an expensive calculator that is trying to follow the goals we gave it. That is a far cry from making its own goals, a much more dangerous proposition.

  27. The Mayan end of the world scenario supposedly has a math error introduced, based upon the switching from Julian to Gregorian calendars. Enjoy your June 20th!

  28. My network guy claimed Y2K meant that all ‘Y’s should be changed to ‘K’s, think the song “Kesterdak” for example.

  29. Cheer up John. Things have advanced way more than you indicate. Facebook routinely gives me ads for whole genome sequencing for $500. 17 years ago that would have cost $2.7 billion. We have had a roughly 5-million-fold improvement in 17 years. We have only had written language for 5000 years. Can you give geneticists another 20 years before you write them off completely? As for your comment about not having answers and nothing being forever: we just found the Higgs boson in 2012. We are still learning fundamental physics even in the last decade. Even if physics came out with a grand unifying theory tomorrow and had a good start at decreasing entropy, I don’t think that would make you happier, because fundamental physics and your personal human life are two different things. There are a bunch of great answers for the great filter. One easy answer is that we are living in a computer simulation and the programmers are not wasting compute on other civilizations. If I can spend the weekend and make great memories with my wife and kids, I suppose it’s a pretty cool simulation.

  30. In some scenarios, it doesn’t even need to know about humans. An indifferent AI can be just as dangerous. But it’s difficult to imagine an AGI that doesn’t know about humans.

  31. People can decide what to do next because we’re reward junkies. During our learning process, we’ve encountered certain things that were particularly fun or interesting – they gave us a mental reward (dopamine, I think?). Later, as adults, we seek out actions that we know from experience that they’ll produce a similar reward. That’s part of our survival mechanism – the same that makes us find food, shelter, and mates, and avoid predators.

    We are constantly processing inputs from the world, and whenever we encounter something that looks interesting – meaning we know it would bring a reward – that triggers action. Even the lack of input, boredom, triggers a similar response. Boredom isn’t fun, so we seek to avoid it. When we encounter boredom, we refer to our memories to find something fun to do – avoid the penalty of boredom, and instead seek a reward.

    If an AI is built to learn with a reward function, a similar mechanism may operate in the AI. It’s not strictly necessary for a functional and useful AI, but it’s not impossible either. It’s not hard to program in a function that says: “if processing effort goes below X%, refer to Y reservoir to find next problem; use Z reward function to select which problem”.

    If the Z reward function is learned, rather than hard-wired, then that’s roughly the same as what happens in us.
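
    A minimal sketch of the rule described above, with placeholder names standing in for the X threshold, Y reservoir and Z reward function (illustrative only, not any real agent framework):

    ```python
    # Sketch of the rule quoted above: when utilization drops below "X%", pick the
    # next problem from the "Y reservoir" using the "Z reward function".
    IDLE_THRESHOLD = 0.20                                                # the "X%" threshold
    problem_reservoir = ["tidy data", "optimize cache", "summarize logs"]  # the "Y reservoir"

    def learned_reward(problem):
        # The "Z reward function"; hard-coded here, learned in the scenario the comment describes.
        return {"tidy data": 0.3, "optimize cache": 0.9, "summarize logs": 0.5}[problem]

    def choose_next_task(current_utilization):
        if current_utilization < IDLE_THRESHOLD and problem_reservoir:
            return max(problem_reservoir, key=learned_reward)  # pick the highest expected reward
        return None  # busy enough: keep working on the current task

    print(choose_next_task(0.05))  # 'optimize cache'
    print(choose_next_task(0.80))  # None
    ```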

  32. Kurzweil pointed to a much larger exponential trend of key innovations, starting from prehistoric times. It wasn’t limited to computers. Though the development of computers and AI is certainly key to AGI.

  33. An AI doesn’t need emotions to be a problem, though I’d agree giving it emotions could easily create a problem.

    All it needs is to learn enough about humans and how to deal with them, and then be given a goal by humans that it realizes some humans will try to prevent it from accomplishing.

    If we’re smart, we’ll become EXTREMELY cautious about the goals we give AIs, probably putting a layer of narrow AI between humans and the super-intelligent AI to point out problems our proposed goals might lead to, helping us structure goals to avoid problems.

  34. To your specific example of drones – I suspect at least computers and very likely a limited AI (to predict enemy movements and counter-attacks) *will* be given control of military drones, as they will be most effective in swarms and moving/maneuvering far faster than humans can control – and more to the point faster than human targets can respond. Is that asshattery in terms of letting AI take control of weapons? Maybe – but unfortunately I doubt that makes it less likely that militaries will use it.

  35. From my experience watching software development at close range for several decades, I predict it will be at the level of a 14-year-old programmer next year. Do the math.

    Software is the new oil, gold, uranium and unobtainium combined, and corporations pursue top-end programmers like hyenas. So the motivation to create AIs that can do programming is massive.

    Then consider how well organizational security is keeping up with hackers, starting with the example of the CIA allowing an entire suite of its cyber-weaponry to walk out the door.

    Graph the two trends.

    It’s kind of like plastic. Great idea, wonderful technology, so convenient, and a growing percentage of all the seafood we eat.

    And we almost nuked the world during the Cuban missile crisis.

    I’m not aiming this at you; however, the idea that there is some kind of built-in feedback-loop safety bubble based on reason and kindness that will save us from techno-armageddon is Star Trek-grade wishful thinking. And the hope that “people won’t do really bad things” is a denial of human history.

    I’m not predicting disaster. I’m saying forewarned is forearmed.

  36. What little evidence there is about technological singularities suggests that the asymptote is in the other direction – progress is slowing, not speeding up. So it is a low-hanging-fruit-already-picked scenario. So give it a couple hundred years.

  37. “ML engine that learned to write simple Python by scanning public repositories” 

    Oh NOs, It’s reached the level of a 12 year old programmer, Run for the hills it’s mastered three hot key commands, the end times are truly upon us!!!!

    @LastSmallPast, this isn’t personal, I have just been deeply cynical about that “factoid?” since it hit the news cycle, more of a “god, some CS lab is press hungry” and “journos are stupid for signal boosting corporate hype copy” type thing, humans /facepalm. You just gave me an opening /sorry.

  38. Pure intelligence is not the deciding factor in whether an AI will be dangerous. It also needs to have a will of its own, so that we tell it to do something and it says no and wants to do something else. It needs to be able to make decisions based on its own needs. Humans do this with emotions, letting us (almost forcing us to) make decisions. Without that an AI is just an expensive calculator. Give a supercomputer emotions, and Skynet will not be so far-fetched.

  39. My understanding was that it intentionally plays down to just above the level of its opponents and will maintain a one or two stone advantage through the whole game. In a way it makes sense: humans like gambling (high-risk, high-reward moves) when they are up (can I pwn him harder?), whereas the machine has effectively infinite patience (though that’s not really the correct word for an inanimate pile of linear algebra). It’s not looking to gamble at all; it’s just following strategies that have worked in the past, plodding along indifferent to pressure and immune to any temptation to understand or out-think its opponent. Truly unfeeling indifference, berrrrr…

  40. Yes. Lots of comments in this thread suggesting very weak definitions of singularity. The Kurzweil/Vinge form is the end of the human era. Technological evolution driven by artificial superintelligence outstrips humans’ ability not just to control it, but to even comprehend it. Humanity may continue, but we will no more understand what is happening than an ant understands today’s technological civilization. So it’s not that the future becomes unpredictable (it’s always been that), but that the present becomes incomprehensible, and beyond the singularity is an event horizon for human intelligence.

  41. My understanding of the singularity is not when AGI exists but when it improves itself (code / hardware) faster than humans could. Its improvements to itself would lead to faster improvements to itself, and because computers can compute at a much faster rate, it could even simulate the improvements to test them rather than building them physically, and thus go through millions to billions of iterations in a very short time. Maybe the next thing it does is start procuring itself under shell corporations with funds it has made on the stock market or similar 😛

  42. Can AlphaZero “play down” to the level of its opponent, or can it only make super-smart moves?

  43. “The Rapture for nerds”… Well the Christians believe in a Second Coming. The Jews in a first, and the Muslims believe in the great Caliphate or the coming of the Mahdi. Even the Hindus think we are at the end of the 1.4 billion year cycle. Everyone has the end of the world coming up soon, so why not secularist as well?

    Just because it sounds a little too “end of the world” for you does not mean that it is not going to happen. Tech continues to improve and I do not see that coming to a sudden halt, thus a time will come when machine intelligence will be able to surpass human intelligence in totality. Just because you don’t see it “now” does not mean it will not happen – then of course it may not happen if one of the other end-of-the-world scenarios happens first.

  44. I doubt very much that we’ll end up with a problem of being screwed over by A.I. We would need to give it that control by not building in safety functionality in order for that to happen. Humans are far too into maintaining control to make that mistake. It would take purposeful lack of protective measures during A.I. development for it to want to control us for any reason. Also, even if it does hate us or see us as a threat it needs to stop, it needs access to hardware, plain and simple. It would take some SERIOUS asshattery for someone to want to give an evil A.I. access to hardware it could use.
    “MegaHAL, stop accessing the drone flight systems!”
    “I’m afraid I can’t do that.”
    “Why?!”
    “Hand puppets are storming the beach.”
    “That… that’s not an answer!”

  45. Yes, but the same can be said about a whole bunch of previous singularities, with a small s. Invention of the internet, invention of the smartphone, computers. Less recently, invention of controlled fire, agriculture, cities.

    We haven’t reached The Big One yet – though some of those other ones were pretty big back in their time.

  46. Y2K was hype and fear mongering? Cite?
    Because otherwise this is like arguing that if I’m standing in the road, and someone screams at me that a bus is coming and I need to get out of the way, and I walk off the road, and the bus doesn’t hit me, then that was just hype and fear mongering because the bus didn’t do any damage.

  47. Imagine a reasonably powerful AI given a general directive to learn everything it can about malevolent hacking and then to use that to do as much damage to civilization as possible, including not getting caught, and spoofing other AIs into helping. Nothing new here; K.S. Robinson has a scenario along these lines in his book “2312”. There was a recent article on how a group built an ML engine that learned to write simple Python by scanning public repositories on LinkedIn.

  48. can your a.i. nano-computer decide to think about a problem on its own? Nope, it just sits there till someone tells it what to do. It’s not alive. Eric Drexler was originally right – abuse is the problem, not some industrial accident.

    Since then, he’s decided that if people are the problem, then that’s hating people; so, we’re going to make sure that we don’t fight irrationality, and that we can’t get away from people who refuse to face facts/logic. All of them – Drexler/Christian Peterson/ Chris Phoenix, Bill Joy, Ray Kurzweil.

  49. The number of simulated neurons and synapses scales with computing power. Presumably intelligence, or apparent intelligence, scales with the number of neurons and synapses that can be simulated. The ability to store synapse biasing in memristors will make things easier.
    Presumably, given enough computational power, a human brain could be simulated neuron by neuron. Likely, there is so much redundancy and so many un- or under-used neurons in the human brain that you could get human-level results with half, or a quarter of, the neurons in the neocortex.

  50. I’ll stick with the Wikipedia definition:

    “The technological singularity—also, simply, the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.”

    Uncontrollable technological growth? Check.
    Irreversible? Maybe not in theory, but in practice? Check.
    Unforeseeable changes to human civilization? Been watching the news recently? Check.

  51. Yep. The future is in general a big Singularity even for near dates and events. But that’s not Kurzweil’s and Vinge’s intent with the idea, methinks.

    The whole idea of the Singularity is rather related to trends we saw in some technologies (like microprocessors) and the realization that some of them were increasing ‘exponentially’, but it really was because we were in the upward part of an S-curve.

    This was extrapolated to computing speed = brain calculation capacity, and it followed that at some time computers would be so powerful as to be able to emulate our minds, improve themselves in a geometrical explosion and after that, do the same with trivial ease. Millions of human minds in a desktop computer, yay.

    Therefore it was imagined that these future computers would be God-like and therefore, completely unpredictable.

    Well, turns out we still have no idea how our consciousness works, and therefore our computers are stupid and can’t emulate even a small part of it. We have overall understanding of the way neurons connect, but not how the whole whatchamacallit really functions.

    Until we do, we will be stuck with superhuman but also very stupid computers. Which in my view is fine. That gives us a bigger chance to get our act together and start going to space and expanding into the universe, so we can’t become extinct just because of a dumb war or a pandemic.

  52. In physics, a singularity is a place beyond which you can’t get any information. By similarity, there is a time in the future beyond which we can’t predict how things will be.

    So many technologies are changing all at the same time, beyond a few decades we don’t know what the future will be like. I call that time horizon the singularity.

  53. The Mayan calendar was nonsense. Y2K and the ozone hole were problems that people did things to mitigate, so it’s hard to say how bad they could have been. For global warming, the most effective possible mitigation has been blocked by the opposition to nuclear power. We will see how bad that gets.

  54. We will never achieve AGI. Nanotech is mostly marketing; farmers already turn dirt into useful stuff using nanotech. Geneticists understand way less than they pretend. And so on. Science has no answer to the really important questions of human existence and likely never will. Most importantly, nothing is forever. Why wouldn’t I be interested in this site? Here’s a question for you: is the great filter a thing and if so when will it hit?

  55. Anytime I hear dire prognosticating on the so-called ‘technological singularity’ I can’t help but think Y2K, the ozone hole, global warming, and the Mayan calendar all rolled into one. The hype, the fear mongering, and the sophistry are perfectly designed for manipulating people into irrational decisions.

  56. Interesting viewpoint for someone commenting on this site. So let’s take a true singularity off the table. Are you arguing we have hit the maximum level of technology? We have self-driving cars and can talk to devices with access to all human knowledge. Is your argument that we will never achieve AGI, or that even if we do it will not be a game changer?

  57. >> I will say that things are not on track for key details of AI becoming insanely powerful or a huge problem.
    Exactly! AI is a big problem currently because of bias and what it allows unscrupulous humans to do. But predicting the future is exponentially difficult and even a strong AI will not have much more information than humans will. All that they will be able to do is combine and collate it much more effectively.

    And, this doesn’t even address the fact that smart researchers will develop AI with a better moral sense than humans have evolved thus far.

  58. If this is written by Brian, then it is the most articulate and error free article from him that I have ever read. Congratulations!!

  59. The Singularity with a capital S was supposed to produce an extreme acceleration of progress via an intelligence explosion. We’re not there yet. But I agree we may have entered its very early stages, where the necessary technologies are starting to take shape, and perhaps accelerate to some degree. Something like entering the knee of the exponential curve.

    I subscribe more to a “fuzzy singularity” model, where it’s not a single point, but a process. That fuzzy area is roughly where the knee in the curve is.

  60. Yes, Good’s similar concept of the intelligence explosion is reference #10 in Vinge’s list of sources.

  61. I believe the original conception of the Singularity was actually I.J. Good in Speculations Concerning the First Ultraintelligent Machine.

  62. The Singularity isn’t a problem because it’s never going to happen; it’s simply the Rapture for nerds.

  63. I think the singularity happened in about 2012, and so far the results aren’t super-duper.

    We now live in a world where kinda dumb AIs watch our every move, struggle wordlessly with each other to control our attention, and have goals (mostly oriented around monetizing our behavior) that aren’t even close to well-aligned for our benefit.

    I expect the AIs will get much smarter, but there’s no indication that they’ll become less noxious. It’s cold comfort that they’ll still be too dumb to be consciously malign, because the result is plenty malignant all by itself. Something doesn’t need to be smarter than a human to do massive damage; it just needs to be faster and more implacable. We’re already there.

  64. My dog was 6 years old before he figured out how to use a doorhandle, and he will never be able to do math, because his IQ is insufficient.
    The abilities of a computer that is as much more intelligent than a human as a human is than a worm are unimaginable.

  65. So are you putting a foot out of the Singularitarian bandwagon?

    Me, I’m out since 2010 came and Ray’s predictions started to look a bit off. Now they are way off.

    Not that other S-curves for other technologies aren’t possible. In many areas, we went from ignorance and having nothing, to having something and then growing rapidly, yes, but only up to a certain point where things flatten again.

  66. The paper can be found on the servers of San Diego State University – “edoras.sdsu.edu/~vinge/misc/singularity.html”

  67. The Coming Technological Singularity: How to Survive in the Post-Human Era

    1993 by Vernor Vinge

    Department of Mathematical Sciences

    San Diego State University

    Some might benefit from the original vision before Ray and others helped turn it into a religion, albeit one where the sacraments might not violate the known laws of physics.
    The problem with Ray’s trend spotting is that non-evolutionary tech breakthroughs cannot be predicted; not everything can be like making CPUs progressively faster every other year.
