Artificial General Intelligence Limitations

Ray Kurzweil wrote The Singularity Is Near in 2005, describing the near inevitability of the creation of superintelligence and a technological singularity. Kurzweil built upon the singularity ideas of Vernor Vinge. In the nearly two decades since, computers have continued to improve, algorithms have improved, and AI has improved. The deep learning and reinforcement learning approaches to AI have been very financially successful and have made progress on superhuman vision systems and the complex game of Go.

Ramez Naam observed that the intelligence explosion arguments assume an AI would be able to iterate on creating superior AI. However, if better intelligence, important solutions, or new achievements become progressively more difficult, that limits the rate of improvement.

There are problems that become exponentially more difficult as their size increases, such as the Traveling Salesman Problem. The underlying mathematics makes the complexity explode as the size of the problem grows.
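
To see how fast this blows up, here is a minimal sketch (the city counts are arbitrary examples) counting the distinct tours a brute-force Traveling Salesman solver would have to check:

```python
from math import factorial

# A symmetric Traveling Salesman Problem with n cities has
# (n - 1)! / 2 distinct tours for brute force to examine.
for n in (5, 10, 15, 20):
    tours = factorial(n - 1) // 2
    print(f"{n:>2} cities: {tours:,} tours")
```

Going from 10 cities to 20 multiplies the work by a factor of hundreds of billions. On problems that scale like this, more intelligence or more hardware buys surprisingly little extra problem size.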

Chess AI software became better than human chess players, but it then ran into limits. Non-deep-learning chess programs reached a peak with Stockfish, at an Elo (chess rating) of about 3550.

MuZero is the latest deep learning chess program, and it can achieve an Elo rating of about 5400. A player rated 200 Elo points higher than an opponent should score about 75 percent, roughly three games out of four.

Magnus Carlsen is the best human chess player, with an Elo of about 2850. That means Carlsen might win about one game in 55 against an older Stockfish program. A player rated 2400 could give an 1800-rated player knight odds and have an even match. Knight odds means the stronger player starts the game down a knight. The Elo value of material increases as ratings increase: knight odds would only make an 1100-rated player equal to a 1400. Lower-ranked players make more blunders.
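
These win probabilities follow from the standard Elo expected-score formula. A quick check of the numbers above (the ratings are the approximate figures from this article):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 200-point edge gives roughly a 3-out-of-4 expected score.
print(expected_score(2600, 2400))  # ~0.76

# Carlsen (~2850) against an older Stockfish (~3550).
print(expected_score(2850, 3550))  # ~0.018, about 1 in 55
```

Note that the formula gives an expected score, with draws counting as half a win.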

In chess, intelligence (Elo) advantages shrink as intelligence increases and error rates fall. Checkers is a simpler game than chess, and computers have completely solved it: they can play perfect games of checkers. The best human checkers player, Marion "Terrible" Tinsley, made only about seven mistakes over several decades of tournament play.

Human intelligence and current artificial intelligence have many limitations. Humans are limited in time, computation, and communication, and these limits define the set of computational problems that human intelligence has to solve.

Limited by the Technology of Their Time

Even if an AGI is superior, creating the next superior iteration of AGI could involve time-consuming processes: developing the machines that make the machines, new materials, and research and experimentation. The CDC 6600 was the world's fastest computer from 1964 to 1969, with about three megaflops of processing power. We are now reaching exaflop supercomputers, a few hundred billion times faster. We are still working to improve the software.
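
The speedup is simple arithmetic, taking the round numbers above at face value:

```python
cdc_6600_flops = 3e6    # ~3 megaflops, the fastest machine of its day
exaflop_machine = 1e18  # a modern exascale supercomputer

print(f"{exaflop_machine / cdc_6600_flops:.1e}")  # ~3.3e+11, a few hundred billion times
```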

Tesla and others are spending billions to develop AI for self-driving. One of the critical factors is gathering video from billions of miles of driving. Self-driving will be highly valuable AI, yet it is taking a decade or more of very well funded effort to develop.

Self-driving car software will enable superior mobile robot AI.

Tesla, SpaceX, and Elon Musk are making vastly more intelligent choices than their competitors. Even so, it is taking a couple of decades for dominance to be achieved.

There are situations, like chess, Go, and the processing of certain medical images, where new AI has rapidly emerged and displaced the humans working in the field.

We can also run thought experiments where humans go back in time to different points in history and imagine the maximum disruption and impact that superior intelligence and knowledge could provide. Displacing lower-information or less intelligent competitors would be easy, and the rate of technological improvement would be faster than the trajectory before the superior intelligence and knowledge arrived. Even so, it would take time for the time travelers to rebuild to our current level of civilization and continue advancing.

What is Hard and What is Surprisingly Easy

Many people talk about needing AGI or a technological singularity to solve the major problems that we have today. The major problems that people identify are generally all solvable with the proper application of current technology.

Feeding everyone, even with a much larger population. China is spending about $100 billion to build greenhouses that will grow food ten to twenty times more efficiently than outdoor farming, while using vastly less water. This has been proven at national scale with simpler plastic-sheeting greenhouses in China and with intensive farming in the Netherlands. Around $500 billion of technology originally created hundreds of years ago could enable us to feed 20 times as many people. The greenhouse farms would also be immune to any climate changes forecast for the next 500 years.

Air pollution. Air pollution is slowly getting better. All cars and trucks will eventually be electrified, and fossil fuel energy is being eliminated. This process just takes about 50 years to fully scale.

De-carbonizing the atmosphere. This is again a solvable problem without super-technology. Fast-growth tree species can mature in ten years or less, and each tree can hold ten tons of carbon in its wood. If we massively reduce the land used for farming crops by shifting to greenhouses, we can use more land for growing trees. The trees can be harvested, and the wood holds the carbon until it decays. If the heavy machinery for cutting and processing the wood is made electric, we have a relatively low-cost, low-pollution system for removing carbon from the atmosphere at scale.
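
A back-of-the-envelope calculation shows the scale this implies. The per-tree and growth-cycle numbers are this article's own figures, and the emissions total is an approximate, commonly cited global value; all are assumptions, not measurements:

```python
carbon_per_tree_tons = 10        # this article's figure for a fast-growth tree
growth_cycle_years = 10          # this article's figure for time to harvest
emissions_tons_per_year = 10e9   # ~10 billion tons of carbon per year, approximate

trees_harvested_per_year = emissions_tons_per_year / carbon_per_tree_tons
standing_trees = trees_harvested_per_year * growth_cycle_years

print(f"{trees_harvested_per_year:,.0f} trees harvested per year")  # ~1 billion
print(f"{standing_trees:,.0f} trees growing at any one time")       # ~10 billion
```

Even if the per-tree figure is optimistic by an order of magnitude, this is a logistics and land-use program, not something that requires superintelligence.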

Iteratively improving the intelligence of civilization, and continually accelerating the rate of improvement, is more difficult. It would be possible for everyone to learn the methods and management of Tesla and SpaceX. Factories could be redesigned and rebuilt on two-to-four-year cycles and could even reach one-year cycles of re-invention. This would not even require superhuman intelligence; it is taking the best of what is currently done in two organizations and spreading those best practices throughout society.

Conquering Space. We are now overcoming unnecessary delays and barriers to progress. The fully reusable rocket should have been developed in the 1980s with a correct version of the Space Shuttle. This would have required working out vertical landing of boosters and not just landing the Shuttle upper stage. Greater intelligence is not needed. We just needed less corruption and less stupidity. We now need time and continued execution on the correct designs and methods.

Conquering aging seems to be on track with aging reversal and aging damage repair. The aging-reversal industry appears to be on a trajectory and approach that will deliver success.

Many Narrow Superintelligences Could Be Superior to Artificial General Intelligence

There is a large problem space for any potential AGI. Currently, it appears that narrow AI will continue to have advantages over general AI: a narrow AI can bring more dedicated and concentrated data and computational resources to a problem. Google has created a data and resource moat around its search and advertising solutions. It is not clear that an AGI would always beat many narrower AIs across all problem areas, although a slightly more generalized AI, MuZero, was superior in chess, Go, and Atari versus prior special-purpose game AIs.

We will have to see how narrow AI competes with more generalized AI over time.

SOURCES: Wikipedia, Kurzweil
Written By Brian Wang, Nextbigfuture.com

56 thoughts on “Artificial General Intelligence Limitations”

  1. I do keep playing around with a different model. It's one where, since the beginning of the industrial revolution and its huge rise in automation, we began breaking down work into brawn and brains, and it's been primarily machines supplying the brawn while humans supply the brains. As opposed to earlier times, where it was primarily humans doing both (with an occasional assist from animals, water, or wind, as things progressed towards the eventual industrial revolution).

    But, with the rise of true cognitive automation, we may see the duality become a triad, with the brains portion splitting between thinking (operational planning/managing/directing) which would be managed by artilects (or narrow AI or synthetic intelligence or whatever we care to call it,) and motivation (telling the artilects to do something, without too many details on how to do it), which would remain the province of humans.

    So work became divided into brawn/brains, which gets further divided into brawn/brains/motivation. And humans, depending on how much we change ourselves, and how our definition of ourselves eventually changes, remain firmly at the apex.

  2. Heh, too much like the Robin Williams/Will Smith kind of genie. Maybe more like Cortana in Halo? Although, I've only played a couple of the games, so I'm not sure how much free will she has.

  3. I don't know about your ancestors, but my ancestors left because the potato blight created a shortage of potatoes. And the British were exporting everything else edible even as people starved.

    The reason the Irish relied so heavily on potatoes is that they had so little of the land for their own use, nothing else would have fed them on the space they were allotted in their own country.

    But, back to the topic: Potatoes could be genetically engineered to be more of a complete nutritional package. (And to get rid of the glycoalkaloids.) I tend to think space colonization would benefit greatly from having a few crops engineered to be more nutritionally complete; potatoes are close enough already to be a good starting point.

  4. Yep, we are biased but I think that's permitted. I know in high school I'd look around at the jocks and such and wonder if I wouldn't be a lot happier if I were less intelligent (this was when I had just blown away my SATs, and also found out what my IQ was supposed to be).

    Then I realized I wasn't willing to give up even one point of that IQ (or whatever it is that it is supposed to be indicative of) to find out if they really were happier. I kind of expect they might have felt the same way were the situation reversed.

    But yeah, the main point is that "more evolved" is not the same thing as "more advanced," depending on what you consider an advancement to be. To get to "more advanced" you are generally going to need to be "more evolved" and more complex. But at the upper ends of existing levels of "advancement" and complexity, having a decent chance of becoming more evolved (at least before the Sun exits the main sequence) may require intelligent design (no, not by God, by us).

    Which is all very interesting in its own right, but also as it might pertain to potential bottlenecks in creating artificial and synthetic intelligence.

  5. The human mind, by our standards, is the greatest advance ever

    Yeah, but look who is telling you that.

  6. The Space Shuttle was basically built from one end to the other out of false economies.

    Yes. This.
    A beautiful summary.

  7. It's what the quote was getting at. Yes you can stay alive for a long time, and do hard work. But you aren't getting all your vitamins (not enough A, E, or K), you're short some minerals, and protein levels are low.

    So it'll keep you going for a long time, but it isn't healthy. You'll get long-term problems, especially with pregnancy and growing children.

    There's a reason our Irish ancestors left the old country. (Well, and my great-great-grandma was hanged by the hated British, so that was another reason for her son to leave.)

  8. Right, we are constantly evolving, but terms like forward, and backward, present problems in definition.
     
    The human mind, by our standards, is the greatest advance ever, and we tend to create a hierarchy that is dependent on thinking ability. Sometimes people call the critters further up that hierarchy "more evolved" although we know that is not exactly what they mean.
     
    So what put them there? Evolution, complexity, and size for starters. They evolved into larger, more complex critters, but it has to be more than that. A blue whale may be 2,500 times our mass, but of roughly equal complexity, and possibly less thinking power.
     
    There is some thinking that critters can be classed into various categories of development/complexity (quite similar to that hierarchy) and it has been speculated that moving up each "level" of complexity requires about 5 to 8 orders of magnitude greater size. Move a mammal up to something around 10 million pounds (4 blue whales). That might not be happening in the course of adapting to environments.
     
    Anyhow, the main thing about generic evolution is that, being primarily a consequence of environment, more complex creatures are less and less likely to evolve into something more "advanced" (in that it would put them higher on the hierarchy). If we go that route we will have to create our own path there, not wait on the dice.

  9. It is a mistake to think that humans are more optimized than microbes: Homo sapiens sapiens has existed for about 300,000 years, roughly 15,000 generations, with about 100 billion humans ever alive. Bacteria can divide every 20-60 minutes (equivalent to 8,000-26,000 generations per year), have existed for 3.5 billion years, and an estimated 5×10^30 (five million trillion trillion) bacteria live on Earth every day, so the cumulative population of bacteria in the history of this planet is mind-boggling. This means that bacteria are far more optimized than humans, and the probability of an advantageous mutation in bacteria is much lower than in humans.
    In molecular-biology terms humans are not optimized at all: we are a young species with a mid-to-large genome, a long lag before reproductive age, and low fertility. It might not be what you meant, and I apologize if I misunderstood you, but it is baseless (and mainly driven by non-scientific ideas) to assume that we are some biological exception. Evolution is still acting on us:
    -When we started breeding animals, mutations allowing milk digestion spread like wildfire through human populations (and are used to study human migratory patterns).
    -When we settled in swamps we acquired the sickle cell anemia mutation, which is detrimental in homozygosis but in a single copy protects against malaria.
    -Even now the human population is under selective pressure from pathogens (like Covid, which clearly has a very different impact on different individuals)

  10. Another sort of reuse could have been boosting the external tanks on to orbit (NASA apparently offered to do this for any party able to use them appropriately) to be converted into fuel depot and space station components.

    Cost to orbit wouldn't have fallen, but we'd have delivered a lot more mass permanently to orbit per launch, and followed a very different track – building infrastructure in space. The need to convert tanks into useful facilities might have pushed us a bit into space manufacturing and maybe even construction of a rotating space station so people could live and work in orbit longer. With fuel depots, lunar ice would have been an enticing target, though I doubt we'd have gotten there yet.

    Of course, this could have all fallen apart when the shuttles died, but with more people stranded in space – occupying the larger infrastructure – maybe we'd have quickly built a small (non-reusable) rocket to deliver supplies to them, and moved more quickly to reacquire human launch capability, probably a smaller version of space shuttle.

  11. On the other hand, who knows what is possible. We are clearly on the cusp of intelligently directing our own evolution.

    And for those that suggest this is not natural, how do we know it is not?

    Perhaps if we could study hundreds or thousands of planets, where species with abilities akin to our own developed and continued on, we would see that the natural course for the vast majority of them was to do precisely that: to begin to direct and control their own evolution.

    In which case it would be a very natural process. In the end, intelligence is just another tool in the tool box.

    It costs us a lot. It uses a ton of energy (calories) and it makes us vulnerable to all kinds of problems (like getting depressed and committing suicide), or getting murdered by someone whose mind malfunctioned, not to mention war and genocide and slavery and such. The fact that we pay that cost and still prosper means it must be a pretty valuable tool.

    Given that researchers have determined the entire human population on Earth may have, at one time, shrunk down to where any other species would have been considered extinct without human intervention, we may only be here because we became capable of human intervention.

  11. Heh, your own statement is incorrect. As quoted, my statement is totally correct in that, yes, I have heard it. On the other hand, I didn't say that I agreed with it. I certainly agree with the part you add, "the harder it is to improve something that has already been highly optimized." So yes, each mistake (mutation) is far more likely to be useless or negative than to be an improvement, and complexity makes this even more true (anyone who writes lots of computer code knows this only too well). So evolution in a trans-human direction may be super slow or just about nothing, even though, technically, we evolve (adapt) all the time.

    There also appears to be a belief developing that there are limits on how complex we can become, and we may be very close to those limits now.

    Quoting the first linked article: "…results suggest that there are constraints on both the minimum and maximum sizes for each level of complexity." It goes on to suggest that we might already be at a limit on our complexity and implies that to increase this substantially might require we increase 5-8 magnitudes in size.

    Hive minds anyone? Or giants floating in zero-gravity? https://royalsocietypublishing.org/doi/10.1098/rspb.2017.1039

    However, this one may actually pertain more closely to Brian's article:
    "Why aren't we smarter already? Evolutionary limits on cognition"
    https://www.sciencedaily.com/releases/2011/12/111207133053.htm

  13. Well, people have lived off nothing but potatoes for as long as a year, and my Irish ancestors got almost all their nutrition from them, but in theory you would eventually run into a B12 deficiency.

    Given the ratio of calories to protein, in order to make it work you need to be very active, because you'd be consuming a lot of calories to meet your other nutritional needs.

    But The Martian got one thing right: Martians will be eating a lot of potatoes. You'd be hard put to find a better crop to rely on, and I bet with just a tiny bit of genetic engineering, you could fix that B12 issue, too.

  14. Sure. We are not able to think ideas that require a brain structure that we do not possess. Superhuman AIs could explore the possibility space of brain architectures and literally think the unthinkable (for us). My comment was more cautionary, as a human-level AI can reach a singularity and force us into an eternal game of catching up.

  15. Hi,
    Please note that your statement:
    "I had heard that going much "further" in natural evolution than a human might not be likely by the rules of probability because, as we get more complex, each new mutation is that much more likely to be a negative rather than a positive."
    is not correct.
    -In general every new mutation is much more likely to be negative than positive FOR EVERY ORGANISM.
    -Furthermore, simpler organisms have fewer redundancies and are much less tolerant of mutations: for example, the vast majority of viral particles produced in your body during an infection are not exact copies and are inactive, but an infected cell can produce thousands of them, and viruses can infect tens of thousands of cells, reaching 10^9-10^11 viral particles per infected host during an infection. Yet you still need to infect millions of hosts to develop a handful of successful (for the virus) variants: you need from a few quadrillion to hundreds of quadrillions of viral particles to get a couple of dozen improved virus models.
    The simpler the organism, the faster the life cycle, but the harder it is to improve something that has already been highly optimized.

  16. Granted, in some cases they picked the more expensive version of the cheap option. But the only reason they had the solid rocket boosters is that they'd passed on the reusable first stage; using solid rocket boosters was meant to save money.

    The Space Shuttle was basically built from one end to the other out of false economies. Using an aluminum airframe instead of titanium, which required more expensive and fragile thermal protection, for instance.

  17. It's true that human level AI running on really fast hardware could advance technology really, really fast. But it still wouldn't be capable of accomplishing anything humans couldn't understand. Maybe we wouldn't understand it because they'd be up to something else by the time we'd had time to figure it out, but a human level AI would always be able to explain what it was doing.

    A superhuman AI could, in principle, be doing things that could not be explained to humans, in the same way humans can't explain math to dogs.

  18. Hi, I think that even the first type of AI could reach a singularity. In my understanding the term "singularity" is used in relation to the capability of mankind to keep up with what AI discovers. A human-level AI capable of thinking exactly as a human will still be able to leverage improvements in hardware. It will still be human, but it will become progressively faster. If it takes N months in real-world time to become twice as fast, and the AI focuses on hardware development, every subsequent discovery (still linked to a human level of intelligence) will occur approximately every N months in SUBJECTIVE AI TIME, so it will take N/2 real-world months after the first speed doubling, N/4 months after the second iteration, and so on.
    Even with a human level of capabilities, an AI could still reach a singularity level of performance unreachable by human biology.
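
    As a quick sketch of that arithmetic (N below is just a placeholder value), the doubling times form a geometric series that sums to a finite total, which is the sense in which this becomes a singularity:

    ```python
    # If the first speed doubling takes N real-world months and each doubling
    # halves the time to the next, the k-th doubling takes N / 2**k months.
    N = 24  # placeholder: months for the first doubling
    total = sum(N / 2 ** k for k in range(60))
    print(total)  # approaches 2 * N = 48: endless doublings fit in finite time
    ```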

  19. Maybe not Grammarly, but a non-scam version of the same thing. Such as is standard in any word processor made this century.

  20. Not EVERY point. They chose high cross-range capability (more expensive, never used AFAIK) and they did the sectioned solid rocket boosters (more expensive, less safe, politically advantageous).

  21. you can live on nothing but potatoes

    I think that would be rather ambitious. A line from a novel that was sort of set on the Eastern Front of WW2 (sort of) springs to mind.

    "A man can keep going for a long time on potato soup. With a handful of onions and cabbage in the pot he'll even feel alive. With a enough scraps of rabbit and dog meat he may even be right."

  22. There is a British philosopher and psychoanalytic theorist, Isabel Millar, who recently published a book called The Psychoanalysis of Artificial Intelligence. Not only does she make some very interesting theoretical points on the limits of AI, but her book reveals that the way we usually frame AI is very limited.

  23. Actually, if you look at crops in terms of calories per square meter, potatoes are in fact the densest. (They're also nutritionally complete; you can live on nothing but potatoes, which is pretty rare for a crop.) So, yes, if you could raise anything in greenhouses, it would be potatoes, because the area of greenhouse per person would be the smallest.

    The reason we don't is because they're not a high price per square meter on the market, it wouldn't be economical.

  24. Indeed, that's the real problem: We don't want to create AI's that are intelligent in the same way we are, because then they'd demand to be paid, and allowed choices, and then there's that whole slavery thing…

    We want to create genies to grant us wishes without our working for them. Nice genies, that won't deliberately misconstrue the wish, or do bad things with their spare clock cycles, or not bother to tell us about unanticipated downsides to what we're asking for. 

    In some ways the desire for benign general AI is a desire to return to childhood, when we didn't have to work for ourselves, our parents took care of us.

    I tend to think we'd be better off forgetting about inventing artificial intelligence, and instead concentrating on something that would extend and amplify our own intelligence, with us still having to provide the motivation. A co-processor for our frontal lobes.

    The only way it's going to remain a human civilization is if humans are doing the thinking.

  25. If the AGI pundits do not succeed in developing something over the next couple of decades, they are going to be hard put to continue making fun of people who believe consciousness is a fact of the universe, like energy or time. They will also have a severe problem explaining how natural random mutations could create such a thing, when they have access to all kinds of equipment and methods that are not available to nature and so far show no success.

  26. Agreed. Defining the key ingredients for super-AI is the key step. Processing capacity? Learning algorithms? Sensory apparatus and its internalization? Autonomy? Memory? Which of these is the tech/concept bottleneck?

  27. Not convinced on "…limiting factor on intelligence…". Are we talking generalist or specialist? Are we talking near-zero emotional EQ so as to increase IQ? It may be more useful to consider that each human has a fixed energy budget, which they can spend however they want. Spend the day memorizing? Spend the day conceptualizing mathematical structures? Maintaining 10 different languages? Focusing, and increasing the energy devoted to mental activities, is the key…

  28. Meh. Calling 'semantics' is a sin. All concepts within this realm are subject to the words of that field, the common folk, and just general tongue-roll. I recommend looking up the wiki for 'complex systems' – way exciting.

  29. Yeah, I didn't get much of that whole section. For instance, food production using greenhouses is not something that requires AI. At all. (Side note: Greenhouses may be great for fruit and vegetables, but you can't put greenhouses over the hundreds of millions of acres needed for your staples: corn, rice, wheat, potatoes.)

    Medicine wasn't mentioned, that seems like the most logical application of medium-level AGI to me. It's complex, but focused. Also building better models of how our climate works, so we can definitively say what happens at what level of CO2 and solar output.

  30. Speaking of AI helping humans, Brian Wang really needs to incorporate Grammarly into his writing process. I don't have an English or journalism degree, but many of these sentences still really grate against my nerves.

  31. In the 1980s you'd probably have wanted a dedicated system, not a program running on a GPU. And maybe run some of the loops in hardware, using op amps with voltage-controlled resistors setting the loop coefficients. That would take a huge proportion of the processing load off the actual computer.

    And you'd have hired hotshot mathematicians to solve a lot of things on paper in advance.

  32. The challenge that SpaceX faced wasn't landing vertically. It was landing vertically with engines that couldn't be throttled down to 1 g given the tanks being mostly empty. The "hover-slam" is a lot harder than the sort of landing the DC-X was doing.

    But, in principle, the hover-slam could have been solved in the '80s; the computing hardware would have just been heavier.

  33. The original plan for the Space Shuttle, before all the design compromises, had a vertical-takeoff, horizontal-landing, piloted first stage. It was perfectly feasible technologically; Congress just didn't want to spend the money. At every decision point, they picked the lower-cost, less cost-effective option.

  34. There are actually two almost entirely different goals here where AI is concerned.

    1) Systems that can do what humans can do, but without humans having to do it. With just this level of AI, everything we currently do could be automated. Physical labor abolished, and current level engineering automated.

    2) Systems that can do what humans can't do, solve problems that are beyond human capacity. THAT is what the singularity refers to. Where we stop understanding what is going on because the computers have passed our complexity limit.

    1.5) would be systems that are at human level of cleverness, but exceed some basic human limitations. Being able to do math without ever making mistakes. Not forgetting or being distracted. Or, a big one: Keeping more things in mind at one time. (People have fundamental limits on how many factors they can consider at one time.)

    1.5-level AI could get us past some humps we're having trouble with, while producing results we'd still be capable of understanding after the fact. It's a much more realistic goal. Actually, it's not much more difficult than level 1 AI.

    The problem with level 2 AI is that we'd have trouble defining or recognizing when we were achieving it.

  35. "Feeding everyone, even with a much larger population."

    It needs to be understood that this has been a solved problem for many years now; most famines are actually disguised genocides. Famine has many advantages if you control the government of a third world country.

    Feeding everybody hasn't been a problem of agriculture for a long time. Developed countries are mostly coping with obesity, not starvation.

  36. I agree, but I base my estimate on the characteristics of 1980s-vintage microcontrollers and the need to process several sensors (at least one six-axis accelerometer and two valve position sensors, possibly also some sort of lidar or sonar to measure distance to ground) while being hardened enough to work through a large temperature range. At least from 1985 on it's relatively simple to assume it would have had hardware floating point capabilities, too.

  37. Interesting thought. I had heard that going much "further" in natural evolution than a human might not be likely by the rules of probability because, as we get more complex, each new mutation is that much more likely to be a negative rather than a positive.
     
    This seems somewhat similar.
     
    On the other hand, I do have an IQ that is "up there" but I feel human memory (mine in particular) is often the limiting factor. If we can fix that, then we can see about the next step.
     
    In Walter Jon Williams's novel Aristoi, the Aristoi class of people, with the aid of special training and technologies, could split their minds into multiple "daimones." These were essentially narrow AIs in their own brains that could operate independently, within their own specialties, while under the main personality's organization and control. In some cases the Aristoi could even hear their daimones talking and consulting with each other as they worked together.
     
    In VR the daimones could actually manifest as different individuals and interact even more directly with the main personality and each other.
     
    Fanciful as it sounds, if this limiting factor on intelligence (due to exponential increases in complexity of problem size) is a thing, a multi-threaded mind might be the way forward, after minor issues (like better memory) are dealt with.

  38. And would the AGI decide to build a bicycle factory instead (maybe because it thinks watching humans ride them is hilarious)? Also, would the AGI eventually want a paycheck and citizenship, develop hobbies, and build networks of friends? Or would it just sit there with zero motivation to do anything until told to do it, like Aladdin's genie (the one in the old stories, not the one played by Robin Williams or Will Smith)?

    Seriously, how would you give an AGI motivation and free will? Artificial glands? A random number generator to help figure out things to do and then choose between them? Those both sound like very bad ideas.

  39. It would be fun if the only way we could figure out to create an artificial intelligence was to download a copy of a human mind to an artilect (an artifact capable of supporting an intellect).

    But you can't really tie consciousness to general intelligence. It's like tying real estate prices to land area. Sure, 100 acres will usually be more expensive than one acre… but not if the 100 acres are in southern Nevada and the 1 acre is in Manhattan.

  40. There really is no such thing as consciousness. When people use that term, what they really mean is a "state of consciousness," which, when you think about it, is synonymous with the term "state of awareness." There would be a lot less confusion around all of this if people just replaced the word consciousness with "state of awareness."

    None of which has much to do with intelligence, especially as there are so many different types of intelligence that they can't really be generalized. In some ways, that mouse is far more intelligent than the chess-playing computer.

    Going back to state of awareness. The computer's state of awareness is pretty much limited to the chessboard. The mouse is aware of much more, even if it is not terribly aware of itself, or its own thinking process. Humans take that to the extreme, often existing in such an intense state of awareness of their own thought processes that they cannot even decide what to do in a timely manner. The mouse doesn't appear to worry about such things, or at least not often, and the computer not at all.

    For bonus credit, when you open the box with Schrodinger's Cat, it's not the universe that splits into multiple world lines, it's your state of awareness that splits, as it is now different depending on the worldlines it spans, whereas before you opened the box, it spanned all possible worldlines (and your states of awareness will merge back if they ever somehow become identical again).

  41. Consciousness is a misnomer because it is not well defined. My proposal: AGI requires open-world competence. Meaning: whatever happens, AGI sees many options and relations.

  42. General intelligence requires true understanding and insight. And those require consciousness.
    We don't even know what consciousness is, let alone how to simulate it. Since the '90s, scientists have hoped it would just "emerge" once the computers or algorithms became complex enough, and the problem would solve itself. Now we realize that is not the case. So we're stuck until a breakthrough is made, no matter how much computing power is increased.

  43. The real challenge is not intelligence, but autonomy. Autonomy means competent behavior within an open world, not a closed one. For example, an autonomous AGI system could design and build a car factory using only dumb robots that do what the system wants.

  44. AGI requires artificial consciousness.

    There's a difference between consciousness and intelligence. A chess computer is intelligent but not conscious. A mouse is conscious but not particularly intelligent.

    Once the first conscious machine is built, I think it'll be relatively straightforward to scale that up to human-level intelligence and consciousness.

    I think the first such machine will be some kind of neuromorphic hardware that uses spiking neural nets and is based around the concepts of Integrated Information Theory.

  45. The boosters were in sections so they could be made in Utah and shipped. Otherwise, they would have been one piece and much more easily reused. But they were solids, so not easily refueled. More importantly, the size/cost of the rocket does not change the Space effort. Having a rational plan does! The whole point of ISMRU is to not launch stuff, but to make it in Space from Space materials. Don't wait for big rockets. Not for over 40 years, while we just scope the Moon for water.

  46. From the wiki article:

    The DC-X first flew, for 59 seconds, on 18 August 1993;[4] it was claimed that it was the first time a rocket had landed vertically on Earth

    Though it was travelling much more slowly than the SpaceX craft, only coming down from a maximum height of about 2,500 m.

    Don't rely on the current computing power of the craft as a guide to the minimum required computing power of the craft. Modern rockets (and cars, and toasters and doorbells…) get loaded up with huge computing power just because it is cheap to do so and makes life easier for the designers.

  47. Conquering Space. We are now overcoming unnecessary delays and barriers to progress. The fully reusable rocket should have been developed in the 1980s with a correct version of the Space Shuttle. This would have required working out vertical landing of boosters and not just landing the Shuttle upper stage.

    I'm not so sure that we would have been able to develop vertical landing of boosters in the 1980s. All I've seen following the industry solutions indicate that we need much more computing power on the boosters than we could ever have amassed in the 1980s, maybe even in the first half of the 1990s. Maybe the lack of computing power would have forced us to come up with some creative solution my 2021 brain cannot conceive, but it wouldn't be just a question of political will.

  48. Many people talk about needing AGI or a technological singularity to solve the major problems that we have today. The major problems that people identify are generally all solvable with the proper application of current technology.

    "They" need AGI to solve their problems because they believe AGI will not present them a large bill they cant afford for services rendered.

    Private interests create things within the context of commerce, public interests are restrained from creating things because it's not in the economic best interests of those already selling to the public.

    Everyone's awesome dreams of a sci-fi future aren't going to happen in their lifetimes because there are always easier ways to make money.

    The problem with food is never a lack of supply, it's always a lack of money to acquire ample existing supply.
