OpenAI Made Super Spam Generator and Now Seeks Investors and Profit

Last month, OpenAI announced that they had made an AI system that could be used for next-generation super spam and a tsunami of fake news. They chose to keep the super-spam system secret to prevent misuse by others. OpenAI has now announced that they are seeking investors and will change from being a non-profit to seeking profit. OpenAI was started as a non-profit by Elon Musk and other wealthy technologists concerned about creating safe Artificial General Intelligence.

They are creating OpenAI LP, a new “capped-profit” company that allows them to rapidly increase their investments in compute and talent while including checks and balances to actualize their mission.

They now want to make money and save the world from evil Artificial General Intelligence.

They need to invest billions of dollars over the coming years in large-scale cloud computing, attracting and retaining talented people, and building AI supercomputers.

Sam Altman stepped down as the president of Y Combinator, the Valley’s marquee startup accelerator, to become the CEO of OpenAI LP.

The new limitation on profit for investors in OpenAI is one hundred times their investment. I do not think there is any restriction against OpenAI forming new limited partnerships down the road that would let investors capture more profit. They are already shifting from non-profit to for-profit with the 100X limit.
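For illustration, here is a minimal sketch of how a 100X return cap works. The function and the dollar figures are hypothetical examples, not taken from OpenAI's actual terms.

```python
def capped_return(investment: float, gross_return: float, cap_multiple: float = 100.0) -> float:
    # Investors keep returns only up to cap_multiple times their investment;
    # anything above the cap would flow back to the nonprofit.
    return min(gross_return, cap_multiple * investment)

# Hypothetical: a $10M stake that would otherwise return $5B is capped at $1B.
print(capped_return(10_000_000, 5_000_000_000))  # 1000000000.0
```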

OpenAI LP currently employs around 100 people organized into three main areas: capabilities (advancing what AI systems can do), safety (ensuring those systems are aligned with human values), and policy (ensuring appropriate governance for such systems).

OpenAI Nonprofit governs OpenAI LP, runs educational programs such as Scholars and Fellows, and hosts policy initiatives. OpenAI LP is continuing (at increased pace and scale) the development roadmap started at OpenAI Nonprofit, which has yielded breakthroughs in reinforcement learning, robotics, and language.

OpenAI Powerful Text AI

The OpenAI Text system can take a few sentences of sample writing and then produce a multi-paragraph article in the style and context of the sample.

This capability would let AIs impersonate the writing style of any person from previous writing samples. It could be used for next-generation super spam and a tsunami of fake news.
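The article does not name the system, but OpenAI's announcement was of the GPT-2 language model. Below is a minimal sketch of this kind of prompt-conditioned generation, assuming the smaller, publicly released GPT-2 weights accessed through the Hugging Face transformers library; this is an illustration, not the withheld full model.

```python
from transformers import pipeline

# Load the small, publicly released GPT-2 model as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# A few sentences of sample writing serve as the prompt; the model continues
# them in a matching style and context.
prompt = "A few sentences of sample writing in the target author's voice go here."
outputs = generator(prompt, max_length=200, num_return_sequences=1)

print(outputs[0]["generated_text"])
```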

Recent Drexler Paper Suggests Cloud-Based AGI Should Be Safe

Maybe the OpenAI mission is not that much of a concern. A recent paper by Eric Drexler suggests that developing AGI in the Cloud with narrow services should be safe.

Super General Intelligence Can Be Created From Many Narrower AI Services

Drexler proposed the strategy of achieving general AI capabilities by tiling task-space with AI services.

It is natural to think of services as populating task spaces in which similar services are neighbors and dissimilar services are distant, while broader services cover broader regions. This picture of services and task-spaces can be useful both as a conceptual model for thinking about broad AI competencies and as a potential mechanism for implementing them.
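As a rough illustration of the tiling idea, the sketch below routes a request to whichever narrow service sits closest to it in a toy task-space. The tag-matching dispatcher and the service names are assumptions made for illustration, not Drexler's proposal.

```python
from typing import Callable, Dict, Set, Tuple

# Each narrow service covers a region of task-space, described here by a set of task tags.
SERVICES: Dict[str, Tuple[Set[str], Callable[[str], str]]] = {
    "translate":  ({"language", "translate", "text"}, lambda x: f"[translation of] {x}"),
    "route_plan": ({"map", "route", "navigation"},    lambda x: f"[route for] {x}"),
    "summarize":  ({"text", "summary", "document"},   lambda x: f"[summary of] {x}"),
}

def dispatch(task_tags: Set[str], request: str) -> str:
    # Pick the service whose tag set overlaps most with the task's tags,
    # i.e. the nearest neighbor in this toy task-space.
    best = max(SERVICES, key=lambda name: len(SERVICES[name][0] & task_tags))
    return SERVICES[best][1](request)

# A broader capability emerges from composing narrow services:
print(dispatch({"document", "summary"}, "quarterly report"))  # [summary of] quarterly report
```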

Super-AGI Domination Seems Avoidable Even with an Impure OpenAI

It is often taken for granted that unaligned superintelligent-level agents could amass great power and dominate the world by physical means, not necessarily to human advantage. Several considerations suggest that, with suitable preparation, this outcome could be avoided:
• Powerful SI-level capabilities can precede AGI agents.
• SI-level capabilities could be applied to strengthen defensive stability.
• Unopposed preparation enables strong defensive capabilities.
• Strong defensive capabilities can constrain problematic agents.

Applying SI-level capabilities to ensure strategic stability could enable us to coexist with SI-level agents that do not share our values. The present analysis outlines general prospects for an AI-stable world, but necessarily raises more questions than it can explore.

A well-prepared world, able to deploy extensive, superintelligent-level security resources, need not be vulnerable to subsequent takeover by superintelligent agents.

SOURCES- OpenAI, Drexler, Oxford Future of Humanity Institute

Written By Brian Wang

12 thoughts on “OpenAI Made Super Spam Generator and Now Seeks Investors and Profit”

  1. For the first, say, 10 years, they would be run at normal speed, in order to give them normal human parenting and education, then they would be sped up. They would be able to enjoy only each other’s company, and would watch us living in slow time. (Steve Jobs expected human engineers to be like that.)

  2. The problem is that by the time we realize we have built an AI capable of being as smart as us, it has already lived a human lifetime.

  3. The giant spider wouldn’t be so bad. Square-cube law, after all; a spider that massed as much as a human wouldn’t be able to move, and would suffocate very quickly.

  4. A surely safe form of super AI would be an AGI equivalent to our own, only much faster than real time (see BrainScales). You would communicate with it by mail. You would send it an email every day, and it would receive it, say, once a year of its subjective time. They would make great scientists and engineers, and brisk telesurgeons.

  5. I think I am smarter than a tiger, but if I am locked in a cage with her, she eats first. Same for a giant spider. This fear of “summoning the demon” is overrated, more so since we have not even yet built an AGI equivalent to our own.
