The Foresight Institute is currently holding its mini-conference on Artificial General Intelligence and Corporations.
Peter Scheyer has written a paper, funded by a grant from Paul Christiano, on the legal aspects of AGIs as corporations. This paper is the central reference point for most of the other talks.
This literature review seeks to combine research on artificial intelligence with findings on corporations, and then to explore the implications of these findings for entity alignment studies. First, we collate research on whether corporations qualify as artificial intelligences (AIs), artificial general intelligences (AGIs), and/or superintelligences. We find that they qualify as artificial general intelligences but not as superintelligences.
We continue with an examination of collective intelligence and consider where, if anywhere, 'the line should be drawn' on artificial intelligence's existence among collective intelligences, including human institutions. We find that among collective intelligences the AI category should include corporations but not states, nations, tribes, or teams, by virtue of corporations' specified agency and mandates and because they do not arise naturally from human processes.
We then review the literature on Corporate Personhood, where the legal process by which Corporate AGIs have been conditionally granted rights accorded to persons and citizens gives us a solid foundation for further research.
From here, we use findings from corporate culture, philosophy, and the law to determine what constitutes emergent, and thus uniquely corporate, behavior. We go on to research when the 'corporation' itself exists and acts, versus when a collection of individuals is acting in a coordinated fashion.
Since the earlier research establishes that the existence of an artificial intelligence can be said to be contingent on its goals, an examination of the components and considerations of a goal is relevant to our final category of research.
Finally, we will research and describe potential methods for aligning Corporate AGIs with human interests. We will survey several commonly considered methods, weighing their merits and pitfalls, and provide thinking points for the future of Corporate AGI alignment and how it applies to general AI safety concerns.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 Science News Blog. It covers many disruptive technologies and trends, including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a co-founder of a startup and a fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker and a Singularity University speaker, and has been a guest on numerous radio shows and podcasts. He is open to public speaking and advising engagements.