DARPA Starts $2 Billion Campaign to Develop Next Wave of AI Technologies

To address the limitations of first- and second-wave AI technologies, DARPA seeks to explore new theories and applications that could make it possible for machines to adapt to changing situations. DARPA sees this next generation of AI as a third wave of technological advance, one of contextual adaptation. To better define a path forward, DARPA is today announcing the “AI Next” campaign, a multi-year investment of more than $2 billion in new and existing programs.

“With AI Next, we are making multiple research investments aimed at transforming computers from specialized tools to partners in problem-solving,” said Dr. Walker. “Today, machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible. We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”

DARPA is currently pursuing more than 20 programs that are exploring ways to advance the state of the art in AI, pushing beyond second-wave machine learning techniques towards contextual reasoning capabilities. In addition, more than 60 active programs are applying AI in some capacity, from agents collaborating to share electromagnetic spectrum bandwidth to detecting and patching cyber vulnerabilities. Over the next 12 months, DARPA plans to issue multiple Broad Agency Announcements for new programs that advance the state of the art in AI.

Under AI Next, key areas to be explored may include automating critical DoD business processes, such as security clearance vetting in a week or accrediting software systems in one day for operational deployment; improving the robustness and reliability of AI systems; enhancing the security and resiliency of machine learning and AI technologies; reducing power, data, and performance inefficiencies; and pioneering the next generation of AI algorithms and applications, in areas such as “explainability” and commonsense reasoning.

In addition to new and existing DARPA research, a key component of the campaign will be DARPA’s Artificial Intelligence Exploration (AIE) program, first announced in July 2018. “In today’s world of fast-paced technological advancement, we must work to expeditiously create and transition projects from idea to practice,” said Dr. Walker. Accordingly, AIE constitutes a series of high-risk, high-payoff projects where researchers will work to establish the feasibility of new AI concepts within 18 months of award. Leveraging streamlined contracting procedures and funding mechanisms will enable these efforts to move from proposal to project kick-off within three months of an opportunity announcement.

DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools. Towards this end, DARPA research and development in human-machine symbiosis aims to make machines true partners. Enabling computing systems in this manner is critically important because sensor, information, and communication systems generate data at rates beyond humans’ ability to assimilate, understand, and act upon. Incorporating these technologies in military systems that collaborate with warfighters will facilitate better decisions in complex, time-critical, battlefield environments; enable a shared understanding of massive, incomplete, and contradictory information; and empower unmanned systems to perform critical missions safely and with high degrees of autonomy. DARPA is focusing its investments on a third wave of AI that brings forth machines that understand and reason in context.

AI Next builds on DARPA’s five decades of AI technology creation to define and to shape the future, always with the Department’s hardest problems in mind. Accordingly, DARPA will create powerful capabilities for the DoD by attending specifically to the following areas:

New Capabilities: AI technologies are applied routinely to enable DARPA R&D projects, including more than 60 existing programs, such as the Electronics Resurgence Initiative, and other programs related to real-time analysis of sophisticated cyber attacks, detection of fraudulent imagery, construction of dynamic kill-chains for all-domain warfare, human language technologies, multi-modality automatic target recognition, biomedical advances, and control of prosthetic limbs. DARPA will advance AI technologies to enable automation of critical Department business processes. One such process is the lengthy accreditation of software systems prior to operational deployment. Automating this accreditation process with known AI and other technologies now appears possible.

Robust AI: AI technologies have demonstrated great value to missions as diverse as space-based imagery analysis, cyberattack warning, supply chain logistics, and analysis of microbiologic systems. At the same time, the failure modes of AI technologies are poorly understood. DARPA is working to address this shortfall with focused R&D, both analytic and empirical. DARPA’s success is essential for the Department to deploy AI technologies, particularly to the tactical edge, where reliable performance is required.

Adversarial AI: The most powerful AI tool today is machine learning (ML). ML systems can be easily duped by changes to inputs that would never fool a human. The data used to train such systems can be corrupted. And the software itself is vulnerable to cyber attack. These areas, and more, must be addressed at scale as more AI-enabled systems are operationally deployed.
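The input fragility described above can be illustrated with a toy sketch (not any DARPA system): a hand-built linear classifier and a fast-gradient-sign-style perturbation, where nudging every input coordinate by only 0.05 is enough to flip the predicted class. All names and values here are illustrative assumptions.

```python
# Toy sketch of an adversarial perturbation (fast-gradient-sign-method style)
# on a hand-built linear classifier; illustrative only, not any DARPA system.
import numpy as np

w = np.ones(8)    # "trained" weights of a linear model
b = -7.9          # bias chosen so the clean input sits near the decision boundary

def predict(v):
    """Classify v as 1 if the linear score w @ v + b is positive, else 0."""
    return int(w @ v + b > 0)

x = np.ones(8)    # clean input: score = 8.0 - 7.9 = 0.1, so class 1

# FGSM-style step: nudge every coordinate against the score gradient (which,
# for a linear model, is simply w). Each coordinate changes by only eps = 0.05,
# yet the score drops by eps * sum(|w|) = 0.4, crossing the decision boundary.
eps = 0.05
x_adv = x - eps * np.sign(w)   # score = 7.6 - 7.9 = -0.3, so class 0

print(predict(x), predict(x_adv))  # prints "1 0": a tiny nudge flips the label
```

A human looking at the two inputs would see essentially the same vector; the model's decision nonetheless reverses, which is the kind of failure mode adversarial-AI research aims to characterize and defend against.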

High Performance AI: Computer performance increases over the last decade, in combination with large data sets and software libraries, have enabled the success of machine learning. More performance at lower electrical power is essential to allow both data center and tactical deployments. DARPA has demonstrated analog processing of AI algorithms with 1000x speedup and 1000x power efficiency over state-of-the-art digital processors, and is researching AI-specific hardware designs. DARPA is also attacking the current inefficiency of machine learning by researching methods to drastically reduce requirements for labeled training data.

Next Generation AI: The machine learning algorithms that enable face recognition and self-driving vehicles were invented over 20 years ago. DARPA has taken the lead in pioneering research to develop the next generation of AI algorithms, which will transform computers from tools into problem-solving partners. DARPA research aims to enable AI systems to explain their actions, and to acquire and reason with common sense knowledge. DARPA R&D produced the first AI successes, such as expert systems and search, and more recently has advanced machine learning tools and hardware. DARPA is now creating the next wave of AI technologies that will enable the United States to maintain its technological edge in this critical area.

11 thoughts on “DARPA Starts $2 Billion Campaign to Develop Next Wave of AI Technologies”

  1. If you are worried by AI itself you are afraid of the wrong thing.

    An AI, by which we mean a strong AI, something that actually thinks, rather than a work-around for thinking, would have no drives, no need to do anything. In short, it would lack goals.

    How could we give it goals? Well, we could have it randomly generate them. That way lies madness. We could give it artificial glands so it could experience emotions and primal urges that might lead to goals — kind of the way we humans seem to do it. That also seems a rather irresponsible thing for us to do. A third way is we tell it what its goals are, then it intelligently works towards achieving those goals.

    The thing to fear is people that build strong AIs with random number generators to pick their goals, irresponsible people that install artificial gland equivalents, or bad people that just insist on giving them goals that the rest of us severely disagree with.

    History is full of parallels from the time some smart ape picked up a rock or a stick. We try to limit crazies, irresponsible people, and bad people from getting guns. When that fails, we use guns to address the problem. (Go ahead and make your obligatory snarky remark about gun control in your mind, no need to write it down, we’ve all heard it before.)

    The proper way to think of strong AI is something like Aladdin’s genie. Be careful what you wish for and it’s not a bad thing to have.

    Used to be humans were the brains and the brawn. Then we started using animals and, later, machines, so in a lot of cases we now seem to be just the brains and the machines are the brawn. What we’ve overlooked is the fact that we are also the motivation. So we move from a two-element model to a three-element model, where machines supply the brawn, strong AI supplies the brains, and we supply the motivation. We’re still top of the food chain so try not to fret so much.

  2. AI is like genomics – the more you know, the more you realize how much more there is to know, in an ever-enlarged multi-dimensional solution space.

Comments are closed.