A series of announcements describe tools that will multiply the productivity of scientific research and help lay people obtain scientifically sound results. These new programs and robotic systems will be further enhanced by the increasing use of cloud computing, advances in robotics and sensors, and greater computing power from technologies such as GPGPU. (H/T Michael Anissimov at Accelerating Future) These advances join other technologies and processes that are accelerating scientific research, such as combinatorial arrays for running many tests at the same time, cheap diagnostic devices and sensors, and computer simulation.
1. Wired Science reports on a computer program that discovers the laws of physics on its own. It analyzes large datasets and derives laws (some previously undiscovered) that explain the data. More information and video are available from Cornell University.
In just over a day, a powerful computer program accomplished a feat that took physicists centuries to complete: extrapolating the laws of motion from a pendulum’s swings.
Developed by Cornell researchers, the program deduced the natural laws without a shred of knowledge about physics or geometry.
The research is being heralded as a potential breakthrough for science in the Petabyte Age, where computers try to find regularities in massive datasets that are too big and complex for the human mind.
Lipson and Schmidt designed their program to identify linked factors within a dataset fed to the program, then generate equations to describe their relationship. The dataset described the movements of simple mechanical systems like spring-loaded oscillators, single pendulums and double pendulums — mechanisms used by professors to illustrate physical laws.
The program started with near-random combinations of basic mathematical processes — addition, subtraction, multiplication, division and a few algebraic operators.
Initially, the equations generated by the program failed to explain the data, but some failures were slightly less wrong than others. Using a genetic algorithm, the program modified the most promising failures, tested them again, chose the best, and repeated the process until a set of equations evolved to describe the systems. Turns out, some of these equations were very familiar: the law of conservation of momentum, and Newton’s second law of motion.
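The search loop described above can be sketched as a small genetic program. This is a toy illustration under simplifying assumptions, not Lipson and Schmidt's actual system: expressions are trees over addition, subtraction, and multiplication; the hidden "law" is a simple polynomial rather than pendulum recordings; and selection simply keeps the least-wrong quarter of each generation.

```python
import random

random.seed(42)  # deterministic run for illustration

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_expr(depth=3):
    """A random expression tree: 'x', a constant, or (op, left, right)."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.6 else random.uniform(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, float):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(expr, data):
    """Sum of squared errors of a candidate law over the dataset."""
    total = 0.0
    for x, y in data:
        diff = evaluate(expr, x) - y
        total += diff * diff
    return total

def mutate(expr):
    """Replace the tree, or one of its branches, with a fresh random subtree."""
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(2)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(data, pop_size=200, generations=60):
    """Repeatedly test, rank, and mutate the most promising failures."""
    population = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda e: error(e, data))
        survivors = population[: pop_size // 4]  # keep the least-wrong quarter
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=lambda e: error(e, data))

# Samples from a hidden law y = x**2 + x; the search never sees the formula.
data = [(k / 4.0, (k / 4.0) ** 2 + k / 4.0) for k in range(-8, 9)]
best = evolve(data)
print(best, error(best, data))
```

The key mechanism is the one the article describes: equations that fail slightly less badly than others are kept, mutated, and re-tested until a compact expression that fits the data evolves.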
“It’s a powerful approach,” said University of Michigan computer scientist Martha Pollack, with “the potential to apply to any type of dynamical system.” As possible fields of application, Pollack named environmental systems, weather patterns, population genetics, cosmology and oceanography. “Just about any natural science has the type of structure that would be amenable,” she said.
Compared to laws likely to govern the brain or genome, the laws of motion discovered by the program are extremely simple. But the principles of Lipson and Schmidt’s program should work at higher scales.
The researchers have already applied the program to recordings of individuals’ physiological states and their levels of metabolites, the small molecules of cellular metabolism that collectively run our bodies but remain, molecule by molecule, largely uncharacterized: a perfect example of data lacking a theory.
Their results are still unpublished, but “we’ve found some interesting laws already, some laws that are not known,” said Lipson. “What we’re working on now is the next step — ways in which we can try to explain these equations, correlate them with existing knowledge, try to break these things down into components for which we have clues.”
2. Wolfram|Alpha looks like a search engine, in that there’s a one-line box where you type in a question. The output appears a second or two later, as a page of text and graphics below the box. What’s happening behind the scenes? Rather than looking up the answer to your question, Wolfram|Alpha figures out what your question means, looks up the necessary data to answer your question, computes an answer, designs a page to present the answer in a pleasing way, and sends the page back to your computer.
Let me give three random examples. If you enter the query, “3/26/2009 + 90 days” you’ll get a page that gives a date ninety days later than the first date. If you enter “mt. everest height length of golden gate” you’ll get a page expressing the height of Mount Everest as a multiple of the length of the Golden Gate Bridge. If you enter “temperature in los gatos,” you’ll get something like the current temperature, a graph of the temperatures over the last week with projections for the next few days, and a graph of the temperatures over the last year.
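The kind of computed answer these examples describe, as opposed to a retrieved document, can be imitated in a few lines. This is only an illustration of date arithmetic and a ratio of two looked-up quantities; it says nothing about how Wolfram|Alpha actually parses or answers queries, and the figures used are approximate commonly cited values.

```python
from datetime import date, timedelta

# "3/26/2009 + 90 days": once the query is parsed, the answer is plain
# date arithmetic rather than a document lookup.
start = date(2009, 3, 26)
answer = start + timedelta(days=90)
print(answer)  # 2009-06-24

# "mt. everest height / length of golden gate": a ratio of two looked-up
# quantities (approximate published figures, in meters).
EVEREST_HEIGHT_M = 8848.0      # Mount Everest summit elevation
GOLDEN_GATE_LENGTH_M = 2737.0  # Golden Gate Bridge total length
print(round(EVEREST_HEIGHT_M / GOLDEN_GATE_LENGTH_M, 2))  # ~3.23
```

The point of the examples is the same either way: the answer page is generated by computation over stored data, not found anywhere as a pre-existing document.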
Wolfram|Alpha can pop out an answer to pretty much any kind of factual question that you might pose to a scientist, economist, banker, or other kind of expert. The exciting part is that you’re not just looking up pages on the web, you’re getting new information that’s generated by computations working from the known data. Wolfram says the response can be so speedy because, “We’ve found that, of all the things science can compute, most take a second or less.”
Wolfram has been asked whether Wolfram|Alpha will let users input data and models, along the lines of Wikipedia. He says it will, although via a less open system than Wikipedia's: contributors will need to fill out a form, including references verifying that their information is correct.
3. Adam is an automated scientist programmed by a team of researchers at Aberystwyth University and the University of Cambridge to carry out each step of the scientific process, from generating hypotheses to drawing conclusions, without any direct help from humans. Adam will speed up research, enable rapid replication of experiments, and accelerate the extension of previous work.
Using a form of artificial intelligence, Adam first examines a model of the life processes of yeast and determines which enzymes are orphans, that is, catalytic activities with no known encoding gene. He then compares these orphans to similar enzymes in other organisms, and based on this comparison he formulates an original hypothesis about which genes might encode them.
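The comparative step can be sketched as follows. All enzyme and gene names below are hypothetical placeholders (the real Adam reasons over a full logical model of yeast metabolism), but the inference pattern is the one described: find activities with no assigned yeast gene, then borrow candidate genes from organisms where the same activity is already annotated.

```python
# Toy sketch of Adam's hypothesis-generation step; names are invented.

# Yeast model: enzyme activity -> gene known to encode it (None = orphan).
yeast_model = {
    "activity_A": None,      # orphan: no yeast gene assigned yet
    "activity_B": "YGENE1",  # already assigned, nothing to hypothesize
}

# Cross-species lookup: the same activity and the genes encoding it elsewhere.
other_organisms = {
    "activity_A": ["ecGene7 (E. coli)", "atGene3 (A. thaliana)"],
}

def propose_hypotheses(model, homologs):
    """For each orphan activity, hypothesize that a yeast homolog of a gene
    encoding that activity in another organism encodes it in yeast too."""
    hypotheses = []
    for activity, gene in model.items():
        if gene is None:  # orphan enzyme
            for candidate in homologs.get(activity, []):
                hypotheses.append((activity, candidate))
    return hypotheses

for activity, candidate in propose_hypotheses(yeast_model, other_organisms):
    print("hypothesis: a yeast homolog of %s encodes %s" % (candidate, activity))
```

Each proposed (activity, candidate) pair is a testable hypothesis, which is exactly what Adam then takes into the laboratory stage of its loop.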
“This robot is very exciting,” says Bart Selman, a computer scientist and artificial intelligence expert at Cornell University. “It demonstrates active machine learning, where a robot actually decides what data to collect and what type of experiment to run.”
Adam offers scientists more than just relief from the daily drudgery of laboratory work. He provides them with a new way to understand and share their research. Each step of Adam’s experiments is recorded in a formalized logical language that can be carefully examined and easily replicated.
The researchers were able to reuse Adam’s experimental data in order to investigate other phenomena. “You would expect that when you remove enzymes from yeast, it would become less efficient because it evolved to have those enzymes for a reason,” King says. “But we found that in many cases the opposite was true.”
Adam is not designed to replace scientists. On the contrary, as the researchers are careful to point out, the idea is to develop a way of enabling teams of robot and human scientists to work together.
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a co-founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.