January 04, 2013

New method allows scientists to insert multiple genes in specific locations and delete defective genes

Researchers at MIT, the Broad Institute and Rockefeller University have developed a new technique for precisely altering the genomes of living cells by adding or deleting genes. The researchers say the technology could offer an easy-to-use, less-expensive way to engineer organisms that produce biofuels; to design animal models to study human disease; and to develop new therapies, among other potential applications.

To create their new genome-editing technique, the researchers modified a set of bacterial proteins that normally defend against viral invaders. Using this system, scientists can alter several genome sites simultaneously and can achieve much greater control over where new genes are inserted, says Feng Zhang, an assistant professor of brain and cognitive sciences at MIT and leader of the research team.

The new system is much more user-friendly than prior gene modification methods, Zhang says. Making use of naturally occurring bacterial protein-RNA systems that recognize and snip viral DNA, the researchers can create DNA-editing complexes that include a nuclease called Cas9 bound to short RNA sequences. These sequences are designed to target specific locations in the genome; when they encounter a match, Cas9 cuts the DNA.

This approach can be used either to disrupt the function of a gene or to replace it with a new one. To replace the gene, the researchers must also add a DNA template for the new gene, which would be copied into the genome after the DNA is cut.
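The targeting logic can be sketched in code. This is an illustrative toy, not from the paper (the sequence, guide, and helper function are all hypothetical): Cas9 carries a ~20-nucleotide guide, requires an adjacent "NGG" PAM motif in the DNA, and cuts about 3 base pairs upstream of the PAM.

```python
# Toy sketch (hypothetical example, not the researchers' code): scan a
# genome string for a 20-nt guide match followed by an "NGG" PAM motif,
# and report where Cas9 would make its blunt cut (~3 bp upstream of PAM).
def find_cut_sites(genome: str, guide: str) -> list[int]:
    sites = []
    n = len(guide)
    for i in range(len(genome) - n - 2):
        protospacer = genome[i:i + n]
        pam = genome[i + n:i + n + 3]  # the "NGG" motif Cas9 requires
        if protospacer == guide and pam[1:] == "GG":
            sites.append(i + n - 3)    # cut site ~3 bp upstream of PAM
    return sites

genome = "AACCTGATTCGAGGTACCGGTTAAGGTTAC"  # made-up 30-nt sequence
guide = "CTGATTCGAGGTACCGGTTA"             # made-up 20-nt guide
print(find_cut_sites(genome, guide))       # [20]
```

Designing a different guide retargets the same nuclease to a different site, which is why the method scales to editing several genome locations at once.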

Toyota and Audi competing with Google to commercialize self-driving cars

A 2013 Lexus LX is equipped with Toyota's Intelligent Transport Systems (ITS). ITS includes a radar and a communications system for "talking" to other vehicles.

The Christian Science Monitor has a preview report from the 2013 Consumer Electronics Show, where Toyota and Audi are both preparing to show off cars with driverless technology. Google has been working on driverless cars for years, and big automakers like Toyota and Audi are getting serious about the technology as well.

The Wall Street Journal reports Audi will be bringing a car that can find a parking space on its own, and park itself without help from a driver. Audi’s been working on autonomous vehicle technology for quite a while, as have Ford and Mercedes-Benz.

General parameters of an Alcubierre warp drive in higher-dimensional spacetime, extrapolating from Harold White's NASA research

Talk Polywell has some calculations of what might be achieved with the Harold White space warping work given plausible power generation and propulsion systems. This information was provided by Paul March who is working on the NASA project to try to create a detectable warping of space.

1. First, successfully create and detect a one-part-in-ten-million space warp

2. Develop and increase the level of space warping to a full warping of space

3. Develop advanced space propulsion to achieve about 10% of light speed (nuclear fusion propulsion, nuclear fission propulsion, power beamed propulsion)

4. Apply the advanced design of warping technology to the sublight space vehicle

Assuming we use a 100,000 kg vehicle with an initial velocity of 0.1 times the speed of light (c), with green light (6.0x10^14 Hz) lasers for our warp field oscillators, a toroidal warp field cavity that has a superconductive Q-factor of 10^8, and an input power of 1,000 GWe or 1.0 terawatt (TW), we could expect a net light speed boost factor of only 4.02c. With 10 TWe input power we get a net c boost factor of 12.72c. Of course I could have used even higher frequencies for the warp field oscillators, say 1,000x the green light frequency, which would reduce these power levels by a factor of ~30 for the same boost factor, but we really don't know how to build X-ray lasers yet.
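As a sanity check (my own inference from the quoted numbers, not something stated by March), the two power/boost pairs are consistent with the boost factor scaling roughly as the square root of input power:

```python
# Inferred scaling check (hypothetical, not from March's analysis tool):
# if the c-boost factor goes as sqrt(input power), scaling the 1 TW
# result up by sqrt(10) should reproduce the quoted 10 TW result.
import math

boost_1tw = 4.02    # quoted boost factor at 1 TW input
boost_10tw = 12.72  # quoted boost factor at 10 TW input

predicted = boost_1tw * math.sqrt(10)  # sqrt-power scaling, 1 TW -> 10 TW
print(round(predicted, 2))  # 12.71, matching the quoted 12.72
```

This only checks internal consistency of the two quoted points; the actual dependence on frequency, cavity size and Q-factor is not given in the source.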

Paul March did some more calculations with the warp-field analysis tool and found the following -

By decreasing the resonant cavity dielectric density down to a lunar-like vacuum level of 5x10^-12 Torr and increasing the warp-core toroidal resonant cavity size up to 20 meter OD by 15 meter ID by 20 meters long, while still using the green light laser frequency for the RF source and using "just" 1.0 gigawatt of electrical input power, one might be able to obtain a c boost factor of 88,000 times the speed of light. If one pulled back to an infrared 1x10^12 Hz (THz) RF source using the same 1.0 GWe of input power, then the c boost factor drops to ~3,600c.

As you can tell from these comments, there are many design parameters that go into this simulation, so the obtainable c boost factor will depend on just how clever we are in the actual design and buildup of the starship.

Germany's E-volo continues development of all-electric vertical takeoff and landing aircraft

Nextbigfuture covered the original craft, four quadcopters combined into one vehicle, which made the first manned flight at the end of 2011.

This has now evolved into an all-electric, 18-propeller vertical takeoff and landing system. It is the first purely electrically powered vertical take-off and landing (VTOL) aircraft - the Volocopter.

The German Ministry of Transportation has commissioned a two- to three-year trial program to create a new category of ultralight aviation to cover the two-seat VC200 rotorcraft now in development. In Europe, ultralights are aircraft weighing less than 450kg and carrying up to two people.

In place of a conventional helicopter rotor, E-volo's Volocopter has a fixed branch-like structure on which is mounted an array of battery-powered, electrically driven, individually controlled, multiply redundant mini-rotors.

Under the trial program, the German Ultralight Aircraft Association, Sport Aircraft Association and Federal Aviation Office will work with E-volo to create a manufacturing specification, legal regulations and training requirements for the new "Volocopter" ultralight rotorcraft category.

E-volo has received a €2 million subsidy from Germany's Federal Ministry of Economics and Technology to help build the two-seat VC200, designed to fly at speeds exceeding 50kt (58mph) and altitudes up to 6,500ft with a flight time of more than an hour.

If they can achieve those goals the system has the potential to enable personal commuter flight. Vertical takeoff and landing means that large runways are not needed and point to point travel would be possible. Electrical power in a very lightweight system could be very fuel efficient.

NASA Kepler data yields 15,847 likely exoplanet detections

Last month, NASA Kepler released 18,406 planet-like detection events from its three-year mission to search for exoplanets (Kepler Q1-Q12 TCE). Further analysis is required by the NASA Kepler team and the scientific community to extract and identify true planets, including those potentially habitable.

The Planetary Habitability Laboratory @ UPR Arecibo (PHL) performed a preliminary analysis and identified 262 candidates for potentially habitable worlds in this dataset. These candidates become top priority for further analysis, additional observations, and confirmation.

Over 100 billion planets in the galaxy

A new study from Caltech concludes that there are over 100 billion planets in the Milky Way galaxy. This estimate is based on one planet per star, which is conservative; there are likely two or more planets per star.

Like the Caltech group, other teams of astronomers have estimated that there is roughly one planet per star, but this is the first time researchers have made such an estimate by studying M-dwarf systems, the most numerous population of planets known.

To do that calculation, the Caltech team determined the probability that an M-dwarf system would provide Kepler-32's edge-on orientation. Combining that probability with the number of planetary systems Kepler is able to detect, the astronomers calculated that there is, on average, one planet for every one of the approximately 100 billion stars in the galaxy. But their analysis only considers planets that are in close orbits around M dwarfs—not the outer planets of an M-dwarf system, or those orbiting other kinds of stars. As a result, they say, their estimate is conservative. In fact, says Swift, a more accurate estimate that includes data from other analyses could lead to an average of two planets per star.

M-dwarf systems like Kepler-32's are quite different from our own solar system. For one, M dwarfs are cooler and much smaller than the sun. Kepler-32, for example, has half the mass of the sun and half its radius. The radii of its five planets range from 0.8 to 2.7 times that of Earth, and those planets orbit extremely close to their star. The whole system fits within just over a tenth of an astronomical unit (the average distance between Earth and the sun)—a distance that is about a third of the radius of Mercury's orbit around the sun. The fact that M-dwarf systems vastly outnumber other kinds of systems carries a profound implication, according to Johnson, which is that our solar system is extremely rare. "It's just a weirdo," he says.

The fact that the planets in M-dwarf systems are so close to their stars doesn't necessarily mean that they're fiery, hellish worlds unsuitable for life, the astronomers say. Indeed, because M dwarfs are small and cool, their temperate zone—also known as the "habitable zone," the region where liquid water might exist—is also further inward. Even though only the outermost of Kepler-32's five planets lies in its temperate zone, many other M dwarf systems have more planets that sit right in their temperate zones.

Negative Absolute Temperature for Motional Degrees of Freedom

Absolute temperature is usually bound to be positive. Under special conditions, however, negative temperatures—in which high-energy states are more occupied than low-energy states—are also possible. Such states have been demonstrated in localized systems with finite, discrete spectra. Here, we prepared a negative temperature state for motional degrees of freedom. By tailoring the Bose-Hubbard Hamiltonian, we created an attractively interacting ensemble of ultracold bosons at negative temperature that is stable against collapse for arbitrary atom numbers. The quasimomentum distribution develops sharp peaks at the upper band edge, revealing thermal equilibrium and bosonic coherence over several lattice sites. Negative temperatures imply negative pressures and open up new parameter regimes for cold atoms, enabling fundamentally new many-body states.

Temperature depends on the energy landscape (Image: Ludwig Maximilian University of Munich)

According to temperature's entropic definition, the highest positive temperature possible corresponds to the most disordered state of the system. This would be an equal number of particles at every point on the landscape. Increase the energy any further and you'd start to lower the entropy again, because the particles wouldn't be evenly spread. As a result, this point represents the end of the positive temperature scale.

In principle, though, it should be possible to keep heating the particles up, while driving their entropy down. Because this breaks the energy-entropy correlation, it marks the start of the negative temperature scale, where the distribution of energies is reversed – instead of most particles having a low energy and a few having a high, most have a high energy and just a few have a low energy. The end of this negative scale is reached when all particles are at the top of the energy hill.
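The inverted distribution can be illustrated with Boltzmann occupation numbers, p(E) proportional to exp(-E/kT), over a bounded spectrum. A minimal sketch in units where k = 1 (the three-level spectrum is hypothetical):

```python
# Sketch of the entropic picture: Boltzmann occupation p(E) ~ exp(-E/kT).
# For T > 0 low-energy states dominate; for T < 0 the distribution
# inverts and high-energy states dominate (units with k = 1).
import math

def occupations(energies, T):
    weights = [math.exp(-E / T) for E in energies]
    Z = sum(weights)          # partition function normalizes the weights
    return [w / Z for w in weights]

levels = [0.0, 1.0, 2.0]      # a finite, bounded spectrum (hypothetical)
hot = occupations(levels, 1.0)    # positive temperature
neg = occupations(levels, -1.0)   # negative temperature

print(hot[0] > hot[-1])  # True: ground state most occupied
print(neg[-1] > neg[0])  # True: highest state most occupied
```

The bounded spectrum is essential: with unbounded energies the negative-temperature partition function would diverge, which is why the experiment needed the tailored Bose-Hubbard band structure.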

Scalable nanopatterned surfaces could make for more efficient power generation and desalination.

Many industrial plants depend on water vapor condensing on metal plates: In power plants, the resulting water is then returned to a boiler to be vaporized again; in desalination plants, it yields a supply of clean water. The efficiency of such plants depends crucially on how easily droplets of water can form on these metal plates, or condensers, and how easily they fall away, leaving room for more droplets to form.

The key to improving the efficiency of such plants is to increase the condensers’ heat-transfer coefficient — a measure of how readily heat can be transferred away from those surfaces, explains Nenad Miljkovic, a doctoral student in mechanical engineering at MIT. As part of his thesis research, he and colleagues have done just that: designing, making and testing a coated surface with nanostructured patterns that greatly increase the heat-transfer coefficient.

Nano Letters - Jumping-Droplet-Enhanced Condensation on Scalable Superhydrophobic Nanostructured Surfaces

Rice University researchers show short laser pulses selectively heat gold nanoparticles

Plasmonic gold nanoparticles make pinpoint heating on demand possible. Now Rice University researchers have found a way to selectively heat diverse nanoparticles that could advance their use in medicine and industry.

“The key idea with gold nanoparticles and plasmonics in general is to convert energy,” said Rice researcher Dmitri Lapotko. “There are two aspects to this: One is how efficiently you can convert energy, and here gold nanoparticles are world champions. Their optical absorbance is about a million times higher than any other molecules in nature.

“The second aspect is how precisely one can use laser radiation to make this photothermal conversion happen,” he said. Particles traditionally respond to wide spectra of light, and not much of it is in the valuable near-infrared region. Water and, more critically for biological applications, tissue are nearly transparent to near-infrared light.

Different types of nanoparticles – in this case, shells, rods and solid spheres – mixed together can be activated individually with pulsed laser light at different wavelengths, according to researchers at Rice University. The tuned particles’ plasmonic response, enhanced by nanobubbles that form at the surface, can be narrowed to a few nanometers under a spectroscope and are easily distinguishable from each other. Lapotko Group/Rice University

Advanced Materials - Transient Enhancement and Spectral Narrowing of The Photothermal Effect of Plasmonic Nanoparticles Under Pulsed Excitation

Honda Accord Hybrid will get 68 mpg

Honda Motor will launch the new Accord hybrid sedan in Japan in June offering fuel economy of 29 km/l (68 mpg US, 3.4 l/100km). The new Accord hybrid will feature Honda’s two-motor hybrid drive system which will also be applied in the 2014 Accord Plug-in Hybrid.
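The quoted figures check out under the standard unit conversions (a quick sketch; the helper names are mine):

```python
# Verify the article's fuel-economy conversions: 29 km/l should equal
# about 68 mpg (US) and about 3.4 l/100km.
KM_PER_MILE = 1.609344
LITERS_PER_US_GALLON = 3.785411784

def kml_to_mpg_us(kml: float) -> float:
    # km/l -> miles per US gallon
    return kml * LITERS_PER_US_GALLON / KM_PER_MILE

def kml_to_l_per_100km(kml: float) -> float:
    # km/l -> liters per 100 km
    return 100.0 / kml

print(round(kml_to_mpg_us(29), 1))       # 68.2 mpg US, as quoted
print(round(kml_to_l_per_100km(29), 1))  # 3.4 l/100km, as quoted
```

By the same conversion, the Camry hybrid's 23.4 km/l works out to roughly 55 mpg US.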

Its mileage will far outpace the 23.4km per liter and 23.2km per liter of Toyota Motor Corp.’s Camry and Crown hybrid sedans. The fuel economy will also be the best among Hondas, beating the Insight hybrid’s 27.2km per liter.

It will sell for about 3 million yen (US$35,000)

Magnetic power inverter: AC voltage generation from DC magnetic fields

Researchers have developed a method that allows power conversion from DC magnetic fields to AC electric voltages using domain wall (DW) motion in ferromagnetic nanowires. The device concept relies on spinmotive force, voltage generation due to magnetization dynamics. Sinusoidal modulation of the nanowire width introduces a periodic potential for a DW, the gradient of which exerts variable pressure on the traveling DW. This results in time variation of the DW precession frequency and the associated voltage. Using a one-dimensional model, we show that the frequency and amplitude of the AC outputs can be tuned by the DC magnetic fields and wire-design.

This is a new application of spintronics: the highly efficient and direct conversion of magnetic energy to electric voltage by using magnetic nanostructures and manipulating the dynamics of magnetization.

January 03, 2013

A ground-based system that uses much stronger signals than GPS can pinpoint your location in cities and indoors down to within 2 to 6 inches

Instead of satellites, Locata uses ground-based equipment to project a radio signal over a localised area that is a million times stronger on arrival than GPS. It can work indoors as well as out, and the makers claim the receivers can be shrunk to fit inside a regular cellphone. Even the US military, which invented GPS technology, signed a contract last month agreeing to a large-scale test of Locata at the White Sands Missile Range in New Mexico.

"This is one of the most important technology developments for the future of the positioning industry," says Nunzio Gambale, CEO and co-founder of the firm Locata, based in Griffith, Australia.

Indoor positioning is the next big thing in location-tracking technology, and companies from Google to Nokia have jumped at the chance to prevent users getting lost in cavernous shopping malls, or in the concrete canyons of big cities, where GPS struggles to keep up. But their technologies typically have a short range, and location resolutions on the order of a few metres.

By contrast, Christopher Morin of the US Air Force tested Locata's accuracy recently at White Sands, and it worked to within 18 centimetres along any axis. Morin says it should be possible to get the resolution down to 5 centimetres.

Here is the Locata website

Promising compound restores memory loss and reverses symptoms of Alzheimer's

New research in the FASEB Journal by NIH scientists suggests that a small molecule called TFP5 rescues plaques and tangles by blocking an overactive brain signal, thereby restoring memory in mice with Alzheimer's.

A new ray of hope has broken through the clouded outcomes associated with Alzheimer's disease. A new research report published in the January 2013 print issue of the FASEB Journal by scientists from the National Institutes of Health shows that when a molecule called TFP5 is injected into mice with a disease that is the equivalent of human Alzheimer's, symptoms are reversed and memory is restored—without obvious toxic side effects.

"We hope that clinical trial studies in AD patients should yield an extended and a better quality of life as observed in mice upon TFP5 treatment," said Harish C. Pant, Ph.D., a senior researcher involved in the work from the Laboratory of Neurochemistry at the National Institute of Neurological Disorders and Stroke at the National Institutes of Health in Bethesda, MD. "Therefore, we suggest that TFP5 should be an effective therapeutic compound."

To make this discovery, Pant and colleagues used mice with a disease considered the equivalent of Alzheimer's. One set of these mice was injected with the small molecule TFP5, while the other was injected with saline as a placebo. The mice, after a series of intraperitoneal injections of TFP5, displayed a substantial reduction in the various disease symptoms along with restoration of lost memory. In addition, the mice receiving TFP5 injections experienced no weight loss, neurological stress (anxiety) or signs of toxicity. The disease in the placebo mice, however, progressed as expected. TFP5 was derived from the regulator of a key brain enzyme called Cdk5. The overactivation of Cdk5 is implicated in the formation of plaques and tangles, the major hallmark of Alzheimer's disease.

High-end Asus laptops and desktops will bundle Leap Motion gesture control

Leap Motion’s sensors and software will be packaged with some “high-end” laptops and PCs. The bundled products will appear in the first quarter of 2013, says Leap, around the same time the standalone Leap Motion device, priced at $70, is due to begin shipping.

Nextbigfuture believes that the Leap Motion gesture control device will be one of the highest-impact technology devices of 2013 through 2016.

Leap’s technology allows for more natural computer interaction than a touch screen or a computer mouse. “It’s much more intuitive because you don’t have to remember a new sign language,” says CEO Michael Buckwald. “Someone can reach out as they would in the real world.”

The software preinstalled on the Asus PCs will allow a person to control Windows 8 using finger and hand gestures. Gestures are a central part of Microsoft’s new operating system (see “The Woman Charged with Making Windows 8 Succeed”), and it is plausible that some people may find them easier to perform using their fingers in the air than with a mouse.

The Leap device is roughly the size of a pack of gum: three inches long, one inch wide and half an inch thick. One side is black glass, under which two small cameras and a handful of infrared LEDs gather the data needed to track fingers to an accuracy of one hundredth of a millimeter.

Buckwald says that the same functionality could be added to even smaller devices. “Even today, it is possible to put Leap into a tablet or smartphone,” he says, by using smaller sensors. “The accuracy and power will stay the same.”

Aubrey de Grey and SENS expect to announce a LysoSENS success which will help prevent cardiovascular disease

Here is an interview with Aubrey de Grey. Aubrey is the main driver behind SENS.

They hope to make public soon a revolutionary advance to insert a gene [derived from bacteria] in our patients and prevent them from dying from cardiovascular disease - the No. 1 cause of death today.

This would be a successful development from the LysoSENS project.

They will enable cells to break the junk down so that cells do not accumulate extra junk as a byproduct of normal function. This can be accomplished by equipping the lysosome with new enzymes that can degrade the relevant material. The natural place to seek such enzymes is in soil bacteria and fungi, as these aggregates, despite not being degraded in mammals, do not accumulate in soil in which animal carcasses are decaying, nor in graveyards where humans are decaying.

About 25% of all deaths in developed countries are caused by cardiovascular diseases. If cardiovascular disease were removed as a health risk for everyone, that would translate to about a 4-year increase in life expectancy.

This potential treatment would still need to go through clinical trials and, if successful, would have to be adopted by health providers in the medical system. The actual effectiveness is also not publicly known at this time.

January 02, 2013

Google Now will provide a visual boost to productivity

IEEE Spectrum has an interview with the leader of the Google Glass project.

The main points:

* Developers will be the main ones using Google Glass in 2013
* They are looking to boost visual productivity and utility
* One of the main things will be Google Now which will suggest useful information (pushing useful info that it expects you will want)

January 01, 2013

Synthetic Biology, Nanotechnology and Life Extension

Nextbigfuture had written about the big common components of "visions of the future" as noted by Jamais Cascio.

The list Jamais had was:

Synthetic biology
Molecular nanotechnology
Life extension
Artificial intelligence and robots galore
3D printers
Augmented reality
Ultra-high speed mobile networks
Space colonies

I have grouped together Synthetic Biology, Molecular nanotechnology and Life Extension.

I will focus on the next 20 years out to the end of 2032.

Some comments on the list of big technologies

I have already discussed ultra-high speed mobile networks. Some people are getting 1 gigabit per second wireless this year and most should have it by 2017 as LTE Advanced, super-wifi and more hotspots are rolled out. Over the next 20 years we will be heading to terabit wireless speeds. This will happen first at the hotspots, since it is easier to speed up the optical network and roll out fast wifi, with new vortex / twisted signals to boost data rates. New memristor-based combined memory and logic, providing petabytes of superfast and persistent data storage, will help enable more caching.

The list is missing energy technologies like deep burn fission, modular mass produced fission or fusion, low energy nuclear reactions, nuclear fusion, radically improved solar, space based solar and exotic energy production. Recently the big impact on energy has been shale gas and tight oil. Low natural gas prices have shifted 10% of US energy generation from coal to natural gas. This trend and increased natural gas production will continue and spread out globally through 2030.

I would also integrate my Mundane Singularity list of technologies.

3D printers - I would include all additive manufacturing. This needs to be integrated into overall manufacturing to broadly impact the economy. Airbus is working on projects to develop the materials that will enable the printing of wings and the body of airplanes. There is also work toward the printing of organs and human tissue.

Space Colonies - mainly this depends upon SpaceX getting the Falcon Heavy and ideally fully reusable rockets flying. Blue Origin and the Skylon spaceplane are also in the running on reusable space launch. It also depends on Bigelow Aerospace and its inflatable space stations. There is also the Golden Spike company trying to commercialize travel to the moon and Planetary Resources trying to develop asteroid mining, as well as the NASA NIAC projects.

Augmented reality is coming with Google Glass, but unless there are big breakthroughs applying it to radically improve education, then it is "just" the next form factor after the chocolate bar shape for the smartphone.

AI and robotics will have a lot more economic impact over the next 20 years. I have discussed it before and I will focus on an update over the next few days. I will also include new computing, quantum computing and other electronics.

55 inch OLED TV for about US$10,000

High speed wireless technologies from tens of gigabits per second to terabits per second

We have recently reviewed LTE Advanced, which will deliver up to 1 gigabit per second and is being deployed starting in 2013, and the new multi-gigabit wifi deployments.

Here is a list of technologies which will enable even more access and higher speed wireless communication.

1. Chamtech has developed the “Spray-On Antenna” to assist intelligence-gathering operations by concealing the antennas for equipment that needs to transmit and/or receive in frequency bands from 1 MHz through 6.4 GHz. The antenna paint contains thousands of nanocapacitors that are far more efficient than antennas with wires (so they do not build up heat). This technology can boost wireless range by a large amount; RFID range is boosted from 5 feet to 700 feet.

2. DARPA’s 100 Gb/s RF Backbone (100G) project intends to develop a fiber-optic-equivalent communications backbone that can be deployed worldwide. The goal is to create a 100 Gb/s data link that achieves a range greater than 200 kilometers between airborne assets and a range greater than 100 kilometers between an airborne asset (at 60,000 feet) and the ground.

3. American and Israeli researchers have used twisted, vortex beams to transmit data at 2.5 terabits per second. This twist encoding technique is likely to be used in the next few years to vastly increase the throughput of both wireless and fiber-optic networks.

December 31, 2012

802.11ac for up to 3.6 Gbps and 802.11ad for up to 7 Gbps and wireless replacement of HDMI, USB and PCI-e

The number of public wifi hotspots is projected to increase from 2 million in 2012 to 4 million in 2014. Combined with the new 802.11ac and 802.11ad wifi standards for multi-gigabit speeds and beamforming for longer range, this will enable better hotspots for offloading data traffic from cellular networks.

802.11ad will allow devices to communicate over four, 2.16GHz-wide channels, delivering data rates of up to 7 Gigabits per second, even for mobile devices with limited power, a significant improvement over both 11n and 11ac.
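A back-of-envelope check (my own calculation, not from the article): the 7 Gbps peak rate within a single 2.16 GHz channel implies a spectral efficiency of roughly 3.2 bits/s/Hz.

```python
# Implied spectral efficiency of 802.11ad from the quoted figures:
# peak data rate divided by the width of one channel.
rate_bps = 7e9        # 7 Gbps peak rate (quoted)
channel_hz = 2.16e9   # one 2.16 GHz-wide channel (quoted)

spectral_efficiency = rate_bps / channel_hz
print(round(spectral_efficiency, 2))  # ~3.24 bits/s/Hz
```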

802.11ac should still have 500 mbps speed at distances of 50 meters.

By 2015, 802.11ac and 802.11ad should be the dominant wifi standards.

Advanced LTE deployments in 2013 and 2014

The International Telecommunications Union (ITU) did not originally consider LTE, WiMAX, and HSPA+ to be 4G systems. LTE-Advanced has download speeds of 300 mbps to 1 Gbps and is a true 4G system.

A brief summary of status of mobile carriers’ LTE-A penetration is provided below.

SK Telecom

SK Telecom has successfully tested their core LTE-Advanced technologies as of July 5th, 2012
CoMP* (Coordinated Multi-Point) technology commercialized in January 2012
eICIC** (Enhanced Inter-Cell Interference Coordination) technology successfully tested in collaboration with Qualcomm and Nokia Siemens Network
Plans to commercialize Carrier Aggregation*** towards the end of 2013

SK Telecom already has 100 mbps wireless service using a heterogeneous network of LTE and Wifi.

Russian Yota

Yota has launched the first commercial network using LTE Advanced, though its vendor, Huawei, also describes it as a test LTE-A network
LTE Advanced technology has been installed on 11 base stations in the city, and initial transmissions have achieved the target 300 megabit-per-second speed
No user terminals are yet available
Expects delivery of the first devices in the first half of 2013


Sprint plans to deploy its LTE-Advanced network on 800 MHz spectrum by the first half of 2013


AT&T confirms LTE-Advanced deployment in 2013


T-Mobile has signed multi-year agreements with Ericsson and Nokia Siemens Networks
3GPP Release 10 equipment will be provided by the two companies
LTE-Advanced trials set to begin this summer
Broad LTE deployment on track for 2013

Future capabilities and expectations need to be qualified with key metrics and prices

Jamais Cascio had a piece from early in 2012 about how the major components of "visions" of the technological future have not changed.

The items on the list are -

Molecular nanotechnology
Artificial intelligence and robots galore
3D printers
Augmented reality
Ultra-high speed mobile networks
Synthetic biology
Life extension
Space colonies

The problem is that this list comes with no key metrics, amounts or ranges, prices or adoption levels.

You could have human flight on this list and you could say that it has not changed for centuries, even after lighter-than-air human travel started in 1783 and heavier-than-air travel started in 1903. Is there any difference between the 12-second flight in 1903 and the later developments? Of course there is. Metrics and capabilities matter.

Ultra-high speed mobile networks over the last 20 years have gone from tens of kilobits per second to tens of megabits per second on some mobile networks (mainly in Asia and Europe). Speeds for some high-cost mobile links can reach tens of gbps now. There will be multi-terabit per second links in the 20-50 year timeframe (2032-2062) that was mentioned in the Cascio article.

The original date range of consideration is quite wide and was not anchored to specific dates. Since Cascio discussed the lack of change over the last 20 years, whoever was predicting in 1992 for 20-50 years out would need to true up those predictions against what actually happened in the 20 years.

1 gigabit per second (peak) from LTE advanced is starting to be deployed in 2013 and there is multi-gigabit per second wifi (802.11ac and 802.11ad) being deployed, which will enable millions of hotspots to offload some data traffic (2 million hotspots in 2012 and 4 million in 2014). There is also a twist dimension encoding that will enable wireless transmission speeds to go to multi-terabit speeds. Petabit speeds will be enabled over the next ten years for optical network fiber backbones.

Molecular nanotechnology - we have DNA origami and DNA nanotechnology but mainly in the lab

Artificial intelligence and robots galore - there is commercialization of voice recognition (SIRI and Google Voice) and IBM is commercializing Watson.

3D printers - Additive manufacturing is a multi-billion dollar business

Augmented reality - There has been a very modest amount of commercialization of augmented reality but it appears set to pick up speed with Google Glass.

Ultra-high speed mobile networks - We have some 4G deployments

Synthetic biology - There has already been commercialization of synthetic biology for fuels, drugs, cosmetics, materials and other applications.

Life extension - Over 20 years, life expectancy increased by about 4 to 5 years.

Space colonies - There were no space colonies established over the last 20 years. There is some progress in cost reduction for launches.

Carnival of Nuclear Energy 137

Carnival of Space 281/282

Distribution center robots

Everything Robotics covered the improvements in the speed of distribution centers through the use of robotics and the Kiva goods-to-man process.

Kiva goods-to-man process achieves 600 units per hour versus 160 picks per hour for man-to-goods

Amazon has led, and continues to lead, e-commerce-driven distribution with its pick-to-cart method (otherwise known as man-to-goods) and its promise of speedy, economical delivery. Workers run around filling carts and deliver them to conveyors, which transport them to packing stations where individual shipments are processed and staged for pickup by FedEx, UPS, etc. The metric for this process is 160 picks per hour. The video below shows that process.

Kiva Systems disrupted those metrics and increased worker productivity by reversing the man-to-goods process. This method brings the goods to the packer (goods-to-man). As Kiva's success was proven in the field, Amazon acquired Kiva for $775 million and is beginning to install Kiva systems in its new warehouses. It is estimated that the new Kiva metric for Amazon consumer goods is 600 items per hour.
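The two pick rates above translate directly into a per-worker productivity multiple. A quick arithmetic sketch (the 1-million-item weekly volume and 40-hour week are my illustrative assumptions, not figures from the article):

```python
# Pick-rate comparison from the article: man-to-goods at 160 picks/hour
# versus Kiva goods-to-man at an estimated 600 units/hour.
man_to_goods_rate = 160   # picks per worker-hour
goods_to_man_rate = 600   # units per worker-hour (estimated)

productivity_gain = goods_to_man_rate / man_to_goods_rate   # 3.75x

# Illustrative only: pickers needed to move 1 million items in a
# 40-hour week under each process (assumed volumes, not from the article).
weekly_items = 1_000_000
hours_per_week = 40
workers_old = weekly_items / (man_to_goods_rate * hours_per_week)
workers_kiva = weekly_items / (goods_to_man_rate * hours_per_week)

print(f"Productivity gain: {productivity_gain:.2f}x")            # 3.75x
print(f"Pickers needed: {workers_old:.0f} vs {workers_kiva:.0f}")  # 156 vs 42
```

Under these assumed volumes, the goods-to-man process covers the same throughput with roughly a quarter of the picking workforce.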

Controlled clockwise and anticlockwise rotational switching of a molecular motor

Researchers have made a reversible rotor that sits on a ruthenium atomic ball bearing.

Nature Nanotechnology - Controlled clockwise and anticlockwise rotational switching of a molecular motor

The design of artificial molecular machines often takes inspiration from macroscopic machines. However, the parallels between the two systems are often only superficial, because most molecular machines are governed by quantum processes. Previously, rotary molecular motors powered by light and chemical energy have been developed. In electrically driven motors, tunnelling electrons from the tip of a scanning tunnelling microscope have been used to drive the rotation of a simple rotor in a single direction and to move a four-wheeled molecule across a surface. Here, we show that a stand-alone molecular motor adsorbed on a gold surface can be made to rotate in a clockwise or anticlockwise direction by selective inelastic electron tunnelling through different subunits of the motor. Our motor is composed of a tripodal stator for vertical positioning, a five-arm rotor for controlled rotations, and a ruthenium atomic ball bearing connecting the static and rotational parts. The directional rotation arises from sawtooth-like rotational potentials, which are solely determined by the internal molecular structure and are independent of the surface adsorption site.

December 30, 2012

Creating Substrate-Independent Minds

Carboncopies.org is a nonprofit organisation with a goal of advancing and creating Substrate-Independent Minds (SIM). Through carboncopies.org, we reach out to the public (e.g. meetings, Facebook group), to projects and experts, in order to introduce SIM, to explain why we should accomplish SIM, to maintain development roadmaps, as well as to facilitate research and development networks, secure funding and the establishment of new projects to address the complete mosaic of requirements.

Besides Whole Brain Emulation (WBE), they also look at Brain Computer Interfaces (BCI) and Loosely-Coupled Off-Loading (LCOL). LCOL would be re-creations dependent on sources such as self-report, life-logs, video recordings, artificial intelligence that attempts to learn about an individual, etc.

Information presented here is from the Carboncopies.org FAQ and website. There is also an article by Randal A. Koene on Substrate-Independent Minds.

What is Advancing Substrate-Independent Minds (ASIM)?

In the past the transferal of minds into computer-based systems has been rather vaguely referred to as 'uploading'. However, those hoping to advance this multidisciplinary field of research prefer to use the term Advancing Substrate Independent Minds (ASIM), to emphasize a more scientific, and less science fiction approach to creating emulations of human brains in substrates other than the original biological substrate. The term ASIM captures the fact that there are several ways in which hardware and software may be used to run algorithms which mimic the human brain, and that there are many different approaches that can be used to realize this objective.

Once you implement the functions originally carried out in one substrate in the computational hardware of another substrate you have achieved substrate-independence for those functions.

ASIM depends on developments in many disciplines. From a technical perspective, some of the foremost are neuroinformatics, neuroprosthetics, artificial general intelligence, high-throughput microscopy and brain-computer interfaces. Conceptually, there are also strong associations with applied bioinformatics and life-extension research.

The notion that the human mind is central to the experience of our existence and the realization that the brain can be understood as a biological machine have both been raised many times throughout the history of science. Following the development of computers and serious attempts to create mind-like function in artificial intelligence, there are now multiple high-profile projects directly aimed at reimplementing the structure and neurophysiological function of the brain. To name the most obvious current candidates: the Blue Brain Project and the DARPA SyNAPSE Project. Finally, with converging developments in the areas of neural interfacing, optogenetic techniques and high-throughput microscopy, we arrive at the very real possibility of learning from and re-implementing the structure and function of specific brain samples.

ASIM is a subset of AGI (artificial general intelligence). It is the technical approach to mind uploading.

Winterberg's micro-chemical fusion bomblets

Winterberg's work in nuclear rocket propulsion earned him the 1979 Hermann Oberth Gold Medal of the Wernher von Braun International Space Flight Foundation. Winterberg is well respected for his work in the fields of nuclear fusion and plasma physics, and Edward Teller has been quoted as saying that he had "perhaps not received the attention he deserves" for his work on fusion.

Nextbigfuture has recently written briefly about a recent Winterberg paper - Hybrid Chemical-Nuclear Convergent Shock Wave High Gain Magnetized Target Fusion

Winterberg describes how to use a chemical explosive to boost a nuclear fusion reaction that generates 1000 times the energy of the chemical explosive.

Winterberg describes a 30 cm (1 foot) sphere of high explosive that would generate a 25-ton hybrid chemical-nuclear fusion pulse.

The 14 MeV DT fusion neutrons are slowed down in the dense combustion products, raising their temperature to 100,000 K. At this temperature the kinetic energy of the expanding fireball can be converted at high (almost 100%) efficiency directly into electric energy by an MHD Faraday generator. In this way most of the 80% of the fusion energy carried by neutrons can be converted into electric energy, about three times more than in magnetic (ITER) or inertial (ICF) DT fusion concepts.
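The 30 cm sphere and 25-ton yield can be sanity-checked against the stated 1000x gain. A rough order-of-magnitude sketch, where the HMX density and heat of detonation are my assumed values (not from the paper) and the 30 cm is taken as the sphere's diameter:

```python
# Rough plausibility check on the 30 cm sphere / 25-ton yield figures.
# HMX density and heat of detonation are assumed typical values, not
# from the article, so treat this as an order-of-magnitude sketch.
import math

diameter_m = 0.30                       # 30 cm sphere (assumed diameter)
radius_m = diameter_m / 2
volume_m3 = (4 / 3) * math.pi * radius_m ** 3   # ~0.0141 m^3

hmx_density = 1910                      # kg/m^3, typical for HMX
hmx_energy = 5.7e6                      # J/kg heat of detonation (approx.)

chemical_J = volume_m3 * hmx_density * hmx_energy   # ~1.5e8 J of chemical energy
fusion_J = 1000 * chemical_J            # the stated 1000x fusion gain

tnt_ton_J = 4.184e9                     # energy of 1 ton TNT equivalent
yield_tons = fusion_J / tnt_ton_J
print(f"{yield_tons:.0f} tons")         # tens of tons
```

The result comes out at a few tens of tons of TNT equivalent, the same order as the 25-ton figure in the text; the exact number depends on the assumed explosive properties and on whether 30 cm is the diameter or the radius.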

Currently, nuclear fusion bombs (Teller-Ulam bombs) use a high-explosive, fission-fusion system. The Winterberg system would remove the nuclear fission component, which produces all of the fallout. The micro-chemical fusion system would have almost no fallout. It would enable nuclear pulse propulsion systems with almost clean pulses.

It would also be a way to make nuclear devices 1000 times as powerful as chemical bombs of the same size, and ones that could be scaled down into smaller weapons.

HMX, also called octogen, is a powerful and relatively insensitive nitroamine high explosive, chemically related to RDX. Like RDX, the compound's name is the subject of much speculation, having been variously listed as High Melting eXplosive, Her Majesty's eXplosive, High-velocity Military eXplosive, or High-Molecular-weight rdX. HMX is used almost exclusively in military applications, including as the detonator in nuclear weapons, in the form of polymer-bonded explosive, and as a solid rocket propellant.

Reasons to go beyond biological life extension

A talk by Randal Koene covers the motivation for human technological augmentation and reasons to go beyond biological life extension

* Life-extension in biology may increase the fragility of our species and civilization. More people? Resource limits. Fewer births? Fewer novel perspectives. Expansion? Environmental limitations.

* Biological life-extension within the same evolutionary niche means further specialization to the same performance, an "over-training" in conflict with generalization

* Significant biological life-extension is incredibly difficult and beset by threats.

* Life-extension and Substrate-Independence are two different objectives

* Developing out of a "catchment area" (S. Gildert) may demand iterations of exploration, and exploration involves risk. Hard-wired delusions and drives. What would an AGI do? Which types of AGI would exist in the long run?

* “Uploading” is just one step of many – but a necessary step – for a truly advanced species
