January 09, 2010

768-bit RSA Factored by Academics

Factorization of a 768-bit RSA modulus (22 page pdf)

On December 12, 2009, we factored the 768-bit, 232-digit number RSA-768 by the number
field sieve. The number RSA-768 was taken from the now obsolete RSA Challenge list as a representative 768-bit RSA modulus. This result is a record for factoring general integers. Factoring a 1024-bit RSA modulus would be about a thousand times harder, and a 768-bit RSA modulus is several thousand times harder to factor than a 512-bit one. Because the first factorization of a 512-bit RSA modulus was reported only a decade ago, it is not unreasonable to expect that 1024-bit RSA moduli can be factored well within the next decade by an academic effort such as ours. Thus, it would be prudent to phase out usage of 1024-bit RSA within the next three to four years.

The following effort was involved. We spent half a year on 80 processors on polynomial selection. This was about 3% of the main task, the sieving, which was done on many hundreds of machines and took almost two years. On a single-core 2.2 GHz AMD Opteron processor with 2 GB RAM per core, sieving would have taken about fifteen hundred years. This included a generous amount of oversieving, to make the most cumbersome step, the matrix step, more manageable. Preparing the sieving data for the matrix step took a couple of weeks on a few processors. The final step after the matrix step took less than half a day of computing, but about four days of intensive labor because a few bugs had to be fixed.
It turned out that we had done about twice the sieving strictly necessary to obtain a usable matrix, and that the extra data allowed generation of a matrix that was quite a bit easier than anticipated at the outset of the project. Although we spent more computer time on the sieving than required, sieving is a rather laid back process that, once running, does not require much care beyond occasionally restarting a machine. The matrix step, on the other hand, is a more subtle affair where a slight disturbance easily causes major trouble, in particular if the problem is by its sheer size stretching the available resources. Thus, our approach to overspend on an easygoing part of the computation led to a matrix that could be handled relatively smoothly, thereby saving us considerable headaches. More importantly, and another reason behind the oversieving, the extra sieving data allow us to conduct various experiments aimed at getting a better understanding about the relation between sieving and matrix efforts and the effect on NFS feasibility and overall performance. This is ongoing research, the results of which will be reported elsewhere. All in all, the extra sieving cycles were well spent.
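Both hardness claims above can be sanity-checked against the standard heuristic complexity of the general number field sieve. A back-of-the-envelope sketch (constant factors and o(1) terms are ignored, so only the ratios are meaningful):

```python
import math

def gnfs_effort(bits):
    """Heuristic GNFS cost L_n[1/3, c] with c = (64/9)^(1/3) for an n-bit modulus.

    Back-of-the-envelope only: the o(1) term and all constant factors are
    dropped, so only ratios between moduli sizes carry meaning.
    """
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# The post's claims: ~1000x harder from 768 to 1024 bits,
# and several thousand times harder from 512 to 768 bits.
ratio_1024_vs_768 = gnfs_effort(1024) / gnfs_effort(768)   # on the order of 10^3
ratio_768_vs_512 = gnfs_effort(768) / gnfs_effort(512)     # several thousand
```

Both ratios come out consistent with the figures quoted in the paper.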

These figures imply that much larger matrices are already within reach, leaving preciously little doubt about the feasibility by the year 2020 of a matrix required for a 1024-bit NFS factorization. As part of the experiments mentioned above we also intend to study if a single large cluster would be able to handle such matrices using the block Lanczos algorithm (cf. [8]). Compared to block Wiedemann this has advantages (a shorter, single sequence of iterations and no tedious and memory-hungry central Berlekamp-Massey step [40]) but disadvantages as well (it cannot be run on separate clusters and each iteration consists of a multiplication by both a matrix and its transpose).
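For intuition about what the matrix step actually computes: it finds null-space vectors of an enormous sparse matrix over GF(2), which combine relations into congruent squares. A toy dense version follows (illustrative only; production codes use sparse iterative methods such as the block Wiedemann and block Lanczos algorithms discussed above):

```python
def gf2_nullspace(matrix):
    """Null-space basis of a 0/1 matrix over GF(2), by Gaussian elimination.

    A toy dense illustration of what the NFS matrix step computes; real
    implementations use sparse iterative methods (block Wiedemann or
    block Lanczos) because the matrices are far too large for this.
    """
    rows = [row[:] for row in matrix]
    n_rows, n_cols = len(rows), len(rows[0])
    pivot_row_of_col = {}
    r = 0
    for c in range(n_cols):
        # Find a pivot for column c at or below row r.
        pivot = next((i for i in range(r, n_rows) if rows[i][c]), None)
        if pivot is None:
            continue  # no pivot: column c is free
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(n_rows):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivot_row_of_col[c] = r
        r += 1
    # One basis vector per free column.
    basis = []
    for fc in (c for c in range(n_cols) if c not in pivot_row_of_col):
        v = [0] * n_cols
        v[fc] = 1
        for c, row in pivot_row_of_col.items():
            v[c] = rows[row][fc]
        basis.append(v)
    return basis
```

For example, `gf2_nullspace([[1, 1, 0], [0, 1, 1]])` returns the single basis vector `[1, 1, 1]`, the only way to combine the three columns so every row sums to zero mod 2.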

At this point factoring a 1024-bit RSA modulus looks more than five times
easier than a 768-bit RSA modulus looked back in 1999, when we achieved the first public factorization of a 512-bit RSA modulus. Nevertheless, a 1024-bit RSA modulus is still about one thousand times harder to factor than a 768-bit one. If we are optimistic, it may be possible to factor a 1024-bit RSA modulus within the next decade by means of an academic effort on the same limited scale as the effort presented here.

Another conclusion from our work is that we can quite confidently say that, if we restrict ourselves to an open-community academic effort such as ours, and unless something dramatic happens in factoring, we will not be able to factor a 1024-bit RSA modulus within the next five years (cf. [29]). After that, all bets are off.
The ratio between sieving and matrix time was almost 10. This is probably not optimal if one wants to minimize the overall runtime.

Minimization of runtime may not be the most important criterion. Sieving is easy, and doing more sieving may be a good investment if it leads to a less painful matrix step.

Our computation required more than 10^20 operations. With the equivalent of almost 2000 years of computing on a single core 2.2 GHz AMD Opteron, on the order of 2^67 instructions were carried out.
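Those two figures are mutually consistent, as a quick check shows (assuming, optimistically, one instruction per clock cycle):

```python
import math

# "Almost 2000 years" on a single 2.2 GHz core, at an assumed one
# instruction per clock cycle:
years = 2000
seconds_per_year = 365.25 * 24 * 3600        # ~3.16e7 seconds
ops = years * seconds_per_year * 2.2e9       # ~1.4e20, i.e. more than 10^20
log2_ops = math.log2(ops)                    # ~66.9, on the order of 2^67
```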

Memristor, Memcapacitor and Meminductor Research Update

In mid-2009, research showed that memristors and memcapacitors are similar to synapses and can be made into neural networks

There is an updated version of the paper Experimental demonstration of associative memory with memristive neural networks

Abstract—Synapses are essential elements for computation and information storage in both real and artificial neural systems. An artificial synapse needs to remember its past dynamical history, store a continuous set of states, and be “plastic” according to the pre-synaptic and post-synaptic neuronal activity. Here we show that all this can be accomplished by a memory-resistor (memristor for short). In particular, by using simple and inexpensive off-the-shelf components we have built a memristor emulator which realizes all required synaptic properties. Most importantly, we have demonstrated experimentally the formation of associative memory in a simple neural network consisting of three electronic neurons connected by two memristor-emulator synapses. This experimental demonstration opens up new possibilities in the understanding of neural processes using memory devices, an important step forward to reproduce complex learning, adaptive and spontaneous behavior with electronic neural networks.

A recently demonstrated resistor with memory (memristor for short) based on TiO2 thin films offers a promising realization of a synapse whose size can be as small as 30×30×2 nm^3. Memristors belong to the larger class of memory-circuit elements (which also includes memcapacitors and meminductors), namely circuit elements whose response depends on the whole dynamical history of the system. Memristors can be realized in many ways, ranging from oxide thin films to spin memristive systems. In the present paper, we describe a flexible platform allowing for simulation of different types of memristors, and experimentally show that a memristor could indeed function as a synapse. We have developed electronic versions of neurons and synapses whose behavior can be easily tuned to the functions found in biological neural cells. Of equal importance, the electronic neurons and synapses were fabricated using inexpensive off-the-shelf electronic components, resulting in a cost of a few dollars for each element, and can therefore be realized in any electronic laboratory. Clearly, we do not expect that with such elements one can scale up the resulting electronic neural networks to the actual brain density. However, due to their simplicity, reasonably complex neural networks can be constructed from the two elemental blocks developed here, and we thus expect several functionalities could be realized and studied.
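The "memory" that makes a memristor synapse-like is easy to see in simulation. The sketch below uses the simple linear ion-drift model often applied to TiO2 memristors; all parameter values and the initial state are illustrative, not taken from the paper:

```python
import math

def simulate_memristor(v_amp=1.0, freq=1.0, cycles=2, steps_per_cycle=10000,
                       r_on=100.0, r_off=16000.0, k=1e5):
    """Toy linear ion-drift memristor, in normalized units.

    The internal state x in [0, 1] tracks the charge that has flowed
    through the device:
        M(x) = r_on * x + r_off * (1 - x),   dx/dt = k * i(t)
    All parameter values here are illustrative, not from any real device.
    """
    dt = 1.0 / (freq * steps_per_cycle)
    x = 0.1                                  # initial state (assumed)
    history = []
    for step in range(cycles * steps_per_cycle):
        v = v_amp * math.sin(2 * math.pi * freq * step * dt)
        m = r_on * x + r_off * (1 - x)       # instantaneous memristance
        x = min(max(x + k * (v / m) * dt, 0.0), 1.0)  # state is bounded
        history.append(m)
    return history

resistance = simulate_memristor()
```

The resistance sweeps between its high and low limits as the drive voltage cycles, and its value at any instant depends on the history of the signal rather than on the present voltage alone, which is exactly the plasticity a synapse needs.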

Practical approach to programmable analog circuits with memristors by Yuriy V. Pershin and Massimiliano Di Ventra

We suggest an approach to use memristors (resistors with memory) in programmable analog circuits. Our idea consists in a circuit design in which low voltages are applied to memristors during their operation as analog circuit elements and high voltages are used to program the memristor’s states. This way, as it was demonstrated in recent experiments, the state of memristors does not essentially change during analog mode operation. As an example of our approach, we have built several programmable analog circuits demonstrating memristor-based programming of threshold, gain and frequency.

Applications for analog memristor circuits
* Programmable threshold comparator
* Programmable gain amplifier
* Programmable switching thresholds Schmitt trigger
* Programmable frequency relaxation oscillator
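To see the flavor of the programmable-gain case: if a memristor serves as the feedback element of a standard non-inverting op-amp stage, high-voltage pulses program its resistance, and the low-voltage signal path then sees a fixed gain that does not disturb the state. A minimal sketch with invented component values:

```python
def noninverting_gain(r_feedback, r_ground):
    """Ideal non-inverting op-amp stage gain: 1 + Rf / Rg."""
    return 1 + r_feedback / r_ground

R_GROUND = 1e3  # fixed resistor to ground, ohms (illustrative value)

# Three hypothetical programmed memristance states and the gains they set:
gains = [noninverting_gain(r_mem, R_GROUND) for r_mem in (1e3, 5e3, 10e3)]
```

Reprogramming the memristor with a high-voltage pulse moves the circuit from one gain to another without any mechanical trimming, which is the essence of the approach.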

Concerning reproducibility of memristive behavior, experiments with TiO2 thin films demonstrate a significant amount of noise in hysteresis curves. Possibly, the resistance change effect in colossal magnetoresistive thin films is more suitable for analog-mode memristor applications.

Memristive circuits simulate memcapacitors and meminductors by Yuriy V. Pershin and Massimiliano Di Ventra

Abstract—We suggest electronic circuits with memristors (resistors with memory) that operate as memcapacitors (capacitors with memory) and meminductors (inductors with memory). Using a memristor emulator, the suggested circuits have been built and their operation has been demonstrated, showing a useful and interesting connection between the three memory elements.

We have demonstrated that simple circuits with memristors can exhibit both memcapacitive and meminductive behavior. Memcapacitor and meminductor emulators
have been designed and built using the previously suggested memristor emulator since solid-state memristors are not available yet. These emulators can be created from inexpensive off-the-shelf components, and as such they provide powerful tools to understand the different functionalities of these newly suggested memory elements without the need of expensive material fabrication facilities. We thus expect they will be of use in diverse areas ranging from non-volatile memory applications to neuromorphic circuits.

Solid State Memcapacitor

Solid-state memcapacitor by J. Martinez, M. Di Ventra, Yu. V. Pershin (7 page pdf)

We suggest a possible realization of a solid-state memory capacitive (memcapacitive) system. Our approach relies on the slow polarization rate of a medium between plates of a regular capacitor. To achieve this goal, we consider a multi-layer structure embedded in a capacitor. The multi-layer structure is formed by metallic layers separated by an insulator so that non-linear electronic transport (tunneling) between the layers can occur. The suggested memcapacitor shows hysteretic charge-voltage and capacitance-voltage curves, and both negative and diverging capacitance within certain ranges of the field. This proposal can be easily realized experimentally, and indicates the possibility of information storage in memcapacitive devices.

Ionic Memcapacitive Effects in Nanopores by Matt Krems, Yuriy V. Pershin, Massimiliano Di Ventra

Using molecular dynamics simulations, we show that, when subject to a periodic external electric field, a nanopore in ionic solution acts as a capacitor with memory (memcapacitor) at various frequencies and strengths of the electric field. Most importantly, the hysteresis loop of this memcapacitor shows both negative and diverging capacitance as a function of the voltage. The origin of this effect stems from the slow polarizability of the ionic solution due to the finite mobility of ions in water. We develop a microscopic quantitative model which captures the main features we observe in the simulations and suggest experimental tests of our predictions. These effects may be important in both DNA sequencing proposals using nanopores and possibly in the dynamics of action potentials in neurons.

We have shown, using molecular dynamics simulations, that nanopores act as memcapacitors, namely capacitors with memory. The latter is due to the finite mobility of ions in water and hence the slow polarizability of ions compared to the pore. This phenomenon may potentially play a role in nanopore DNA sequencing proposals, especially those based on ac electric fields, as well as in other nanopore sensing applications. Moreover, the effect of the charge buildup on the nanopore surface may influence DNA translocation and its structure in proximity to the pore. Finally, due to the ubiquitous nature of nanopores in biological processes, these results may be relevant to specific ion dynamics when time-dependent fields are of importance, such as in the action potential formation and propagation during neuronal activity.

January 08, 2010

CES 2010 - Nvidia Tegra 2 Powering Tablets and 4G Products

1. Nvidia declared that 2010 is the year of the tablet at its CES 2010 press conference, and went on to launch its latest ARM based Tegra 2 platform.

The company wanted these tablets to have the performance of a PC, but have the energy efficiency of a cell phone. This is where the next generation of Nvidia's Tegra 2 comes in. It features a dual-core Cortex A9 processor—part of its eight independent processors, which also include a Geforce GPU. Nvidia claims Tegra 2 will have 10 times the performance of a smartphone, operating at only 500 milliwatts. So battery life will be far better than that of products based on Qualcomm's Snapdragon or Intel's Atom chips, according to Nvidia.

EETimes - The Tegra 2, said to be implemented in a 40-nm process technology, combines 3-D graphics, video processing, basic computing and communications including mobile phone voice. Mike Rayfield, general manager of the mobile business unit at Nvidia, demonstrated that a machine based on Tegra 2 can run 1080 progressive scan video while machines based on the Intel Atom or Qualcomm's Snapdragon cannot. The Tegra 2 can play 140 hours of music or 16 hours of video on one charge, Nvidia said.

2. On Sprint’s WiMax-based 4G network, the Overdrive, which is about the size of a drink coaster, will reliably deliver 3 to 4 Mbps of download bandwidth, Sprint executives say, with peak speeds as fast as 10 Mbps. Upload speeds will be slower, but could peak as fast as 4.5 Mbps.

3. LG Electronics is demonstrating Long Term Evolution (LTE) technology at its booth at CES 2010, dazzling attendees with download speeds of 100 Mbps.

Verizon will roll out Long Term Evolution (LTE) service to between 25 and 30 cities in 2010 and nationwide by 2013.

By connecting to the LG's LTE USB Modem, LG presented one of the world's first real-time demonstrations of full HD video files, video conferences and web-surfing at speeds up to 100Mbps for downloads and 50Mbps for uploads. To put this technology in perspective, imagine being able to download an entire movie to your mobile phone in only one minute. Demonstrating data downloads at such an incredible speed marked yet another technological milestone for LG.
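The "entire movie in one minute" claim is straightforward arithmetic (the 700 MB figure is an assumption, typical of a standard-definition movie file):

```python
def megabytes_transferred(seconds, mbps):
    """Data moved over a link in megabytes (8 bits per byte)."""
    return mbps / 8 * seconds

per_minute_mb = megabytes_transferred(60, 100)   # 750 MB in one minute at 100 Mbps
# A ~700 MB standard-definition movie therefore fits in under a minute;
# a multi-gigabyte HD film would still take only a handful of minutes.
```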

LG will also be unveiling to the public its "handover" technology, which enables seamless network conversion between LTE and CDMA networks. Using an even slimmer version of its previous 4G LTE device, an LG proprietary device designed for the handover last August, LG demonstrated seamless, uninterrupted data transmission between LTE and CDMA antennas, allowing for video file downloads, web surfing and internet calling.

4. About the size of a pack of Orbitz chewing gum, the Trendnet TEW-655BR3G has a USB port so that you can connect a compatible USB modem from 3G/4G Internet service providers such as Sprint, AT&T, or Verizon to it. All you need to do is plug the USB dongle into the port and you have a real Wireless-N network that connects to the Internet. Unlike the MiFi, the TEW-655BR3G can share the Internet connection with far more than just five people.

At 150 Mbps it still offers up to six times the speed of a Wireless-G solution and up to three times the range. This is speedy enough to share even the fastest cellular Internet connections available in the U.S., which currently max out at 7.2 Mbps.

5. CES 2010 Day one roundup

A Pessimistic View of China's Economic Future

Gordon G. Chang, author of The Coming Collapse of China, explains why he thinks China will not have a $123 trillion economy in 2040.

This relates to previous articles on this site and is directed at 1993 Nobel Prize-winning economist Robert Fogel's prediction that China will have a $123 trillion economy in 2040. (Note: others, such as Goldman Sachs, predict China will have a $57 trillion economy in 2040.)
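To put the competing forecasts in perspective, one can compute the growth rates they imply. The 2010 baseline GDP used here is an illustrative assumption of roughly $5 trillion; a different baseline shifts the numbers only modestly:

```python
def implied_cagr(start, end, years):
    """Compound annual growth rate implied by start value, end value, horizon."""
    return (end / start) ** (1 / years) - 1

BASELINE_2010 = 5.0  # trillions of dollars, assumed for illustration

fogel_rate = implied_cagr(BASELINE_2010, 123, 30)    # roughly 11% per year, sustained
goldman_rate = implied_cagr(BASELINE_2010, 57, 30)   # roughly 8% per year
```

Fogel's figure thus requires three decades of double-digit average growth, which is the crux of the skeptics' objection.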

First, he neglects to mention that China’s educational system, despite all the money it receives, remains inappropriate for a modern society.

Second, Fogel is right to note that migration of labor to cities has been the engine of Chinese growth, but that process has stalled in the global economic downturn. Yes, China still has cheap labor, but not mentioned in the article are the generally accepted projections that the labor force will level off in a half decade and then shrink.

Third, it’s true that Beijing’s National Bureau of Statistics does not fully account for the output of the fast-growing service sector. That’s why its estimate of 13.0% growth for 2007 is low by about two percentage points. At that time, small businesses were the most vibrant part of the economy. Today, the failure to properly assess the output of small business is resulting in an overestimation of GDP because these enterprises, which tend to be more dependent on exports, are suffering more than the larger ones.

Fourth, Fogel’s views of the political system are questionable. He neglects to say that Hu Jintao has presided over a seven-year crackdown and that the Communist Party tolerates less criticism today than it did two decades ago. Economic reform has stalled because China has progressed about as far as it can within its existing political framework. A true market economy, for example, requires the rule of law, which in turn requires “institutional curbs” on government.

Fifth, Fogel apparently knows almost nothing about Chinese consumer spending. Historically, consumption contributed about 60% of China’s economic output. Today, it accounts for about 30%--and that number is going lower. Why? Beijing’s stimulus spending, about $1.1 trillion last year, is devoted almost entirely to building infrastructure and industrial capacity. As a result, the role of consumer spending is decreasing.

Nicholas Consonery, a China analyst at Eurasia Group, also has a more negative view of China's future economy.

The most important reason why we won't see 1.4 billion Chinese earning an average of $85,000 per year is simply that the Earth can't sustain such rapid growth. The biggest environmental challenge for China's leadership will be in securing enough water to keep the economy afloat.

Fogel warns that Europe faces some serious demographic challenges. True enough, but as Fogel only briefly acknowledges, China has an aging population of its own to reckon with. Chinese statistics show that the country's birthrate fell 42 percent from 1990 to 2007, and government projections suggest that by 2025, nearly a quarter of China's population will have celebrated its 60th birthday.

China can rely less on cars and planes and more on high speed rail.

Also, advanced electric bikes will also be part of China's transportation solutions.

China is developing cleaner energy with a plan that is compatible with high growth

China is building grand canals to address water shortages and is also building up desalination capabilities.

China is now ranked second in scientific research output.

An annual report by the data analysis firm Evidence, published today by the Department for Business, Innovation and Skills, shows that China has moved into second place after the US in a ranking of nations by their research output.

Although the UK published 91,273 papers in 2008 – an average of 2.3 per researcher and up more than 11,000 on 2007 – it was not enough to keep pace with the most populous country in the world, which has experienced a four-fold rise in its output over the past decade.

China produced more than 110,000 papers in 2008 – an increase of about 30,000 on the 2007 figure.

The Evidence data show that the UK was responsible for 7.9 per cent of the world’s research papers in 2008, down from an average of 8.5 per cent over the past five years. The US retained its lead, although its world share has also dropped, from 34 to 29.5 per cent over the period.

The report notes an “exceptional” global increase in the number of papers published this year, driven largely by China, Brazil, India and Iran.

Despite the drop in its share of publications, the UK’s share of the world’s citations – formal references of papers by fellow academics – increased. It rose from an average of 11.2 per cent over the past five years to 11.8 per cent in 2008, putting the UK in second place after the US.


Economist Bill Conerly projects China passing the US economy in 2018

The Most Important Nanotechnology Blog Post of Eric Drexler

Eric Drexler has made what he identifies as his most important blog post, and it is on the topic of nanotechnology.

Summarizes His Reaction and Interpretation of the NRC Report

Eric Drexler gives his official reaction and interpretation of the 2006 report, A Matter of Size: Triennial Review of the National Nanotechnology Initiative.

The committee examined the concept of advanced molecular manufacturing, and found that the analysis of its physical principles is based on accepted scientific knowledge, and that it addresses the major technical questions. However, in the committee’s view, theoretical calculations are insufficient: Only experimental research can reliably answer the critical questions and move the technology toward implementation. Research in this direction deserves support.

Note that the tone of the report is skeptical. I would expect this tone to strongly influence the impression left on casual readers, blunting the impact of what, in substance, amounts to a sharp rebuke to the conventional wisdom.

Highlights the Call for Research

The report closes with a call for research on pathways toward molecular manufacturing, quoted above, and an earlier section outlines some appropriate objectives.

Eric Drexler Notes The Responses and His View of Relevant Progress

Structural DNA nanotechnology - “DNA origami”. This technology opened the door to systematic, atomically precise engineering on a scale of hundreds of nanometers and millions of atoms.

Polypeptide foldamer nanotechnology - There’s also been rapid progress in design methodologies for complex, atomically precise nanoscale structures made from polypeptide foldamers (aka proteins). In recent years, protein engineering has achieved a functional milestone: systematically engineering devices that perform controlled molecular transformations

Framework-directed assembly of composite systems - Looking forward, promising next steps involve integrating structural DNA frameworks with polypeptide foldamers, other foldamers, and other organic and inorganic materials.

These capabilities could be exploited to pursue a spiral of improvement in materials, components, and molecular machine systems.

Each generation of tools can be expected to enable fabrication processes and products that are more robust, more susceptible to computational simulation, and better suited to established systems engineering design methodologies. This indicates the potential for an accelerating pace of development toward a technology platform that can support the implementation of high-throughput atomically precise fabrication.

This path is being followed today, but without coordination, and without a sense of mission and urgency that would reflect its potential to provide solutions to long-term yet urgent problems.

New Terahertz Detectors and Light Sources in Japan Could Open Practical Applications of Terahertz Waves

There are high expectations for the application of terahertz-frequency electromagnetic waves in various fields, including the non-destructive detection of narcotics or stimulants in mail, the identification of foreign matter in food, and investigation of residual chemicals in crops. However, terahertz waves have yet to be used widely because of the difficulty in generating and detecting them. For this reason, terahertz waves are considered to be ‘unexplored’ waves. RIKEN’s Tera-photonics Team has been developing a terahertz light source and detector, and an associated database, to open the way for the application of terahertz waves.

Terahertz wave technology has boundless applications because terahertz waves have the properties of both light and radio waves.

Terahertz waves are a form of electromagnetic wave, like gamma-rays, X-rays, ultraviolet light, visible light, infrared light and radio waves (Fig. 1). On the frequency spectrum, terahertz waves (0.1–100 THz) fall between infrared light and radio waves. However, this part of the electromagnetic spectrum has been largely ignored. “Terahertz waves have hardly been used because of the difficulty in both generating and detecting them,” says Hiroaki Minamide, deputy team leader of the Tera-photonics Team.
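The band's position between infrared light and radio waves is easy to see in wavelength terms:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_thz):
    """Free-space wavelength in millimeters for a frequency in terahertz."""
    return C / (freq_thz * 1e12) * 1e3

# The 0.1-100 THz band spans wavelengths from ~3 mm (radio-like)
# down to ~3 micrometers (infrared-like):
low_edge = wavelength_mm(0.1)    # ~3 mm
high_edge = wavelength_mm(100)   # ~0.003 mm, i.e. ~3 micrometers
```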

They have successfully generated terahertz waves of 1–3 THz and are completing work on 1–40 THz light sources and detectors.

The development of the light source is in its second stage. The goal is now to develop a terahertz light source that can generate any frequency in the range from 0.1 to 100 THz, the entire terahertz range. At this stage, one of the key points is what non-linear optical crystal to use. Minamide is currently using an organic non-linear optical crystal called 4-(4-dimethylaminostyryl)-1-methylpyridinium tosylate, or ‘DAST’. Compared with inorganic non-linear optical crystals such as lithium niobate, the organic DAST crystal offers higher conversion efficiency from incident excitation light to terahertz waves. It was Ito who selected the crystal. “DAST is a non-linear optical material invented by Professor Hachiro Nakanishi of Tohoku University,” says Minamide. “‘DAST could be used to generate terahertz waves,’ said Dr Ito, who also worked as a professor at Tohoku University. This intuition was surely based on his wealth of experience. Thus, we also developed a technique for growing large, practical crystals from small pieces of crystal.”

The new light source can generate any terahertz wave in a frequency range from 1 to 40 THz, and the frequency can be changed in as little as one millisecond.

One future challenge is to generate terahertz waves below 1 THz and above 40 THz. Why pursue a wider frequency range? “Terahertz waves offer great potential in various applications, but in fact we do not know which frequency is suitable for each field. We may miss important applications in which terahertz waves might have been the best choice if our light source provides only a limited range of frequencies. We want to develop a dream light source that can cover all frequencies in the terahertz range.”

Another reason why terahertz waves have not been developed is that they are difficult to detect. “Even if we develop a dream light source, the application of terahertz waves will not proceed without a user-friendly detector. Thus, we are striving to develop a broadband terahertz detector to accompany the dream light source.”

“We are now developing a detector that can detect terahertz waves indirectly by detecting the light generated when terahertz waves enter the DAST crystal. We want to complete the set, a dream light source and detector, within several years: a table-top, compact terahertz system.”

Terahertz Substance Fingerprint Database

Minamide believes that a ‘fingerprint spectrum’ is essential for the application of terahertz waves. Some substances may transmit incident terahertz waves, whereas others may absorb them. The frequency components that a substance absorbs are unique to the individual substance. Thus, individual substances could be identified by referring to a set of absorption spectra that indicates which substances absorb particular frequencies. Such an absorption spectrum is called a ‘fingerprint spectrum’ because it can be compared to the fingerprints used to identify individuals.

Contractual Incentives and Penalties to Motivate Oil Companies to Increase Iraqi Oil Production to 12 million barrels per day

Stuart Staniford chart on Iraq Oil

Jay Park describes why he thinks Iraq can achieve 12 million barrels per day in 6-7 years. This was a comment on an article (crossposted to The Oil Drum) by Stuart Staniford of the Early Warning blog.

Jay Park is a Partner and Chair of the Global Resources Practice Group at Macleod Dixon, where he has practiced oil and gas law since 1980. Based in Calgary, Jay leads a team of international energy lawyers.

He is the instructor of the "International Petroleum Transactions" course at the Faculty of Law of the University of Calgary. He co-instructs the five day training course, "World Legal Systems and Contracts for Oil & Gas", which is held semi-annually in London. He also co-instructs the five day courses, "Global Gas Transportation and Marketing" and "International Petroleum Joint Ventures", which are presented annually in London and other locations.

Stuart Staniford has a lot of excellent information on Iraqi reserves by province.

Stuart's Iraq Oil analysis on his site

Stuart's article on how long it takes to ramp up mega oil projects

Jay Park's Analysis of the Likely Development of Iraq Oil

I [Jay Park] have been involved for a number of years with the Iraqi oil industry, and I am familiar with the Technical Service Contracts (TSCs) which were awarded in the First and Second Petroleum Licensing Rounds by the Petroleum Contracts and Licensing Division (PCLD) of the Iraq Ministry of Oil (MoO). I have met Dr. Al-Shahristani and many of the other MoO executives. Consequently, some of what I know can shed light on the opinions and comments above.

It seems to me that the possibility that Iraq may actually succeed in doing this should be taken seriously.

Let me explain why I agree with this sentiment. In 2004 and the years that followed, MoO entered into a number of "Memoranda of Understanding" with various major international oil companies (IOCs) to study the discovered Iraqi fields, both producing and non-producing, and share this information with MoO. Extensive analysis work was done by the IOCs, in the hopes that the work would lead to an award of a contract for the fields, or at least, the knowledge gained would give an upper hand in a bid process. Neither proved to be the case; all contracts have been awarded by bidding, and all information was shared with prospective bidders. The consequence is that all IOCs went into the bid process with good knowledge of the fields.

The Technical Service Contracts impose an obligation on the IOC (which becomes a "Contractor" for the relevant Iraqi regional oil company, such as the South Oil Company or the North Oil Company) to increase production to the Plateau Production Target (PPT). This must be done within 6 years (for First Round fields) or 7 years (for Second Round fields). The PPT must be maintained for 7 years.

The Plateau Production Target was one of two factors which the IOCs bid during the rounds. The second bid factor was the Remuneration Fee, expressed in dollars per barrel. The winning bid was determined using a formula involving (in the First Round) the product of the production target and the remuneration fee, or (in the Second Round) a point system that put 80% of the weight on the Remuneration Fee.

In either case, there was a tremendous incentive for the bidding IOCs to propose a VERY high Plateau Production Target. It has been said that MoO was amazed at the PPTs that were bid. MoO had hoped to get commitments for 6 million bbl/day of production; instead, they got 12 million bbl/day, even though not all of the fields were awarded.
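The incentive is easy to see in a toy version of the Second Round point system. The normalization below is hypothetical (the text states only that 80% of the weight fell on the Remuneration Fee), and both bids are invented: promising a higher plateau target costs a bidder nothing up front, so targets get inflated even though the fee dominates the score.

```python
def second_round_score(ppt, fee, best_ppt, best_fee, fee_weight=0.8):
    """Toy bid score: 80% weight on a (low) fee, 20% on a (high) plateau target.

    Normalizing each axis against the best bid is a guess at the real
    formula, which is not spelled out here.
    """
    return fee_weight * (best_fee / fee) + (1 - fee_weight) * (ppt / best_ppt)

# Two hypothetical bids for the same field (bbl/day, $/bbl):
bid_a = {"ppt": 2_850_000, "fee": 1.90}
bid_b = {"ppt": 1_500_000, "fee": 1.50}
best_ppt = max(bid_a["ppt"], bid_b["ppt"])
best_fee = min(bid_a["fee"], bid_b["fee"])

score_a = second_round_score(bid_a["ppt"], bid_a["fee"], best_ppt, best_fee)
score_b = second_round_score(bid_b["ppt"], bid_b["fee"], best_ppt, best_fee)
# The lower fee wins despite the much lower target; yet raising the target is
# free at bid time, so every bidder still promises the highest PPT it can.
```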

Can these production rates actually be achieved in Iraq? On the 'yes' side of this case are the following arguments:

1. The IOCs had good information about these fields
2. The Contractor's remuneration fee is based on a per-barrel fee which creates an economic incentive to achieve the PPT
3. The Contractors have a contractual obligation under the TSCs to reach the PPT. If they fail to do so, there are non-performance penalties under the TSC that grind down the already-modest remuneration fees, and other possible consequences

I don't make it my business to bet against some of the world's most capable companies achieving objectives that they are contractually bound to perform, that carry economic incentives encouraging such performance, and that they voluntarily set for themselves with all the relevant information they needed.

The following are reasons why these production levels may not be achieved:

1. Iraq may choose to comply with an OPEC quota at less than 12 million bbl/day. The TSCs expressly permit MoO to take less than the PPT. This triggers certain other consequences under the TSC to protect the Contractor's interest (such as relief from the penalties associated with failing to achieve the PPT, and the right to extend the contract term so that the expected total remuneration fees can ultimately be earned at lower production rates). There is now an active debate in Iraq regarding what might happen with its OPEC quota. Some Iraqis think that OPEC will give Iraq a generous quota in recognition that it has underproduced for more than a decade. Personally, I think that is an unrealistic expectation-- I don't see Hugo Chavez cutting back Venezuelan production rates to compensate Iraq for problems of its own making. Other Iraqis think that they will quit OPEC if they don't get all the quota they need; but others point to the fact that Iraq was one of OPEC's founders, so quitting will not be a decision to be taken lightly.

2. While IOCs are very good at achieving their committed goals, the TSCs (particularly for the First Round fields) give them quite limited control over ensuring that operations are successful. It is up to MoO to develop the transportation and export infrastructure to take away all the produced oil, and MoO's performance record since 2003 in increasing Iraqi production is less than stellar.

3. Security issues in the fields or attacks on pipelines may prevent the Contractors from being able to fulfill the PPT.

In a presentation I heard from Mr. Thamir Ghadhban, a former Iraqi oil minister and now an oil advisor to the Iraqi Prime Minister Maliki, he doubted that 12 million barrels/day could be achieved. He believes that the IOCs bid too high, just to get the contracts. However, others have suggested to me that a really good oil field can be very forgiving-- and make no mistake, these are some of the world's best oilfields. Kirkuk has been producing since the 1930s, and shows no signs of stopping.

Indeed, production capability could conceivably go over 12 million bbl/day, once the Kirkuk field contract is negotiated (probably with Shell), and if Kurdistan region production is added. The Kurdistan Regional Government's Minister of Natural Resources, Dr. Ashti Hawrami, predicts that there could be 1 million bbl/day from Kurdistan within the decade. In my view, it is only a matter of time before there is resolution of the political wrangling that prevents Kurdistan production from being exported (I can explain my reasoning for this in another post if anyone cares).

Also, the First and Second Bid Rounds were dealing only with discovered fields. There are 430 geological anomalies in Iraq; only 130 have been drilled, with a 70% success ratio. There is bound to be some oil in the 300 or so that haven't yet felt a drill bit.

The remuneration fee is the 'profit' to the Contractor. And it is smaller than many people realize. The table in Stuart's post that lists the fields and the remuneration fee shows the gross fee. There is an Iraqi state partner in the Contractor consortium who gets 25% of that remuneration fee, and then there is income tax of 35% on the remainder. So the $2.00 per barrel fee that BP and CNPC agreed to receive for Rumailah becomes only $0.97 after those deductions. At $80 oil, that is 98.7% government take-- a new world high.
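As a quick check of that arithmetic, here is a minimal sketch (figures are from the text; the deduction order, state partner share first and then income tax on the remainder, is as described above):

```python
# Worked check of the Rumailah remuneration fee arithmetic.
gross_fee = 2.00                           # $/bbl gross fee (BP/CNPC, Rumailah)
after_partner = gross_fee * (1 - 0.25)     # Iraqi state partner takes 25%
net_fee = after_partner * (1 - 0.35)       # then 35% income tax on the remainder

oil_price = 80.0                           # $/bbl
government_take = 1 - net_fee / oil_price  # contractor keeps under $1 per barrel

print(f"net fee: ${net_fee:.3f}/bbl")
print(f"government take at $80 oil: {government_take:.1%}")
```

The net fee works out to about $0.97-0.98 per barrel, and the implied government take at $80 oil is roughly 98.7-98.8%, matching the figures above.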

Please remember that Iraq's situation is unique. In 2003, they had six discovered fields with over 5 billion barrels of proven reserves each-- and only three of them were producing. They had 21 discovered fields with between 500 million and 5 billion barrels of proven reserves, and only nine of them were producing. And they had 35 fields with less than 500 million barrels of proven reserves, none of which were producing. It is this significant discovered but non-producing capacity that is the source of the potentially large increase in production. This is not comparable to the development profile of other basins, because no other country has ever kept so many fields offline for so long.

Avatar International Box Office Breakdown

The Numbers has a breakdown by country of the box office for the movie Avatar.

Japan: $42,833,478 (as of 1/6/2010)
China: $9,703,926 (as of 1/5/2010)

Japan has had movies that grossed $170+ million. China had 2012 with $70 million.

So several countries are likely to more than double the current box office.

The prediction made here that Avatar will be the new worldwide number one (non-inflation-adjusted) box office champion looks like a pretty safe prediction.

Avatar has reached $67 million in IMAX box office in 17 days

"Avatar continues to break IMAX records, and the incredible legs and word of mouth on this film point to continued momentum in the weeks ahead," added Greg Foster, Chairman and President of IMAX Filmed Entertainment. "These results demonstrate how James Cameron's vision combined with The IMAX Experience can bring in new audiences, which is a great way to kick-off 2010. We look forward to the excitement continuing as Avatar opens on 11 more IMAX screens in China this week."

As of September 30, 2009, there were 403 IMAX theatres (280 commercial, 123 institutional) operating in 44 countries.

Avatar continued its record-breaking box office run over its third weekend, grossing approximately $8.7 million from 179 IMAX(R) theatres domestically from January 1 through January 3, 2010. That was more than 12% of the film's total domestic gross of $68.3 million during that period, on less than 3% of the screens, driving the domestic IMAX box office to date to approximately $47.1 million.

The worldwide IMAX box office total for Avatar is estimated to be $67 million as of the end of day Sunday, January 3, 2010.

The Dark Knight made about $60-70 million on IMAX. Avatar should make $200+ million from its IMAX showings alone.

University of Central Florida Alzheimer's Discovery Could Lead to Long-sought Preventive Treatment

A new discovery by University of Central Florida researchers has revealed a previously unknown mechanism that may drive the early brain function deterioration of Alzheimer's victims, thus opening a new exploratory path in the quest for an Alzheimer's cure.

The research, published in the science and medicine journal PLoS ONE, also demonstrates how the unique application of an existing cell research technique could accelerate the discovery of treatments to exploit the new findings.

Most Alzheimer's studies have focused on brain cells already damaged by amyloid-beta or the effects of high concentration of amyloid-beta. The University of Central Florida team, led by James Hickman, head of the UCF NanoScience Technology Center's Hybrid Systems Laboratory, instead explored impacts of very low amyloid-beta concentrations on healthy cells in an effort to mimic the earlier stages of Alzheimer's. The results were shocking.

The UCF team found that over time, though there are no outward signs of damage, exposure to moderate amyloid-beta concentrations somehow prevents electrical signals from traveling normally through the cells. Because the effect is seen in otherwise healthy cells, Hickman believes the team may have uncovered a critical process in the progression of Alzheimer's that could occur before a person shows any known signs of brain impairment.

"What we're claiming is that before you have any behavioral clues, these electrical transmission problems may be occurring," he says.

If this proves true, then the team has opened a promising potential path to an Alzheimer's treatment that could block the onset of the mild cognitive impairment associated with early Alzheimer's. In contrast, all currently available treatments manage symptoms of Alzheimer's after they first appear -- when it is likely too late for prevention.

Kucku Varghese, a former graduate student in the Hickman lab now at the University of Florida, first demonstrated amyloid-beta's effects at low concentrations on healthy cells using a common cell research method that is laborious and unsuitable for long-term experiments. But the Hickman team quickly moved to more advanced experiments using microelectrode arrays (MEA) to study the new finding. MEA studies use cultures of neurons on plates embedded with tiny electrodes that can send and measure electrical signals through nearby cells without damaging them, allowing extended experimentation.

Hickman hopes to use MEAs and other tools to pinpoint the physiological and chemical changes within the brain cells that cause the loss of signal generation in healthy cells. Mechanisms responsible for the changes could offer potential targets for drugs, which pharmaceutical companies could search for using the MEA techniques demonstrated, and the mechanisms might provide a measurable target for early diagnosis of Alzheimer's.

"We're trying to find a marker that will lead to detection and treatment while slowing down Alzheimer's progression and can really make a difference by delaying or even preventing onset of the disease," says Hickman.


1. In the early stages of Alzheimer's disease, patients typically suffer a major loss of the brain connections necessary for memory and information processing. Now, a combination of nutrients that was developed at MIT has shown the potential to improve memory in Alzheimer's patients by stimulating growth of new brain connections.

In a clinical trial of 225 Alzheimer's patients, researchers found that a cocktail of three naturally occurring nutrients believed to promote growth of those connections, known as synapses, plus other ingredients (B vitamins, phospholipids and antioxidants), improved verbal memory in patients with mild Alzheimer's.

2. But some are disagreeing with the MIT nutrient study -
Anti-Alzheimer's 'Cocktail' Meets With Disdain as Alzheimer's Experts Worry New Study May Mislead Consumers

3. The University of South Florida genetically altered 96 mice to develop Alzheimer's disease. They then flooded them with the same electromagnetic waves generated by US mobile phones.

In older mice with Alzheimer's, brain deposits of beta-amyloid (the protein fragment that accumulates in the brains of Alzheimer's sufferers to form the disease's signature plaques) were erased. They apparently remembered things a bit better too.

Young adult mice with no apparent signs of memory impairment were protected against Alzheimer's disease after several months of exposure to the mobile phone waves, the study showed.

The memory benefits of phone exposure took months to show up, suggesting that a similar effect in humans would take years.

The researchers conclude that electromagnetic field exposure could be an effective, non-invasive and drug-free way to prevent and treat Alzheimer's disease in humans.

Chuanhai Cao, another author of the study, said: "Since production and aggregation of beta-amyloid occurs in traumatic brain injury, particularly in soldiers during war, the therapeutic impact of our findings may extend beyond Alzheimer's disease."

4. Study Shows Diffusion Tensor Imaging May Help Identify Early Alzheimer's Disease

A new imaging technique that measures the random motion of water within the brain may prove useful for detecting early signs of Alzheimer's disease.

The technique, known as diffusion tensor imaging (DTI) or diffusion MRI, is used to assess changes in the white matter regions of the brain.

But it is increasingly clear that DTI can also be used to identify very small structural changes in the gray matter of the brain, which is critical for learning and memory, researcher Giovanni A. Carlesimo, MD, PhD, of Italy's Tor Vergata University tells WebMD.

In a study published in the Jan. 19 issue of Neurology, Carlesimo and colleagues found that DTI scanning predicted declines in memory performance with more accuracy than traditional MRI.

"This type of brain scan appears to be a better way to measure how healthy the brain is in people who are experiencing memory loss," Carlesimo says in a news release. "This might help doctors when trying to differentiate between normal aging and diseases like Alzheimer's."

Freitas Food Replacement Nanobot

Michael Anissimov at Accelerating Future has gotten more details on a Robert Freitas design of a food replacement nanobot.

The 148Gd (gadolinium) power source proposal was described in NMI (1999) at http://www.nanomedicine.com/NMI/ The semiconductor shell structure crudely illustrated in Fig. 6.7 is intended to be an atomically precise structure. The radioactive 148Gd is kept permanently encapsulated while inside the body. The minimum radius for this powerplant is on the order of ~11 microns, so it is clearly intended for fixed-site multi-nodal (not bloodborne) use.

I haven’t yet published any detailed scaling studies specifically describing dietary-related nanorobotic systems. These proposals now exist only in rough form in my long (across 2 decades!) accumulated notes for Chapter 26 in Vol. III of my Nanomedicine book series. I hope to find time to publish NMIII sometime in this decade.

The mass of the alpha-particle is ~7000 times greater than that of an electron, so the velocity, and hence the range, of alpha-particles in matter is considerably less than for beta-particles of equal energy. Consequently the optimum radionuclide for medical nanorobots is predominantly an alpha emitter.
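To make that scaling concrete, a small sketch (the ~7000 mass ratio is from the text; the non-relativistic kinetic energy relation E = mv^2/2 is my simplification, since MeV-scale beta particles are actually relativistic, which only widens the gap):

```python
import math

# For equal non-relativistic kinetic energy E = m*v^2 / 2, velocity scales
# as sqrt(E/m), so a particle ~7000x more massive moves ~sqrt(7000)x slower.
# Slower, heavier particles deposit their energy over a much shorter range.
mass_ratio = 7000.0
velocity_ratio = math.sqrt(mass_ratio)

print(f"an alpha particle is ~{velocity_ratio:.0f}x slower "
      f"than an electron of equal kinetic energy")
```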

Among all gamma-free alpha-only emitters with t1/2 > 10^6 sec, the highest volumetric power density is available using Gd148 (gadolinium), which alpha-decays directly to Sm144 (samarium), a stable rare-earth isotope. A solid sphere of pure Gd148 (~7900 kg/m3) of radius r = 95 microns, surrounded by a 5-micron thick platinum shield (total device radius R = 100 microns) and a thin polished silver coating of emissivity er = 0.02, suspended in vacuo would initially maintain a constant temperature T2 (far from a surface held at T1 = 310 K).

Gd148 has a 75-year half-life, initially generating 17 microwatts of thermal power which can be converted to 8 microwatts of mechanical power by a Stirling engine operating at ~50% efficiency. (Smaller spheres of Gd148 run cooler.) While probably too large for most individual nanorobot designs, such spheres could be an ideal long-term energy source for a swallowable or implantable "power pill" (Chapter 26) or dedicated energy organ (Section 6.4.4). A ~0.2 kg block of pure Gd148 (~1 inch^3) initially yields ~120 watts, sufficient in theory to meet the complete basal power needs of an entire human body for ~1 century (given suitable nucleochemical energy conversion and load buffering mechanisms, and a sufficiently well-divided structure).
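As an illustration of those decay figures, a short sketch using the standard radioactive-decay law (the ~120 W initial output and 75-year half-life are from the text; the function name is mine):

```python
# Power output of a radioisotope source falls exponentially with time:
# P(t) = P0 * 2^(-t / half_life).
def power_watts(t_years, p0=120.0, half_life=75.0):
    """Thermal output of the ~0.2 kg Gd148 block after t_years of decay."""
    return p0 * 0.5 ** (t_years / half_life)

print(power_watts(0))    # 120.0 W initially
print(power_watts(75))   # 60.0 W after one half-life
print(power_watts(100))  # ~47.6 W after a century
```

Even after a full century the block still delivers tens of watts, which is why the text qualifies the "~1 century" claim with the need for load buffering mechanisms.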

Michael highlights the cost factor:

in 1998 gadolinium cost about a dollar per two cubic microns(!) This is expensive stuff. The number of nanobots that might be used would be on the order of a billion, each with a cubic micron-sized power core, though 11 microns across due to shielding. Given the 1998 cost of Gd148, a full system would cost about $500 million for the fuel alone.
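The quoted cost estimate is easy to reproduce; a minimal sketch using the figures in the quote:

```python
# ~$1 per 2 cubic microns of gadolinium (1998 price), a billion nanobots,
# each with a ~1 cubic-micron power core.
cost_per_cubic_micron = 1.0 / 2.0
n_nanobots = 1e9
core_volume_um3 = 1.0  # cubic microns per nanobot power core

fuel_cost = n_nanobots * core_volume_um3 * cost_per_cubic_micron
print(f"fuel cost: ${fuel_cost:,.0f}")  # $500,000,000
```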

Japanese Researchers Seeking to Print Out Li-polymer Battery

A Japanese research group has developed a lithium polymer battery that can be manufactured by printing technology.

The sheet-shaped battery is expected to be used with a flexible solar battery or display and to be attached to a curved surface. If the battery is integrated with a solar battery formed on a flexible substrate, it is possible to realize a sheet that can be used both as a power generator and a power storage, AMIC said.

Because the battery is made by using printing technology, it can be reduced in thickness, increased in area and laminated. Furthermore, when combined with a roll-to-roll production method, its costs can be reduced, AMIC said.

They prototyped two types of batteries. One has an output voltage of about 4V at room temperature while the other has an output voltage of about 2V. The thickness of the battery is about 500μm, but the battery capacity was not disclosed. Its negative and positive electrodes were formed on a flexible substrate by using printing technology.

This time, the research group used a normal sheet-shaped flexible substrate but employed a printing technology that can be applied to roll-to-roll production, it said. When a roll-to-roll production method is used, the thickness of the flexible substrate can be reduced, making it possible to manufacture thinner batteries.

The group did not use printing technology to package the polymer electrolyte this time. It did not disclose the details of the polymer electrolyte or of the negative or positive electrode materials.

The research project is a three-year project that will end in March 2011. In the final year, the research group plans to improve manufacturing technologies for commercial production, seek appropriate applications of the battery and set numerical targets such as battery capacity.

Japan Riken Makes Progress Towards Quantum Simulators

Figure 1: Schematic diagrams of three types of quantum simulators: atoms (red) held in place by an optical field (green; top left); ions (yellow) aligned using an electromagnetic field (top right); and superconducting circuits (bottom).

Iulia Buluta and Franco Nori of the RIKEN Advanced Science Institute present an overview of how quantum simulators may become a reality in the near future. They pinpoint future directions and argue that the technologies are now within reach.

Science - Quantum Simulators

Quantum simulators are controllable quantum systems that can be used to simulate other quantum systems. Being able to tackle problems that are intractable on classical computers, quantum simulators would provide a means of exploring new physical phenomena. We present an overview of how quantum simulators may become a reality in the near future as the required technologies are now within reach. Quantum simulators, relying on the coherent control of neutral atoms, ions, photons, or electrons, would allow studying problems in various fields including condensed-matter physics, high-energy physics, cosmology, atomic physics, and quantum chemistry.

Nanowerk has coverage

Among the various physical systems that could be used to build a quantum simulator, one possibility is the use of regular arrays of atoms or ions that are held in place by laser fields. According to Buluta and Nori, the interactions between these atoms provide a good model for emulating the interaction between other particles in complex systems. To model electrical conductivity, for example, this type of quantum simulator can be used to study the transition from the insulating state to the conducting state, where the atoms switch from being fixed to being free to move.

Buluta and Nori also point out that electronic devices fabricated on a computer chip could be used as a controllable quantum system. In this system, small circuits made from superconducting wires possess quantum physical properties that could be used to model atomic physics problems.

These quantum systems have been demonstrated experimentally; however, challenges remain before more advanced and versatile quantum simulators can be built. Synchronizing the operation of a large number of components, for example, has not yet been achieved, Buluta notes. From a theoretical viewpoint, she says that much also needs to be learned about meaningfully programming quantum simulators.

7 page pdf with supplemental material

Universal Quantum Simulators at wikipedia

Abstract of the Seth Lloyd article on Universal Quantum Simulators and links to articles that cite it

January 07, 2010

Electronic Liquid Crystal States Discovery Hints at Common Mechanism for High-Temperature Superconductivity in Two Families

Schematic drawing of the surface reconstruction in Ca(CoxFe1-x)2As2. The circles indicate the As position, the gray lines indicate the surface reconstruction lines seen in the topographic images. For better visibility, we don't include possible surface dimerization (31) in this drawing. The black rectangle marks the orthorhombic unit cell with lattice parameters as reported by Ref. 32. The inset illustrates how the angle of the surface reconstruction changes relative to the Fe-Fe lattice from the tetragonal (right) to the orthorhombic phase (left). From Ref. 32, we calculate an angle of 90.5° or 89.5°, depending on the orientation of the a- and b-axis. This angle is exaggerated in the drawing for better visibility.

An international team led by scientists at the U.S. Department of Energy’s (DOE) Center for Emergent Superconductivity, an Energy Frontier Research Center headquartered at DOE’s Brookhaven National Laboratory, has discovered evidence for ‘electronic liquid crystal’ states within the parent compound of one type of iron-based, high-temperature (high-Tc) superconductor.

“Because these findings appear similar to what we have observed in the parent state of cuprate superconductors, it suggests this could represent a common factor in the mechanism for high-Tc superconductivity in these two otherwise very different families of materials,” said team leader Séamus Davis, Director of the Center for Emergent Superconductivity at Brookhaven and the J.G. White Distinguished Professor of Physical Sciences at Cornell University. The team's findings may help elucidate that long-sought mechanism and lead to higher-temperature superconductors.

An important breakthrough was the capability demonstrated by the team to achieve atomically flat and perfectly debris-free surfaces for these studies. Without these conditions the spectroscopic imaging STM techniques cannot be applied. But as soon as the first large-scale images of the electronic arrangements were achieved, it became clear to the team that they were onto something very different than expected.

The scientists observed static, nanoscale arrangements of electrons measuring about eight times the distance between individual iron atoms, all aligned along one crystal axis reminiscent of the way molecules spatially order in a liquid crystal display. They also found that the electrons that are free to travel through the material do so in a direction perpendicular to these aligned ‘electronic liquid crystal’ states. This indicates that the electrons carrying the current are distinct from those apparently aligned in the electronic liquid crystals.

The next step will be to see how these conditions affect the superconductivity of the material when it is transformed to a superconductor.

“Then, if we’re able to relate our observations in the iron-based superconductors to what happens in cuprate superconductors, it may help us understand the overall mechanism for high-Tc superconductivity in all of these materials. That understanding could, in turn, help us to engineer new materials with improved superconducting properties for energy applications,” Davis said.

Science - Nematic Electronic Structure in the "Parent" State of the Iron-Based Superconductor Ca(Fe1–xCox)2As2

The mechanism of high-temperature superconductivity in the newly discovered iron-based superconductors is unresolved. We use spectroscopic imaging–scanning tunneling microscopy to study the electronic structure of a representative compound CaFe1.94Co0.06As2 in the "parent" state from which this superconductivity emerges. Static, unidirectional electronic nanostructures of dimension eight times the inter–iron-atom distance aFe-Fe and aligned along the crystal a axis are observed. In contrast, the delocalized electronic states detectable by quasiparticle interference imaging are dispersive along the b axis only and are consistent with a nematic α2 band with an apparent band folding having wave vector along the a axis. All these effects rotate through 90 degrees at orthorhombic twin boundaries, indicating that they are bulk properties. As none of these phenomena are expected merely due to crystal symmetry, underdoped ferropnictides may exhibit a more complex electronic nematic state than originally expected.

13 page pdf with supplemental information

Skiff E-Reader and Amazon Kindle DX

1. The Skiff Reader, the first e-reader to integrate the upcoming Skiff Service, is a state-of-the-art device that is simple and easy to use.

It features the largest and highest-resolution electronic-paper display yet unveiled in a consumer device, at 11.5" in size (measured diagonally) and a resolution of 1200 x 1600 pixels (UXGA). Skiff has signed a multi-year agreement with Sprint (NYSE:S) to provide 3G connectivity for Skiff’s dedicated e-reading devices in the United States. Plans are underway to have the Skiff Reader available for purchase later this year in more than 1,000 Sprint retail locations across the U.S., as well as online at www.sprint.com. Availability, pricing, additional distribution channels and other details will be disclosed at a later date. The Skiff Reader will also support wireless connectivity via Wi-Fi.

Innovations include:
* Largest e-paper display › More viewing area for a richer reading experience.
* Thinnest e-reading device › Remarkably sleek. Easy to hold, use and carry.
* Most durable e-reader › First-of-its-kind metal-foil display (eliminating the fragility of glass). A magnesium housing. An incredibly sturdy device.
* Highest display resolution › Four times as many pixels as most e-book readers, for more immersive reading.
* Full touch screen › For intuitive content selection and navigation. Instant page turns with the swipe of a finger.
* Extraordinary battery life › Read for a week between charges

2. Amazon has introduced Kindle DX with Global Wireless – a new version of the 9.7-inch wireless reading device now with the convenience of wireless content delivery in over 100 countries. The new Kindle DX with Global Wireless has a large 9.7-inch electronic paper display, auto-rotate capability and storage for up to 3,500 books. Kindle DX with Global Wireless is available for pre-order starting today for $489 at www.amazon.com/kindledx and ships January 19.

Matrix Movie Agent Smith Was 8% Right that Humans Are a Virus

Matrix movie quote:
Agent Smith: I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species and I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You're a plague and we are the cure.

Popular Science reports that as much as eight percent of the human genome consists of viruses that inserted themselves into our DNA for replication, including the gene that causes schizophrenia.

Science Daily also has coverage

Human DNA contains DNA from retroviruses and from Bornavirus.

Wired Reports on the Bornavirus

Bornaviruses, a type of RNA virus that causes disease in horses and sheep, can insert their genetic material into human DNA and first did so at least 40 million years ago, the study shows. The findings, published January 7 in Nature, provide the first evidence that RNA viruses other than retroviruses (such as HIV) can stably integrate genes into host DNA. The new work may help reveal more about the evolution of RNA viruses as well as their mammalian hosts.

In the new study, Japanese researchers found copies of the bornavirus N (for nucleoprotein) gene inserted in at least four separate locations in the human genome. Searches of other mammalian genomes also showed that the gene has hitched rides in a wide variety of species for millions of years.

Nature - Endogenous non-retroviral RNA virus elements in mammalian genomes

Retroviruses are the only group of viruses known to have left a fossil record, in the form of endogenous proviruses, and approximately 8% of the human genome is made up of these elements. Although many other viruses, including non-retroviral RNA viruses, are known to generate DNA forms of their own genomes during replication, none has been found as DNA in the germline of animals. Bornaviruses, a genus of non-segmented, negative-sense RNA virus, are unique among RNA viruses in that they establish persistent infection in the cell nucleus. Here we show that elements homologous to the nucleoprotein (N) gene of bornavirus exist in the genomes of several mammalian species, including humans, non-human primates, rodents and elephants. These sequences have been designated endogenous Borna-like N (EBLN) elements. Some of the primate EBLNs contain an intact open reading frame (ORF) and are expressed as mRNA. Phylogenetic analyses showed that EBLNs seem to have been generated by different insertional events in each specific animal family. Furthermore, the EBLN of a ground squirrel was formed by a recent integration event, whereas those in primates must have been formed more than 40 million years ago. We also show that the N mRNA of a current mammalian bornavirus, Borna disease virus (BDV), can form EBLN-like elements in the genomes of persistently infected cultured cells. Our results provide the first evidence for endogenization of non-retroviral virus-derived elements in mammalian genomes and give novel insights not only into generation of endogenous elements, but also into a role of bornavirus as a source of genetic novelty in its host.

15 page pdf with supplemental information

After 20 days of Release Avatar is the Number 2 Worldwide Box Office Movie

After 20 days of release, Avatar is the 2nd highest worldwide box office movie (non-inflation adjusted).

I have predicted Avatar will be the number one worldwide box office movie. It is a somewhat risky prediction, counting on more "legs" in the foreign markets: the foreign share would need to move up from the current 67% to 72% or so. Avatar needs $550+ million domestic and $1.3+ billion foreign.

I am thinking $580 million domestic (with some Oscar boost later this year and not counting likely re-releases). An exceptionally strong and long Imax run could help push it to $650-700 million domestic. Foreign box office of $1.45-1.6 billion is my current guess. Thus about 20% chance of first $2 billion movie and 70% chance of beating Titanic for number one.
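The foreign-share arithmetic behind these guesses is easy to check; a quick sketch using the figures above:

```python
# $580M domestic with $1.45-1.6B foreign implies a foreign share
# in the low 70s percent, consistent with the 72%-or-so target.
domestic = 580e6
for foreign in (1.45e9, 1.6e9):
    total = domestic + foreign
    print(f"total ${total / 1e9:.2f}B, foreign share {foreign / total:.0%}")
```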

I am expecting a Titanic 3D re-release, though.

IEEE Spectrum Selects Technology Winners and Losers Again

IEEE Spectrum is selecting technology winners and losers again

Spectrum predicted winners
* Google’s Chrome operating system
* Pixel Qi’s dual-mode screen provides both e-paper readability and full-color video.
* Intrinsity's hot-rodded processor gives cellphones PC smarts.
* IBM helps Russian Railways reinvent the railroad’s data infrastructure.
* NanoGaN’s gallium nitride substrates will help manufacturers make better lasers.

Spectrum predicted losers
* D-Wave Systems’ quantum computers won’t outperform ordinary ones. (I disagree)
* NanoUV’s extreme ultraviolet light source is revolutionary, but that won’t entice chipmakers to use it.
* Cellulosic ethanol— “grassoline”—is an environmental threat rather than a panacea.
* The Chevrolet Volt plug-in hybrid car is imaginative, daring, and superb, but uneconomical.
* Airport security screening will go a lot faster with a new biometric system that reads passengers’ minds.

Spectrum is very clear in declaring D-Wave Systems' adiabatic quantum computer a loser.

Spectrum: Bigger, costlier and slower than conventional computers and not quantum.

D-Wave believes it could beat today's best methods for approximating the solution to difficult optimization problems in financial engineering, logistics, machine learning, and bioinformatics, either by getting the same answer faster or getting a more exact solution.

David DiVincenzo, a leading quantum computing expert at IBM's T.J. Watson Research Center, in Yorktown Heights, N.Y., says that "there has yet to be an established methodology for how [adiabatic quantum computation] could function fault tolerantly," that is, with effective error correction.

Umesh Vazirani, a computer scientist at the University of California, Berkeley, says D-Wave hasn't taken into account the need to control the rate of the adiabatic process. "Running the adiabatic algorithm without this 'tuning' gives no speedup," he says.

"This will never work—if you define ‘never’ as ‘not in 20 years.’" —Robert W. Lucky

D-Wave's investors are happy with the company's progress. "Quite happy," says Steve Jurvetson, a director at Draper Fisher Jurvetson.

Hartmut Neven, a Google scientist who is using D-Wave's computer to design and test image-recognition algorithms, says the company is taking a "very sensible approach" and has "a very good chance at getting it to work."

Rose says the collaboration with Google shows that the company is tackling real-world problems, even if it's at the proof-of-concept level. "Our ultimate objective is to build systems with spectacular performance on these sorts of problems," he says.

IEEE Spectrum's prediction will clearly be wrong if D-Wave does succeed in scaling the system and solving real-world problems (like Google image search) in the next five years. If it takes 20 years or longer, or never happens, then IEEE Spectrum will be right.

DuPont Switchgrass Ethanol

IEEE Spectrum's David Schneider writes:
That’s an enormous quantity of land—almost as much as the country now devotes to farming. And even if you covered all that land with switchgrass, it wouldn’t produce enough fuel to supply the country’s diesel trucks and buses, its jet aircraft, or the homes and businesses that use petroleum for heating fuel.

Carpeting the continent with enough switchgrass to displace all that petroleum use is theoretically possible—but it would be an environmental catastrophe on many counts.

Strict U.S. regulations may save forests from being replaced by fields of switchgrass, but elsewhere in the world trees would inevitably be chopped down, either to make way for biofuel feedstock or to grow the crops that switchgrass displaces elsewhere. For this reason alone, DDCE’s project is destined to be a loser, even if it one day proves a commercial success.

So how will this prediction get proven right? Based on some journal articles that trash switchgrass ethanol now and in the future? David Schneider sounds like he is saying that yes, they will do it and probably make a lot of money, and that he and some other people won't like it.

NanoGaN's substrates will grow better, cheaper lasers

Gallium nitride substrates haven't improved substantially, either, nor has the yield of the laser chips grown on those substrates. Clearly, the company that finds a way to make better growth platforms at lower prices will not only cash in for itself but also lift the entire industry.

A lot of big materials suppliers are in the race, but a dark horse called NanoGaN seems likely to win it. The company, a spin-out from the electrical engineering department of the University of Bath, in England, can make gallium nitride substrates of high quality—and what's more, it can recycle them, saving scarce and costly gallium.

The market analysis firms Strategy Analytics, Strategies Unlimited, and Yole Développement differ widely in their estimates of the current size of the market for gallium nitride substrates, from a low of $124 million to a high of $515 million, but all three firms agree that the rate of growth will average in the double digits over the next five years. If so, the market NanoGaN will be tapping into could be worth from $172 million to $800 million by 2013.
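As a sanity check on those figures, the compound annual growth rate implied by the projections can be backed out. This assumes the "current" estimates are for 2009, giving four years of compounding to 2013; the base year is my assumption, not stated by Spectrum.

```python
# Sanity check on the market projections above. Assumes the current-market
# estimates are for 2009, so the 2013 figures imply four years of compound
# growth (base year is an assumption, not stated in the article).
def implied_cagr(start_m, end_m, years):
    """Compound annual growth rate taking start_m ($M) to end_m ($M) over `years`."""
    return (end_m / start_m) ** (1 / years) - 1

low = implied_cagr(124, 172, 4)    # low estimate: $124M -> $172M
high = implied_cagr(515, 800, 4)   # high estimate: $515M -> $800M
print(f"Implied annual growth: {low:.1%} (low) to {high:.1%} (high)")
```

The implied rates work out to roughly 8.5% at the low end and 11.6% at the high end, so only the high-end projection is strictly "double digit" under this four-year reading.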

NanoGaN's substrate will do far more than provide a more efficient platform for the growth of the 5- to 8-milliwatt, 405-nanometer-wavelength lasers used to read discs in Blu-ray players and game consoles. It should also aid the production of much more powerful 150- to 200-mW violet lasers, which the industry needs for its next challenge: to read the four pairs of layers in a 200-gigabyte high-definition DVD. Future laser printers will use violet lasers instead of today's red ones, allowing them to double the print quality to 1200 dots per inch; a blue version of the lasers will still be used in tiny, portable color projectors.

Prediction translation (what I am hearing them say): the gallium nitride substrate market has to exceed $172 million per year by 2013, and NanoGaN will not capture more than 20% of that market.

NanoUV's unproven light source won't shine in the next-gen lithography market

According to Peter Choi, nanoUV’s president and director of technology, the source has two plasmas—a very hot, tiny one surrounded by a cylindrical one. The farther you move from the center, the cooler the outer plasma becomes, dropping to a positively brisk 10 000 kelvin at the rim. As the density increases, the index of refraction decreases, which means the EUV rays bend more at the edges than in the middle, thus converging on a point. The device requires more input power than the leading light source candidates, Choi says, but because it’s just a few centimeters long, hundreds of sources can be ”multiplexed” in a many-headed ”Hydra” pattern for greater output power and brightness.

”The question is: What are they going to do with the X-ray ’lightbulb’ when they perfect it? The real problem is the X-ray mask. The thin chrome of current masks cannot stop X-rays, and the thick quartz substrates do block them—hence the need for exotic masks. But the dimensional control and temperature coefficients are showstoppers for those masks.”

Prediction translation (what I am hearing them say): this small company may not succeed with a new EUV light source, and EUV lithography might not be the next big thing in 2013.

A lot of players and a lot of competing tech, so a very safe (trivial) prediction.

California State Assemblyman Chuck DeVore Proposes Bill With New Oil Drilling and Tax Prepayments to Help Solve Budget Problems

This site had previously advocated using tax revenue from new oil drilling and new nuclear reactors to help solve California's budget problems; that earlier discussion is repeated below.

California assemblyman Chuck DeVore introduced a proposal that would generate as much as $16 billion by 2011, enough to fill more than three-quarters of California's estimated $21 billion budget deficit. DeVore's proposal is based on allowing new oil drilling and on tax prepayments.

DeVore's proposal creates an Interim Resources Management Board to consider bids for new oil leases within California's waters (within 3 miles of the coast) and imposes a 40% royalty on the value of the oil and gas at the time of extraction. The measure also provides an option allowing bidders to pre-pay royalties on the value of the oil at the time a bid is accepted by the IRMB at a 20% rate, creating an incentive for leaseholders who believe oil prices will increase to pay the state royalties immediately. In addition, the bill includes a prepayment option for existing onshore leaseholders through which they may lock in their current royalty rate, generally 15-25%, by paying the state royalties on the gas and oil's current market value.

"Allowing new offshore leases under this plan prevents cuts to education, public safety and other government services," added DeVore. "It is simply irresponsible to continue our energy dependence on the Middle East when we can not only provide more energy right here in California, but also repair the state's budget and economy."

With the estimated one billion barrels of oil off California's coast valued at approximately $70 a barrel, the state could see revenues of up to $14 billion if all new leaseholders chose the prepayment option. If only half of new leaseholders chose the prepayment option, the state could issue lease-revenue bonds for the remaining royalties. Combined with California's corporate tax rate of 8.84%, the added value of the oil and gas ensures that California could see as much as $16 billion in new revenue in the coming fiscal year.
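The $14 billion figure follows directly from the prepayment option: one billion barrels valued at about $70 per barrel, with royalties prepaid at the 20% rate. A minimal sketch of that arithmetic:

```python
# Sketch of the royalty arithmetic above: 1 billion barrels at ~$70/barrel
# with royalties prepaid at the 20% rate yields the $14 billion figure.
barrels = 1_000_000_000
price_per_barrel = 70          # dollars, the approximate price cited in the post
prepay_royalty_rate = 0.20

oil_value = barrels * price_per_barrel               # $70 billion total value
prepaid_royalties = oil_value * prepay_royalty_rate  # $14 billion
print(f"Prepaid royalties: ${prepaid_royalties / 1e9:.0f} billion")

# The post's $16 billion total adds roughly $2 billion attributed to the
# 8.84% corporate tax; the taxable base is not given, so it is not derived here.
```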

Advances in slant drilling techniques allow oil and gas extraction to occur from the coast or from existing platforms. No new offshore platforms would need to be constructed to reach the new leases.

This is only to get the close coastal oil. There is more oil under deeper water, but extracting it would require offshore platforms.

Chuck DeVore has raised over one million dollars in his campaign for the Senate seat of Barbara Boxer.

Unseating Boxer is not DeVore’s only battle on the road to the U.S. Senate. He must first face former Hewlett-Packard executive Carly Fiorina in the June Republican primary.

Previous Discussion at Nextbigfuture on Using Energy Policy to Help Solve California Budget Issues

Below are Nextbigfuture ideas and data and not related to what Chuck Devore is proposing.

California is going through a state budget crisis and has had chronic, persistent budget problems for over a decade. California chooses not to use its offshore oil or to develop more nuclear power. Some environmentalists will say that the oil and nuclear power would not be enough to solve the energy problems of the United States. However, the numbers below show that California could get $5-10 billion per year of tax revenue from the development of 10 billion barrels of oil and 16 trillion cubic feet of natural gas.

The development of nuclear energy could also offset electricity purchases from out-of-state sources, which are often made at spot prices. Each nuclear reactor could offset about $1 billion of electricity and natural gas purchases each year.

California's budget gap is projected to be $40 billion over two years. The initial issuance of oil leases would provide immediate revenue to the state of $1 billion per year or more. Building the oil rigs and nuclear plants would provide construction jobs, taxes and fees, delivering benefits while the projects are being built and before any oil is pumped or electricity is generated.

Does anyone believe that California will not need $10-20 billion/year in state tax revenue in ten years or that $2-5 billion/year of tax revenue over the next several years would not help a great deal?

Alaska made about $10 billion in oil revenues in 2008. They made about $5.6 billion in oil revenue in 2007.

Alaska's oil resources are projected to be about 13 billion barrels of oil. California's offshore oil is of comparable scale.

California's state budget is projected to have a $14 billion shortfall for 2008-2009 and about $40 billion for 2009-2010.

California could stop undermining its finances by adopting a state energy policy that would have avoided its past financial problems and could still help fix future ones.

The TVA's Browns Ferry nuclear plant saved $800 million by helping avoid purchases of power on the spot market.

The current plan is for 33% of California's power to come from renewable energy at a build-up cost of $60 billion. For $28 billion or less, the equivalent energy could be provided by new nuclear power, saving $32 billion.
