By 2010, says Dadi Perlmutter, vice president of Intel's mobility group, the company hopes to ship an optical cable called Light Peak that will be able to zip 10 gigabits of data per second from one gadget to another, a rate equivalent to transferring a Blu-ray movie from a computer to a mobile video player in 30 seconds. A single Light Peak cable will also be capable of transporting different types of data simultaneously, meaning it will be possible to back up a hard drive, transfer high-definition video, and connect to a network over just one line.
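The quoted figures can be sanity-checked with a little arithmetic. The article does not say how large "a Blu-ray movie" is, so the 25 GB capacity of a single-layer Blu-ray disc is assumed here; at the raw 10 Gb/s line rate that works out to about 20 seconds, consistent with the roughly 30 seconds cited once protocol overhead is accounted for.

```python
# Sanity check of the quoted transfer time.
# Assumption (not from the article): the movie is 25 GB,
# the capacity of a single-layer Blu-ray disc.

LINK_RATE_GBPS = 10      # first-generation Light Peak rate, gigabits per second
MOVIE_SIZE_GB = 25       # assumed file size, gigabytes

movie_size_gigabits = MOVIE_SIZE_GB * 8          # bytes -> bits
transfer_seconds = movie_size_gigabits / LINK_RATE_GBPS
print(f"{transfer_seconds:.0f} s")               # prints "20 s"
```

Real-world throughput is always below the raw line rate, so the article's 30-second figure implicitly budgets for framing and protocol overhead.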
At both ends of a Light Peak cable are chips that contain devices that produce light, encode data in it, and send it on its way. The chips can also amplify incoming signals and convert the light to an electrical signal that can be interpreted by gadgets. The first generation of Light Peak will use chips made with standard optical materials such as gallium arsenide. However, to truly make optical cables cheap enough to replace copper, future versions of Light Peak, which will handle 40-gigabits-per-second and 100-gigabits-per-second transfer rates, will most likely need to rely on silicon-based optical chips, a product of the maturing field of silicon photonics. Silicon photonics researchers hope to transform computing by making high-bandwidth connectors cheaper than ever before, not just in cables, but also eventually within electronic motherboards and microprocessors.
"This will be a long-term transition," says Perlmutter, referring to the fact that it takes years to develop and adopt standards for new connecting technologies
The first generation of Light Peak cables will use the same sort of $75 optical chips found in telecommunications devices. But Intel has employed some tricks to drive down the cost by more than a factor of 10, says Victor Krutul, director of Intel's optical I/O team. For one, the chips don't need to transmit data over telecom distances. For another, they don't need to last for decades or withstand the heat and humidity that telecom hardware endures, so manufacturing standards can be relaxed and the chips made far more inexpensively.
EE Times reports that Intel Corp. demonstrated a working version of its first discrete graphics processor, Larrabee, at the Intel Developer Forum. Intel would not say when it plans to release Larrabee, which is expected to compete with graphics chips from AMD and Nvidia.
Semiaccurate.com has pictures of the demonstration system, along with other photos from the Intel Developer Forum.
Larrabee is the one everyone cares about, and it was shown off publicly for the first time today. The machine it ran on was a six-core Gulftown computer, the Westmere Exxxxxxtreme chip and, ultimately, next-gen server CPU. It was running the game Quake Wars, ported to do ray tracing.
Waves moved, the geometry was not static, and in general it worked. Instead of multiple four-core chips, the new demo ran on the 'GPU', although Intel would not call it that. The only thing running on the CPU was the game engine itself, exactly what you would expect from a CPU/GPU machine. As we said earlier, B0 silicon, the bug-fixed Larrabee, taped out a month ago and might be shown at IDF.
Sadly, it has not come back from the fabs yet, so the demo ran on Ax silicon, most likely A6. It worked, but it didn't seem to be a huge step forward from four quad-core Xeons. Oh wait, one GPU running at under 10 percent of its hoped-for performance while beating 16 Xeon cores is a huge step forward.