The Tesla 20-series GPUs combine parallel computing features that have never been offered on a single device before. These include:
* Support for the next generation IEEE 754-2008 double precision floating point standard
* ECC (error correcting codes) for uncompromised reliability and accuracy
* Multi-level cache hierarchy with L1 and L2 caches
* Support for the C++ programming language
* Support for up to 1 terabyte of addressable memory, concurrent kernel execution, fast context switching, atomic instructions up to 10x faster, a 64-bit virtual address space, system calls, and recursive functions
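Two of the features above — IEEE 754-2008 double precision and concurrent kernel execution — can be seen in a short CUDA sketch. This is a minimal illustration, not vendor sample code: the `scale` kernel and sizes are invented, and it assumes a Fermi-class GPU with the CUDA runtime API.

```cuda
#include <cuda_runtime.h>

// Illustrative kernel: IEEE 754-2008 double precision arithmetic.
__global__ void scale(double *x, double a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    double *d_a, *d_b;
    cudaMalloc(&d_a, n * sizeof(double));
    cudaMalloc(&d_b, n * sizeof(double));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Independent launches in different streams: a Fermi-class GPU
    // can execute these kernels concurrently instead of serializing them.
    scale<<<n / 256, 256, 0, s1>>>(d_a, 2.0, n);
    scale<<<n / 256, 256, 0, s2>>>(d_b, 0.5, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1); cudaStreamDestroy(s2);
    cudaFree(d_a); cudaFree(d_b);
    return 0;
}
```

On earlier GPU generations the two launches would run back to back; concurrent kernel execution lets independent work in different streams overlap.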
The family of Tesla 20-series GPUs includes:
Tesla C2050 & C2070 GPU Computing Processors
* Single GPU PCI-Express Gen-2 cards for workstation configurations
* Up to 3 GB and 6 GB (respectively) of on-board GDDR5 memory
* Double precision performance in the range of 520–630 GFLOPS
Tesla S2050 & S2070 GPU Computing Systems
* Four Tesla GPUs in a 1U system for cluster and datacenter deployments
* Up to 12 GB and 24 GB (respectively) of total on-board GDDR5 memory
* Double precision performance in the range of 2.1–2.5 TFLOPS
The Tesla C2050 and C2070 will retail for $2,499 and $3,999, respectively, and the Tesla S2050 and S2070 will retail for $12,995 and $18,995. Products will be available in Q2 2010.
The 20-series "Fermi" architecture responds to several requests from GPU computing users:
• Improved Double Precision Performance—while single precision floating point performance was on the order of ten times that of desktop CPUs, some GPU computing applications needed stronger double precision performance as well.
• ECC support—ECC allows GPU computing users to safely deploy large numbers of GPUs in datacenter installations, and also ensure data-sensitive applications like medical imaging and financial options pricing are protected from memory errors.
• True Cache Hierarchy—some parallel algorithms were unable to use the GPU’s shared memory, and users requested a true cache architecture to aid them.
• More Shared Memory—many CUDA programmers requested more than 16 KB of shared memory per SM to speed up their applications.
• Faster Context Switching—users requested faster context switches between application programs and faster graphics and compute interoperation.
• Faster Atomic Operations—users requested faster read-modify-write atomic operations for their parallel algorithms.
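The cache and shared-memory requests above are addressed by Fermi's 64 KB of on-chip memory per SM, configurable as 48 KB shared memory with 16 KB L1 cache (or the reverse). A minimal sketch of opting into the larger shared-memory split, assuming the CUDA runtime API; the `blockSum` reduction kernel is hypothetical:

```cuda
#include <cuda_runtime.h>

// Hypothetical per-block reduction that stages data in shared memory.
__global__ void blockSum(const float *in, float *out, int n) {
    extern __shared__ float tile[];          // dynamically sized shared memory
    int t = threadIdx.x;
    int i = blockIdx.x * blockDim.x + t;
    tile[t] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {  // tree reduction in shared memory
        if (t < s) tile[t] += tile[t + s];
        __syncthreads();
    }
    if (t == 0) out[blockIdx.x] = tile[0];
}

int main() {
    const int n = 1 << 16, threads = 256, blocks = n / threads;
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));

    // On Fermi, prefer the 48 KB shared / 16 KB L1 split for this kernel.
    cudaFuncSetCacheConfig(blockSum, cudaFuncCachePreferShared);

    blockSum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_out, n);
    cudaDeviceSynchronize();

    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```

Kernels that cannot exploit shared memory can instead request `cudaFuncCachePreferL1` to get the larger L1 cache.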
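The atomic-operations request is the kind of pattern below: many threads updating shared counters with read-modify-write instructions. A minimal sketch with an invented histogram kernel, assuming the CUDA runtime API:

```cuda
#include <cuda_runtime.h>

// Hypothetical byte histogram: each thread performs one atomic
// read-modify-write (atomicAdd) on a global counter.
__global__ void histogram256(const unsigned char *data, int n,
                             unsigned int *bins) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1u);   // hardware atomic increment
}

int main() {
    const int n = 1 << 20;
    unsigned char *d_data;
    unsigned int *d_bins;
    cudaMalloc(&d_data, n);
    cudaMalloc(&d_bins, 256 * sizeof(unsigned int));
    cudaMemset(d_bins, 0, 256 * sizeof(unsigned int));

    histogram256<<<n / 256, 256>>>(d_data, n, d_bins);
    cudaDeviceSynchronize();

    cudaFree(d_data); cudaFree(d_bins);
    return 0;
}
```

Workloads like this are dominated by atomic throughput, which is why faster atomics were high on the list of user requests.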
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.