“Supercomputing on demand” becoming a reality

In 2006, Amazon unveiled the Elastic Compute Cloud (EC2), which allows individual users to rent large numbers of CPUs from Amazon’s server farms. Since then, Amazon has repeatedly slashed the cost of that capacity. For organizations requiring supercomputing levels of power, a company called Cycle Computing gathers tens of thousands of those CPUs from Amazon into a “virtual supercomputer”. Cycle recently created a 50,000-core cluster for the software company Schrodinger. In an interview with Sander Olson for Next Big Future, Cycle CEO Jason Stowe discusses the burgeoning field of “utility supercomputing”, how GPUs will affect cloud computing, and which industries will benefit most from “supercomputing on demand”.

Jason Stowe 

Question:  How did Cycle Computing get started?

Cycle Computing started in 2005 to help enterprises manage computation at large scales, and our first customer was Hartford Life Insurance. Since then, we have grown to be the leading provider of what we call “utility supercomputing”. Our software uses cloud infrastructure, such as Amazon Web Services’ Elastic Compute Cloud (“EC2”), to build large-scale virtual supercomputers for a variety of industries, allowing companies to “rent” supercomputer performance for a small fraction of the price of building an actual supercomputer.
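For readers curious about the mechanics, renting raw capacity from EC2 comes down to API calls that request batches of machine instances, which software like Cycle’s then assembles into a cluster. The sketch below is a generic illustration using Amazon’s boto3 Python SDK rather than Cycle’s own tooling; the AMI ID, instance type, and counts are placeholders.

```python
# Minimal sketch: requesting a batch of EC2 instances via the boto3 SDK.
# The AMI ID, instance type, and instance counts are placeholders, not
# values used by Cycle Computing.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-12345678",    # placeholder image with the HPC stack pre-installed
    InstanceType="c4.8xlarge", # placeholder compute-optimized instance type
    MinCount=100,              # launch the whole batch or fail
    MaxCount=100,
)

instance_ids = [inst["InstanceId"] for inst in response["Instances"]]
print(f"Launched {len(instance_ids)} instances")
```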

Question:  How has cloud computing grown during the past decade? How many organizations currently use this technology?

Most of our clients have their own internal computing clusters, which aren’t sufficient to meet their computing demands. So we seamlessly integrate their limited resources with those of Amazon EC2. The demand for our services has risen steadily since our inception, particularly in the last 12 months, as we’ve announced the creation of massive virtual supercomputers containing tens of thousands of cores.

Question: Cycle Computing recently created an Amazon cluster of 50,000 cores. What is the maximum number of cores that AWS can support?

We don’t know where the ceiling is regarding the maximum number of supercomputing cores. But we are confident that we can scale this technology well past 50,000 cores, if our customers need that level of computing.

Question: What was the CycleCloud Big Science Challenge?

When we created the 30,000-core cluster, we wanted to show how valuable such a cluster could be to a wide variety of science experiments. So we offered a $10,000 prize, which equals about 8 hours on the cluster, and Amazon added another $2,500, allowing a researcher to have 10 hours on a 30,000-core cluster. We ended up choosing a computational biologist at the Morgridge Institute for Research as our winner, and he will use the utility supercomputing time to do stem cell research.
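Those figures imply a rough price for the cluster. A back-of-the-envelope check, assuming the prize money maps linearly onto cluster time:

```python
# Back-of-the-envelope check of the prize arithmetic (assumes linear pricing).
cluster_cores = 30_000
prize_dollars = 10_000 + 2_500   # Cycle's contribution plus Amazon's
hours = 10                       # total cluster time the winner receives

dollars_per_hour = prize_dollars / hours                  # $1,250 per hour for the whole cluster
dollars_per_core_hour = dollars_per_hour / cluster_cores  # roughly $0.04 per core-hour
print(f"${dollars_per_hour:,.0f} per cluster-hour, about ${dollars_per_core_hour:.3f} per core-hour")
```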

Question:  Cycle Computing now supports GPUs as well as CPUs. Wouldn’t GPUs be better suited to many cloud computing tasks?

We’ve supported GPUs for over a year now, but what they can be used for depends on the software that is available. We are using GPUs for genomic applications, molecular modeling, and manufacturing computational fluid dynamics workloads.

Question: Can utility supercomputing be used to do large-scale 3D simulation?

Yes, and we see this as a major growth area. Clients already use this technology for weather simulations of the Earth.

Question:  Nvidia has just released GPUs capable of virtualization. How long before Cycle Computing offers this service?

We already do. As an example, Amazon’s CG1 GPU-enabled servers became available in November 2010. Our clients sometimes use applications that enable them to take advantage of GPUs, and we’ve had several clusters running that use 80-120 GPUs. We also have software that allows customers to seamlessly combine their own clusters with the Amazon cloud, so routing GPU workloads externally is easy. The advantage of our approach is that the customer doesn’t have to think about things like complicated, power-intensive GPU deployment, because that has already been handled for them by Amazon.

Question: It would seem that this technology could transform the way that computer graphics are made.

We have visual effects houses that render CGI animation, and they have been our clients for several years now. Given the enormous computational demands of CGI, this area should increasingly make use of utility supercomputing in the coming years. But many other industries can effectively leverage this technology as well.

Question:  What customers are currently using Cycle Computing?

Life sciences has been an early adopter, and strategic analysis is another major consumer of utility supercomputing. Insurance companies and big banks have surprisingly large computing infrastructures. They use these resources to calculate risk, specifically the performance of financial instruments and trading applications, and the hedging of risk. All of those workloads are exceptionally computationally intensive and lend themselves well to the cloud. Additionally, we now have a large number of manufacturing, energy, and CAD simulation clients in our pipeline.

Question:  How many software applications can take advantage of huge numbers of CPUs?

CycleCloud creates clusters that look exactly like internal HPC environments, so many workloads don’t require any recompilation. Targeting GPUs may require recompiling, but “pleasantly parallel” tasks run as-is. We have also developed software that moves data around for the user, so we can have multiple simulations running in parallel without the user having to worry about data distribution.
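A “pleasantly parallel” workload is one whose tasks are completely independent, so they can simply be fanned out across however many cores are available. Here is a minimal sketch in Python, where run_simulation and the parameter grid are hypothetical stand-ins for a real model:

```python
# Minimal sketch of a "pleasantly parallel" workload: independent simulations
# fanned out across workers, with no communication between tasks.
# run_simulation and the parameter grid are hypothetical stand-ins.
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params):
    seed, dose = params
    # ... one independent simulation would run here; placeholder result ...
    return {"seed": seed, "dose": dose, "score": seed * dose}

if __name__ == "__main__":
    parameter_grid = [(seed, dose) for seed in range(100) for dose in (0.1, 0.5, 1.0)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, parameter_grid))
    print(f"Completed {len(results)} independent runs")
```

The same pattern scales from a laptop to a rented cluster because no task depends on another; a scheduler only has to hand out work and collect the results.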

Question: Are there any industries that aren’t using utility supercomputing which could be?

Five years ago, we were focused on getting the word out about the compelling benefits of utility supercomputing to certain industries, such as finance, manufacturing, energy, life sciences, and image processing. But today, those industries are all employing utility supercomputing to some degree. The life sciences industry is probably the furthest along, but by now everyone who is using high-performance computing is aware of utility supercomputing and is in the process of making the transition.

Question: So the life sciences field is adopting utility supercomputing in a big way?

Yes, they are using it for drug design, which is highly computationally intensive. They are also using it to perform large, fine-grained simulations of all manner of cellular and molecular processes. For a researcher to be able to get access to supercomputing levels of power within minutes, and then to shut everything down once the simulation is complete, fundamentally changes the way research is performed.

Question: Will Cycle Computing make any data centers obsolete?

This technology will supplement and enhance the capabilities of data centers rather than replace them. It is clear, however, that for the majority of users it is no longer necessary to build high-performance computing clusters from scratch. Most workloads are compatible with utility supercomputing, and we have automated many of the processes needed to get applications up and running. Researchers are getting results back orders of magnitude faster than before. Now even the smallest startup has access to supercomputing performance that was once available only to the very largest companies.

Question:  What sort of growth do you predict for utility supercomputing during the next decade?

Over the next several years, a multibillion-dollar opportunity will emerge for the infrastructure and software needed to implement utility supercomputing. We have had a 5x increase in customer interest in our software products during the past 3 months alone. This technology has proven its usefulness and cost-effectiveness beyond a doubt, and within a decade, many researchers in most industries will make frequent and extensive use of this invaluable resource.
