Q Why do supercomputers have to be so big?
— Jonas Tangen, Madison, Wis.
A Lauren Michael, research computing facilitator at University of Wisconsin-Madison’s Center for High Throughput Computing:
We need supercomputers because scientists are doing really awesome work that requires lots of computing time. For some of this work, if we weren’t using supercomputing to break up tasks and make processing faster, it would take years or decades to complete.
By having hundreds of computer processors working on a job, you can cut the time a big calculation takes by a factor of hundreds. With thousands of processors, you could cut it by a factor of thousands.
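To make that concrete, here's a toy sketch (not from the article) of splitting one big calculation across several processors using Python's standard multiprocessing module. The example counts prime numbers, which stands in for any big job that can be cut into independent chunks; the chunk sizes and process count are arbitrary choices for illustration.

```python
import math
from multiprocessing import Pool

def is_prime(n):
    # A deliberately simple check, standing in for one small piece
    # of a much bigger calculation.
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def count_primes(bounds):
    # Count the primes in one chunk [lo, hi) of the full range.
    lo, hi = bounds
    return sum(is_prime(n) for n in range(lo, hi))

if __name__ == "__main__":
    # Split the range 0..100,000 into 8 independent chunks,
    # then hand the chunks out to a pool of worker processes.
    chunks = [(i, i + 12_500) for i in range(0, 100_000, 12_500)]
    with Pool(processes=8) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)
```

Because each chunk is independent, eight processors can work at the same time, and the answer is just the sum of the eight partial counts.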
There are actually mixed opinions in the computing world about what supercomputing is and isn’t. What the definitions have in common is the idea that you’re spreading a big computational task across many processors, and maybe even across many computers that each have multiple processors.
If your computational problem can be broken up into many independent tasks, we call that high throughput computing.
If a task can’t be broken up into separate pieces, you can still have a group of computers share the load by working together simultaneously.
One example is an astronomy simulation of a galaxy. Each moment in time depends on the one before it, so the time steps themselves can’t be split into independent jobs. But to speed up the processing within each time step, we can split the calculation of different star positions among multiple processors on multiple computers.
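The galaxy example above can be sketched as a toy program (again, not from the article, and not real gravitational physics): each star simply drifts with a fixed velocity. What it shows is the parallel structure — the time steps must run one after another, but within each step every star's update is independent and can go to a different processor.

```python
from multiprocessing import Pool

def step_star(star):
    # Advance one star by one time step. In this toy model a star is
    # ((x, y), (vx, vy)) and it just drifts with constant velocity.
    (x, y), (vx, vy) = star
    return ((x + vx, y + vy), (vx, vy))

def simulate(stars, n_steps, processes=4):
    with Pool(processes=processes) as pool:
        for _ in range(n_steps):               # time steps run sequentially...
            stars = pool.map(step_star, stars)  # ...star updates run in parallel
    return stars

if __name__ == "__main__":
    stars = [((0.0, 0.0), (1.0, 2.0)), ((5.0, 5.0), (-1.0, 0.0))]
    print(simulate(stars, n_steps=3))
```

A real simulation would compute gravitational pulls between stars, so each update would need the positions of all the others — which is why the computers have to share data and work together, rather than running fully independent jobs.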
A wide range of research can benefit from this kind of computing. Researchers in the physical sciences and engineering, the traditional users of large-scale computing, have been tackling problems this way for decades.
Blue Sky Science is a collaboration of the Wisconsin State Journal and the Morgridge Institute for Research.