There are a few different definitions of exactly what constitutes a supercomputer, but they all share a common theme: a supercomputer is, broadly, a mainframe-class computer that is among the largest, fastest, or most powerful of those available at a given time.
Supercomputers, just like any other typical computer, have two basic parts. The first is the CPU, which executes instructions; the other is the memory, which stores data. The main difference between an ordinary computer and a supercomputer is that a supercomputer's CPU runs at a much faster clock rate, and the length of one clock cycle determines how quickly the CPU can work. Supercomputer designers optimize the machine by building its circuits from complex, state-of-the-art materials, and by keeping the circuit paths as short as possible so that information from memory reaches the CPU in less time.
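To make the relationship between clock rate and circuit length concrete, here is a small back-of-the-envelope sketch (plain Python, illustrative values only). It computes the duration of one clock cycle and the farthest a signal could travel in that time, assuming the signal moves at roughly the speed of light, an upper bound that real wiring does not reach.

```python
# Illustrative sketch: why shorter circuits matter at higher clock rates.
# Assumes signal propagation at roughly the speed of light (an upper
# bound; signals in real circuits travel somewhat slower).

SPEED_OF_LIGHT_M_PER_S = 3.0e8  # approximate

def cycle_time_ns(clock_hz: float) -> float:
    """Duration of one clock cycle, in nanoseconds."""
    return 1.0 / clock_hz * 1e9

def max_signal_distance_cm(clock_hz: float) -> float:
    """Farthest a signal can travel in one cycle, in centimetres."""
    return SPEED_OF_LIGHT_M_PER_S / clock_hz * 100

for clock in (100e6, 500e6, 1e9):  # 100 MHz, 500 MHz, 1 GHz
    print(f"{clock / 1e6:6.0f} MHz: cycle = {cycle_time_ns(clock):5.1f} ns, "
          f"signal travels at most {max_signal_distance_cm(clock):5.0f} cm")
```

At 1 GHz a cycle lasts a single nanosecond, during which a signal can cover at most about 30 cm; this is why designers keep memory physically close to the CPU.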
There are effectively three types of supercomputer: vector-based architectures, bus-based multiprocessors, and parallel computers. All supercomputers make use of parallelism or vector processing, either separately or combined, to increase their work rate. Demand for even higher rates of calculation brought about the advent of MPP (Massively Parallel Processing) machines such as Thinking Machines' Connection Machine and Intel's Hypercube. Vector-based machines suit tasks that cannot easily be split up, whereas parallelism and clustering suit tasks that can be broken down into components (e.g. particle simulations where each node can simulate a single particle, as sketched below). With the advent of faster and more efficient processors for home users, people can build fairly cheap supercomputers in their own homes. An example of this is a Beowulf cluster based on the Linux operating system, which harnesses parallel processing across computers with standard IBM-PC architecture.
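The divide-and-conquer idea behind clustering can be sketched in a few lines. A real Beowulf cluster would pass messages between separate machines, typically via a library such as MPI; in the illustrative sketch below, Python's standard multiprocessing module stands in for the cluster nodes, and the particle update rule is a made-up toy model.

```python
# Minimal sketch of data parallelism: each worker process advances its
# own particles independently, mirroring how each cluster node might
# handle a subset of a larger simulation. Toy physics; illustration only.
from multiprocessing import Pool

DT = 0.01  # time step (arbitrary units)

def step_particle(particle):
    """Advance one particle: new position = position + velocity * DT."""
    position, velocity = particle
    return (position + velocity * DT, velocity)

if __name__ == "__main__":
    # A toy collection of particles as (position, velocity) pairs.
    particles = [(float(i), 1.0 + 0.1 * i) for i in range(1000)]
    with Pool(processes=4) as pool:  # four workers stand in for four nodes
        particles = pool.map(step_particle, particles)
    print(particles[:3])  # first few updated particles
```

Because the particles in this toy model do not interact, the work splits perfectly across the workers; real simulations must also exchange boundary data between nodes, which is where message passing comes in.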
The speed of a computer's processor is often quoted in megahertz (MHz) or gigahertz (GHz), but processing power is measured by the number of FLOPS (floating-point operations per second) the computer can perform. The power of home computers is usually expressed in MegaFLOPS, whereas the power of supercomputers is expressed in GigaFLOPS. To put this in simple terms, a Cray T3E with 256 parallel processors delivers 153.4 GigaFLOPS, that is, 153,400,000,000 mathematical calculations every second. This is equivalent to 25 times the world's entire population each doing one calculation per second.
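As a quick sanity check on those figures, the back-of-the-envelope sketch below redoes the arithmetic; the world-population value is an assumption, chosen to roughly match the late-1990s era of the T3E.

```python
# Back-of-the-envelope check of the Cray T3E figures quoted above.
total_flops = 153.4e9      # 153.4 GigaFLOPS
processors = 256

per_processor = total_flops / processors
print(f"Per-processor rate: {per_processor / 1e6:.0f} MegaFLOPS")  # ~599

world_population = 6.0e9   # assumed value, roughly the era of the T3E
print(f"Population multiple: {total_flops / world_population:.1f}x")  # ~25.6
```

That works out to roughly 600 MegaFLOPS per processor, and the population figure confirms the "25 times" claim.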
Supercomputers are typically used for high-end number crunching, which encompasses tasks such as:
• Scientific simulations
• Graphics and animation
• Analysis of geological or geographical data
• Structural analysis
• Fluid dynamics
• Physics calculations
• Chemistry modelling
• Electronic design and research
• Nuclear energy research
• Meteorology
The best-known and one of the longest-standing supercomputer manufacturers is Cray Research. Cray Research is the market leader for supercomputers and is especially well known because it makes no entry-level computers; it focuses solely on supercomputers.