---------------------------------------------------------------------------------------------
WP BW653  MAY 2, 1994  16:24 PACIFIC  19:24 EASTERN

(BW)(IBM/MAUI-CENTER) Most powerful IBM POWERparallel system arrives at Maui High Performance Computing Center

Business Editors & Computer Industry Writers

EDITOR'S NOTE: Photo is available on BW PhotoWire/AP PhotoExpress. Photos delivered by BW PhotoWire/AP PhotoExpress are transmitted digitally on a high-speed satellite circuit that is received at newspapers subscribing to AP PhotoStream.

SOMERS, N.Y.--(BUSINESS WIRE)--May 2, 1994--IBM announced today that the most powerful IBM POWERparallel system(a) yet shipped is being installed at the Maui High Performance Computing Center on Maui, Hawaii.

Announced just last month, the 80-node SP2(a), a RISC-based UNIX(a) parallel processing computer, is the initial installation of a machine that the Center plans to scale to 400 processors later this year. The 400-node system will be capable of delivering up to 100 billion calculations per second, making it one of the most powerful scalable, parallel computers in the world.

The IBM POWERparallel System SP2(a) is a price/performance leader based on its outstanding performance on the NAS benchmark suite, on which it dramatically outperformed the established competition(d). Based on the NAS benchmarks, the POWERparallel SP2 has up to twice the price/performance of the Cray T3D(c). The NAS benchmark suite, which comprises both pseudo-application and kernel benchmarks, was run on 16 nodes and 64 nodes of the 80-node system being installed at the Maui Center.

The SP2 wide node has a LINPACK DP (Double Precision) performance of 130.4 MFLOPS and a SPECfp92 rating of 242.4. This is over twice the performance of SP1 nodes.

"We have been exceptionally pleased with the performance and ease of use of our SP1," said Dr. Frank Gilfeather of the University of New Mexico, who, along with Drs. Brian T. Smith and John Sobolewski, is responsible for establishing and managing the Maui High Performance Computing Center. "We are excited about having the new SP2 here. The Center's users, already impressed with the SP1, are looking forward to putting the system through its paces as we analyze satellite data, run various chemistry and environmental modeling programs and explore new parallel applications.

"Demand to use the POWERparallel machines is far outstripping our expectations," Dr. Gilfeather continued. "Of critical acclaim is the ease of porting software applications. Veteran supercomputer users are excited about porting more applications to the SP2 and the Center."

The NAS benchmark suite will continue to be run on Maui's system as nodes are added. The 80-node system has a peak performance of 21 gigaFLOPS (billions of calculations per second), and when it grows to 400 nodes, that peak performance will be over 100 gigaFLOPS. Maui's Center will be officially open for business later this year, but early users are already successfully using the Center's current SP1 for parallel work.

"The benchmark results reported today on this machine prove that it is one of the best systems in the world -- both in terms of sheer speed and processor capability as well as price/performance," said Irving Wladawsky-Berger, general manager of IBM POWER Parallel Systems. "We will continue to work closely with the directors of the Maui Center to push the performance limits of the system as it grows to 400 nodes."
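As a rough cross-check of the peak-performance figures quoted above (21 gigaFLOPS for the 80-node system, over 100 gigaFLOPS at 400 nodes), the short Python sketch below infers a per-node peak from the release's own 80-node number and scales it to the planned configuration; the per-node value is derived from these quoted figures, not from an IBM specification.

    # Rough cross-check of the peak-performance figures quoted in the release.
    # The per-node peak is inferred from the 80-node figure; it is not an
    # official IBM specification.

    NODES_INSTALLED = 80
    PEAK_INSTALLED_GFLOPS = 21.0       # quoted peak for the 80-node SP2
    NODES_PLANNED = 400

    per_node_gflops = PEAK_INSTALLED_GFLOPS / NODES_INSTALLED   # ~0.26 GFLOPS per node
    planned_peak = per_node_gflops * NODES_PLANNED              # ~105 GFLOPS

    print(f"Implied per-node peak: {per_node_gflops:.3f} GFLOPS")
    print(f"Implied 400-node peak: {planned_peak:.0f} GFLOPS (release says 'over 100')")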
Benchmark Results

In laboratory tests, the POWERparallel SP2 16-node system exceeded the performance of a 64-node Cray T3D on a majority of the NAS benchmark tests(d). The 16-node system also performed well against the 64-node version of the Intel Paragon. The 16-node SP2 exceeded the performance of Kendall Square Research's KSR2 32-node system as well. In tests, the 64-node SP2 system outperformed a 256-node T3D on all but one pseudo-application benchmark for which numbers are available and surpassed all other competitors' high-end node configurations.

The 16-node and 64-node POWERparallel systems tested were configured differently. The 16-node machine had 128 megabytes (MB) of RAM per node and one gigabyte (GB) of disk per node. The 64-node system had 64 MB of RAM per node and one GB of disk per node. The nodes in the systems tested were wide nodes, which have seven slots for I/O and network attachments and can accommodate up to eight GB of internal disk storage and up to two GB of memory per node.

-0-

NAS SIMULATED CFD APPLICATION BENCHMARKS (CLASS A & CLASS B)

Performance as Ratio to Cray Y-MP/1 (Class A)

Machine                         No. Nodes    LU    SP    BT
IBM POWERparallel SP2               16       5.0   4.8   6.3
Cray T3D                            32       1.9   2.5   3.0
Thinking Machines CM-5E             32       2.2   2.8   5.4
Kendall Square Research KSR2        32       1.9   2.1   3.5
Cray T3D                            64       3.4   4.8   6.0
Thinking Machines CM-5E             64       3.4   4.5   9.4
Intel Paragon                       64       1.0   1.8   3.5

-0-

Performance as Ratio to Cray C90/1 (Class B)

Machine                         No. Nodes    LU    SP    BT
IBM POWERparallel                   64       6.9   5.3   8.7
Cray T3D                           128       3.1   3.2   4.1
Thinking Machines CM-5E            128       2.0   2.2   5.0
Cray T3D                           256       5.4   5.7   7.6
Intel Paragon                      384       1.7   N/A   N/A
Intel Paragon                      400       N/A   2.9   N/A
Intel Paragon                      408       N/A   N/A   5.6

Note: Numbers in the tables express performance as a multiple of a single Cray Y-MP/1 processor (Class A) or a single Cray C90/1 processor (Class B).

-0-

PERFORMANCE PER MILLION DOLLARS BASED ON NAS BENCHMARKS

Performance per million dollars (Class A)

Machine                         No. Nodes    MG     SP
IBM POWERparallel SP2               16       4.64   3.22
Cray T3D                           128       2.55   1.71
Cray T3D                           256       2.79   1.99

Performance per million dollars (Class B)

Machine                         No. Nodes    LU     SP     BT
IBM POWERparallel SP2               64       1.28   0.98   1.59
Cray T3D                           256       0.58   0.62   0.82
Thinking Machines CM-5E            128       0.50   0.55   1.25

Note: Larger numbers indicate better price/performance.

-0-

The NAS benchmark was developed through the Numerical Aerodynamic Simulation Program at the NASA Ames Research Center. The NAS Program is a large-scale effort to advance the state of computational aerodynamics. The NAS Benchmark is a suite comprising five kernel benchmarks and three simulated application benchmarks. The suite mimics the computation and data movement characteristics of large-scale computational fluid dynamics (CFD) applications.
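The tables above use two related metrics. The short Python sketch below illustrates, under stated assumptions, how a price/performance entry relates to a benchmark ratio: the ratio to the reference Cray processor is divided by the system price expressed in millions of dollars. The price in the example is a hypothetical placeholder; the actual vendor prices behind the published tables are described in footnote (c) and are not reproduced here.

    # Illustration of how the two table metrics relate. The "ratio" columns express
    # benchmark performance as a multiple of a single Cray Y-MP/1 (Class A) or
    # Cray C90/1 (Class B) processor; the price/performance tables divide that
    # ratio by the system price in millions of dollars. The price below is a
    # hypothetical placeholder -- actual vendor prices are cited in footnote (c)
    # but not reproduced in this release.

    def performance_per_million(ratio_to_reference: float, price_dollars: float) -> float:
        """Benchmark ratio divided by system price expressed in millions of dollars."""
        return ratio_to_reference / (price_dollars / 1_000_000)

    # Hypothetical example: a machine scoring 5.0x a Cray Y-MP/1 on a benchmark
    # and priced at $2.5 million would score 2.0 in a price/performance table.
    print(performance_per_million(5.0, 2_500_000))   # -> 2.0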
Maui High Performance Computing Center

The Maui High Performance Computing Center was established through a cooperative agreement between the Air Force's Phillips Laboratory in Albuquerque, N.M., and the University of New Mexico. The University of New Mexico consortium, which includes the Cornell Theory Center, Carnegie Mellon University, Environmental Research Institute of Michigan, Numerical Algorithms Group and SETS Technology, Inc., was selected last year through a competitive bid process to establish this Department of Defense resource.

The Maui Center has recently been established as a large, comprehensive supercomputer center. It is unique in that it is remote from a large base of scientific and engineering users. Thus the performance of the computing environment will be tested under unique circumstances, especially in the area of long-distance remote support. The Center supports the U.S. Department of Defense's general scientific computing needs as well as those of the scientific computing community at large. The Maui Center will benefit educational institutions, industry and government agencies. It is also intended to foster technology exchange with U.S. industry, stimulate economic development and establish educational programs in high-performance computing.

World Class Systems

IBM's POWER Parallel Systems business unit produces world-class scalable, parallel information systems for commercial and scientific/technical customers. The IBM Scalable POWERparallel Systems 9076 SP2(a) features design and performance leadership, offers exceptional reliability and versatility, and delivers high-performance computing at workstation price/performance levels. Headquartered in Somers, N.Y., IBM's POWER Parallel Systems business unit also draws on resources from the IBM Large Scale Computing Division, IBM RISC System/6000 Division and IBM Research.

-0-

Editor's note: There is a photo available of the arrival of the POWERparallel system in Maui, Hawaii. You can get it from BusinessWire or by calling Elizabeth Albrycht/Ari Fishkind at 212/505-9900.

(a) Indicates trademark or registered trademark of International Business Machines Corp. Other product names may be trademarks of their respective companies. UNIX is a registered trademark of UNIX System Laboratories, a wholly owned subsidiary of Novell Inc.

(b) Convex and Hewlett-Packard benchmark data is not available.

(c) Competitors' price and performance data are from David H. Bailey, Eric Barszcz, Leonardo Dagum and Horst D. Simon, "NAS Parallel Benchmark Results 3-94," RNR Technical Report RNR-94-006, March 21, 1994, and "NAS Parallel Benchmark Results 10-93," RNR Technical Report RNR-93-016, October 27, 1993. Prices provided by vendors include any associated software costs (operating systems, compilers, scientific libraries) as required to run the benchmark, but do not include maintenance. Prices are as of March 1994; some prices came from previous reports. The Cray T3D price does not include the Cray front-end machine.

(d) Performance measurements for the IBM POWERparallel system are the result of tests done in a laboratory environment at IBM Kingston using Message Passing Library (MPL)/p. While these values should be indicative of machine performance, no warranties or guarantees are stated or implied by IBM. These measurements are offered only as an indicator of performance. The NAS benchmark suite is a commonly used indicator of parallel computer performance that was developed through the Numerical Aerodynamic Simulation (NAS) program at NASA's Ames Research Center. The benchmark suite comprises five parallel kernel benchmarks and three simulated computational fluid dynamics application benchmarks.

The LINPACK benchmark suite compares the performance of different computers in solving dense systems of linear equations. LINPACK routines are compiled with a vendor's own FORTRAN compiler in an effort to approximate what a typical user would experience. The LINPACK Double Precision benchmark measures a system's double-precision floating-point performance. The LINPACK HPC benchmark provides a way to compare massively parallel computers. Dr. Jack Dongarra at the University of Tennessee (and Oak Ridge National Laboratory) oversees the administration of the LINPACK benchmark.
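For readers unfamiliar with the benchmark, the following Python sketch shows the kind of computation LINPACK times and how a MFLOPS figure is derived from it, using the conventional LINPACK operation count of 2/3 n^3 + 2 n^2; it is an illustration of the metric, not the official benchmark code, and the problem size shown is larger than the n = 100 used by the classic LINPACK DP test.

    # Minimal sketch of the kind of computation the LINPACK benchmark times:
    # solving a dense system of linear equations Ax = b. The conventional
    # LINPACK operation count of 2/3*n^3 + 2*n^2 floating-point operations is
    # used to convert the elapsed time into MFLOPS. This illustrates the
    # metric only; it is not the official benchmark code.

    import time
    import numpy as np

    n = 1000                              # problem size (classic LINPACK DP uses n = 100)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)             # dense solve via LU factorization
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    print(f"Residual: {np.linalg.norm(A @ x - b):.2e}")
    print(f"Approx. rate: {flops / elapsed / 1e6:.1f} MFLOPS")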
--30--kb/kk/kab/bc/mem/ss/ny

CONTACT: Maui Center
         Frank Gilfeather, 808/879-5077 or 505/277-8249
         or
         IBM, White Plains
         Nadine Taylor, 914/766-2458/2407
         or
         TSI for IBM, New York
         Elizabeth Albrycht or Ari Fishkind, 212/505-9900

KEYWORD: NEW YORK
INDUSTRY KEYWORD: COMPUTERS/ELECTRONICS COMED

REPEATS: New York 212-575-8822 or 800-221-2462; Boston 617-330-5311 or 800-225-2030; SF 415-986-4422 or 800-227-0845; LA 310-820-9473
---------------------------------------------------------------------------------------------