
[Study] TOP5 Supercomputers List for November 2003.



1#
Posted on 2004-02-17 15:44:06
No. 1 The Earth Simulator
Introduction
The Earth Simulator (ES) is a project of the Japanese agencies NASDA, JAERI, and JAMSTEC to develop a 40 TFLOPS system for climate modeling.
The ES site is a new location in an industrial area of Yokohama, about an hour's drive from Tokyo. The facility became operational in late 2001 and claimed the top spot on the current TOP500 list. Despite the publicly announced five-year development effort leading up to that event, the US supercomputing community was caught by surprise.
Hardware

The ES is based on:
5,120 (640 8-way nodes) 500 MHz NEC CPUs
8 GFLOPS per CPU (41 TFLOPS total)
2 GB (4 × 512 MB FPLRAM modules) per CPU (10 TB total; see the quick check after this list)
shared memory inside the node
640 × 640 crossbar switch between the nodes
16 GB/s inter-node bandwidth
20 kVA power consumption per node
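A quick check of the totals from the per-CPU figures above (simple arithmetic added here, not from the original list):
5,120 CPUs × 8 GFLOPS = 40,960 GFLOPS ≈ 41 TFLOPS peak
5,120 CPUs × 2 GB = 10,240 GB = 10 TB of memory
8 CPUs × 2 GB = 16 GB of shared memory per node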
The vector CPU is made with a 0.15 μm CMOS process and is a descendant (same speed, smaller process) of the NEC SX-5 CPU. The machine runs a version of the UNIX-based Super-UX OS. OpenMP parallel directives are used within a node, and MPI-2 or HPF must be used across nodes, necessitating a dual-level parallel implementation. In fact this can be considered a three-level parallel system if single-CPU vectorization is taken into account; however, vectorization is largely automatic. Still, an optimized code needs to employ MPI-2 at the subdomain level, OpenMP at the loop level, and vectorization directives at the instruction level, all at once.
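To make the layered model concrete, here is a minimal hybrid sketch in C (illustrative only, not Earth Simulator code): one MPI rank per node handles the subdomain level, an OpenMP parallel loop spreads work over the 8 CPUs sharing the node's memory, and the stride-1 loop body is kept simple enough for the compiler to vectorize automatically. The array size N and the dot-product workload are made up for illustration.

/*
 * Hybrid MPI + OpenMP sketch of the dual-level model described above.
 * Compile with an MPI C compiler and OpenMP enabled, e.g.:
 *   mpicc -fopenmp -O2 hybrid.c -o hybrid
 */
#include <mpi.h>
#include <stdio.h>

#define N 1000000               /* made-up per-node problem size */

static double a[N], b[N];

int main(int argc, char **argv)
{
    int rank, nprocs;
    double local = 0.0, global = 0.0;

    MPI_Init(&argc, &argv);                   /* one rank per node: subdomain level */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* OpenMP threads share the node's memory (loop level); the simple
       stride-1 body is a candidate for automatic vectorization on each
       CPU (instruction level). */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < N; i++) {
        a[i] = (double)(i + rank);
        b[i] = 2.0 * a[i];
        local += a[i] * b[i];
    }

    /* Combine the per-node partial sums across the whole machine. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product = %g (from %d nodes)\n", global, nprocs);

    MPI_Finalize();
    return 0;
}

On a real ES code the domain decomposition, loop structure, and any explicit vectorization directives would of course be dictated by the climate model itself.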
Physical

The CPUs are housed in 320 cabinets, two 8-CPU nodes per cabinet. The cabinets (blue) are organized in a ring around the interconnect, which is housed in another 65 cabinets (green). Another layer of the circle is formed by disk-array cabinets (white). The whole installation occupies a building 65 m long and 50 m wide. Activity on the nodes is signalled by a bright green beacon at the top of each cabinet, and if a fault occurs, a similar red light turns on. Switch cabinets also have green and red signalling lights for various types of communication events.

The machine room sits at roughly the fourth-floor level. The third floor is taken up by hundreds of kilometers of copper cabling, and the lower floors house the air-conditioning and electrical equipment. The structure is enclosed in a cooling shell, with air pumped from underneath through the cabinets and collected at the two long sides of the building. This aeroshell gives the building its "pumped-up" appearance. The machine room is electromagnetically shielded to prevent interference from the nearby expressway and rail lines. Even the halogen light sources are outside the shield, with the light distributed by a grid of scattering pipes under the ceiling. The entire structure is mechanically isolated from its surroundings, suspended to make it less prone to earthquake damage. All attachments (power, cooling, access walkways) are flexible.

[Attachment: 138_245.gif]


2#
OP | Posted on 2004-02-17 15:46:02

TOP5 Supercomputers List for November 2003.

No. 2 ASCI Q
The Q supercomputing system at Los Alamos National Laboratory (LANL) is the most recent component of the Advanced Simulation and Computing (ASCI) program, a collaboration between the U.S. Department of Energy's National Nuclear Security Administration and the Sandia, Lawrence Livermore, and Los Alamos national laboratories. ASCI's mission is to create and use leading-edge capabilities in simulation and computational modeling. In an era without nuclear testing, these computational capabilities are vital for maintaining the safety and reliability of the nation's aging nuclear stockpile.
ASCI Q Hardware
The Q system, when complete, will include three segments, each providing 10 TeraOPS of capability. The three segments will be able to operate independently or as a single system. One-third of the final system has been available to users for classified ASCI codes since August 2002. This portion of the system, known as QA, comprises 1,024 AlphaServer ES45 SMPs from Hewlett-Packard (HP), each with four Alpha 21264 EV-68 processors. Each of these 4,096 CPUs runs at 1.25 GHz, giving an aggregate 10 TeraOPS.
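As a rough consistency check (the two-flops-per-cycle figure for the EV-68 is an assumption added here, not stated in the post):
4,096 CPUs × 1.25 GHz × 2 floating-point operations per cycle = 10,240 GigaOPS ≈ 10 TeraOPS
which matches the quoted aggregate for the QA segment; the full 12,288-CPU system scales the same arithmetic to roughly 30 TeraOPS.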
Los Alamos has an option to purchase the third 10 TeraOPS system from HP. The final Q system will provide 30 TeraOPS capability:
3072 AlphaServer ES45s from Hewlett Packard (formerly Compaq)
12,288 EV-68 1.25-GHz CPUs with 16-MB cache
33 Terabytes (TB) memory
Gigabit fiber-channel disk drives providing 664 TB of global storage
Dual-controller-accessible 72-GB drives arranged in 1,536 5+1 RAID5 storage arrays, interconnected through fiber-channel switches to 384 file-server nodes (see the rough capacity check after this list)
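A rough capacity check (assuming the 664 TB figure counts raw drive capacity, parity drives included, which is an interpretation added here):
1,536 arrays × 6 drives per 5+1 RAID5 array × 72 GB ≈ 663,552 GB ≈ 664 TB raw
Counting only the 5 data drives per array would give about 553 TB of usable space.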
The Network - Tying together 3072 SMPs
Integral to Q is the Quadrics (QSW) dual-rail switch interconnect, which uses a fat-tree configuration. The final switch system will include 6,144 QSW PCI adapters and six 1,024-way QSW federated switches, providing high bandwidth (250 Mbytes/s per rail) and low latency (~5 µs). The Quadrics network enables high-performance file serving within the segments. A sixth-level Quadrics network will connect the three segments.
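Using the usual latency-plus-bandwidth estimate for a point-to-point transfer (an illustration added here, not a measured figure): a 1-MB message over a single rail costs roughly 5 µs + 1 MB / 250 MB/s ≈ 5 µs + 4,000 µs ≈ 4 ms, so large transfers are bandwidth-bound while small messages are dominated by the ~5 µs latency; striping a large message across both rails roughly halves the bandwidth term.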
Q is housed in the new 303,000 sq ft Nicholas C. Metropolis Center for Modeling and Simulation. The Metropolis Center includes a 43,500 sq ft unobstructed computer room and office space for about 300 staff. In addition, it has the facilities to support air or water cooling of computers and 7.1 MW of power, expandable to 30 MW. The final 30-TeraOPS Q system will occupy 20,000 sq ft and use about 3 MW of power. The final system will comprise about 900 cabinets for the 3,072 AlphaServer ES45 SMPs and related peripherals. Cable trays 1.8 miles in length will hold about 204 miles of cable under the floor.

[Attachment: 138_245_1.gif]


3#
OP | Posted on 2004-02-17 18:01:40

TOP5 Supercomputers List for November 2003.


No. 3 Virginia Tech's X
The G5 cluster contains 1,100 Apple G5 systems, each with two IBM PowerPC 970 processors running at 2 GHz. Each node has 4 GB of main memory and 160 GB of Serial ATA storage, giving 176 TB of total secondary storage. Four head nodes handle compilation and job startup, and one node is used for management.
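A few quick checks of the stated figures (the flops-per-cycle number is an assumption added here, not part of the post):
1,100 nodes × 160 GB = 176,000 GB = 176 TB, matching the quoted secondary storage
1,100 nodes × 4 GB = 4.4 TB of aggregate main memory
2,200 CPUs × 2 GHz × 4 flops/cycle (two fused multiply-add units per PowerPC 970) ≈ 17.6 TFLOPS peak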

[Attachment: 138_245_2.gif]


4#
OP | Posted on 2004-02-17 18:04:18

TOP5 Supercomputers List for November 2003.

No. 4 Tungsten
Tungsten, NCSA's newest cluster, will employ more than 1,450 dual-processor Dell PowerEdge 1750 servers running Red Hat Linux, a Myrinet 2000 high-speed interconnect fabric, an I/O subcluster with more than 120 terabytes of DataDirect storage, and a dedicated 64-node applications development environment. Tungsten is expected to have a peak performance of 17.7 teraflops (17.7 trillion calculations per second).
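Working backwards from the quoted peak (the per-processor clock speed is not stated in the post, so the last step is an inference added here): 17.7 TFLOPS ÷ (1,450 servers × 2 CPUs) ≈ 6.1 GFLOPS per processor, which would correspond to a roughly 3-GHz Xeon-class CPU retiring two floating-point operations per cycle.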

[Attachment: 138_245_3.gif]


5#
OP | Posted on 2004-02-17 18:05:49

TOP5 Supercomputers List for November 2003.

No. 5 MPP2
Summary
MPP2 is an 11+ TFlops system consisting of 980 Hewlett-Packard Longs Peak nodes (of which 944 will be used for batch processing), each with dual Intel 1.5 GHz Itanium-2 ("Madison") processors and HP's zx1 chipset. The Madison processors are 64-bit processors with a theoretical peak of 6 GFlops. There are two types of nodes on the system: FatNodes (8 Gbytes of memory and 430 Gbytes of local disk space) and ThinNodes (6 Gbytes of memory and 10 Gbytes of local disk space). Fast interprocessor communication is provided by a single-rail QSNet/Elan-3 interconnect from Quadrics, which will be upgraded to Elan-4 in the near future. The system runs a version of Linux based on Red Hat Linux Advanced Server. A global 53-Tbyte Lustre file system is available to all the processors. Processor allocation is scheduled using the LSF resource manager.
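The quoted figures are self-consistent: 980 nodes × 2 processors = 1,960 Madison CPUs, and 1,960 × 6 GFLOPS = 11.76 TFLOPS, matching the "11+ TFlops" description and the 11-teraflop peak listed below. The 6 GFLOPS per processor is itself 1.5 GHz × 4 floating-point operations per cycle (two fused multiply-add units per Itanium-2 core, a detail added here rather than stated in the post).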
The MPP2 computing system has the following equipment and capabilities:
HP/Linux Itanium 2 ("Madison") 1.5 GHz
980 nodes, 1960 processors
Quadrics Elan 3 interconnect
11 teraflop peak theoretical performance
7 terabytes of RAM
142 terabytes of local scratch disk space
53 terabytes of global scratch disk space

[Attachment: 138_245_4.gif]

