Thursday, 9 July 2015

*4 MIRA :-

The U.S. Department of Energy's Argonne National Laboratory uses the Mira supercomputer to explore climate change, design more efficient electric-car batteries, and study the evolution of the universe.

Speed: 8.16 petaflops

Date Created: 2011

Country: United States

Sunday, 5 May 2013



*3 K computer :-

Located at the RIKEN Advanced Institute for Computational Science in Kobe, Japan, this supercomputer researches disaster prevention, climate change, and meteorology.



The performance of K is equivalent to linking around one million desktop computers, Mr. Dongarra said.
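As a rough sanity check on that comparison, assume a typical desktop of the era sustains on the order of 10 gigaflops (an outside assumption, not a figure from this post):

    # Hedged sketch: arithmetic behind the "million desktops" comparison.
    # The 10-gigaflop desktop figure is an assumption, not from this post.
    k_flops = 10.51e15               # K computer: 10.51 petaflops
    desktop_flops = 10e9             # assumed sustained rate of one desktop
    print(k_flops / desktop_flops)   # ~1.05 million desktops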

Supercomputers are used for earthquake simulations, climate modeling, nuclear research and weapons development and testing, among other things. Businesses also use the machines for oil exploration and rapid stock trading.

Building supercomputers is costly and involves connecting thousands of small computers in a data center. K is made up of 672 cabinets filled with system boards. Although considered energy-efficient, it still uses enough electricity to power nearly 10,000 homes at a cost of around $10 million annually, Mr. Dongarra said.
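Those two figures are consistent with K's widely reported draw of roughly 12.7 megawatts; a minimal check, where the wattage, the electricity rate, and the per-home consumption are all outside assumptions:

    # Hedged sketch: checking the "10,000 homes, $10 million a year" figures.
    # The ~12.7 MW draw, $0.09/kWh rate, and ~11,000 kWh per home-year are
    # outside assumptions, not from this post.
    power_kw = 12_700
    kwh_per_year = power_kw * 24 * 365        # ~111 million kWh annually
    print(kwh_per_year * 0.09)                # ~$10 million per year
    print(kwh_per_year / 11_000)              # ~10,000 average homes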

The research lab that houses K plans to increase the computer's size to 800 cabinets. That will raise its speed, which already exceeds that of its five closest competitors combined, Mr. Dongarra said.


                                   

Speed: 10.51 petaflops

Date Created: 2011

Country: Japan
*1 Titan :-

The U.S. Department of Energy's Oak Ridge National Laboratory loaded the supercomputer Jaguar (previously ranked #6 in June 2012) with NVIDIA Tesla K20 GPUs to add more computing power. Now named Titan, this supercomputer is 10 times more powerful than its predecessor. Titan will use its extraordinary computing power to study alternative and efficient energy, efficient engines, and climate change, and to identify new materials.

The U.S. Department of Energy's (DOE) Oak Ridge National Laboratory has just launched a new era of scientific supercomputing with Titan, a system capable of churning through more than 20,000 trillion calculations each second, or 20 petaflops, by employing a family of processors called graphics processing units, first created for computer gaming. Titan will be 10 times more powerful than ORNL's last world-leading system, Jaguar, while overcoming power and space limitations inherent in the previous generation of high-performance computers.

Titan, which is supported by the Department of Energy, will provide unprecedented computing power for research in energy, climate change, efficient engines, materials and other disciplines, and pave the way for a wide range of achievements in science and technology.

The Cray XK7 system contains 18,688 nodes, each holding a 16-core AMD Opteron 6274 processor and an NVIDIA Tesla K20 graphics processing unit (GPU) accelerator. Titan also has more than 700 terabytes of memory. The combination of central processing units, the traditional foundation of high-performance computers, and more recent GPUs allows Titan to occupy the same space as its Jaguar predecessor while using only marginally more electricity.
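Those configuration numbers hang together; a quick check, where the per-node memory split is an assumption consistent with the published total rather than a figure from this post:

    # Hedged sketch: checking Titan's published configuration numbers.
    nodes = 18_688
    cpu_cores_per_node = 16
    print(nodes * cpu_cores_per_node)         # 299,008 CPU cores, as cited below

    # Assumed split: 32 GB per CPU plus 6 GB per GPU on each node
    mem_per_node_gb = 32 + 6
    print(nodes * mem_per_node_gb / 1000)     # ~710 TB, "more than 700 terabytes"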

                              "One challenge in supercomputers today is power consumption," said Jeff Nichols, associate laboratory director for computing and computational sciences. "Combining GPUs and CPUs in a single system requires less power than CPUs alone and is a responsible move toward lowering our carbon footprint. Titan will provide unprecedented computing power for research in energy, climate change, materials and other disciplines to enable scientific leadership."

                               Because they handle hundreds of calculations simultaneously, GPUs can go through many more than CPUs in a given time. By relying on its 299,008 CPU cores to guide simulations and allowing its new NVIDIA GPUs to do the heavy lifting, Titan will enable researchers to run scientific calculations with greater speed and accuracy.
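The division of labor described here can be illustrated by analogy: a plain Python loop stands in for serial CPU work, while a vectorized NumPy operation stands in for the wide, data-parallel style a GPU exploits. This is only an analogy; Titan applications were actually ported using GPU programming models such as CUDA:

    # Illustrative analogy only: serial (CPU-style) vs data-parallel
    # (GPU-style) execution. Not Titan's actual programming model.
    import time
    import numpy as np

    x = np.random.rand(2_000_000)

    t0 = time.perf_counter()
    total = 0.0
    for v in x:                     # one element at a time, like one core
        total += v * v
    t1 = time.perf_counter()

    vectorized = np.dot(x, x)       # whole array at once, like an accelerator
    t2 = time.perf_counter()

    print(f"serial: {t1 - t0:.3f}s   vectorized: {t2 - t1:.5f}s")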

                              "Titan will allow scientists to simulate physical systems more realistically and in far greater detail," said James Hack, director of ORNL's National Center for Computational Sciences. "The improvements in simulation fidelity will accelerate progress in a wide range of research areas such as alternative energy and energy efficiency, the identification and development of novel and useful materials and the opportunity for more advanced climate projections."

                               Titan will be open to select projects while ORNL and Cray work through the process for final system acceptance. The lion's share of access to Titan in the coming year will come from the Department of Energy's Innovative and Novel Computational Impact on Theory and Experiment program, better known as INCITE.

                                Researchers have been preparing for Titan and its hybrid architecture for the past two years, with many ready to make the most of the system on day one. Among the flagship scientific applications on Titan:

Materials Science: The magnetic properties of materials hold the key to major advances in technology. The application WL-LSMS provides a nanoscale analysis of important materials such as steels, iron-nickel alloys and advanced permanent magnets that will help drive future electric motors and generators. Titan will allow researchers to improve the calculations of a material's magnetic states as they vary by temperature.

"The order-of-magnitude increase in computational power available with Titan will allow us to investigate even more realistic models with better accuracy," noted ORNL researcher and WL-LSMS developer Markus Eisenbach.

Combustion: The S3D application models the underlying turbulent combustion of fuels in an internal combustion engine. This line of research is critical to the American energy economy, given that three-quarters of the fossil fuel used in the United States goes to powering cars and trucks, which produce one-quarter of the country's greenhouse gases.

Titan will allow researchers to model large-molecule hydrocarbon fuels such as the gasoline surrogate isooctane; commercially important oxygenated alcohols such as ethanol and butanol; and biofuel surrogates that blend methyl butanoate, methyl decanoate and n-heptane.

"In particular, these simulations will enable us to understand the complexities associated with strong coupling between fuel chemistry and turbulence at low preignition temperatures," noted team member Jacqueline Chen of Sandia National Laboratories. "These complexities pose challenges, but also opportunities, as the strong sensitivities to both the fuel chemistry and to the fluid flows provide multiple control options which may lead to the design of a high-efficiency, low-emission, optimally combined engine-fuel system."

Nuclear Energy: Nuclear researchers use the Denovo application to, among other things, model the behavior of neutrons in a nuclear power reactor. America's aging nuclear power plants provide about a fifth of the country's electricity, and Denovo will help extend their operating lives while ensuring safety. Titan will allow Denovo to simulate a fuel rod through one round of use in a reactor core in 13 hours; this job took 60 hours on the Jaguar system.

Climate Change: The Community Atmosphere Model-Spectral Element (CAM-SE) simulates long-term global climate. Improved atmospheric modeling under Titan will help researchers better understand future air quality as well as the effect of particles suspended in the air.

Using a grid of 14-kilometer cells, the new system will be able to simulate one to five years per day of computing time, up from the three months or so that Jaguar was able to churn through in a day.

"As scientists are asked to answer not only whether the climate is changing but where and how, the workload for global climate models must grow dramatically," noted CAM-SE team member Kate Evans of ORNL. "Titan will help us address the complexity that will be required in such models."

*2 Sequoia :-

The U.S. Department of Energy's Lawrence Livermore National Laboratory supercomputer, Sequoia, simulates nuclear weapons tests so they no longer have to be conducted in reality. The system also makes sure America's nuclear weapons are secure and ready to launch at any time.

The latest TOP500 supercomputer rankings, released today, place America's 16-plus-petaflop machine at the top.

The latest TOP500 ranking of the world's fastest supercomputers is out this morning, and America is (finally) back on top. After nearly three years trailing supercomputers abroad (Japan's K computer reigned supreme for most of last year, with China's Tianhe-1A close behind), the U.S. Department of Energy's Lawrence Livermore National Laboratory has stolen the top spot with Sequoia, a 16.32-petaflop IBM machine (a petaflop is a quadrillion floating-point operations per second) built from 96 racks containing 98,304 compute nodes and 1.6 million cores.
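Those counts are internally consistent; a quick check, where the per-rack and per-node breakdown (1,024 nodes per rack, 16 cores per node, as in IBM's Blue Gene/Q line) is an assumption based on the published totals rather than figures from this post:

    # Hedged sketch: checking Sequoia's rack/node/core totals.
    racks = 96
    nodes_per_rack = 1_024           # assumed Blue Gene/Q layout
    cores_per_node = 16
    print(racks * nodes_per_rack)                    # 98,304 nodes
    print(racks * nodes_per_rack * cores_per_node)   # 1,572,864 (~1.6 million) cores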

That pushes Japan's K computer into second place, while another DOE IBM machine, known as Mira, elbowed its way into the number three spot. In fact, IBM had a really good day, taking the number four spot as well with Germany's SuperMUC. China rounded out the top five with Tianhe-1A.

What is the DOE doing with all those petaflops? Mostly, it's making sure America's nuclear weapons stockpile is both secure and ready to annihilate at a moment's notice. But the ability to simulate and model nuclear weapons tests means we don't have to actually conduct them (and haven't had to for 20 years), and the science that falls out of those sims benefits the DOE in other tangential ways.

Speed: 16.32 petaflops

Date Created: 2012

Country: United States

Saturday, 4 May 2013

*5 JUQUEEN :-

The Jülich Research Centre in Germany is home to JUQUEEN, which is used for computational science, engineering, climatology, physics, and materials science. Ranked No. 8 in June 2012, JUQUEEN was upgraded and is now the most powerful system in Europe.

Please consider the following notes on the use of JUQUEEN :-

The resources of this leadership-class system will be provided for a small number of grand challenge projects, selected in accordance with strict scientific standards. Resources will be allocated to the projects within the framework of an international reviewing procedure. To be eligible, projects must meet the following criteria (a sketch of the kind of scaling evidence involved follows this list):

Scientific excellence.

Clear scientific goals and verifiable milestones on the way to reaching these goals.

Preliminary studies that demonstrate the scalability of the program to very high processor counts (at least 8,192 cores) and proof of very good scalability at least up to 4,096 processor cores.

A detailed and clearly arranged work schedule in the form of a table or Gantt chart.

A well-founded and detailed estimate of the required runtime of the program and the total required CPU time.

Please also consult the detailed technical guidelines for projects applying for JUQUEEN.
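As a sketch of what such a scalability demonstration involves, the snippet below computes strong-scaling speedup and parallel efficiency from wall-clock timings; the timings are invented placeholders, not JUQUEEN measurements:

    # Hedged sketch: strong-scaling speedup and parallel efficiency.
    # The timings below are invented placeholders, not JUQUEEN data.
    baseline_cores = 512
    timings = {                  # cores -> seconds for a fixed problem size
        512: 1000.0,
        1024: 520.0,
        2048: 275.0,
        4096: 150.0,
    }
    t_base = timings[baseline_cores]
    for cores, t in sorted(timings.items()):
        speedup = t_base / t
        efficiency = speedup / (cores / baseline_cores)
        print(f"{cores:5d} cores: speedup {speedup:5.2f}x, efficiency {efficiency:.0%}")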

Speed: 4.14 petaflops

Country: Germany

Date Created: 2012

Friday, 3 May 2013

*6 SuperMUC :-

On 20 July 2012, SuperMUC, Europe's fastest and extremely energy-efficient supercomputer, with a peak capacity of more than 3 petaflops, was inaugurated by the Bavarian Academy of Sciences and Humanities. SuperMUC was already included in the 4th PRACE Regular Call for Proposals, and 200 million of SuperMUC's core hours (out of the 1,134 million core hours for the entire call) were allocated to top-level research projects. The Leibniz Supercomputing Centre (LRZ), which runs SuperMUC, added the inauguration to the festivities surrounding its 50th anniversary.

SuperMUC is number 4 in the worldwide TOP500 ranking of the fastest supercomputers (18 June 2012) and is an IBM System x iDataPlex consisting of 155,000 cores. With a peak capacity of more than 3 petaflops, it can perform 3 quadrillion calculations per second (a quadrillion is a 1 with 15 zeros). Researchers using the power of SuperMUC also have 330 terabytes of main memory at their disposal. The interconnect is a non-blocking InfiniBand network with a fat-tree topology.
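Dividing the peak figure by the core count gives a rough average per-core rate (simple arithmetic on the numbers above; the actual per-core peak depends on clock speed and vector width):

    # Hedged sketch: rough per-core peak implied by the figures above.
    peak_flops = 3e15                 # "more than 3 petaflops" peak
    cores = 155_000
    print(peak_flops / cores / 1e9)   # ~19 gigaflops per core, on average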


                              

Speed: 2.89 petaflops

Date Created: 2012

Country: Germany
*7 Stampede :-

Stampede, a new addition to the Top 10, is installed at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. The system is being built by TACC in partnership with Dell and Intel to support the nation's scientists for four years in addressing the most challenging scientific and engineering problems. NSF is providing $27.5 million immediately, and Stampede is expected to be up and running in January 2013. The estimated investment will be more than $50 million over four years; the Stampede project may be renewed in 2017, which would enable four additional years of open science research on a successor system.

When Stampede is deployed in 2013, it will be the most powerful system in the NSF XD environment, currently the most advanced, comprehensive, and robust collection of integrated digital resources and services enabling open science research in the world. As a critical part of XD, the Extreme Science and Engineering Discovery Environment (XSEDE) consortium, comprising more than a dozen universities and two research laboratories, has now replaced the TeraGrid as the integrating fabric for the bulk of the NSF's high-end digital resources. Researchers from any U.S. open science institution can apply to use Stampede for a variety of novel scientific and educational activities through the XSEDE project.

AUSTIN, Texas—The Texas Advanced Computing Center (TACC) at The University of Texas at Austin today announced that it will deploy and support a world-class supercomputer with comprehensive computing and visualization capabilities for the open science community, as part of the National Science Foundation's (NSF) "eXtreme Digital" (XD) program.
Speed: 2.6 petaflops 

Date Created: 2012

Country: United States

thnxxxx.... :)