Report from Ter@tec Forum 2010: high performance simulation for innovation and competitiveness

The Ter@tec Forum 2010 on high performance simulation for innovation and competitiveness, held at Ecole Polytechnique outside of Paris on June 15-16, was arguably one of the most important IT events this year in Europe.

High Performance Computing (HPC) refers to applications requiring very high computing power, today from 1 to 1,000 teraflops (one petaflop) and by 2020 up to a million teraflops (one exaflop). Application domains span the entire economy, including defense, manufacturing (aeronautical design, auto crash test simulation, ...), energy (grids, oil and gas exploration, nuclear, ...), weather forecasting, transportation, architecture and urban planning, but also advanced finance, economic modeling and many others.

In terms of importance for the IT industry itself, it should be remembered that advances in high performance scientific computing often prefigure the technological changes that will happen in the broader IT market.

In this brief report, Duquesne Group provides a summary of the key economic and technology issues and messages that emerged during the Forum. We will return to many of these points with more detailed analysis in the near future.


The major economic issues

At the Forum, three important economic messages stood out clearly.

HPC is an engine for innovation

The economic stakes are extraordinarily high. Science - including the applied sciences that drive business innovation - no longer rests only on the two traditional pillars of theory and experimentation. Numerical simulation with HPC (high performance computing) has become the third pillar of progress.

As numerous presentations demonstrated, high performance computing not only enables things to be done faster, it has made it possible to do things that could not be done before. And of course, solutions to problems that cannot be solved in a reasonable time with today’s HPC power will become possible with the power of tomorrow.

Europe is under-investing ... again

The bad news is that, once again, Europe is lagging. Relative to the size of their economies, investments in HPC are twice as large in the US as in Europe.

On the high end, supercomputing spending increased worldwide by 25% over the period 2007-2009 despite the recession, while it declined by 9% in Europe. Even countries such as South Korea are putting in a stronger relative performance than Europe. China and Russia are also moving forward.

All of these countries have recognised that high performance numerical simulation is now crucial for innovation, which is today the true “wealth of nations.”

The good news, however, is that the transition to the next levels of HPC – petascale and exascale – opens a new window of opportunity for European industrial players and users.

The HPC ecosystem is coming together in Europe

Another positive point that emerged from Ter@tec 2010 is that the ecosystem is coming together in Europe.

At the Ter@tec Forum, the quality and senior level of the speakers, together with the quality of the presentations, provided evidence of an increasingly dynamic ecosystem. The presentations were “low key, high competence”, the exact opposite of what one usually finds in IT events.

The announcement from the CEA – that it had put into operation the Tera 100 supercomputer built by Bull around the new Intel chips – was especially interesting. As the CEA explained, the system was developed through very close cooperation with Bull up to and including joint patents, a true example of “co-innovation”. In a domain where applications tend to be close to the machines, we expect that this sort of “customer-driven co-innovation” may well be a key ecosystem model for progress in the future.

Ter@tec itself is the core of a European high performance simulation technopole, which will work together with others in the European HPC space. The EU is now taking HPC very seriously, for example in the European program PRACE (Partnership for Advanced Computing in Europe). Ter@tec is also an associate member of STRATOS, the PRACE advisory group for strategic technologies.

As an aside, while the European R&D stimulation programs in HPC are of course entirely relevant and positive, it is nonetheless legitimate to ask – given the bureaucracy and slow time frames – whether they are entirely in sync with the fast-changing reality of HPC today.

In any case, the public sector – both at the national and the European levels – has a key role to play in this ecosystem. University and public sector research is essential, and depends on public funding and incentives. Many domains of HPC remain highly sensitive in terms of sovereignty as well as competitiveness.

The key technology issues

In this part of our report on the Ter@tec Forum 2010, we will focus not on what is specific to given sectors but on issues that apply in a more general, cross-cutting way.

HPC applications are “hitting the wall”

A growing number of applications experience a performance slump when ported to new, more powerful systems. According to some estimates, this affects more than 50% of HPC sites. Today, scientific code is usually closely tied to the computer architecture that was in use at development time, and it often adapts poorly to new architectures.

This is a major problem because many users need the extra performance but do not have the resources (or the right tools) to re-engineer their applications. There does not appear to be any easy solution for the "HPC legacy" portfolio. Experience from the world of business applications tends to suggest that rewriting the code is the only realistic solution.

Moving forward, addressing the "application scalability" issue (or more precisely, "HPC code portability" between machines of different generations) may well require the implementation of abstraction layers for a small portion of the critical code – probably implying the acceptance of a performance sacrifice on such code – for the benefit of the entire application.

For many applications, the dominant HPC culture of "making the most of the available power" (justified in times of scarce and expensive processing power) may need to give way to different software designs that enable the use of even more power, but in a slightly less efficient manner. With cheap, massively scalable, parallel-capable systems, this could well be the way to go as HPC percolates from top-end, power-hungry applications to more general usage.
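To make the idea concrete, here is a minimal sketch (ours, not code shown at the Forum) of such an abstraction layer around a single critical kernel, in plain C: the application codes against a generic interface, and the machine-specific implementation behind it can be swapped when the hardware generation changes, at the cost of one indirection per call.

    /* Minimal, illustrative sketch of an abstraction layer around a
     * critical kernel. The application calls a generic interface; the
     * implementation behind it can be replaced when the machine
     * generation changes, without touching the application code. */
    #include <stddef.h>
    #include <stdio.h>

    /* Generic interface the application codes against: y = a*x + y. */
    typedef void (*axpy_kernel)(size_t n, double a, const double *x, double *y);

    /* Portable reference implementation. */
    static void axpy_reference(size_t n, double a, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    /* A vectorised or accelerator-offloaded variant would be registered
     * here instead, at the cost of one extra indirection per call. */
    static axpy_kernel active_axpy = axpy_reference;

    int main(void)
    {
        double x[4] = {1, 2, 3, 4};
        double y[4] = {10, 20, 30, 40};

        active_axpy(4, 2.0, x, y);   /* the application only sees the interface */

        for (int i = 0; i < 4; i++)
            printf("%g ", y[i]);     /* prints: 12 24 36 48 */
        printf("\n");
        return 0;
    }

The performance sacrifice is deliberate and local: the critical kernel pays for the indirection, while the rest of the application becomes independent of the machine generation.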

x86-based processor technology dominates the market ... today

HPC is now largely dominated by standard processor components. A quick analysis of the Top 500 supercomputer list shows that Intel dominates this market, with more than 70% of the top 500 machines relying on its x86 technology (Xeon).

The x86 architecture is even more dominant when one includes the 9% share of AMD with Opteron chips.

This situation is not without irony, given that x86 is a 30-year-old architecture originally developed for the PC. In addition to low cost and steadily increasing performance, x86 has the big advantage of a very large ecosystem of standard software tools.

Newer processor technologies are emerging as challengers

However, technology changes are under way. The use of General Purpose Graphics Processing Units (GPGPUs), essentially NVIDIA products, has enabled a different set of applications and systems. Using GPGPUs alongside conventional CPUs in so-called "hybrid systems", which is indeed relevant for some workloads, also has a "political" effect: it boosts a system's Linpack performance, propelling it easily up the Top 500 ladder...

In the future, multicore architectures using “simpler” processor cores than x86 for extremely massive parallel code execution will also challenge the current pecking order in the processor space.

These technologies will, as discussed above, require a different approach to the all-important issue of HPC "application scalability".
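One generic way to prepare code for such machines (our illustration, not a recipe presented at the Forum) is to express critical loops as independent iterations plus reductions, and leave it to the compiler and runtime to decide how many cores the work is spread over. In plain C with OpenMP, for example:

    /* A data-parallel kernel: iterations are independent and the partial
     * sums are combined with a reduction, so the runtime can spread the
     * work over however many cores the target machine offers. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const long n = 1 << 20;
        double *x = malloc((size_t)n * sizeof *x);
        double sum = 0.0;

        for (long i = 0; i < n; i++)
            x[i] = 1.0 / (double)(i + 1);

        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += x[i] * x[i];

        printf("sum of squares = %f\n", sum);   /* converges towards pi^2/6 */
        free(x);
        return 0;
    }

Built without OpenMP, the pragma is simply ignored and the loop runs serially; built with it, the same source uses all available cores. This is the trade-off described above: a less hand-tuned kernel in exchange for portable scalability.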

Intelligence – and challenges – across system architectures

With regard to architectures, the increased "intelligence" of storage and network components - together with performance issues - is driving a different sharing of tasks across entire systems. Previously dedicated components such as network controllers may well follow the GPGPU path and become a major option for true additional processing capability inside large HPC systems.

In the day 2 technical workshops, memory and storage access emerged as critical design issues as vendors and users seek to further expand computing performance. Amdahl's laws defining the conditions for balanced systems remain valid, and the increase in CPU performance does indeed put pressure on memory access, in both SMP and cluster environments. Beyond hardware design, the need for new memory access strategies has emerged.
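As a reminder of the arithmetic behind this pressure (the classic speedup form of Amdahl's reasoning, recalled here for context rather than taken from any presentation at the Forum): if a fraction p of a workload scales perfectly over N processors while the remainder, including time spent waiting on memory, does not scale, the achievable speedup is

    S(N) = \frac{1}{(1 - p) + \frac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

With p = 0.95, for example, no number of processors can deliver more than a 20x speedup, which is why the components of a system that do not scale (memory access among them) dominate design discussions as core counts grow.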

Storage systems will be increasingly critical in HPC

Management of data and storage is becoming critical because HPC data volumes are exploding.

Storage systems are indeed computers in their own right, and traditional, commercial-type file systems are no longer sufficient. Here again, access strategies need to evolve. On the hardware side, RAID architectures seem to be losing relevance because of difficulties in scaling up to the needs of "petascale storage". In addition, new software solutions are needed to scale storage capabilities further.

Interestingly enough, HPC is confronting issues similar to those encountered by very large Web applications handling largely unstructured data, such as Google, Yahoo or MSN. Among the solutions envisioned, the use of non-enumerating, rules-based metadata management engines appears to be an attractive development path.
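To illustrate what "non-enumerating, rules-based" means, here is a deliberately tiny, hypothetical sketch (the field names and the rule are ours, not drawn from any product discussed at the Forum): files are selected by evaluating a rule against metadata records held in a catalogue, rather than by walking a directory tree with millions of entries.

    /* Hypothetical sketch: selection by rule over a metadata catalogue,
     * instead of enumerating a huge file-system namespace. */
    #include <stdio.h>
    #include <string.h>

    struct file_meta {
        const char *path;        /* where the file lives */
        const char *owner;       /* project that produced it */
        long long   size_bytes;  /* size recorded in the catalogue */
    };

    /* The "rule": outputs over 1 GB belonging to a given project. */
    static int rule_large_project_output(const struct file_meta *m, const char *project)
    {
        return strcmp(m->owner, project) == 0 && m->size_bytes > (1LL << 30);
    }

    int main(void)
    {
        /* A tiny in-memory array standing in for an indexed metadata store. */
        struct file_meta catalogue[] = {
            { "/scratch/run42/field.h5", "aero",  5LL << 30 },
            { "/scratch/run42/log.txt",  "aero",  4096      },
            { "/scratch/run07/mesh.bin", "crash", 8LL << 30 },
        };

        for (size_t i = 0; i < sizeof catalogue / sizeof catalogue[0]; i++)
            if (rule_large_project_output(&catalogue[i], "aero"))
                printf("selected: %s\n", catalogue[i].path);

        return 0;
    }

A real engine would of course keep the metadata in a scalable, indexed store rather than in a small in-memory array; the point is simply that the cost of selection is driven by the rule and the index, not by the number of files enumerated.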



To sum up, the competitive stakes - both on the economic level and in terms of technology - are indeed high. On the basis of the Ter@tec 2010 Forum, it would seem that, all in all, Europe has the ingredients for success.

Even so, it will take smart investments and lots of work to pull it all together into a competitive edge at the global level … and make “high performance happen”.

Tuesday, June 29th 2010
Duquesne Advisory
