This post was originally published in EnterpriseAI and HPCwire.

In a recent HPCwire article, it was revealed that DARPA is working to optimize programming approaches with the goal of increasing the performance of parallel systems. This is a worthwhile goal, and one squarely in line with our vision at Archanan, where we have developed a cloud platform that speeds research and development cycles by providing tools and environments that let programmers develop and test applications in real time, at scale. Our goal is to maximize the organizational utility of any existing supercomputer (or other complex computing system), while speeding up the tendering and procurement process for system vendors by allowing engineers to develop and test applications on a virtualized model of the future system.

Interestingly enough, the DARPA article notes that “one possible approach to more efficient development of executable HPC code would be accurate modeling and prediction of component performance within a full-blown HPC platform.” As fate would have it, this is exactly what we have developed at Archanan, and are currently rolling out at supercomputing centers across Asia.

We have developed a cloud-based platform in which an organization can administer a digital twin of its supercomputing system, emulating every component, from the storage and memory down to the compute and fabric, enabling at-scale development and testing without tying up the production system itself. Using the Archanan Development Cloud, organizations can provision personal Integrated Development Environments (IDEs) in the Archanan Cloud that mimic their own system. This creates new, efficient workflows that eliminate the testing bottlenecks and port-over failures caused by the inability to pre-test code at scale.

Through our background in supercomputing with several institutions, we have worked with many people in different roles across the high-performance computing community. We consistently hear about the issues HPC developers face in their workflows, and we are keenly aware of how difficult it is for an organization to change its development track once it has been deployed. The frustration always comes down to the same challenge: over-subscribed test systems that aren't at the scale of the production machine.

Our mission is to change this paradigm by adding value at the very start of a supercomputer's lifecycle: we work with hardware manufacturers to provide emulation of their upcoming architectures. They, in turn, can share this virtualized hardware in the Archanan Development Cloud with their customers, providing a “test drive” of the system that yields better estimates of the performance of the system and its elements during the tendering process. Imagine a research center running its top five applications on a system during the tendering process, while adjusting the configuration to right-size performance to its application needs. This at-scale test-drive capability was previously unavailable, but today there is no reason for any organization to commit financial resources to these expensive systems without first giving them a thorough examination through cloud emulation.

This resource comes at an ideal time in the advancement of supercomputing, as we see increasing numbers of hybrid machines and specialized, advanced workloads such as AI, where specific accelerators are being considered. In these cases, it is very difficult to predict performance across many different types of hardware. We have seen many supercomputing centers either over-provision or under-provision particular hardware components of the larger system. The right balance depends largely on which applications are run, and at what capacities, making it critical to test-drive before committing to a system.

We’re also seeing an increasing number of machines that mix processor architectures – multiple CPU families (Power, x86, ARM, etc.) paired with multiple accelerator types (GPU, FPGA, etc.). Previously, it was very difficult to reliably gauge the performance of such a system; today, we can provide a snapshot of the whole machine, delivering accurate benchmarking while sampling it against the applications intended to run on it.

The best part is that this ability is a single facet of the overall power of the Archanan Development Cloud. Once a system is requisitioned with its specifications fully determined, it may take upwards of two years before the purchasing organization takes custody of it. Under the current paradigm, committing resources to development for that system is precarious because there is no way to accurately test the performance and portability of the applications being developed. With virtualized access to the machine, however, at-scale development can begin immediately. When an organization’s users have access to an emulated version of their future machine, production applications can be installed and run as soon as the power is switched on. Simply put, the supercomputer reaches effectiveness more quickly if people can develop and optimize their applications at scale before the machine is delivered.

Additional possibilities exist as well. For organizations such as universities, where access to production machines is very limited, independent virtualized clones of the system can be made available on an individual, per-account basis. A university can be less restrictive in giving students access to learn, explore, and experiment. Graduate students, undergraduates, and anyone learning large-scale or parallel computing can work on systems that look like the full machine, demonstrate production-scale workloads, and prepare their projects for a better chance at deployment on the physical machine. Virtualizing the production machine lowers the barrier to access while increasing the system’s value and effectiveness.

Users of Archanan will change their supercomputing processes for the better by lowering risk, eliminating bottlenecks, and maximizing the utility of these valuable systems. We encourage any organization purchasing or building a supercomputing system to get in touch to discuss how we can help. For more information, visit our website or download our solution brief.
