As Moore’s Law runs out of steam, new programming approaches are being pursued that aim to deliver greater hardware performance with less coding effort.

The Defense Advanced Research Projects Agency is launching a new programming effort aimed at leveraging the benefits of massive distributed parallelism with less sweat. PAPPA, for Performant Automation of Parallel Program Assembly, seeks to develop new programming approaches that would allow researchers and application developers to compile high-performance programs for parallel and “heterogeneous” hardware.

Program officials said this week PAPPA would explore “tradeoffs between programming productivity, solution generality and scalability to enable scientists and domain experts with no understanding of parallel programming and hardware architectures to create highly efficient performance portable programs.”

One aspirational goal would be new compiler technology providing up to a 10,000-fold improvement in programming productivity for massively parallel systems. Those high-performance, portable compilers would, among other things, lower barriers to deploying new algorithms on widely used programmable platforms.

PAPPA also addresses gaps in current programming approaches, which can scale across millions of processor cores but require high-end programming expertise. That often translates into long and costly development lead times for HPC systems.

PAPPA specifically seeks to fill the gap between programming frameworks that automate parallelism but lack the ability to scale and popular data science languages like Python and R, which likewise have shown little ability to scale on massively parallel systems.
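To illustrate the productivity side of that gap, here is a minimal Python sketch (not part of the DARPA program, and the workload function is a made-up stand-in): the standard-library process pool automates parallelism across local cores with almost no extra code, but this style offers no path to the millions of cores PAPPA targets.

```python
from multiprocessing import Pool

def simulate(step: int) -> int:
    """Stand-in for an expensive per-element computation."""
    return sum(i * i for i in range(step * 1000)) % 97

def run_serial(steps):
    # Plain data-science style: a one-line loop, single core.
    return [simulate(s) for s in steps]

def run_parallel(steps):
    # The pool transparently schedules work across local cores,
    # but it stops at one machine -- no distributed scaling.
    with Pool() as pool:
        return pool.map(simulate, steps)

if __name__ == "__main__":
    steps = list(range(1, 9))
    assert run_serial(steps) == run_parallel(steps)
```

The convenience is real, but scaling this pattern to a cluster today requires rewriting around MPI or a similar framework, which is exactly the expertise barrier the program wants to remove.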

DARPA has focused on HPC programming and portability for nearly two decades, fueling enterprise application of supercomputer technology. Now, PAPPA seeks to up the ante by leveraging domain-specific languages like PyTorch and TensorFlow that have demonstrated both high performance and programming productivity.

Meanwhile, newly emerging automated programming tools based on machine learning would be applied under the DARPA effort to help automate system modeling for distributed architectures.

Automating parallel programming also means off-loading tasks such as resource allocation and memory management so developers can focus on building scalable HPC applications. Hence, agency officials have concluded, “it seems likely that a completely new approach is needed” to automate parallel programming.

One possible approach to more efficient development of executable HPC code would be accurate modeling and prediction of component performance within a full-blown HPC platform. Where appropriate, automation and other tools could then be applied to develop domain-specific HPC programs that don’t bust budgets.
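One way to picture that kind of component-level performance prediction is a back-of-the-envelope roofline estimate. The sketch below is illustrative only; the peak-compute and memory-bandwidth figures are assumed values for a hypothetical node, not numbers from the solicitation.

```python
def predicted_gflops(arith_intensity: float,
                     peak_gflops: float = 500.0,
                     mem_bw_gb_s: float = 100.0) -> float:
    """Roofline model: attainable performance is capped either by
    peak compute or by memory bandwidth times arithmetic intensity
    (FLOPs performed per byte moved from memory)."""
    return min(peak_gflops, mem_bw_gb_s * arith_intensity)

# A low-intensity kernel (e.g. vector addition, ~0.1 FLOP/byte)
# is memory-bound on this hypothetical node:
low = predicted_gflops(0.1)    # 10.0 GFLOP/s, far below peak
# A high-intensity kernel (e.g. dense matrix multiply) hits the
# compute ceiling instead:
high = predicted_gflops(50.0)  # 500.0 GFLOP/s, the peak
```

Even a model this crude lets a tool decide, before any code is generated, whether a kernel is worth optimizing for data movement or for raw compute — the sort of automated judgment PAPPA envisions at much higher fidelity.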

DARPA said the two-phase HPC programming effort would focus on two application domains: physical simulations and real-time processing. The former includes data-driven applications such as fluid dynamics, weather forecasting and particle physics. The latter covers edge computing applications, including radar and wireless communication systems.

The DARPA program solicitation for PAPPA, released on Tuesday (Sept. 3), is here. Industry proposals are due Oct. 3.