HPC programmers who are tired of managing low-level details when using OpenCL or CUDA to write general-purpose GPU (GPGPU) applications may be interested in Harlan, a new declarative programming language designed to mask the complexity of, and eliminate errors common in, GPGPU application development.

GPUs are increasingly being used to provide a boost in computing power in HPC systems. Attaching NVIDIA Kepler or Intel Xeon Phi co-processing cards to a traditional CPU architecture can provide a big increase in the performance of parallel workloads. However, programming GPUs can be difficult, as it requires different tools and a different skill set than traditional x86 development.

The idea behind Harlan is to keep developers focused on the high-level HPC programming challenge at hand, instead of getting bogged down in the nitty-gritty details of GPU development and optimization.

Eric Holk, a Ph.D. candidate at Indiana University, is the driving force behind the Harlan project. Harlan is a domain-specific language that uses a declarative approach to coordinating computation and data movement between a CPU and GPU, according to a paper that Holk and his colleagues presented at the September 2011 International Conference on Parallel Computing.

Harlan’s syntax is based on the language Scheme, and compiles to Khronos Group’s OpenCL, a GPU framework that competes with NVIDIA’s Compute Unified Device Architecture (CUDA). The language was designed to provide a “straightforward mechanism for expressing the semantics the user wants” for areas such as data layout, memory movement, threading, and computation coordination. In effect, it lets developers declare the “what,” and leaves the “how” up to the language, the researchers say in their paper.
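To give a flavor of the approach, a Harlan program that adds two vectors on the GPU might look roughly like this. This is an illustrative sketch based on examples in the project's repository; the specific forms shown (`module`, `vector`, `kernel`, `println`) are assumptions and may differ from the current language.

```scheme
;; Illustrative sketch of Harlan's Scheme-derived syntax
;; (forms assumed, not verified against the current compiler).
(module
  (define (main)
    (let* ((xs (vector 1 2 3 4))
           (ys (vector 5 6 7 8))
           ;; The kernel form declares an element-wise computation over
           ;; the vectors; the compiler decides how to move the data to
           ;; the GPU and launch the computation.
           (zs (kernel ((x xs) (y ys)) (+ x y))))
      (println zs))
    (return 0)))
```

The programmer states only the per-element computation; host-to-device data transfer and kernel launch configuration, which would be explicit boilerplate in OpenCL or CUDA, are left to the compiler.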

The benefits of this approach will be even higher for hybrid applications that utilize a combination of GPUs and CPUs, since they introduce even more complexity for the developer, who has to take into account additional levels of memory hierarchy and computational granularity, the researchers say.

“Not only does a declarative language obviate the need for the programmer to write low-level error-prone boiler-plate code, by raising the abstraction of specifying GPU computation it also allows the compiler to optimize data movement and overlap between CPU and GPU computation,” Holk and his colleagues write in the paper, titled “Declarative Parallel Computing for GPUs.”

In addition to Harlan, Holk and his colleagues are developing Kanor, another declarative language, this one for specifying communication in distributed memory clusters. Kanor is unusual, Holk writes, in that it can automatically handle the low-level details when appropriate, but gives the programmer the option to step in and hand-code the communications when necessary. This provides a “balance between declarativeness and performance predictability and tenability.”

Harlan will provide a productivity boost, but don’t expect it to transform your average coder into a super coder. “It is important to emphasize at this point that we are not proposing a ‘silver bullet’ or ‘magic compiler’ that will somehow make GPGPU or hybrid cluster programming easy,” Holk and his colleagues write.

“Rather, we are seeking to abstract away many of the low-level details that make GPU/cluster programming difficult, while still giving the programmer enough control over data arrangement and computation coordination to write high-performance programs,” they add.

Harlan runs on Mac OS and Linux. The Harlan project is hosted on GitHub and has five contributors.
