Swig, LLVM, LLILC

As I work to enhance Campy, a C# library for GPU programming that I wrote, I’m trying to capitalize on some new code from the .NET Foundation projects. These include LLILC, an MSIL compiler based on LLVM. I want to be able to use some of the APIs in LLVM to perform SSA analysis rather than roll my own. But that turns out to be easier said than done. This note describes some of the issues in building LLILC, LLVM, and SWIG.

Continue reading “Swig, LLVM, LLILC”

A Short, Practical Review of Foreign Function Calls in Java, C#, C++

Android is a popular platform for smartphones, with tens of thousands of applications developed for it every year. There are a number of programming languages for writing an Android app, but Java is the most popular. However, Xamarin provides tools for writing applications in C# if you prefer. If you use Xamarin but sometimes want to use an open-source library written in Java, you still can. The state of practical, currently used Foreign Function Interfaces (FFI) is similar on most platforms. Of course, the difficulty is in the details.

Generally, there are two ways for code in one language to call code in another: use a native function declaration that binds to the foreign function, or use reflection to invoke it. A bindings interface offers native functions for the whole foreign API.

Java Native Interface (JNI) provides a native interface to call non-Java code from Java. It also provides an API for use in C++ to invoke Java code, structured like Java’s reflection API.
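
To give a flavor of that second API, here is a minimal sketch of a C++ program that starts a JVM and calls a Java method through JNI’s reflection-like lookup functions. The class Greeter and its static void hello() method are hypothetical stand-ins; substitute any class that is actually on the class path.

```cpp
#include <jni.h>
#include <iostream>

int main()
{
    // Start an embedded JVM from C++.
    JavaVM* jvm = nullptr;
    JNIEnv* env = nullptr;
    JavaVMOption options[1];
    options[0].optionString = const_cast<char*>("-Djava.class.path=.");
    JavaVMInitArgs vm_args;
    vm_args.version = JNI_VERSION_1_6;
    vm_args.nOptions = 1;
    vm_args.options = options;
    vm_args.ignoreUnrecognizedOptions = JNI_FALSE;
    if (JNI_CreateJavaVM(&jvm, reinterpret_cast<void**>(&env), &vm_args) != JNI_OK)
    {
        std::cerr << "Could not create the JVM." << std::endl;
        return 1;
    }

    // Look up the class and method by name and signature, much as Java
    // reflection does, then invoke it. "Greeter" and "hello" are hypothetical
    // names used only for this sketch.
    jclass cls = env->FindClass("Greeter");
    if (cls != nullptr)
    {
        jmethodID mid = env->GetStaticMethodID(cls, "hello", "()V");
        if (mid != nullptr)
            env->CallStaticVoidMethod(cls, mid);
    }

    jvm->DestroyJavaVM();
    return 0;
}
```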

There are a number of tutorials on how to use JNI, but they are somewhat out of date. This article brings the topic up to date, if only to a small degree, with “how to” explanations.

Continue reading “A Short, Practical Review of Foreign Function Calls in Java, C#, C++”

Developing CUDA and C++ AMP in Visual Studio 2011

Every time Microsoft releases a new version of Visual Studio C++, NVIDIA releases a new version of CUDA to work with the new version of Visual Studio.  Unfortunately, that took NVIDIA an incredibly long time the last time around. After Visual Studio 2010 was released on April 4, 2010, CUDA integration with Visual Studio 2010 didn’t happen until CUDA 4.0 RC1 on March 4, 2011, a year later!  And, to this day, the build rules have never worked cleanly for me (http://stackoverflow.com/questions/6156037/issue-with-production-release-of-cuda-toolkit-4-0-and-nsight-2-0).  Because I’m developing C++ AMP and CUDA side by side and cannot wait for NVIDIA, I decided to develop the build rules myself and work out the details of calling CUDA from Visual Studio C++ 2012.

Continue reading “Developing CUDA and C++ AMP in Visual Studio 2011”

C++ AMP

At the AMD Fusion Developer Summit 2011 in June, Microsoft announced C++ Accelerated Massive Parallelism (C++ AMP), an extension to C++ for parallel programming of GPUs.  This extension, included in Visual Studio 11, would allow developers to program the GPU with language features that are arguably much more powerful than either CUDA or OpenCL.  In comparison, CUDA and OpenCL seem more like stone knives and bearskins.  After what seemed like a long three-month wait, Microsoft has finally released the Visual Studio 11 Developer Preview, which contains C++ AMP, Microsoft’s vision of GPU programming.  Was it worth the wait?

Continue reading “C++ AMP”

OpenCL vs. CUDA

Today's processors have undergone a huge transformation from those of just 10 years ago.  CPU manufacturers Intel and AMD (and up-and-coming CPU designer ARM) have increased processor speed via greater emphasis on superscalar execution, deeper pipelining, branch prediction, out-of-order execution, and fast multi-level caches.  This design philosophy has resulted in faster response time for single tasks executing on a processor, but at the expense of increased circuit complexity, high power consumption, and a small number of cores on the die.  On the other hand, GPU manufacturers NVIDIA and ATI have focused their designs on processors with many simple cores that implement SIMD parallelism, which hides the latency of instruction execution [1].

While GPUs have been in existence for about 10 years, the software support for these processors has taken years to catch up.  Software developers are still sifting through solutions for programming these processors.  OpenCL and CUDA are frameworks for GPGPU computing.  Each framework comprises a language for expressing kernel code (instructions that run on a GPU) and an API for calling kernels (from the CPU).  While the frameworks are similar, there are some important differences.
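
As a rough illustration of that split, here is a minimal CUDA sketch; the array size, scale factor, and block size are arbitrary and only for illustration. The __global__ function is the kernel code, written in CUDA's kernel language, while the cudaMalloc/cudaMemcpy calls and the <<< >>> launch in main() are the host-side API. An OpenCL version would express the same two parts with a kernel source string and host calls such as clEnqueueNDRangeKernel.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Kernel code: runs on the GPU, one thread per array element.
__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i)
        host[i] = (float)i;

    // Host API: allocate device memory, copy input over, launch the kernel,
    // and copy the result back.
    float* device = nullptr;
    cudaMalloc(&device, n * sizeof(float));
    cudaMemcpy(device, host, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(device, 2.0f, n);
    cudaMemcpy(host, device, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(device);

    printf("host[10] = %f\n", host[10]);
    return 0;
}
```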

CUDA is a proprietary framework. It is not open source, and all changes to the language and API are made by NVIDIA. But, some third-party tools have been built around the framework and it does seem to have a large following in academia.  Unfortunately, CUDA only runs on NVIDIA devices.  While it should be possible to run CUDA code on other platforms using Ocelot, this only works on Linux systems.
 
OpenCL is a standardized framework, and it is starting to gain popularity.  Similar to NVIDIA's CUDA C++, OpenCL allows programmers to use the massive parallel computing power of GPUs for general-purpose computing.  Unlike CUDA, OpenCL works on any supported GPU or CPU, including Intel, AMD, NVIDIA, IBM, and ARM processors.
 
Does OpenCL make programming multiple platforms easier?  Is it as fast as CUDA, or does it sacrifice speed for diverse platform support?

Continue reading “OpenCL vs. CUDA”

WASTE — A CUDA Emulator

CUDA C++ is a parallel programming language for NVIDIA GPUs.  Using the CUDA Toolkit (http://developer.nvidia.com/object/cuda_3_2_toolkit_rc.html), developers can write programs that take advantage of the large parallel computing potential of the GPU, speeding up their programs by several orders of magnitude.  CUDA programs are executed on the GPU, which works as a coprocessor with the CPU, having its own memory and instruction set.  But the question is: after the developer has invested the time to parallelize his program, can the CUDA program run on a PC without an NVIDIA GPU?  Does the developer have to redo all his software?
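
One small, concrete piece of that question is simply detecting whether a CUDA-capable device is present at all. Here is a minimal sketch using the standard CUDA runtime API; without a suitable GPU (or an emulator standing in for one), this check is where a CUDA program stops.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // Ask the CUDA runtime how many CUDA-capable devices are installed.
    int count = 0;
    cudaError_t status = cudaGetDeviceCount(&count);
    if (status != cudaSuccess)
    {
        printf("CUDA runtime/driver problem: %s\n", cudaGetErrorString(status));
        return 1;
    }
    if (count == 0)
    {
        printf("No CUDA-capable device found.\n");
        return 1;
    }
    printf("Found %d CUDA-capable device(s).\n", count);
    return 0;
}
```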

Continue reading “WASTE — A CUDA Emulator”

Parallel programming using CUDA

Last month, I took a short course called Introduction to Multicore Programming, which was taught by James T. Demers of Physical Sciences Inc. This course introduced me to current hardware and software systems for parallel computing. For me, the course was timely: I had been reading about how important parallel programming is becoming, and I was starting to become interested in parallel algorithms, but my knowledge of parallel computing was embarrassingly sparse. The last time I looked at the subject was in the early 1990’s, when I wrote a program that used MPI. Though I have an Intel Q6600 quad-core CPU machine (Figure 1, Figure 2), a byproduct of the rivalry between Intel and AMD, I never really bothered to program it for parallel computing because I did not think four cores would offer that much over one core. What I learned from this course was that I had my head in the sand. In fact, I was surprised to learn that the graphics card I owned, an NVIDIA GeForce 9800 GT (Figure 3), was a parallel computing system which I could program. So, I decided to apply what I learned in the class by solving two programming exercises on my system (Figure 4): matrix multiplication and graph breadth-first search.  In this post, I describe the first problem, matrix multiplication.
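
To give a sense of the kind of code involved, here is a minimal sketch of a naive CUDA matrix-multiplication kernel, one thread per output element, with no tiling or shared-memory optimization. This is the generic textbook version, not necessarily the code developed in the post, and it assumes square row-major matrices already resident in device memory.

```cpp
#include <cuda_runtime.h>

// Naive matrix multiply: C = A * B, where A, B, and C are n x n matrices
// stored in row-major order in device memory. Each thread computes one
// element of C.
__global__ void matmul(const float* A, const float* B, float* C, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n)
    {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

// Host-side launch helper; A, B, and C must already be device pointers.
void launch_matmul(const float* A, const float* B, float* C, int n)
{
    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
    matmul<<<grid, block>>>(A, B, C, n);
    cudaDeviceSynchronize();
}
```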

Continue reading “Parallel programming using CUDA”