• Some Picky Presentation Tips

    I just spent the last week at IPDPS in Boston. It was a good time. I got to meet a few new people, and connect with a lot of friends who are now living in the Boston area. I also presented our work on Rust for the GPU at HIPS. In the course of watching a lot of presentations, I came up with a few tips. I admit I did not follow all of these in my own presentation, but hopefully all of us can learn from these.

  • Data Parallel Operators

    In my previous post, we discussed some of the data structures that support data parallel programming. Now we’ll turn our attention to the common operators that manipulate these data structures. I’ll discuss several of them: map, reduce, scan, permute, back-permute and filter.
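To make those operators concrete, here is a minimal sketch of each one in plain Python (the post itself may define them differently; the semantics here follow the usual data parallel conventions):

```python
from functools import reduce as fold

xs = [3, 1, 4, 1, 5]

# map: apply a function independently to every element
squares = list(map(lambda x: x * x, xs))          # [9, 1, 16, 1, 25]

# reduce: combine all elements with an associative operator
total = fold(lambda a, b: a + b, xs, 0)           # 14

# scan (prefix sum): like reduce, but keep every intermediate result
def scan(op, xs, init):
    out, acc = [], init
    for x in xs:
        acc = op(acc, x)
        out.append(acc)
    return out

prefix = scan(lambda a, b: a + b, xs, 0)          # [3, 4, 8, 9, 14]

# permute: scatter each element to the position given by an index vector
def permute(xs, idx):
    out = [None] * len(xs)
    for i, j in enumerate(idx):
        out[j] = xs[i]
    return out

# back-permute: gather elements from the positions given by an index vector
def back_permute(xs, idx):
    return [xs[j] for j in idx]

# filter: keep only the elements satisfying a predicate
odds = [x for x in xs if x % 2 == 1]              # [3, 1, 1, 5]
```

The sequential loops above are just for clarity; the whole point of these operators is that each one has an obvious parallel decomposition.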

  • Data Parallel Data Structures

    Data parallelism is a style of programming where essentially the same operation is applied to a large collection of values. This style became popular during the 80s and early 90s as a convenient way of programming large vector processors. Data parallelism has remained popular, especially in light of the rise of GPGPU programming. Often, data parallel programming is used for fine-grained parallelism, but it works at larger granularity too. For example, MapReduce is a restricted example of data parallelism.

  • Beware the Logarithms

    Logarithms are great. They let you talk about incredibly wide ranges of numbers, and they transform multiplication into addition. Algorithms with logarithmic running times are so fast they might as well be constant time algorithms. Logarithmic scales on graphs can also make your results look much better. Let’s see how.
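The multiplication-into-addition property is easy to check numerically; a quick sketch:

```python
import math

a, b = 12345.0, 678.9

# log turns multiplication into addition...
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))

# ...and exponentiation into multiplication
assert math.isclose(math.log(a ** 3), 3 * math.log(a))

# and it compresses huge ranges: a billion is only 9 decades
print(math.log10(1e9))  # 9.0
```

That last point is exactly why a log scale can flatter a graph: a 1000x difference between two results shows up as a mere factor-of-3 difference in axis position.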

  • Patterns with Ellipses

    Last time, we talked about matching patterns in Scheme. Now we will look at how to extend the pattern matcher and template instantiation code to handle patterns with ellipses.

  • Matching Patterns with Scheme

A while back, I wrote a post about macros in Scheme. Today I want to take a look at how one might begin to implement a macro system. In Scheme, whether you use syntax-rules or syntax-case to write your macros, at some point you’ll write patterns and templates. Macros match their input against a pattern and then use the resulting bindings to instantiate a template. Let’s consider a two-way or macro:
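The match-then-instantiate step described here can be sketched in Python (rather than Scheme, purely for illustration): pattern variables bind to whatever subform they line up with, and everything else must match literally. The `?`-prefix convention for pattern variables and the standard expansion of a two-way or are assumptions of this sketch, not taken from the post.

```python
def match(pattern, form, bindings=None):
    """Match a nested-list form against a pattern.

    Strings starting with '?' are pattern variables that bind to
    whatever subform they line up with; everything else must match
    literally. Returns a dict of bindings, or None on failure.
    """
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        bindings[pattern] = form
        return bindings
    if isinstance(pattern, list) and isinstance(form, list):
        if len(pattern) != len(form):
            return None
        for p, f in zip(pattern, form):
            if match(p, f, bindings) is None:
                return None
        return bindings
    return bindings if pattern == form else None

def instantiate(template, bindings):
    """Fill a template by substituting bound pattern variables."""
    if isinstance(template, str) and template.startswith('?'):
        return bindings[template]
    if isinstance(template, list):
        return [instantiate(t, bindings) for t in template]
    return template

# A two-way `or` as a pattern and a template, using the standard
# (let ((t a)) (if t t b)) expansion:
pattern  = ['or', '?a', '?b']
template = ['let', [['t', '?a']], ['if', 't', 't', '?b']]

b = match(pattern, ['or', ['f', 'x'], 'y'])
expansion = instantiate(template, b)
# ['let', [['t', ['f', 'x']]], ['if', 't', 't', 'y']]
```

A real macro expander also has to worry about hygiene (that `t` must not capture a user variable), which this sketch ignores entirely.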

  • Access Patterns Matter, Part 2

A couple of readers pointed out some improvements and corrections to my last post on GPU access patterns. These were pretty significant, so I thought it’d be worth doing a follow-up post to see how they change things.

  • Access patterns matter

One of the oft-cited difficulties of GPU programming is dealing with memory layout and access patterns. In order to achieve maximum memory bandwidth, it is important to structure your application so that different threads do not access the same bank of memory at the same time. In other words, you need to avoid bank conflicts.
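As a rough model of what a bank conflict is (the 32-bank, 4-byte-word layout below is an assumption typical of NVIDIA shared memory, not something taken from the post): consecutive words live in consecutive banks, and accesses within a warp serialize when multiple threads hit different words in the same bank.

```python
NUM_BANKS = 32   # assumed: 32 banks of 4-byte words
WORD_SIZE = 4

def bank_of(byte_addr):
    """Bank a byte address falls in under the assumed layout."""
    return (byte_addr // WORD_SIZE) % NUM_BANKS

def max_conflict(addrs):
    """Worst-case conflict degree: how many threads hit the busiest bank.

    (Real hardware can broadcast when threads read the *same* word;
    this simple model ignores that case.)
    """
    counts = {}
    for a in addrs:
        b = bank_of(a)
        counts[b] = counts.get(b, 0) + 1
    return max(counts.values())

warp = range(32)

# Stride-1 access: every thread hits a different bank -- conflict-free.
print(max_conflict([t * 4 for t in warp]))       # 1

# Stride-32 access: every thread hits bank 0 -- a 32-way conflict,
# so the access runs at 1/32 of peak bandwidth.
print(max_conflict([t * 32 * 4 for t in warp]))  # 32
```

This is why the stride of your innermost loop, not just the total amount of data moved, ends up dominating performance on a GPU.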

  • Modeling How Programmers Read Code (via Mike Hansen)

    My last post includes a video of my eye movements as I read and interpret a piece of code. I mentioned that this was part of an experiment being conducted by Mike Hansen. He just put up a new post with more details about his work and a video of another programmer reading a similar program. Check it out!

  • How do we read code?

I recently got to participate in a psychological experiment for programmers. A friend of mine, Mike Hansen, is doing research on how people comprehend programs. The goal is to figure out some way of measuring what features in programming systems help programmers understand what they are doing, and how this can be used to make systems that lead to higher quality software. Mike is currently running an experiment where he shows people several short Python programs and asks them to predict the output of each program. The test subject sits in front of an eye tracker, so afterwards Mike can see where they were looking at various times during the experiment.