notes on simple cellphone accelerometer VR

I was hoping to hack in some augmented reality and some topology optimization here too, overlaying graphics on the cellphone camera view and synthesizing numerically improved shapes for physical forms. I didn’t manage to get any of that in, mostly because it took me a while to do basic things like erase canvases, draw circles, and read the accelerometer.
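For the record, those canvas basics look roughly like this; `clearRect` and `arc` are the standard 2D canvas API, but the function shape and names here are illustrative, not from the actual hack:

```javascript
// Sketch of the two canvas basics mentioned above: erasing the whole
// canvas and drawing filled circles.  drawFrame and its parameter
// names are my own illustration.
function drawFrame(ctx, width, height, xs, ys, radius) {
  ctx.clearRect(0, 0, width, height);  // erase everything drawn so far
  for (var i = 0; i < xs.length; i++) {
    ctx.beginPath();
    ctx.arc(xs[i], ys[i], radius, 0, 2 * Math.PI);  // a full circle
    ctx.fill();
  }
}
// in a browser: drawFrame(canvas.getContext('2d'),
//                         canvas.width, canvas.height, x, y, 8);
```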

I wrote a tiny fragment of APL as a small JS library (a bit under 60 lines of code, partly because it’s poorly factored), which let me write things like this:

    x = rand(n).plus(rand(n)).plus(-1).times(64);  // triangular noise in [-64, 64]
    y = rand(n).plus(rand(n)).plus(-1).times(64);
    var ty = y.times(c).plus(z.times(s))           // rotate (y, z) by the tilt
      , tz = z.times(c).plus(y.times(-s))          //   angle; c = cos, s = sin
      , seq = tz.gradeDown()     // painter’s algorithm: sort by depth
    ;
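A minimal sketch of what that library might look like: the names `rand`, `plus`, `times`, and `gradeDown` come from the snippet, but the implementations below are my guesses, not the actual library.

```javascript
// Wrap an elementwise binary op so it zips two equal-length vectors
// or broadcasts a scalar right operand over the whole vector.
function broadcast(f) {
  return function (other) {
    if (typeof other === 'number')
      return this.map(function (a) { return f(a, other); });
    return this.map(function (a, i) { return f(a, other[i]); });
  };
}

Array.prototype.plus  = broadcast(function (a, b) { return a + b; });
Array.prototype.times = broadcast(function (a, b) { return a * b; });

// gradeDown (APL's ⍒): the permutation of indices that sorts the
// vector in descending order, used above to get a drawing order for
// the painter's algorithm.
Array.prototype.gradeDown = function () {
  var self = this;
  var idx = self.map(function (_, i) { return i; });
  idx.sort(function (a, b) { return self[b] - self[a]; });
  return idx;
};

// rand(n): a vector of n uniform samples in [0, 1); summing two of
// these, as the snippet does, gives a triangular distribution.
function rand(n) {
  var v = [];
  for (var i = 0; i < n; i++) v.push(Math.random());
  return v;
}
```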

I was pleased with that; it saved me several explicit loops. It made adding the 3D part very easy indeed. I’ll see if I can pull a version of that out as a reusable library for the next few hacks.

In crude experiments and rough calculations, it seems the accelerometer input has good enough resolution and low enough noise to detect hand movement as well as just tilt angle. I was hoping to explore that but ran out of time for today. Some kind of data visualization would have helped a lot there, but I didn’t have easy data visualization, so maybe that would be a good thing to work on next.
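Reading the accelerometer in the browser goes through the standard `devicemotion` event; a low-pass filter like the sketch below is one common way to separate the slowly varying tilt component from the noisier hand-movement component. The smoothing constant 0.8 is an arbitrary choice of mine, not something from the hack.

```javascript
// Track smoothed tilt from devicemotion events.  The event name and
// the accelerationIncludingGravity field are the standard DOM API;
// the exponential-smoothing weight is an assumption.
function makeTiltTracker(target) {
  var tilt = { x: 0, y: 0 };
  target.addEventListener('devicemotion', function (e) {
    var g = e.accelerationIncludingGravity;
    // exponential moving average: keeps the slow tilt signal,
    // attenuates high-frequency sensor noise and hand jitter
    tilt.x = 0.8 * tilt.x + 0.2 * g.x;
    tilt.y = 0.8 * tilt.y + 0.2 * g.y;
  });
  return tilt;
}
// in a browser: var tilt = makeTiltTracker(window);
```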

I used <canvas> instead of SVG for this because I wanted it to work on my old iPhone, whose Safari doesn’t support SVG, while <canvas> has been in Mobile Safari since the beginning. It does work, although it seems the accelerometer direction on that device is reversed from the Android device I tested on, and the update rate feels lower than the Android’s 20Hz, which itself already feels like noticeable lag interactively.

By the same token, I didn’t want to depend on WebGL. I probably would have benefited from a bit of shading on the spheres, though.
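One cheap way to fake that shading without WebGL would be a radial gradient per sphere. `createRadialGradient` is the standard 2D canvas API; the highlight offset and colors below are arbitrary choices, a sketch rather than what the hack does.

```javascript
// Shade a circle to look sphere-ish with a radial gradient whose
// bright inner circle is offset up and to the left, suggesting a
// light source; offsets and colors are arbitrary.
function shadedSphere(ctx, x, y, r) {
  var grad = ctx.createRadialGradient(x - r / 3, y - r / 3, r / 8,
                                      x, y, r);
  grad.addColorStop(0, '#fff');   // highlight
  grad.addColorStop(1, '#448');   // darker limb
  ctx.fillStyle = grad;
  ctx.beginPath();
  ctx.arc(x, y, r, 0, 2 * Math.PI);
  ctx.fill();
}
```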