Progeome is a very simple program for projective geometry; it has only 12 possible user actions.
Internally, the program represents the drawing as a DAG of object dependencies, which is maintained in a topologically sorted order and thus represents a sort of program for redrawing.
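To make that concrete, here is a minimal sketch in JavaScript; the names (objects, addObject, freePoint, lineThrough) and the choice to store homogeneous line coefficients are made up, not necessarily what the real code does. The key property is that an object can only be created after the objects it depends on, so appending each new object to the end of the list keeps the list topologically sorted, and “running the program” is just walking the list in order.

    var objects = [];          // all drawing objects, parents before children

    function addObject(obj) {  // obj.parents must already be in the list
      objects.push(obj);       // appending preserves topological order
      return obj;
    }

    function freePoint(x, y) {
      return addObject({ kind: 'point', parents: [], x: x, y: y });
    }

    function lineThrough(p, q) {  // a line constrained through two points
      return addObject({ kind: 'line', parents: [p, q] });
    }

    // Recompute derived objects in order, so parents are always up to date.
    function recompute() {
      for (var i = 0; i < objects.length; i++) {
        var obj = objects[i];
        if (obj.kind === 'line') {
          var p = obj.parents[0], q = obj.parents[1];
          // homogeneous coefficients of ax + by + c = 0 through p and q
          obj.a = p.y - q.y;
          obj.b = q.x - p.x;
          obj.c = p.x * q.y - q.x * p.y;
        }
        // free points just keep their coordinates
      }
    }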
The above description of 12 separate actions sounds a bit like a recipe for implementation paralysis. As an initial rough draft that might be achievable within a couple of hours, I am going to implement “create free point”, “select point”, “create line”, and “move point”, as a minimal core that more or less represents the functionality of the application.
This involves three buttons: create-point, create-line, and move. The create-line button gets displayed only when a point is selected, so we need to keep track of the currently-selected point. It doesn’t yet require reordering the objects, since without reconstraining, the order can’t change. Without macros and deletion, you probably can’t practically create enough objects to require incremental redrawing.
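That selection state amounts to almost nothing; a sketch, again with made-up names:

    var selectedPoint = null;        // the currently-selected point, if any

    function visibleButtons() {
      var buttons = ['create-point', 'move'];
      if (selectedPoint !== null) buttons.push('create-line');
      return buttons;
    }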
So handling a mousedown involves checking which quasimode is currently selected, which might be nothing, create-line, or create-point, and beginning to display the relevant feedback, which is then updated on every mousemove. Then handling the mouseup event is where we take the actual action; in the nothing case, that might be selecting a point or flashing the create-point button. Flashing the button requires that we have a currently-active-animations list that we step through on each redraw, and that we redraw periodically, at least while one or more animations are live.
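Here is a rough sketch of that animation machinery, assuming the drawing happens on a <canvas id="sketch"> element; flashButton, drawButtonHighlight, and scheduleRedraw are hypothetical names, and drawSketch, which draws the objects, feedback, and buttons, is sketched further down.

    var canvas = document.getElementById('sketch');  // assumed canvas element
    var animations = [];                             // currently-active animations

    function scheduleRedraw() { setTimeout(redraw, 16); }   // aiming at ~60fps

    function flashButton(button) {
      var framesLeft = 10;
      animations.push({ step: function(ctx) {
        drawButtonHighlight(ctx, button, framesLeft / 10);  // fading ring
        return --framesLeft > 0;                            // false when done
      }});
      scheduleRedraw();
    }

    function drawButtonHighlight(ctx, button, brightness) {
      ctx.beginPath();
      ctx.arc(button.x, button.y, button.r, 0, 2 * Math.PI);
      ctx.strokeStyle = 'rgba(0, 0, 255, ' + brightness + ')';
      ctx.lineWidth = 4;
      ctx.stroke();
    }

    function redraw() {
      var ctx = canvas.getContext('2d');
      drawSketch(ctx);          // objects, feedback, and buttons (see below)
      // Step each live animation, keeping only the ones that aren't done.
      animations = animations.filter(function(anim) { return anim.step(ctx); });
      if (animations.length) scheduleRedraw();  // keep going while any are live
    }

Note that this naive scheduleRedraw can queue several redraws at once; that turns out to matter below.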
Handling a touch event is only slightly more complicated, because there are potentially multiple touches, and each one may be activating a quasimode or participating in a particular quasimode, which may or may not be the current quasimode. (Indeed, mouse drags can also be participating in a previously active quasimode; it’s just that there’s only one of them.) So we need to maintain a set of currently active touches, along with what each of them is doing, so that we can react properly to touchmove and touchend events.
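A sketch of that bookkeeping, reusing canvas and scheduleRedraw from the sketch above; currentQuasimode is the head-of-the-list lookup sketched under key handling below, and takeAction is a made-up stand-in for the mouseup-style action dispatch. Each Touch has a stable .identifier, so a table keyed by identifier can remember what each finger is doing:

    var activeTouches = {};   // touch identifier -> what that finger is doing

    canvas.addEventListener('touchstart', function(ev) {
      ev.preventDefault();
      for (var i = 0; i < ev.changedTouches.length; i++) {
        var t = ev.changedTouches[i];
        // Each touch remembers the quasimode it activated or joined, which
        // may no longer be the current quasimode by the time it ends.
        activeTouches[t.identifier] = { quasimode: currentQuasimode(),
                                        x: t.clientX, y: t.clientY };
      }
      scheduleRedraw();
    });

    canvas.addEventListener('touchmove', function(ev) {
      ev.preventDefault();
      for (var i = 0; i < ev.changedTouches.length; i++) {
        var t = ev.changedTouches[i];
        var state = activeTouches[t.identifier];
        if (state) { state.x = t.clientX; state.y = t.clientY; }
      }
      scheduleRedraw();   // feedback follows the finger
    });

    canvas.addEventListener('touchend', function(ev) {
      for (var i = 0; i < ev.changedTouches.length; i++) {
        var t = ev.changedTouches[i];
        var state = activeTouches[t.identifier];
        if (state) takeAction(state);   // the mouseup-style “actual action”
        delete activeTouches[t.identifier];
      }
      scheduleRedraw();
    });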
Handling a keyup or keydown event might activate or deactivate a quasimode, and the activated quasimodes go into some kind of ordered list, with the one at the head of the list actually controlling the interaction.
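A sketch, with made-up key bindings:

    var activeQuasimodes = [];   // most recently activated first

    var keyQuasimodes = { l: 'create-line', p: 'create-point', m: 'move' };

    document.addEventListener('keydown', function(ev) {
      var mode = keyQuasimodes[ev.key];
      if (mode && activeQuasimodes.indexOf(mode) === -1) {
        activeQuasimodes.unshift(mode);   // the new mode takes control
      }
    });

    document.addEventListener('keyup', function(ev) {
      var mode = keyQuasimodes[ev.key];
      var i = activeQuasimodes.indexOf(mode);
      if (i !== -1) activeQuasimodes.splice(i, 1);
    });

    function currentQuasimode() {
      return activeQuasimodes[0] || null;   // head of the list, or nothing
    }

Since keydown auto-repeats while a key is held, the indexOf check keeps a held key from being pushed onto the list more than once.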
Actually drawing the sketch is pretty simple. Supposing that each point already knows its coordinates, which it will at first because they’re all free, we just walk through the list of objects, drawing each one on the canvas. Points are drawn as filled black circles, lines as thick lines from some point off the canvas to some other point off the canvas. Then we draw the feedbacks, the buttons, and the animations.
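A sketch of that walk, using the objects list from the first sketch; the point radius and line width here are guesses:

    function drawSketch(ctx) {
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      for (var i = 0; i < objects.length; i++) {
        var obj = objects[i];
        if (obj.kind === 'point') {
          ctx.beginPath();
          ctx.arc(obj.x, obj.y, 4, 0, 2 * Math.PI);
          ctx.fillStyle = 'black';
          ctx.fill();               // points: filled black circles
        } else if (obj.kind === 'line') {
          var p = obj.parents[0], q = obj.parents[1];
          var dx = q.x - p.x, dy = q.y - p.y;
          var s = 10000 / (Math.sqrt(dx * dx + dy * dy) || 1);
          ctx.beginPath();          // extend well past both canvas edges
          ctx.moveTo(p.x - dx * s, p.y - dy * s);
          ctx.lineTo(q.x + dx * s, q.y + dy * s);
          ctx.lineWidth = 3;
          ctx.stroke();
        }
      }
      // ...then the feedbacks, buttons, and animations go on top.
    }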
Sigh, okay, what I actually managed to get done in two hours is drawing two buttons and drawing some feedback animations in response to touch and click events.
Feedback from Chrome on Android 4.1.2 is that the touchstart events work fine with at least six simultaneous touches, but the animations run at 10fps instead of 60fps. Ancient Android gets even less, maybe 5fps, using setInterval. Also I had a bug where I would enqueue multiple redraw() calls, so the animations advanced several steps per frame; if you had many circles visible, they expanded almost instantly.
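A pending flag is one way to guard against that, ensuring at most one redraw is ever queued at a time, so each animation advances one step per frame; this would replace the naive scheduleRedraw sketched earlier:

    var redrawPending = false;

    function scheduleRedraw() {
      if (redrawPending) return;   // at most one redraw queued at a time
      redrawPending = true;
      setTimeout(function() {
        redrawPending = false;
        redraw();
      }, 16);
    }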
At first, I made the circle buttons 32 pixels in radius. However, on my ancient Android, at minimum zoom (which is the only zoom where the coordinates from the touch events are correct) the 32-pixel-radius circle buttons are 6.5 mm in diameter. A normal keyboard key is 18 mm or 19 mm, and at 4.5 mm or 6.5 mm the keys on the phone’s soft keyboard are too small for me. I had been thinking that maybe 11.25 mm would be a more reasonable key size, which suggests 55-pixel-radius circles, which do indeed come out to about 11 mm. This does feel like it would be more reasonable, and I think we can still fit six buttons in the two corners at this kind of spacing, barely.
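For the record, the arithmetic behind those numbers:

    // A 64-px (32-px-radius) button measuring 6.5 mm gives the density:
    var pxPerMm = 64 / 6.5;            // ≈ 9.8 px/mm
    console.log(pxPerMm * 25.4);       // ≈ 250 dpi
    // An 11.25 mm button at that density:
    console.log(11.25 * pxPerMm / 2);  // ≈ 55.4 px radius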
I have create-a-free-point, select-a-point, create-a-line, and delete-a-point running and working more or less smoothly, though with a bunch of overdraw and kind of sluggish response. This is enough to try out the interface, and it feels pretty good.
There are still a bunch of problems, not to mention unimplemented features.
So far, the keyboard/mouse interface has worked surprisingly well as a way to prototype the limited multitouch interface I have so far; I’ve been able to do most of the development on my non-multitouch laptop.