Photo licensed CC-BY-SA 3.0 by Uwe Aranas.
By adding random pentagons to the canvas and keeping them only if the result is closer to the original image, we eventually get something that may very slightly resemble the original. You can see an example from an overnight run if you’re impatient.
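The accept/reject loop can be sketched in a few lines of Python. This toy version paints random grey rectangles onto a small grayscale canvas rather than pentagons onto a color image (rasterizing pentagons needs a drawing library), but the hill-climbing logic is the same: try a random change, keep it only if the loss drops. All names here are illustrative, not the post's actual code.

```python
import random

def mse(a, b):
    """Mean squared error between two flat pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def hill_climb(target, w, h, steps=2000, seed=0):
    """Greedy hill climbing: paint random grey rectangles onto a
    blank canvas, keeping each only if it lowers the loss.
    Rectangles stand in for the pentagons in the post."""
    rng = random.Random(seed)
    canvas = [0.0] * (w * h)
    loss = mse(canvas, target)
    for _ in range(steps):
        # Pick a random half-open rectangle [x0, x1) x [y0, y1)
        # and a random grey level to fill it with.
        x0, y0 = rng.randrange(w), rng.randrange(h)
        x1, y1 = rng.randrange(x0, w) + 1, rng.randrange(y0, h) + 1
        shade = rng.random()
        # Remember the overwritten pixels so we can revert.
        saved = [(i, canvas[i]) for y in range(y0, y1)
                 for i in range(y * w + x0, y * w + x1)]
        for i, _old in saved:
            canvas[i] = shade          # paint the rectangle
        new_loss = mse(canvas, target)
        if new_loss < loss:
            loss = new_loss            # keep the improvement
        else:
            for i, old in saved:       # revert the change
                canvas[i] = old
    return canvas, loss
```

Note that this recomputes the loss over the whole image after every candidate change, which is exactly the inefficiency the next paragraph's list of improvements starts with.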
There are about a zillion ways you could do this better:

- not redrawing the entire image for each loss computation,
- using a loss computation that better reflects human vision (say, edge detection, chroma subsampling, and perceptual weighting of color channels),
- sometimes adding points to existing polygons,
- sometimes removing existing polygons or points,
- sometimes moving points on existing polygons,
- sometimes changing colors of existing polygons,
- computing a gradient to guide those changes,
- using sum tables to rapidly integrate the color over a large part of the image for incremental loss function updates,
- adding random restarts to the hill climbing,
- and so on.
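One of those speedups, sum tables (also called summed-area tables or integral images), is easy to show in isolation. This is a minimal sketch under my own naming, not the post's code: building the table takes one pass over the image, after which the sum of any axis-aligned rectangle can be read off in constant time, which is what makes cheap incremental loss updates possible.

```python
def summed_area_table(img, w, h):
    """Build a (h+1) x (w+1) table where sat[y][x] holds the sum of
    all pixels above and to the left of (x, y) in the flat image."""
    sat = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            # Standard inclusion-exclusion recurrence.
            sat[y + 1][x + 1] = (img[y * w + x] + sat[y][x + 1]
                                 + sat[y + 1][x] - sat[y][x])
    return sat

def region_sum(sat, x0, y0, x1, y1):
    """Sum of pixels in the half-open rectangle [x0, x1) x [y0, y1),
    in O(1) regardless of the rectangle's size."""
    return sat[y1][x1] - sat[y0][x1] - sat[y1][x0] + sat[y0][x0]
```

For a real image you would keep one table per color channel, so the average color under a candidate polygon's bounding box is a handful of lookups instead of a full scan.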
Also, maybe most obviously, it would be a lot better if you could upload your own image to approximate.
I wrote some notes on what I learned from this exercise.