Sunday, October 19, 2014

Game-development Log (8. Performance Checkpoint - I)


I don't want to have my full architecture set up only to realise that something isn't really scalable or performs sub-par. So, time for a performance checkpoint, where I review some of my bottlenecks and try to fix them.

Zoomed-out tile images

Currently my image tiles are generated dynamically per request and stored on the Azure CDN. Although this is quite interesting from a storage/setup-time point of view, it has a drawback: the time it takes to generate an image at the furthest-out zoom levels, particularly for the unlucky users that get cache misses on the CDN.

The reason is quite simple: a tile at zoom level 7 includes about 16.000 individual hexagons, which takes some time (a couple of seconds) to render. For comparison, a tile at zoom level 12 includes about 30 hexagons and is blazing fast to render.

Pre-generating everything is not an option, so I opted for a hybrid approach:
  • Between zoom levels 7 and 10 (configurable) I generate the tile images and store them on Azure's blob storage with a CDN fetching from it.
  • From zoom level 11 upwards they're generated dynamically and stored on the CDN (as they were before)
So in practice I now have two different URLs on my map.
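The routing between the two URLs could be sketched like this. The hostnames, paths and threshold names are hypothetical placeholders; only the zoom-based split mirrors the setup above.

```javascript
// Hypothetical thresholds and hostnames; only the routing logic
// reflects the hybrid approach described above.
var PREGEN_MIN = 7;   // configurable
var PREGEN_MAX = 10;  // configurable

function tileUrl(zoom, quadKey) {
  if (zoom >= PREGEN_MIN && zoom <= PREGEN_MAX) {
    // pre-generated tile, served from blob storage behind the CDN
    return 'https://cdn.example.com/static-tiles/' + quadKey + '.png';
  }
  // zoom 11+: rendered on demand, cached by the CDN
  return 'https://cdn.example.com/dynamic-tiles/' + quadKey + '.png';
}
```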
The question, then, is: how many tiles will I need to pre-generate?

The math is quite straight-forward:

tiles for zoom level n = 4^n

tiles 7 = 4^7 = 16.384
tiles 8 = 4^8 = 65.536
tiles 9 = 4^9 = 262.144
tiles 10 = 4^10 = 1.048.576

total = 1.392.640

As I only generate tiles that contain data, and considering that about 2/3 of the planet is water, the approximate number of tiles generated would be:

approximate number = total * 0.33 ~ 460.000 images

Not too bad and completely viable, particularly as I don't expect to be re-generating these images very often.
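For reference, the arithmetic above can be reproduced in a few lines:

```javascript
// 4^n tiles per zoom level, summed over the pre-generated range 7..10.
function tilesForZoom(n) {
  return Math.pow(4, n);
}

var total = 0;
for (var z = 7; z <= 10; z++) {
  total += tilesForZoom(z);
}
// total === 1392640; with only ~1/3 of tiles containing land:
var withData = Math.round(total * 0.33); // roughly the 460.000 above
```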

Vector-Load to support unit movement

Something I noticed is that panning the map view at higher zoom levels was a little sluggish, particularly as it was loading the vector tiles with the hexagon data in order to calculate the movement options of the units.

I was loading this info per tile during panning, which was blocking the UI thread. A colleague of mine suggested that I try something else: HTML5 Web Workers. You basically spawn a worker thread that is able to do some work (albeit with some limitations, like not being able to access the DOM), communicating with the main UI thread via messaging.
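A minimal sketch of that pattern, assuming a hypothetical computeMovementOptions function standing in for the real movement calculation (the environment guards just let the same snippet run both as the worker script and on the page):

```javascript
// movement-worker.js side: runs off the UI thread, no DOM access.
// computeMovementOptions is a hypothetical stand-in for the real logic.
function computeMovementOptions(unitId, hexes) {
  // placeholder: keep only the hexes a unit can enter
  return hexes.filter(function (h) { return h.passable; });
}

if (typeof self !== 'undefined' && typeof document === 'undefined') {
  self.onmessage = function (e) {
    self.postMessage({
      options: computeMovementOptions(e.data.unitId, e.data.hexes)
    });
  };
}

// main page side: spawn the worker and listen for results
if (typeof Worker !== 'undefined' && typeof document !== 'undefined') {
  var worker = new Worker('movement-worker.js');
  worker.onmessage = function (e) {
    // back on the UI thread with the computed movement options
    drawMovementOverlay(e.data.options); // hypothetical rendering call
  };
  worker.postMessage({ unitId: 42, hexes: visibleHexes });
}
```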

The implementation was really straightforward. Unfortunately I didn't really notice any performance improvement. Anyway, Web Workers could be a very viable alternative if I ever switch to client-side drawing logic instead of doing everything server-side. I've added an entry to my backlog for some WebGL experiments with this :)

I then had a different idea: instead of loading the hexagon data tile by tile, make a single request that includes all the tiles composing the current viewport. This is triggered on the Bing Maps "viewchangeend" event, using a throttled event handler.
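The throttling itself is simple enough to hand-roll (the Bing Maps AJAX control also ships its own variant, Microsoft.Maps.Events.addThrottledHandler). A minimal leading-edge version, with hypothetical wiring in the comment:

```javascript
// Leading-edge throttle: fn fires at most once per intervalMs.
function throttle(fn, intervalMs) {
  var last = 0;
  return function () {
    var now = Date.now();
    if (now - last >= intervalMs) {
      last = now;
      fn.apply(this, arguments);
    }
  };
}

// Hypothetical wiring: batch-load the viewport's hexagon data at most
// every 250 ms while the user pans.
// Microsoft.Maps.Events.addHandler(map, 'viewchangeend',
//     throttle(loadViewportHexData, 250));
```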

This approach brought a small performance benefit, and it can be further optimised, particularly by leveraging local storage on the client.
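That local-storage optimisation could look something like this. Everything here is a sketch: 'storage' would be window.localStorage in the browser, and fetchTileData is a hypothetical function returning a Promise with a tile's hexagon data.

```javascript
// Cache per-tile hexagon data client-side; fall through to the
// network only on a cache miss.
function getCachedTileData(storage, quadKey, fetchTileData) {
  var key = 'hexdata:' + quadKey;
  var cached = storage.getItem(key);
  if (cached !== null) {
    return Promise.resolve(JSON.parse(cached));
  }
  return fetchTileData(quadKey).then(function (data) {
    try {
      storage.setItem(key, JSON.stringify(data));
    } catch (e) {
      // quota exceeded: serve the data anyway, just don't cache it
    }
    return data;
  });
}
```

Hexagon data only changes when the map does, so entries could also carry a version stamp to invalidate stale tiles.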

Change from PNG to JPEG

At a certain point in time my tiles required transparency, but that's no longer the case. Thus, although it means taking a small quality hit, I've changed my image tiles from PNG to JPEG.

This has 3 advantages:

  • Storing the images on the CDN/Blob Storage will be cheaper, as the JPEG images are considerably smaller
  • Lower latency when loading the image tiles, again due to the smaller size
  • A more subtle advantage is in the rendering process, as JPEG is faster to load and process, especially when the PNG counterpart has an alpha channel

The drawback is, as mentioned, image-quality. Here's a PNG image tile and the corresponding JPEG.

Highly compressed, with noticeable image loss, but on the map as a whole it's mostly negligible, particularly if there's no reference to compare against. I'm more worried about performance than top-notch image quality.
