#LifeUnderLockdown: A software engineer's perspective by Nick Sephton
Life in lockdown has been a surreal experience for us all. My personal experience saw me retreat from my home in York to my family home and the place of my birth, the verdant little village of Balsall Common. The shift in lifestyle therefore felt all the more extreme, though in actuality everything was probably much the same as if I had stayed in York, except that I now had the company of my mother, who was adept at preparing cups of tea and roast dinners alike.
Despite the university suggesting we should expect to work at approximately 60% of our normal rate, I found that with little else to do (save a brief rekindling of an old knitting hobby to produce a single scarf, and plentiful singing practice), I got through work roughly as fast as normal, if not faster. My main task was to provide an AI “computer player” for a Unity project, a fairly standard board game in which you move explorers about, trying to collect more gold coins than your opponent. The existing AI was a little slow, and also seemed to make completely arbitrary decisions, and I was to attempt to improve on both of these issues. After a brief examination of the code base, it was clear that it had been worked on by a number of different coders, none of whom seemed to agree upon a code style.
It quickly became apparent that if I was going to provide a useful AI solution, I’d need to rewrite the underlying game logic. In my experience, this is a fairly standard step in adapting games for an artificial player, largely because the focus when creating a game is not to optimise the game state for an artificial intelligence, but to provide a satisfactory experience for the human player. To simulate future games, the constructs that represent the logical game state must be both very efficient and very small in memory. It would not be unreasonable to assume that during a single decision, the AI would need to hold tens of thousands of future decision points in memory, plus potentially millions of game actions connecting them.
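The actual project is a Unity/C# code base, but the idea of a compact, cheaply-copied game state can be sketched in Python. Every name below (`GameState`, `apply`, the fields) is an illustrative assumption, not taken from the real code:

```python
from dataclasses import dataclass

# Illustrative sketch only: a small, flat, immutable game state that is cheap
# to copy, so a search can hold many thousands of future states in memory.
@dataclass(frozen=True)
class GameState:
    explorer_positions: tuple  # (player0_pos, player1_pos) as board indices
    coins: tuple               # remaining coin positions as board indices
    scores: tuple              # (player0_gold, player1_gold)
    to_move: int               # 0 or 1

    def apply(self, move):
        """Return the successor state after the current player moves to `move`."""
        pos = list(self.explorer_positions)
        pos[self.to_move] = move
        scores = list(self.scores)
        if move in self.coins:
            scores[self.to_move] += 1
        coins = tuple(c for c in self.coins if c != move)
        return GameState(tuple(pos), coins, tuple(scores), 1 - self.to_move)
```

Keeping the state immutable means a simulation never has to "undo" a move: each decision point simply holds its own cheap copy.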
After I’d performed this work, I applied our existing AI-in-games project, Kingfisher, to the game, and was pleasantly surprised that with largely default settings (playing 5000 game simulations per move), Kingfisher made a strong but not unbeatable opponent. It seemed to play intelligently, but I could generally beat it if I really put my mind to it. It also generally made these decisions in under a second, so it fulfilled the requirement for the AI to act quickly.
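Kingfisher’s internals aren’t described here beyond the simulation count, but the general shape of simulation-based play can be sketched as flat Monte Carlo move selection: for each legal move, play many random games to completion and keep the move with the best average outcome. All the helper callables below are assumptions for illustration, not Kingfisher’s actual API:

```python
import random

def choose_move(state, legal_moves, apply_move, result, simulations=5000):
    """Pick the move with the best average random-rollout payoff.

    Assumed helpers (not real API): `legal_moves(state)` lists moves,
    `apply_move(state, move)` returns the next state, and `result(state)`
    returns the root player's payoff at a finished game, else None.
    """
    def rollout(s):
        # Play random moves until the game ends, then score it.
        while (r := result(s)) is None:
            s = apply_move(s, random.choice(legal_moves(s)))
        return r

    moves = legal_moves(state)
    budget = max(1, simulations // len(moves))  # spread simulations evenly
    averages = {m: sum(rollout(apply_move(state, m)) for _ in range(budget)) / budget
                for m in moves}
    return max(averages, key=averages.get)
```

Real engines typically refine this with tree search (e.g. UCT), but even the flat version explains why a fixed simulation budget translates directly into a per-move time budget.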
Then came the time to attempt a build of the newly functional game in WebGL. Upon producing said build, the game seemed to run fine, with the important exception that the AI now took approximately 35 seconds to consider each move. A quick glance at the profiler showed that WebGL was ignoring all my carefully preallocated memory and somehow making around 2,300 GC.Alloc calls each frame. Further examination and tweaking of my memory management code had precisely no effect whatsoever, IL2CPP seemingly baulking at my attempts to optimise it.
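The preallocation technique being defeated here is the classic object pool: allocate all your search nodes up front and recycle them, so the hot loop allocates nothing per iteration. Python’s memory model differs from C#’s, so this sketch only illustrates the structure, not the project’s actual code:

```python
class NodePool:
    """Illustrative object pool: preallocate search nodes once and reuse them,
    so the search loop performs no allocations per iteration. (In the actual
    project this would be C# in Unity; the names here are assumptions.)"""

    def __init__(self, size):
        # All nodes are created up front, before any search begins.
        self._free = [dict(state=None, visits=0, value=0.0) for _ in range(size)]

    def acquire(self, state):
        node = self._free.pop()          # reuse a preallocated node
        node.update(state=state, visits=0, value=0.0)
        return node

    def release(self, node):
        self._free.append(node)          # return it for reuse, no new allocation
```

When the runtime honours the pool, steady-state allocation is zero; the profiler result above suggested the IL2CPP/WebGL build was allocating behind the scenes regardless.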
After a few days and many, many cups of tea, I’m forced to give up on WebGL for the time being, and instead produce a Windows Desktop build, which makes AI moves in approximately one second (depending on the size of the decision space). It’s notable that if I remove my memory management code from this version of the game, it actually performs faster. Definitely some sort of .NET 4 magic going on in the background there!
So after some consideration of this problem, I return to an old idea: a Kingfisher server. A web-facing server that receives game states from clients and returns strong moves for the current player. I wrote part of this service a while back, but never got around to finishing it, as other jobs were more pressing at the time. This idea now holds some water, and it’s what I’ll be working on for the next few weeks, I suspect.