As shown in the first attempt, Dr Dash was unable to overcome various obstacles, and considering Google DeepMind has been successfully applying deep learning to gaming problems (see here), it's time to try a new strategy. So we want to apply a more advanced form of AI and see if/how it handles obstacles better. However, rather than using screen capture as input, as DeepMind has done, we will start with access to the internal game parameters (i.e. a map with element locations, etc.). This allows a much simpler model, with the additional advantage of making it more transparent - being able to analyse and interpret the model is key to developing understanding and insights.
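To make this concrete, here is a minimal sketch of what such an internal game state could look like as model input. The `Cell` names, the ASCII characters, and the `parse_level` helper are illustrative assumptions, not the actual game's data structures:

```python
from enum import Enum

# Hypothetical element types - the real game may use different names/values.
class Cell(Enum):
    EMPTY = 0
    DIRT = 1
    WALL = 2
    BOULDER = 3
    DIAMOND = 4
    PLAYER = 5
    EXIT = 6

# Each ASCII character in a level sketch maps to one Cell.
CHAR_TO_CELL = {
    ' ': Cell.EMPTY, '.': Cell.DIRT, '#': Cell.WALL,
    'O': Cell.BOULDER, '*': Cell.DIAMOND, 'P': Cell.PLAYER, 'E': Cell.EXIT,
}

def parse_level(text: str) -> list[list[Cell]]:
    """Turn an ASCII map into a 2D grid of Cell values - the model's input."""
    return [[CHAR_TO_CELL[ch] for ch in line]
            for line in text.strip('\n').splitlines()]

level = parse_level("""
#########
#P..O..*#
#..###..#
#*.....E#
#########
""")
```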

Also, rather than providing the model with the puzzles to solve as training data, let's make it more interesting and develop the training settings in a more literal way - the context is a game after all. So this will be a set of custom-tailored scenarios with little resemblance to the actual game levels; the original game levels will be our test data. As the game itself is quite simple to emulate, and can be reduced to an effectively turn-based simulation without any need for graphical output, calculating our 'cost function' becomes orders of magnitude faster.
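As a rough sketch of why this is cheap, a headless simulation loop could evaluate a candidate policy purely in discrete turns, with no rendering at all. The `reset`, `step`, `is_done` and `score` calls below are hypothetical placeholders for the emulator's internals, and the cost formula is just one possible choice:

```python
def evaluate(policy, scenario, max_turns=200):
    """Hypothetical headless rollout: play one scenario turn by turn,
    with no graphical output, and return a cost (lower is better)."""
    state = scenario.reset()           # assumed emulator API
    turn = 0
    for turn in range(max_turns):
        action = policy(state)         # the model picks a move
        state = scenario.step(action)  # advance one turn, no graphics
        if scenario.is_done(state):
            break
    # Example cost: a lower score or more turns used means a higher cost.
    return -scenario.score(state) + 0.01 * turn
```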

An example of a simple training setting for the model could look something like this:

[Image: drdash train]
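For instance, reusing the `parse_level` helper sketched earlier, a custom training setting could be as small as a single skill to practise. This particular layout is just an illustration, not one of the real training scenarios:

```python
# A tiny made-up scenario: reach the exit past a single boulder.
PUSH_BOULDER = parse_level("""
#######
#P.O.E#
#######
""")

# It would then be scored headlessly, e.g. evaluate(policy, some_wrapper(PUSH_BOULDER)).
```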

To be continued...