• Associative Neural Networks

    The ability to remember and associate concepts can be artificially emulated using Associative Neural Networks (ASNN). (See also autoassociative memory.) By applying ASNN research, we can investigate whether large amounts of information can be captured and generalised in an effective way. Such a network consists of nodes called neurons, connected by links called synapses.
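    To make the structure concrete, a minimal sketch in C# could look like the following (the class and member names are purely illustrative and not taken from the actual implementation):

        using System.Collections.Generic;

        // A neuron represents a concept (here: a word) and holds its outgoing synapses.
        public class Neuron
        {
            public string Label;
            public double Activation;                        // current activation level
            public List<Synapse> Outgoing = new List<Synapse>();
        }

        // A synapse is a weighted, directed connection between two neurons.
        public class Synapse
        {
            public Neuron Source;
            public Neuron Target;
            public double Weight;                            // association strength learned during training
        }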

    A simple learning set is used to both construct and train the network; training takes only a single pass to obtain the synapse weights. Based on Active Associative Neural Graphs (ANAKG), I produced a customised implementation in C# which more closely emulates the spiking neuron model and aims at scalability. After training, a query can be entered by activating neurons in the network, resulting in a response that provides generalised information in relation to the input terms.
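    As a rough sketch of the idea, building on the structure above, single-pass training and querying could look something like this (the weighting and activation rules shown are simplifications for illustration, not the exact ANAKG efficiency formulas):

        using System.Collections.Generic;
        using System.Linq;

        public class AssociativeNetwork
        {
            private readonly Dictionary<string, Neuron> neurons = new Dictionary<string, Neuron>();

            // Single pass over the learning set: one neuron per distinct word, and a
            // strengthened synapse for every ordered co-occurrence within a sentence.
            public void Train(IEnumerable<string[]> sentences)
            {
                foreach (var words in sentences)
                    for (int i = 0; i < words.Length; i++)
                        for (int j = i + 1; j < words.Length; j++)
                        {
                            var synapse = GetOrCreateSynapse(GetOrCreateNeuron(words[i]),
                                                             GetOrCreateNeuron(words[j]));
                            synapse.Weight += 1.0 / (j - i);    // nearer words associate more strongly
                        }
            }

            // Querying: activate the neurons matching the input terms, spread their
            // activation across the synapses, and return the most activated labels.
            public IEnumerable<string> Query(params string[] terms)
            {
                foreach (var n in neurons.Values) n.Activation = 0;
                foreach (var term in terms)
                    if (neurons.TryGetValue(term, out var source))
                        foreach (var s in source.Outgoing)
                            s.Target.Activation += s.Weight;
                return neurons.Values
                              .Where(n => n.Activation > 0)
                              .OrderByDescending(n => n.Activation)
                              .Select(n => n.Label);
            }

            private Neuron GetOrCreateNeuron(string label)
            {
                if (!neurons.TryGetValue(label, out var n))
                    neurons[label] = n = new Neuron { Label = label };
                return n;
            }

            private Synapse GetOrCreateSynapse(Neuron from, Neuron to)
            {
                var s = from.Outgoing.FirstOrDefault(x => x.Target == to);
                if (s == null)
                    from.Outgoing.Add(s = new Synapse { Source = from, Target = to });
                return s;
            }
        }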

    To provide insight into the model and its workings, dynamic visualisation was added. This shows how the network is trained, and how the neurons and synapses are activated in turn. The key stages a neuron goes through when it fires are shown with corresponding colours: activation (red), absolute refraction (blue), relative refraction (green).
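    In the visualisation this amounts to mapping each neuron's current state to a colour, roughly like so (a hypothetical sketch, not the actual visualisation code; the resting colour is assumed):

        public enum NeuronState { Resting, Activated, AbsoluteRefraction, RelativeRefraction }

        public static class NeuronColours
        {
            // Map a neuron's current stage to the colour used when drawing it.
            public static string For(NeuronState state)
            {
                switch (state)
                {
                    case NeuronState.Activated:          return "red";
                    case NeuronState.AbsoluteRefraction: return "blue";
                    case NeuronState.RelativeRefraction: return "green";
                    default:                             return "grey";   // resting/neutral (assumed)
                }
            }
        }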

    Taking DeepMind's mission statement as an example, we train the model, which forms the network and determines the synaptic weights and efficiencies. Note that stopwords are removed. Next we apply the simple query: "DeepMind". From the 123-word input (70 words without stopwords), the model produces the following 17-word serialised output:

    "[deepmind] world leader artificial intelligence research application multiplier most positive impact human important ingenuity widely beneficial scientific advances"

    From the resulting output we can see the model gives a generalised summary in the context of both the input data and our search query.
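    Putting the sketches above together, the experiment roughly boils down to something like this (the tokenisation, stopword list and truncated text are simplified placeholders; the real implementation differs):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical usage, building on the AssociativeNetwork sketch above.
        var stopwords = new HashSet<string> { "the", "and", "of", "to", "a", "is", "in", "that", "we" };
        var missionStatement = "DeepMind is a world leader in artificial intelligence research ...";

        // Split into sentences, lowercase, tokenise on spaces and drop stopwords.
        var sentences = missionStatement
            .ToLowerInvariant()
            .Split('.')
            .Select(s => s.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                          .Where(w => !stopwords.Contains(w))
                          .ToArray())
            .Where(s => s.Length > 0);

        var network = new AssociativeNetwork();
        network.Train(sentences);

        // Query with a single term and print the serialised response.
        Console.WriteLine(string.Join(" ", network.Query("deepmind")));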

    To be continued...

  • Dr Dash 2

    As shown in the first attempt, Dr Dash was unable to overcome various problems, and considering Google DeepMind has been successfully applying deep learning to gaming problems (see here), it's time to try a new strategy. So we want to apply a more advanced form of AI and see whether, and how, it handles those obstacles better. However, rather than using screen capture as the input, as DeepMind has done, we will start with access to the internal game parameters (i.e. the map with element locations etc.). This allows a much simpler model, with the additional advantage of making it more transparent - being able to analyse and interpret the model is key to developing understanding and insights.
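    As a sketch of what that input could look like, assuming a simple grid representation (the element names and fields are illustrative, not the game's actual data types):

        // Hypothetical representation of the internal game parameters exposed to the model.
        public enum Element { Empty, Wall, Dirt, Boulder, Diamond, Exit, Player }

        public class GameState
        {
            public Element[,] Map;          // which element occupies each grid cell
            public int PlayerX, PlayerY;    // current player position on the map
            public int DiamondsCollected;   // progress towards the level goal
            public int DiamondsRequired;
        }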

    Also, rather than providing the model with the puzzles to solve as training data, let's make it more interesting and develop training settings in a more literal sense - the context is a game after all. These will be a set of custom-tailored scenarios with little resemblance to the actual game levels; the original game levels will be our test data. As the game itself is quite simple to emulate, and can be reduced to an effectively turn-based simulation without the need for graphical output, calculating our 'cost function' becomes orders of magnitude faster.
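    A rough sketch of the turn-based evaluation this enables might look as follows, using the GameState sketched above (the controller interface, the emulator step function and the cost terms are all assumptions for illustration):

        using System;

        // A candidate controller picks one move per turn from the current game state.
        public interface IController
        {
            char ChooseMove(GameState state);   // e.g. 'U', 'D', 'L', 'R' or 'W' (wait)
        }

        public class ScenarioEvaluator
        {
            private readonly Func<GameState, char, GameState> step;   // one emulated turn (assumed provided)

            public ScenarioEvaluator(Func<GameState, char, GameState> step) => this.step = step;

            // Step the simplified, turn-based emulation with no rendering and score the run.
            public double Evaluate(IController controller, GameState scenario, int maxTurns)
            {
                var state = scenario;
                int turns = 0;
                while (turns < maxTurns && state.DiamondsCollected < state.DiamondsRequired)
                {
                    state = step(state, controller.ChooseMove(state));
                    turns++;
                }
                // Placeholder cost: missing diamonds dominate, turns used break ties.
                return (state.DiamondsRequired - state.DiamondsCollected) * 100.0 + turns;
            }
        }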

    An example of a simple training setting for the model could look something like this:

    drdash train
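    For illustration, such a scenario could be encoded as a small ASCII grid and parsed into the GameState sketched earlier (the layout and symbols below are made up, not the actual setting shown above):

        // Hypothetical textual encoding of a tiny training scenario:
        // '#' wall, '.' dirt, 'O' boulder, '*' diamond, 'P' player, 'E' exit, ' ' empty.
        string[] scenario =
        {
            "#######",
            "#P..*.#",
            "#.O.#.#",
            "#..*.E#",
            "#######",
        };

    The original game levels would then be loaded in the same format when used as test data.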

    To be continued...