• Associative Neural Networks

    The ability to remember and associate concepts can be artificially emulated using Associative Neural Networks (ASNN). (See also autoassociative memory.) Applying ASNN research, we can explore whether large amounts of information can be captured and generalised effectively. A neural network consists of nodes called neurons, and connections between them called synapses.

    A simple learning set is used to both construct and train the network. Note that training takes only a single pass to obtain the synapse weights. Based on Active Neuro-Associative Knowledge Graphs (ANAKG), I produced a customised implementation in C#, which more closely emulates the spiking neuron model and aims for scalability. After training, a query can be entered by activating neurons in the network, resulting in a response that provides generalised information in relation to the input terms.
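
    To make the single-pass training concrete, here is a minimal C# sketch. All names, and the distance-based weighting, are my own illustrative assumptions rather than the actual project code:

        using System.Collections.Generic;

        class Neuron
        {
            public string Word;
            public Dictionary<Neuron, double> Synapses = new Dictionary<Neuron, double>();
        }

        class Network
        {
            readonly Dictionary<string, Neuron> neurons = new Dictionary<string, Neuron>();

            // Single pass: each sentence is scanned once and the synapse weights
            // between co-occurring words are accumulated immediately.
            public void Train(IEnumerable<string[]> sentences)
            {
                foreach (var words in sentences)
                    for (int i = 0; i < words.Length; i++)
                        for (int j = i + 1; j < words.Length; j++)
                        {
                            var pre = GetOrAdd(words[i]);
                            var post = GetOrAdd(words[j]);
                            pre.Synapses.TryGetValue(post, out double w);
                            pre.Synapses[post] = w + 1.0 / (j - i);  // closer words bind more strongly
                        }
            }

            Neuron GetOrAdd(string word)
            {
                if (!neurons.TryGetValue(word, out Neuron n))
                    neurons[word] = n = new Neuron { Word = word };
                return n;
            }
        }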

    To provide insight into the model and its workings, dynamic visualisation was added. This shows how the network is trained, and how the neurons and synapses are activated in turn. The key stages a neuron goes through when activated, with their corresponding colours, are: activation (red), absolute refraction (blue) and relative refraction (green).
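
    A rough sketch of that activation cycle, with hypothetical threshold and timing values:

        // Each tick a neuron steps through the stages shown in the visualisation.
        enum NeuronState { Resting, Activation, AbsoluteRefraction, RelativeRefraction }

        class SpikingNeuron
        {
            public NeuronState State = NeuronState.Resting;
            public double Potential;            // input accumulated from incoming synapses
            const double Threshold = 1.0;       // assumed firing threshold
            int timer;

            public void Tick()
            {
                switch (State)
                {
                    case NeuronState.Resting:
                        if (Potential >= Threshold)
                            State = NeuronState.Activation;          // fires: drawn red
                        break;
                    case NeuronState.Activation:
                        Potential = 0;                               // spike passed on via synapses here
                        State = NeuronState.AbsoluteRefraction;      // drawn blue
                        timer = 3;
                        break;
                    case NeuronState.AbsoluteRefraction:
                        if (--timer <= 0)                            // all input is ignored meanwhile
                        {
                            State = NeuronState.RelativeRefraction;  // drawn green
                            timer = 3;
                        }
                        break;
                    case NeuronState.RelativeRefraction:
                        if (Potential >= 2 * Threshold)              // can re-fire, but needs more input
                            State = NeuronState.Activation;
                        else if (--timer <= 0)
                            State = NeuronState.Resting;
                        break;
                }
            }
        }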

    Taking DeepMind's mission statement as an example, we train the model, which forms the network and determines the synaptic weights and efficiencies. Note that stopwords are removed. Next we apply the simple query: "DeepMind". From the 123-word input (70 words without stopwords), the model produces the following 17-word serialised output:

    "[deepmind] world leader artificial intelligence research application multiplier most positive impact human important ingenuity widely beneficial scientific advances"

    The resulting output shows that the model produces a generalised summary in the context of both the input data and the search query.
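
    In use, the flow looks roughly like this (Tokenise and Query are hypothetical helper names, not confirmed parts of the project):

        var net = new Network();
        net.Train(Tokenise(missionStatement));    // stopwords removed during tokenisation
        string response = net.Query("deepmind");  // activates the 'deepmind' neuron and serialises
                                                  // the neurons it excites, in activation order
        Console.WriteLine(response);
        // -> "[deepmind] world leader artificial intelligence research application ..."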

    To be continued...

  • Dr Dash 2

    As shown in the first attempt, Dr Dash was unable to overcome various problems, and considering Google DeepMind has been successfully applying deep learning to gaming problems (see here), it's time to try a new strategy: apply a more advanced form of AI and see if, and how, it handles the obstacles better. However, rather than using screen capture as input, as DeepMind has done, we will start with access to the internal game parameters (i.e. the map with element locations etc). This allows a much simpler model, with the additional advantage of making it more transparent - being able to analyse and interpret the model is key to developing understanding and insights.
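
    What such an internal state could look like - a minimal sketch, with all names being my own assumptions:

        // The model sees the game parameters directly instead of raw pixels.
        enum Element { Empty, Dirt, Wall, Boulder, Diamond, Exit, Player, Enemy }

        class GameState
        {
            public Element[,] Map;                           // full map with element locations
            public int PlayerX, PlayerY;                     // player position
            public int DiamondsCollected, DiamondsRequired;  // level objective progress
            public int TimeLeft;                             // remaining turns
        }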

    Also, rather than providing the model with the puzzles to solve as training data, let's make it more interesting and develop training settings in a more literal way - the context is a game after all. So this will be a set of custom-tailored scenarios with little resemblance to the actual game levels; the original game levels will be our test data. As the game itself is quite simple to emulate, and can be reduced to an effectively turn-based form without the need for graphical output, calculating our 'cost function' becomes orders of magnitude faster.
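
    One way the 'cost function' could then be computed over the turn-based emulation - a sketch extending the GameState above, with hypothetical names and weights:

        // Emulate a scenario turn by turn (no rendering) and score the outcome.
        double Cost(Agent agent, Scenario scenario, int maxTurns)
        {
            GameState state = scenario.InitialState();
            int turn = 0;
            while (turn < maxTurns && !state.IsTerminal)
            {
                state = state.Apply(agent.ChooseMove(state));   // one move per turn
                turn++;
            }
            double cost = (state.DiamondsRequired - state.DiamondsCollected) * 100;
            cost += state.PlayerDied ? 1000 : turn;             // dying is worst; otherwise prefer speed
            return cost;                                        // lower is better
        }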

    An example of a simple training setting for the model looks like this:

    [Image: drdash train - example training setting]

    To be continued...

  • Tierrai

    AI survival simulation

    This is an AI simulating simplified life & survival, inspired by Tierra. I wanted not only to reproduce this experiment, but to take it a step further and observe what other behaviour emerges from it. To do this I also made the programming more friendly and intuitive. Additionally, I separated what entities are able to do from what is governed by physical laws, such as the conservation of mass and energy.

    Instead of the common models that are trained on fixed data sets, this is a competitive, in situ evolutionary system. The simulation starts with a single entity, programmed with a basic set of instructions in the form Condition -> Action. Besides entities there are resources, which can be absorbed / consumed. The conditions / actions are stored in the entity, so each holds its own flexible set of instructions. Additionally, each entity has mass, energy and a unique identifier, as well as a version number corresponding to its instruction set. This is the standard instruction set:

     

    Conditions                                          Action
    Energy:Low        Unit:Entity  Version:Different    Move
    Energy:High       Unit:Entity  Version:Different    Engage
    Unit:Resource                                       Engage
    Unit:Entity       Version:Same                      Incomplete Program
    MassEnergy:Clone  Unit:None                         Clone
    Unit:None                                           Move

     

    The first matching instruction has the highest priority, as only one action is performed per turn - making it work a bit like an if / else flow. A few key actions: if a resource is encountered, engage it (meaning it will be absorbed), and if enough mass and energy are available, clone to generate a 'child'. This initial set contains 19 instructions, stored sequentially in the entity.
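
    A sketch of how such a program could be evaluated each turn (the types and names are my assumptions, not the actual project code):

        using System;
        using System.Collections.Generic;

        enum EntityAction { Move, Engage, Clone, None }

        class Instruction
        {
            public List<Func<Entity, bool>> Conditions;   // e.g. e => e.Energy < LowThreshold
            public EntityAction Action;
        }

        class Entity
        {
            public int Mass, Energy, Id, Version;
            public List<Instruction> Program;
        }

        static EntityAction NextAction(Entity e, List<Instruction> program)
        {
            // Priority is simply position in the list: the first instruction
            // whose conditions all hold supplies the single action for this turn.
            foreach (var instr in program)
                if (instr.Conditions.TrueForAll(c => c(e)))
                    return instr.Action;
            return EntityAction.None;
        }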

    The environment is a one-dimensional 'world' that entities can move through. A simple representation is to bend the dimension into a circle, which makes the best use of the screen and at the same time makes the 'world' effectively wrap around. Entities are visualised as a core with a unique ID number (its intensity representing the degree of instruction completeness), a blue circle representing mass, and an outer red circle for the amount of energy. Resources look similar, except they show a blank centre. Entities can only engage with others at the same location. The simulation starts with a single entity (ID '1', version 0) and an abundance of resources.
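
    Movement in the looped dimension then reduces to modular arithmetic:

        // Positions wrap around: moving off one 'end' re-enters at the other.
        int Move(int position, int delta, int worldSize)
        {
            return ((position + delta) % worldSize + worldSize) % worldSize;
        }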

    [Image: tierrai1small]

    When an entity is cloned, there is a chance of mutation, meaning one condition/action will be removed or added at random. Note that cloning takes mass, energy and time equal to the number of instructions, so fewer instructions are beneficial. However, without essential instructions such as absorbing resources or cloning, the entity is heading for extinction.
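
    Cloning with mutation could be sketched like this, building on the Entity sketch above (the mutation chance and the RandomInstruction helper are assumptions):

        static readonly Random Rng = new Random();
        const double MutationChance = 0.1;                 // assumed rate

        Entity CloneFrom(Entity parent)
        {
            var child = new Entity { Program = new List<Instruction>(parent.Program) };
            if (Rng.NextDouble() < MutationChance)
            {
                // One instruction is removed or added at random.
                if (Rng.Next(2) == 0 && child.Program.Count > 0)
                    child.Program.RemoveAt(Rng.Next(child.Program.Count));
                else
                    child.Program.Insert(Rng.Next(child.Program.Count + 1), RandomInstruction());
            }
            int cost = parent.Program.Count;               // cloning takes mass, energy and time
            parent.Mass -= cost;                           // equal to the number of instructions,
            parent.Energy -= cost;                         // so smaller programs clone faster
            child.Mass = cost;                             // assumption: the cost seeds the child
            child.Energy = cost;
            return child;
        }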

    The game is about surviving in a competitive environment, so when an entity encounters another with a different version number, it will 'attack' it. Its energy is expended first, breaking down the energy 'barrier' of the other entity at the same time.
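
    An encounter could play out roughly as follows (the absorption of the loser's remains is my assumption):

        void Engage(Entity attacker, Entity defender)
        {
            // Energy is expended, breaking down the defender's 'barrier' at the same rate.
            int strike = Math.Min(attacker.Energy, defender.Energy);
            attacker.Energy -= strike;
            defender.Energy -= strike;
            if (defender.Energy <= 0)
                attacker.Mass += defender.Mass;   // assumption: the remains are absorbed
        }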

    Note that all this behaviour is dynamic, captured in the instructions of each entity. The centre of the screen shows how many entities of each version there are, indicating how 'well' a version with corresponding instructions is surviving.

    [Image: tierrai2small]

    Selecting a version shows its instruction set for further analysis.

    Although the experiment could be expanded further, it already reveals some interesting patterns. For example, it showed this mutation out-surviving the standard instruction set:

     

    Conditions                                          Action
    Unit:Resource                                       Engage
    Unit:Entity       Version:Same                      Incomplete Program
    MassEnergy:Clone  Unit:None                         Clone
    Unit:None                                           Move

     

    What is of particular interest is that this entity does not engage other entities, and tries to avoid them - while it would lose in a sustained encounter. But because it is significantly smaller (only 11 instructions), it multiplies more easily, and overall it effectively survives better.

    Download Tierrai here

  • Dr Dash

    Based on a classic game, this game features AI algorithms.

    Features:

    - modified A* pathfinding (see the sketch after this list)

    - uses XNA platform
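
    For reference, a plain grid A* sketch in C# - the baseline the game's modified variant builds on (the modifications themselves are not shown, and all names are my own):

        using System;
        using System.Collections.Generic;

        static class Pathfinder
        {
            static readonly (int dx, int dy)[] Steps = { (1, 0), (-1, 0), (0, 1), (0, -1) };

            public static List<(int x, int y)> FindPath(bool[,] walkable, (int x, int y) start, (int x, int y) goal)
            {
                int w = walkable.GetLength(0), h = walkable.GetLength(1);
                var open = new PriorityQueue<(int x, int y), int>();   // .NET 6+ priority queue
                var cameFrom = new Dictionary<(int, int), (int, int)>();
                var gScore = new Dictionary<(int, int), int> { [start] = 0 };
                open.Enqueue(start, Heuristic(start, goal));

                while (open.Count > 0)
                {
                    var current = open.Dequeue();
                    if (current == goal)
                        return Reconstruct(cameFrom, current);

                    foreach (var (dx, dy) in Steps)
                    {
                        var next = (x: current.x + dx, y: current.y + dy);
                        if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h || !walkable[next.x, next.y])
                            continue;
                        int g = gScore[current] + 1;                        // uniform step cost
                        if (!gScore.TryGetValue(next, out int old) || g < old)
                        {
                            gScore[next] = g;
                            cameFrom[next] = current;
                            open.Enqueue(next, g + Heuristic(next, goal));  // f = g + h
                        }
                    }
                }
                return null;   // no path exists
            }

            static int Heuristic((int x, int y) a, (int x, int y) b)
                => Math.Abs(a.x - b.x) + Math.Abs(a.y - b.y);               // Manhattan distance

            static List<(int x, int y)> Reconstruct(Dictionary<(int, int), (int, int)> cameFrom, (int x, int y) node)
            {
                var path = new List<(int x, int y)> { node };
                while (cameFrom.TryGetValue(node, out var prev))
                {
                    path.Insert(0, prev);
                    node = prev;
                }
                return path;
            }
        }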

     

    Modified A* pathfinding visualised

     

    Path updated with new objectives

    [Image: drdashanim]

     

    [Image: drdashanim2]
    Programming language: C# / Object Oriented / Design Patterns: MVC

    MS Visual Studio; XNA framework; CLR .NET

     

    Controls:

    - Xbox 360 controller (USB)


    Alternative controls:

    - movement: [Game pad] / arrow keys

    - start: [Start] / S

    - demo: [B] / D

    Download Dr. Dash V1.1

    Download Microsoft XNA redistributable (required)

  • Dr Pac

    Based on one of the most classic games, this game features AI algorithms - see if you can beat the 'demo' mode!

    Features:

    - Uses a weight map to calculate the best / worst locations (see the sketch below)

    Visualisation of the weight map (wrapped around screen edges)
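
    A sketch of how such a weight map could be built (the object types, weights and Level members are my assumptions):

        using System;

        double[,] BuildWeightMap(Level level)
        {
            var map = new double[level.Width, level.Height];
            for (int x = 0; x < level.Width; x++)
                for (int y = 0; y < level.Height; y++)
                {
                    foreach (var ghost in level.Ghosts)     // threats push the weight down
                        map[x, y] -= 10.0 / (1 + WrappedDistance(x, y, ghost.X, ghost.Y, level));
                    foreach (var pellet in level.Pellets)   // rewards pull it up
                        map[x, y] += 1.0 / (1 + WrappedDistance(x, y, pellet.X, pellet.Y, level));
                }
            return map;   // best / worst locations are the highest / lowest cells
        }

        // Distances wrap around the screen edges, as in the game itself.
        int WrappedDistance(int x1, int y1, int x2, int y2, Level level)
        {
            int dx = Math.Abs(x1 - x2); dx = Math.Min(dx, level.Width - dx);
            int dy = Math.Abs(y1 - y2); dy = Math.Min(dy, level.Height - dy);
            return dx + dy;
        }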

     

    Installation instructions:

    - download zip file

    - unzip contents into a folder

    - run setup.exe to start installer

     

    Controls:

    - movement: arrow keys

    Download Dr. Pac V1.2