• The ability to remember and associate concepts can be artificially emulated using Associative Neural Networks (ASNN). (See also autoassociative memory.) Applying ASNN research, we can explore whether it is possible to capture and generalise large amounts of information effectively. A neural network consists of nodes called neurons, and connections between these called synapses.

    A simple learning set is used to both construct and train the network. Note that training takes only a single pass to obtain synapse weights. Based on Active Neuro-Associative Knowledge Graphs (ANAKG), I produced a customised implementation in C#, which more closely emulates the spiking neuron model and aims at scalability. After training, a query can be entered by activating neurons in the network, resulting in a response which provides generalised information in relation to the input terms.
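
    The exact ANAKG weighting and activation rules are beyond a short example, but the single-pass idea can be sketched as follows (Python for brevity, the actual implementation is C#; the simple co-occurrence weighting and the `train`/`query` helpers are my own simplifying assumptions):

```python
from collections import defaultdict

def train(sentences):
    # Single pass: create a neuron per word and strengthen a synapse
    # between words that co-occur in the same sentence.
    synapses = defaultdict(float)            # (word_a, word_b) -> weight
    for sentence in sentences:
        words = sentence.lower().split()
        for i, a in enumerate(words):
            for b in words[i + 1:]:
                synapses[(a, b)] += 1.0      # strengthen the association
    return synapses

def query(synapses, term, top=5):
    # Activate the neuron for `term` and collect the most strongly
    # associated words: a crude stand-in for spreading activation.
    scored = defaultdict(float)
    for (a, b), weight in synapses.items():
        if a == term:
            scored[b] += weight
        elif b == term:
            scored[a] += weight
    return [w for w, _ in sorted(scored.items(), key=lambda kv: -kv[1])[:top]]

corpus = ["deepmind researches artificial intelligence",
          "artificial intelligence benefits research"]
print(query(train(corpus), "artificial")[0])  # intelligence
```

    Repeated co-occurrence raises a synapse weight, which is what lets a later query return generalised rather than verbatim output.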

    To provide insight into the model and its workings, dynamic visualisation was added. This shows how the network is trained, and how the neurons and synapses in turn are activated. The key stages a neuron goes through when activated, with their corresponding colours, are: activation (red), absolute refraction (blue) and relative refraction (green).
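
    These stages can be modelled as a small state machine per neuron. The sketch below is illustrative only (Python for brevity; the stage durations and the relative-refraction damping factor are assumptions, not values from the actual implementation):

```python
RESTING, ACTIVE, ABS_REFRACTION, REL_REFRACTION = "resting", "active", "absolute", "relative"

class Neuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0
        self.state = RESTING
        self.timer = 0

    def stimulate(self, amount):
        # During absolute refraction input is ignored entirely;
        # during relative refraction more stimulation is needed to fire.
        if self.state == ABS_REFRACTION:
            return
        factor = 0.5 if self.state == REL_REFRACTION else 1.0
        self.potential += amount * factor
        if self.potential >= self.threshold and self.state != ACTIVE:
            self.state, self.timer = ACTIVE, 1

    def tick(self):
        # Advance through activation (red) -> absolute refraction (blue)
        # -> relative refraction (green) -> resting, one step per tick.
        if self.timer > 0:
            self.timer -= 1
            return
        if self.state == ACTIVE:
            self.state, self.timer, self.potential = ABS_REFRACTION, 2, 0.0
        elif self.state == ABS_REFRACTION:
            self.state, self.timer = REL_REFRACTION, 2
        elif self.state == REL_REFRACTION:
            self.state = RESTING

n = Neuron()
n.stimulate(1.0)
print(n.state)  # active
```

    The visualisation simply maps these states to their colours each frame.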

    Taking DeepMind's mission statement as an example, we train the model, which forms the network and determines synaptic weights and efficiencies. Note that stopwords are removed. Next we apply the simple query: "DeepMind". From the 123-word input (70 words without stopwords), the model produces the following 17-word serialised output:

    "[deepmind] world leader artificial intelligence research application multiplier most positive impact human important ingenuity widely beneficial scientific advances"

    From the resulting output we can see the model gives a generalised summary in the context of both the input data and our search query.

    To be continued...

  • AI survival simulation

    This is an AI simulating simplified life & survival, inspired by Tierra. I wanted to not only reproduce this experiment, but take it a step further and observe what other behaviour results from it. To do this I also made the programming more friendly and intuitive. Additionally, I separated out what entities are able to do from what is governed by laws of nature, such as conservation of mass and energy.

    Instead of common models that are trained with fixed data sets, this is a competitive in situ evolutionary system. The simulation starts with a single entity, programmed with a basic set of instructions in the form of Condition -> Action. Besides entities there are resources that can be absorbed / consumed. These conditions / actions are stored in the entity, so each holds its own flexible set of instructions. Additionally each entity has mass, energy and a unique identifier. It also has a version number corresponding to an instruction set. This is the standard instruction set:


    Conditions                                     Action
    ------------------------------------------------------------------
    Energy:Low   Unit:Entity   Version:Different   Move
    Energy:High  Unit:Entity   Version:Different   Engage
    Unit:Resource                                  Engage
    Unit:Entity  Version:Same                      Incomplete Program
    MassEnergy:Clone  Unit:None                    Clone
    Unit:None                                      Move


    The first matching instruction has the highest priority, as only one action is performed, making it a bit like an if / else flow. A few key actions: if a resource is encountered, engage it (meaning it will be absorbed), and if enough mass and energy are available, clone to generate a 'child'. This initial set contains 19 instructions, stored sequentially in the entity.
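
    The first-match evaluation can be sketched as follows (Python for brevity; the condition names follow the table above, but representing them as dictionaries is my own assumption):

```python
def choose_action(instructions, context):
    # instructions: ordered list of (conditions, action) pairs.
    # context: the entity's current observations, e.g. what unit is at
    # its location and its own energy level. The first instruction whose
    # conditions all hold wins, like an if / else chain.
    for conditions, action in instructions:
        if all(context.get(key) == value for key, value in conditions.items()):
            return action
    return None  # no instruction matched

standard_set = [
    ({"Energy": "Low",  "Unit": "Entity", "Version": "Different"}, "Move"),
    ({"Energy": "High", "Unit": "Entity", "Version": "Different"}, "Engage"),
    ({"Unit": "Resource"},                                         "Engage"),
    ({"MassEnergy": "Clone", "Unit": "None"},                      "Clone"),
    ({"Unit": "None"},                                             "Move"),
]

print(choose_action(standard_set, {"Unit": "Resource"}))  # Engage
print(choose_action(standard_set, {"Unit": "None"}))      # Move
```

    Because evaluation stops at the first match, reordering instructions changes behaviour, which is exactly what mutation can exploit.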

    The environment is a one-dimensional 'world' in which entities can move. A simple representation loops this dimension into a circle, making best use of the screen while effectively making the 'world' wrap around. Entities are visualised as a core with a unique ID number (its intensity representing the degree of instruction completeness), a blue circle representing mass, and an outer red circle representing energy. Resources look similar, except with a blank centre. Entities can only engage with others while at the same location. The simulation starts with a single entity (ID '1', version 0) and an abundance of resources.


    When an entity is cloned, there is a chance of mutation. This means that at random one condition/action will be removed or added. Note that cloning takes mass, energy and time equal to the number of instructions, so fewer instructions is beneficial. However, without essential instructions such as absorbing resources or cloning, the entity is headed for extinction.
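
    A minimal sketch of cloning with mutation (Python for brevity; the mutation chance and the choice to duplicate an existing instruction when adding are illustrative assumptions, not the actual rules):

```python
import random

def clone_with_mutation(instructions, rng, mutation_chance=0.1):
    # The child copies the parent's instruction list; with some chance
    # one instruction is removed, or a duplicate of an existing one is
    # inserted at a random position.
    child = list(instructions)
    if rng.random() < mutation_chance:
        if rng.random() < 0.5 and len(child) > 1:
            child.pop(rng.randrange(len(child)))          # remove one
        else:
            child.insert(rng.randrange(len(child) + 1),
                         rng.choice(child))               # add one
    return child

rng = random.Random(42)
parent = ["engage-resource", "clone", "move"]
children = [clone_with_mutation(parent, rng, mutation_chance=1.0)
            for _ in range(5)]
print([len(c) for c in children])  # each child has 2 or 4 instructions
```

    Since cloning cost scales with instruction count, removal mutations are rewarded, which is what drives the shrinking instruction sets observed below.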

    The game is about surviving in a competitive environment. So when an entity encounters another with a different version number, it will 'attack' it. First energy is expended, at the same time breaking down the energy 'barrier' of the other entity.

    Note that all this behaviour is dynamic, captured in the instructions of each entity. The centre of the screen shows how many entities of each version there are, indicating how 'well' a version with corresponding instructions is surviving.


    Selecting a version shows its instruction set for further analysis.

    Although the experiment could be expanded further, it already reveals some interesting patterns. For example, the following mutation was out-surviving the standard instruction set:


    Conditions                                     Action
    ------------------------------------------------------------------
    Unit:Resource                                  Engage
    Unit:Entity  Version:Same                      Incomplete Program
    MassEnergy:Clone  Unit:None                    Clone
    Unit:None                                      Move


    What is of particular interest is that this entity does not engage with other entities and will try to avoid them, yet would lose a sustained encounter. But because it is significantly smaller (only 11 instructions), it multiplies more easily and overall survives more effectively.

    Download Tierrai here

  • Theory

    The key points on the theory and applications of procedural generation are described in the series Procedural Generation parts 1 to 3, where we looked at a number of elements that could describe our virtual world. In this section we look at a practical implementation, combining all the different elements.


    As the application I chose a scenario based on the 3-part article: generating an artificial 'world' with varied terrain, climate and biomes. As I was building the implementation while writing the previous articles, the implementation is consistent with the information in them.

    The tool of choice was Microsoft Visual Studio, C# .NET using WPF. WPF actually comes with a 3D engine (via the 'Viewport3D' component) which was surprisingly easy to use, and ideal for visualising this artificially generated world. Credit also goes to the WriteableBitmapEx extension, which extends the WPF bitmap with convenient access functionality.

    The resulting application is crude, but includes a number of illustrative components:

    - Selection of key parameters such as the master seed number

    - A bitmap representation of the generated map (where 1 pixel represents 1 square km in this case)

    - Numerical key statistics on terrain / biomes

    - A 3D view of the current location within the map, including its immediate surroundings


    Download ProceduralWorldGenerator


    Microsoft Visual Studio



  • Terrain

    Continuing from part 2, a scalable fractal method is used to create terrain. All generated attributes are based on a specific seed for the current region.
    An example of terrain elevation (black / blue: low, green / yellow: high), and using a threshold to define land (green) / water (blue)



    Humidity / Rainfall

    Continuing to a more advanced scenario, humidity / rainfall can be determined from obvious sources such as any bodies of water. In this case a straightforward approach is used, applying a Gaussian function / filter.
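
    As a sketch of this idea, blurring a water mask with a Gaussian kernel yields a humidity gradient around water (Python for brevity, one-dimensional to keep it short; the kernel radius and sigma are arbitrary illustrative values):

```python
import math

def gaussian_kernel(radius, sigma):
    # Discrete Gaussian kernel, normalised to sum to 1.
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    return [k / total for k in kernel]

def humidity_from_water(is_water, radius=3, sigma=1.5):
    # Humidity is the Gaussian-blurred water mask, so cells near
    # water end up more humid than cells far from it.
    kernel = gaussian_kernel(radius, sigma)
    n = len(is_water)
    out = []
    for x in range(n):
        value = 0.0
        for offset, weight in zip(range(-radius, radius + 1), kernel):
            i = min(max(x + offset, 0), n - 1)   # clamp at the map edge
            value += weight * (1.0 if is_water[i] else 0.0)
        out.append(value)
    return out

strip = [False, False, False, True, True, False, False, False]
humidity = humidity_from_water(strip)
print(humidity[3] > humidity[0])  # True: cells on water are most humid
```

    The same filter applied in two dimensions over the generated water map gives the humidity / rainfall map shown below.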

    Humidity / rainfall map (blue: humid, yellow: dry)




    Temperature can be generated using a similar algorithm to the terrain, subsequently applying a Gaussian filter. As mentioned, all the attributes for a region are based on a single seed, which means that for each map or set of attributes the random generators are reset to the start seed. Generating this secondary map for temperature would therefore result in a geography identical to the terrain map. This is resolved by offsetting the generation by 1 or more draws.

    // skip the first n draws so the temperature map
    // differs from the terrain map based on the same seed
    for (i = 0; i < n; i++)
        randomGenerator.getNext()

    It was found that a single skipped draw is enough: the resulting maps show no similarity to previously generated maps based on the same seed.
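
    The offsetting trick is easy to demonstrate (Python's `random.Random` here for brevity; the actual implementation uses the .NET Random class):

```python
import random

master_seed = 12345678

# Two generators seeded identically produce identical sequences...
terrain_rng = random.Random(master_seed)
temperature_rng = random.Random(master_seed)
assert terrain_rng.random() == temperature_rng.random()

# ...so the temperature generator is offset by skipping n draws,
# decoupling the temperature map from the terrain map.
temperature_rng = random.Random(master_seed)
n = 1  # a single skipped draw was found to be enough
for _ in range(n):
    temperature_rng.random()

print(random.Random(master_seed).random() == temperature_rng.random())  # False
```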

    Additionally, this map can be adjusted for water and elevation. Above water temperature is moderated, reducing very low or very high temperatures towards a baseline value. For land, the temperature is reduced by the elevation level.

    Temperature map (red: hot, blue: cold)



    We now have elevation, humidity and temperature. Using composite coloring, these can be combined into a single map. But first, let's look at defining biomes.



    As a final step, the biome type can be defined, determined as a function of temperature and rainfall.
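
    The classification can be as simple as a threshold lookup. The thresholds below are illustrative assumptions, not the values used in the generator (Python for brevity):

```python
def biome(temperature, rainfall):
    # Hypothetical biome classification as a function of temperature
    # (degrees C) and rainfall (mm/year); thresholds are illustrative.
    if temperature < 0:
        return "tundra"
    if rainfall < 250:
        return "desert"
    if temperature > 20:
        return "rainforest" if rainfall > 2000 else "savanna"
    return "forest" if rainfall > 500 else "grassland"

print(biome(25, 3000))  # rainforest
print(biome(10, 600))   # forest
print(biome(15, 100))   # desert
```

    Evaluating this function per map cell using the temperature and humidity maps yields the biome map.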

    Biome temp precip


    In the composite map, the temperature is shown using the red / blue color channels as in the temperature map, and humidity / rainfall is indicated using the green color channel. Water is shown as plain blue.

    Using a mapped coloring scheme, the biome types can be visualised on the same map.

    Composite map / Biome map



    Generating Random Fractal Terrain

    Projections / Biome generation

    Hero Extant: World Generator

    Wikipedia: Biome

  • In part 1 we looked at the method used for creating terrain. Midpoint Displacement is a fractal method which results in self-similarity, implicitly making it scalable. Moreover, the method can be scaled depending on the extent of the scope (e.g. 10 m vs 1 km vs 10 km) and the number of iterations into smaller subdivisions it is continued for (e.g. from 1 km to about 1 m after 10 iterations). But imagine a large map of 1000 km where we want to evaluate terrain down to a detail of 1 m: this would take a lot of computing power and significant storage, and that is only considering a single plane in 2 dimensions. So we introduce partitioning. This is used in Minecraft, which uses three-dimensional segments of 16 x 16 (x 256) m.
    In this case we use a two-dimensional plane of about 1000 km square, made up of 1000 x 1000 segments each representing 1 km square. Looking back to part 1, where we created an array of random seeds: instead of a simple array, we create a two-dimensional array of segments, each including a seed. To make this aspect scalable as well, seeds are generated in a similar order using a simple iterative midpoint method. This pattern is shown below, starting with the segments marked 1, then 2, 3, until each segment has a seed allocated.

    step = size
    while step >= 1
        for y = 0 : step : size
            for x = 0 : step : size
                if seed[y,x] not yet set
                    seed[y,x] = new seed
        step = step / 2
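
    The seed-allocation pattern above can be sketched in runnable form (Python for brevity, the actual implementation is C#; storing seeds in a dictionary keyed by segment coordinate is my own choice):

```python
import random

def allocate_segment_seeds(size, master_seed):
    # Coarse grid positions receive seeds first, then the step halves
    # until every segment has one. For a given master seed the result
    # is fully deterministic.
    rng = random.Random(master_seed)
    seeds = {}
    step = size
    while step >= 1:
        for y in range(0, size, step):
            for x in range(0, size, step):
                if (y, x) not in seeds:      # keep seeds set at coarser steps
                    seeds[(y, x)] = rng.getrandbits(32)
        step //= 2
    return seeds

seeds = allocate_segment_seeds(8, master_seed=12345678)
print(len(seeds))  # 64: every segment of the 8 x 8 grid has a seed
```

    Each segment's seed can then initialise that segment's own generator, so segments are only evaluated on demand.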

    Segmentation pattern


    Segments are only evaluated on demand. Each segment is initialised using its seed, generating a terrain map. In addition, other characteristics are evaluated, as we will see in part 3.


    Generating Random Fractal Terrain

  • The concept of procedural generation is creation based on rules and algorithms, instead of manual construction. A common application is to create scenery such as landscapes, in a way that is repeatable and predictable within a controlled setting. Additionally this normally results in high consistency. Though widely used in games, including recent ones such as Minecraft and Rust, there are many other useful applications, such as simulation modelling.

    Ideally, the generation is a fractal method providing scalability. We will look at this in more detail in part 2.

    To produce a set of properties and attributes in a reproducible yet variable way, the common Random (or Rnd) function can be used. This function does not actually provide truly random numbers, but pseudo-random numbers: the 'random' numbers are generated deterministically, effectively implementing procedural generation, and are normally based on a start seed. We want to create a set of properties, such as for different locations, each based on its own seed, and we want all of those to derive from one single master seed.

    In simple pseudo code:

    int masterSeed = 12345678

    Random randomGenerator = new Random(masterSeed)

    int[] locationSeeds = new int[100]

    for (int i = 0; i < locationSeeds.length; i++)
        locationSeeds[i] = randomGenerator.getNext()

    As this example illustrates, changing the masterSeed will result in a different set of location specific seeds. Next, for each specific location, different properties can be generated, in a consistent way - if and when they are required:

    Random randomGenerator = new Random(locationSeed)

    for (int i = 0; i < properties.length; i++)
        properties[i] = randomGenerator.getNext()
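
    The same two-level seeding scheme runs as-is in Python (for brevity; the pseudocode above is C#-flavoured, and the seed range used here is an arbitrary choice):

```python
import random

master_seed = 12345678

# One master seed deterministically yields a seed per location...
generator = random.Random(master_seed)
location_seeds = [generator.randrange(2**31) for _ in range(100)]

# ...and each location seed yields that location's properties,
# generated on demand yet always reproducibly.
def location_properties(location_seed, count=3):
    rng = random.Random(location_seed)
    return [rng.random() for _ in range(count)]

first = location_properties(location_seeds[0])
again = location_properties(location_seeds[0])
print(first == again)  # True: same seed, same properties
```

    Changing only `master_seed` regenerates every location seed, and with them every derived property, which is the whole point of the scheme.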

    The method used for creating fractal terrain is Midpoint Displacement, though it is worth mentioning that Perlin noise is another popular method. Midpoint Displacement in one dimension is simple (see images below): start with two points (1) and a straight line connecting them (2), find the midpoint of the line (3), and displace the new point up or down by a particular magnitude (4). There are now 2 connected straight lines, drawn over 3 points (5). The process continues iteratively, finding the next midpoints between previous points in turn and displacing these with a decreasing magnitude. For two dimensions, the diamond-square algorithm is applied.
    Midpoint Displacement illustration
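
    The one-dimensional steps illustrated above can be sketched as follows (Python for brevity; the roughness factor of 0.5 and flat starting heights are illustrative assumptions):

```python
import random

def midpoint_displacement(iterations, magnitude=1.0, roughness=0.5, seed=0):
    # Start with two endpoints, then repeatedly insert a displaced
    # midpoint into every segment, decreasing the displacement
    # magnitude each iteration.
    rng = random.Random(seed)
    points = [0.0, 0.0]                  # heights of the two endpoints
    for _ in range(iterations):
        next_points = []
        for a, b in zip(points, points[1:]):
            mid = (a + b) / 2 + rng.uniform(-magnitude, magnitude)
            next_points += [a, mid]
        next_points.append(points[-1])
        points = next_points
        magnitude *= roughness           # decreasing displacement
    return points

terrain = midpoint_displacement(10)
print(len(terrain))  # 1025 points: 2**10 segments after 10 iterations
```

    The decreasing magnitude is what produces the self-similar, fractal character of the result.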



    Procedural Content Generation in Games

    Generating Random Fractal Terrain

    Fractal landscape

    Diamond-Square algorithm

    Perlin noise

  • Educative visualisation illustrating separation processes



    The full HD videos are available here:


  • Facilitating insight into a simulation model using visualization and dynamic model previews



    Model simplification, by replacing iterative steps with unitary predictive equations, can enable dynamic interaction with a complex simulation process. Model previews extend the techniques of dynamic querying and query previews into the context of ad hoc simulation model exploration. A case study is presented within the domain of counter-current chromatography. The relatively novel method of insight evaluation was applied, given the exploratory nature of the task. The evaluation data show that the trade-off in accuracy is far outweighed by benefits of dynamic interaction. The number of insights gained using the enhanced interactive version of the computer model was more than six times higher than the number of insights gained using the basic version of the model. There was also a trend for dynamic interaction to facilitate insights of greater domain importance.


  • Completely graphical interface

    ProMISE 2 introduces a new, completely graphical user interface. A visual representation of the column (top of image) allows selection of visual elements, which enables specific input parameters. When input parameters are changed, the visual column representation adapts to these parameters, and predictive results are shown in real time in graph and numerical form (bottom of image).

    Free for non-commercial purposes (academic e-mail address required for free registration).

    Download latest version here

    Download manual

    (Previously also featured at theliquidphase.org)

  • Probabilistic Model for Immiscible Separations and Extractions (ProMISE)

    Chromatography models, liquid-liquid models and specifically Counter-Current Chromatography (CCC) models are usually either iterative, or provide a final solution for peak elution. This paper describes a better model, found by seeking a more elemental solution. A completely new model has been developed based on simulating probabilistic units. This model has been labelled ProMISE: Probabilistic Model for Immiscible phase Separations and Extractions, and has been realised in the form of a computer application, interactively visualising the behaviour of the units in the CCC process. It does not use compartments or cells as in the Craig-based models, nor is it based on diffusion theory. With this new model, all the CCC flow modes can be accurately predicted. The main advantage over the previously developed model is that it does not require a somewhat arbitrary number of steps or theoretical plates, and instead uses an efficiency factor. Furthermore, since this model is not based on compartments or cells like the Craig model, it is not limited to a compartmental nature, allowing even greater flexibility.



  • Universal Counter Current Chromatography modelling based on Counter Current Distribution

    There is clearly a need for a model which is versatile enough to take into account the numerous operating modes and pump-out procedures that can be used with counter-current chromatography (CCC). This paper will describe a universal model for counter-current chromatography based on counter-current distribution. The model is validated with real separations from the literature and against established CCC partition theory. This universal model is proven to give good results for isocratic flow modes, as well as for co-current CCC and dual-flow CCC, and will likely also give good results for other modes such as intermittent CCC.