The ability to remember and associate concepts can be artificially emulated using Associative Neural Networks (ASNN). (See also autoassociative memory.) Applying ASNN research, we can see whether it is possible to capture and generalise large amounts of information in an effective way. A neural network consists of nodes called neurons, and connections between these called synapses.
A simple learning set is used to both construct and train the network. Note that training only takes a single pass to obtain synapse weights. Based on Active Associative Neural Graphs (ANAKG), I produced a customised implementation in C#, which more closely emulates the spiking neuron model and aims at scalability. After training, a query can be entered by activating neurons in the network, resulting in a response which provides generalised information in relation to the input terms.
To provide insight into the model and its workings, dynamic visualisation was added. This shows how the network is trained, and how the neurons and synapses are activated in turn. The key stages a neuron goes through as it is activated are shown in corresponding colours: activation (red), absolute refraction (blue), relative refraction (green).
Taking DeepMind's mission statement as an example, we train the model, which forms the network and determines synaptic weights and efficiencies. Note that stopwords are removed. Next we apply the simple query: "DeepMind". From the 123-word input (70 words without stopwords), the model produces the following 17-word serialised output:
"[deepmind] world leader artificial intelligence research application multiplier most positive impact human important ingenuity widely beneficial scientific advances"
From the resulting output we can see the model gives a generalised summary in the context of both the input data and our search query.
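The ANAKG-based C# implementation is not reproduced here, but the core idea - a single training pass that strengthens synapses between co-occurring word neurons, and a query that activates one neuron and reads off its strongest connections - can be sketched in a few lines of Python. All names, the window size and the distance weighting below are illustrative assumptions, not the actual model:

```python
from collections import defaultdict

def train(sentences, window=3):
    """Single-pass training: each co-occurrence within a window
    strengthens the synapse between two word 'neurons'."""
    synapses = defaultdict(float)
    for words in sentences:
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                # closer words get a stronger connection
                synapses[(w, words[j])] += 1.0 / (j - i)
    return synapses

def query(synapses, term, top=5):
    """Activate one neuron and collect the most strongly
    connected neurons as the generalised response."""
    scores = defaultdict(float)
    for (a, b), weight in synapses.items():
        if a == term:
            scores[b] += weight
        elif b == term:
            scores[a] += weight
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top]]

net = train([["deepmind", "world", "leader", "artificial", "intelligence"],
             ["deepmind", "research", "artificial", "intelligence"]])
print(query(net, "deepmind"))
```

Even this toy version shows the mechanism: words that co-occur with the query term across the training set dominate the response, which is what produces the generalised summary above.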
To be continued...
As shown in the first attempt, Dr Dash was unable to overcome various problems, and considering Google DeepMind has been successfully applying deep learning to gaming problems (see here), it's time to try a new strategy. So we want to apply a more advanced form of AI and see if, and how, it overcomes obstacles better. However, rather than using screen capture as input as DeepMind has done, we will start with access to the internal game parameters (i.e. a map with element locations etc.). This allows a much simpler model, with the additional advantage of making it more transparent - being able to analyse and interpret the model is key to developing understanding and insights.
Also, rather than providing the model with the puzzles to solve as training data, let's make it more interesting and develop training settings in a more literal way - the context is a game after all. So this will be a set of custom-tailored scenarios, with little resemblance to the actual game levels. The original game levels will be our test data. As the game itself is quite simple to emulate, and can be reduced to an effectively turn-based form without the need for graphical output, calculating our 'cost function' becomes orders of magnitude faster.
An example of a simple training setting for the model can be something like this.
To be continued...
AI survival simulation
This is an AI simulating simplified life & survival, inspired by Tierra. I wanted to not only reproduce this experiment, but take it a step further and observe other behaviour resulting from it. To do this I also made the programming more friendly and intuitive. Additionally, I separated out what entities are able to do from what is governed by natural laws, such as conservation of mass and energy.
Instead of the common models trained with fixed data sets, this is a competitive, in situ evolutionary system. The simulation starts with a single entity, programmed with a basic set of instructions in the form Condition -> Action. Besides entities there are resources that can be absorbed / consumed. These conditions / actions are stored in the entity, so each holds its own flexible set of instructions. Additionally, each entity has mass, energy and a unique identifier. It also has a version number corresponding to an instruction set. This is the standard instruction set:
| Conditions | | | Action |
|---|---|---|---|
| Energy:Low | Unit:Entity | Version:Different | Move |
| Energy:High | Unit:Entity | Version:Different | Engage |
| Unit:Resource | | | Engage |
| Unit:Entity | Version:Same | Incomplete | Program |
| MassEnergy:Clone | Unit:None | | Clone |
| Unit:None | | | Move |
The first matching action takes priority, as only one action is performed - meaning it's a bit like an If / Else flow. A few key actions: if a resource is encountered, engage it (which means it will be absorbed), and if enough mass and energy is available, clone to generate a 'child'. This initial set contains 19 instructions, stored sequentially in the entity.
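The If / Else priority flow can be sketched as follows. This is a minimal Python illustration, not the actual C# implementation; the condition helpers and the state representation are hypothetical:

```python
def decide(instructions, state):
    """Return the action of the first instruction whose conditions
    all hold in the current state -- an If / Else style priority."""
    for conditions, action in instructions:
        if all(cond(state) for cond in conditions):
            return action
    return None  # no instruction matched

# Hypothetical condition helpers; names are illustrative only.
low_energy  = lambda s: s["energy"] < 10
sees_entity = lambda s: s["unit"] == "entity"
sees_none   = lambda s: s["unit"] is None

instructions = [
    ([low_energy, sees_entity], "Move"),    # flee when weak
    ([sees_entity],             "Engage"),  # otherwise fight
    ([sees_none],               "Move"),    # default: wander
]

print(decide(instructions, {"energy": 5, "unit": "entity"}))   # first rule wins
print(decide(instructions, {"energy": 50, "unit": "entity"}))  # falls through to the second
```

Because evaluation stops at the first match, the order in which instructions are stored in the entity is itself part of its behaviour.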
The environment is a one-dimensional 'world' in which the entity can move. A simple representation loops the dimension into a circle so as to best use the screen, at the same time making the 'world' effectively wrap around. Entities are visualised by a core with a unique ID number (its intensity representing the amount of instruction completeness), a blue circle representing mass, and an outer red circle for the amount of energy. Resources look similar except with a blank centre. Entities can only engage with others while at the same location. The simulation starts with a single entity (ID '1', version 0) and an abundance of resources.
When an entity is cloned, there is a chance of mutation: at random, one condition / action will be removed or added. Note that cloning takes mass, energy and time equal to the number of instructions, so fewer instructions is beneficial. However, without essential instructions such as absorbing resources or cloning, the entity is headed for extinction.
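The remove-or-add mutation step can be sketched like this. The sketch is hypothetical: instructions are simplified to placeholder strings, and the 50/50 split between removing and adding is an assumption:

```python
import random

def mutate(instructions, rng):
    """On cloning, randomly remove or add one instruction.
    A shorter set clones faster, but may lose essential behaviour."""
    child = list(instructions)
    if rng.random() < 0.5 and len(child) > 1:
        child.pop(rng.randrange(len(child)))              # drop one instruction
    else:
        child.insert(rng.randrange(len(child) + 1), "NewInstruction")
    return child

rng = random.Random(42)
parent = ["Engage", "Program", "Clone", "Move"]
child = mutate(parent, rng)
print(len(child))  # always one longer or one shorter than the parent
```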
The game is about surviving in a competitive environment, so when an entity encounters another with a different version number, it will 'attack' it. First its energy is expended, at the same time breaking down the energy 'barrier' of the other entity.
Note that all this behaviour is dynamic, captured in the instructions of each entity. The centre of the screen shows how many entities of each version there are, indicating how 'well' a version with corresponding instructions is surviving.
Selecting a version shows its instruction set for further analysis.
Although the experiment could be expanded further, it already reveals some interesting patterns. For example, it showed this mutation out-surviving the standard instructions:
| Conditions | | | Action |
|---|---|---|---|
| Unit:Resource | | | Engage |
| Unit:Entity | Version:Same | Incomplete | Program |
| MassEnergy:Clone | Unit:None | | Clone |
| Unit:None | | | Move |
What is of particular interest is that this entity does not engage with other entities and will try to avoid them, while it would lose a sustained encounter. But because it is significantly smaller (only 11 instructions), it multiplies more easily, and overall survives more effectively.
Theory
The key points of the theory and applications of procedural generation are described in the series Procedural Generation parts 1 to 3, where we looked at a number of elements that could describe our virtual world. In this section we look at a practical implementation, combining all the different elements.
Implementation
As an application, I chose a scenario based on the 3-part article: generating an artificial 'world' with varied terrain, climate and biomes. As I was building the implementation while writing the previous articles, it is consistent with the information in them.
The tool of choice was Microsoft Visual Studio, C# .NET using WPF. WPF actually comes with a 3D engine (via the 'Viewport3D' component) which was surprisingly easy to use, and ideal for visualising this artificially generated world. Credit also goes to the WriteableBitmapEx extension, which extends the WPF bitmap with convenient access functionality.
The resulting application is crude, but includes a number of illustrative components:
- Selection of key parameters such as the master seed number
- A bitmap representation of the generated map (where 1 pixel represents 1 square km in this case)
- Numerical key statistics on terrain / biomes
- A 3D view of the current location within the map, including its immediate surroundings
Download
Download ProceduralWorldGenerator
Links
Terrain
Continuing from part 2, a scalable fractal method is used to create terrain. All attributes generated are based on a specific seed for the current region.
An example of terrain elevation (black / blue: low, green / yellow: high), and using a threshold to define land (green) / water (blue)
Humidity / Rainfall
Moving to a more advanced scenario, humidity / rainfall could be determined from obvious sources, such as any bodies of water. In this case a straightforward approach is used, applying a Gaussian function / filter.
Humidity / rainfall map (blue: humid, yellow: dry)
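The actual implementation applies a Gaussian filter in C# over the 2D map; the principle can be illustrated with a 1D Python sketch, where humidity falls off with a Gaussian weight as a cell gets further from water (the sigma value and normalisation are illustrative choices):

```python
import math

def humidity_map(water, sigma=2.0):
    """Spread humidity from water cells using a Gaussian kernel.
    'water' is a list of booleans, one per map cell."""
    n = len(water)
    humidity = [0.0] * n
    for i in range(n):
        for j, is_water in enumerate(water):
            if is_water:
                humidity[i] += math.exp(-((i - j) ** 2) / (2 * sigma ** 2))
    peak = max(humidity)
    return [h / peak for h in humidity]  # normalise to 0..1

# A small strip of land with water at one end: humidity decays inland
land = humidity_map([True, True, False, False, False, False, False, False])
print([round(h, 2) for h in land])
```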
Temperature
Temperature can be generated using an algorithm similar to that used for the terrain, subsequently applying a Gaussian filter. As mentioned, all the attributes for a region are based on a single seed, which means that for each map or set of attributes, the random generators are reset to the start seed. Generating this secondary map for temperature would therefore result in a geography identical to the terrain map. This is resolved by offsetting the generation by one or more draws.
// discard n draws to offset the random sequence
for (i = 0; i < n; i++)
{
    random()
}
// proceed with random draws as for the terrain
It was found that a single draw is enough: the resulting maps show no similarity with previously generated maps based on the same seed.
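The offsetting trick can be demonstrated with Python's `random.Random` (the seed value here is arbitrary). Discarding a single draw shifts the whole pseudo-random stream, so the draws no longer land on the same map positions:

```python
import random

seed = 12345678

terrain_rng = random.Random(seed)
temperature_rng = random.Random(seed)
temperature_rng.random()          # offset the temperature stream by one draw

terrain = [terrain_rng.random() for _ in range(5)]
temperature = [temperature_rng.random() for _ in range(5)]

# Without the discarded draw the two lists would be identical;
# with it, every position receives a different value.
print(terrain)
print(temperature)
```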
Additionally, this map can be adjusted for water and elevation. Above water temperature is moderated, reducing very low or very high temperatures towards a baseline value. For land, the temperature is reduced by the elevation level.
Temperature map (red: hot, blue: cold)
We now have elevation, humidity and temperature. Using composite colouring, these can be combined into a single map. But first, let's look at defining biomes.
Biome
As a final step, the biome type can be defined, determined as a function of temperature and rainfall.
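Such a function can be sketched as a simple threshold lookup. The biome names and threshold values below are illustrative assumptions, loosely following a Whittaker-style temperature / rainfall diagram, not the values used in the generator:

```python
def biome(temperature, rainfall):
    """Classify a biome from temperature (degrees C) and
    rainfall (mm / year). Thresholds are illustrative only."""
    if temperature < 0:
        return "tundra"
    if rainfall < 250:
        return "desert"
    if temperature > 20:
        return "tropical rainforest" if rainfall > 2000 else "savanna"
    return "temperate forest" if rainfall > 1000 else "grassland"

print(biome(25, 2500))  # tropical rainforest
print(biome(-5, 400))   # tundra
print(biome(10, 500))   # grassland
```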
In the composite map, the temperature is shown using the red / blue color channels as in the temperature map, and humidity / rainfall is indicated using the green color channel. Water is shown as plain blue.
Using a mapped coloring scheme, the biome types can be visualised on the same map.
Composite map / Biome map
Links
Generating Random Fractal Terrain
In part 1 we looked at the method used for creating terrain. Midpoint Displacement is a fractal method which results in self-similarity, implicitly making it scalable. Moreover, the method can be scaled depending on the extent of the scope (e.g. 10 m vs 1 km vs 10 km) and the number of iterations into smaller subdivisions (e.g. from 1 km to about 1 m after 10 iterations). But imagine a large map of 1000 km where we want to evaluate terrain up to a detail of 1 m: this would take a lot of computing power and significant storage, and that is only considering a single plane in 2 dimensions. So we introduce partitioning. This is used in Minecraft, which uses three-dimensional segments of 16 x 16 (x 256) m.
In this case we use a two-dimensional plane of about 1000 km square, made up of 1000 x 1000 segments, each representing 1 km square. Looking back to part 1, we created an array of random seeds. Instead of a simple array, we now create a 2-dimensional array of segments, each including a seed. To make this aspect scalable as well, seeds are generated in a similar order using a simple iterative midpoint method. This pattern is shown below, starting with the segments marked 1, then 2, 3, until each segment has a seed allocated.
step = size
while step >= 1
{
    for y = 0 : size : step
    {
        for x = 0 : size : step
        {
            if seed[y,x] not yet allocated
            {
                seed[y,x] = new seed
            }
        }
    }
    step = step / 2
}
Segmentation pattern
Segments are only evaluated on demand. Each segment is initialised using its seed, generating a terrain map. In addition, other characteristics are evaluated, as we will see in part 3.
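The coarse-to-fine seed allocation above can be made concrete in Python (the original is C#; this is a small illustrative port, with the seed range chosen arbitrarily). Seeds set at coarser steps are kept, so the allocation order matches the segmentation pattern:

```python
import random

def allocate_seeds(size, master_seed):
    """Allocate a seed to every segment of a size x size grid,
    coarse to fine, from a single master seed."""
    rng = random.Random(master_seed)
    seeds = [[None] * size for _ in range(size)]
    step = size
    while step >= 1:
        for y in range(0, size, step):
            for x in range(0, size, step):
                if seeds[y][x] is None:       # keep seeds from coarser passes
                    seeds[y][x] = rng.randrange(2**31)
        step //= 2
    return seeds

grid = allocate_seeds(8, 12345678)
print(all(s is not None for row in grid for s in row))  # every segment seeded
```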
Links
The concept of procedural generation is creation based on rules and algorithms, instead of manual construction. A common application is to create scenery such as landscapes, but in such a way that it is repeatable and predictable within a controlled setting. Additionally, this normally results in high consistency. Though widely used in games, including recent ones such as Minecraft and Rust, there are many other useful applications, such as simulation modelling.
Ideally, the generation is a fractal method providing scalability. We will look at this in more detail in part 2.
To produce a set of properties and attributes in a reproducible yet variable way, the common Random (or Rnd) function can be used. This function does not actually provide truly random numbers, but pseudo-random numbers: the 'random' numbers are generated deterministically, effectively implementing procedural generation. Additionally, these are normally based on a start seed. So we want to create a set of properties, such as for different locations, each based on a seed. Next, we want all of those to be based on one single master seed.
In simple pseudo code:
int masterSeed = 12345678;
Random randomGenerator = new Random(masterSeed);
int[] locationSeeds = new int[100];
for (int i = 0; i < locationSeeds.Length; i++)
{
    locationSeeds[i] = randomGenerator.Next();
}
As this example illustrates, changing the masterSeed will result in a different set of location-specific seeds. Next, for each specific location, different properties can be generated in a consistent way - if and when they are required:
Random randomGenerator = new Random(locationSeed);
for (int i = 0; i < properties.Length; i++)
{
    properties[i] = randomGenerator.Next();
}
The method used for creating fractal terrain is Midpoint Displacement, though it is worth mentioning that Perlin noise is another popular method. Midpoint Displacement in one dimension is simple (see images below): starting with two points (1) and a straight line connecting these (2), find the midpoint on the line (3) and displace the new point up or down with a particular magnitude (4). There are now 2 connected straight lines, drawn over 3 points (5). The process continues iteratively, finding the next midpoint between previous points in turn, and displacing these with a decreasing magnitude. For two dimensions, the diamond-square algorithm is applied.
Midpoint Displacement illustration
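The five steps above translate directly into code. This Python sketch (the roughness factor and displacement range are illustrative choices) repeatedly inserts a displaced midpoint between each pair of points, halving the magnitude each pass:

```python
import random

def midpoint_displacement(iterations, magnitude=1.0, roughness=0.5, seed=1):
    """1D midpoint displacement: start with two end points, repeatedly
    insert displaced midpoints with a decreasing magnitude."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]                      # the two starting points
    for _ in range(iterations):
        new = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-magnitude, magnitude)
            new += [a, mid]                   # keep the point, add the midpoint
        new.append(heights[-1])
        heights = new
        magnitude *= roughness                # smaller displacements each pass
    return heights

terrain = midpoint_displacement(4)
print(len(terrain))   # 2 points grow to 17 after 4 iterations (n -> 2n - 1)
```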
Links
Procedural Content Generation in Games
Based on a classic game, this game features AI algorithms.
Features:
- modified A* path finding
- uses XNA platform
Modified A* pathfinding visualised
Path updated with new objectives
Programming language: C# / Object Oriented / Design Patterns: MVC
MS Visual Studio; XNA framework; CLR .NET
Controls:
- XBox 360 controller (USB)
Alternative controls:
- movement: [Game pad] / arrow keys
- start: [Start] / S
- demo: [B] / D
Download Microsoft XNA redistributable (required)
This science game was especially designed for a university organised public engagement event.
In this game, you need to order the different coloured components coming out of a chromatography system, aiming for a high purity end result.
Features:
- PC and Android version
- graphics completely scalable
- accurate colour mixing
Separation visualised by accurate colour mixing
Installation instructions:
- download zip file
- unzip contents into a folder
- run setup.exe to start installer
Controls:
- XBox 360 controller (USB)
Alternative controls:
- movement: arrow keys
- start: S
Windows:
Android:
Based on one of the most classic games, this game features AI algorithms - see if you can beat the 'demo' mode!
Features:
- Uses weight map to calculate best / worst location
Visualisation of weight map (wrapped around screen edges)
Installation instructions:
- download zip file
- unzip contents into a folder
- run setup.exe to start installer
Controls:
- movement: arrow keys