The ability to remember and associate concepts can be artificially emulated using Associative Neural Networks (ASNN). (See also autoassociative memory.) Applying ASNN research, we can explore whether it is possible to capture and generalise large amounts of information in an effective way. A neural network consists of nodes called neurons, connected to one another by synapses.
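As a minimal sketch of these two building blocks (the class and property names here are illustrative, not taken from the actual implementation), a neuron can be modelled as a labelled node holding its activation level, with synapses as weighted directed links between nodes:

```csharp
using System.Collections.Generic;

// Illustrative sketch of the two basic building blocks of an
// associative network: neurons (nodes) and synapses (connections).
public class Neuron
{
    public string Label { get; }                  // e.g. the word this neuron represents
    public double Activation { get; set; }        // current activation level
    public List<Synapse> Outgoing { get; } = new List<Synapse>();

    public Neuron(string label) => Label = label;
}

public class Synapse
{
    public Neuron Source { get; }
    public Neuron Target { get; }
    public double Weight { get; set; }            // learned associative strength

    public Synapse(Neuron source, Neuron target, double weight)
    {
        Source = source;
        Target = target;
        Weight = weight;
    }
}
```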
A simple learning set is used to both construct and train the network. Note that training takes only a single pass to obtain the synapse weights. Based on Active Associative Neural Graphs (ANAKG), I produced a customised implementation in C#, which more closely emulates the spiking neuron model and aims at scalability. After training, a query can be entered by activating neurons in the network, resulting in a response that provides generalised information in relation to the input terms.
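The single-pass idea can be pictured with the sketch below, building on the classes above. It simply scans each training sentence once and creates or strengthens a synapse between consecutive terms, using a raw co-occurrence count as the weight; the actual ANAKG efficiency formula is more involved, so treat this purely as an illustration of one-pass training:

```csharp
using System.Collections.Generic;

// Illustrative one-pass trainer: each sentence is scanned exactly once,
// and a weighted synapse is created (or strengthened) between each term
// and the term that follows it. The weighting here is a plain
// co-occurrence count, not the actual ANAKG efficiency formula.
public class AssociativeNetwork
{
    private readonly Dictionary<string, Neuron> _neurons = new Dictionary<string, Neuron>();

    public Neuron GetOrAdd(string label)
    {
        if (!_neurons.TryGetValue(label, out var neuron))
            _neurons[label] = neuron = new Neuron(label);
        return neuron;
    }

    public void Train(IEnumerable<string[]> sentences)
    {
        foreach (var sentence in sentences)          // single pass over the data
        {
            for (int i = 0; i < sentence.Length - 1; i++)
            {
                var source = GetOrAdd(sentence[i]);
                var target = GetOrAdd(sentence[i + 1]);

                var synapse = source.Outgoing.Find(s => s.Target == target);
                if (synapse == null)
                    source.Outgoing.Add(new Synapse(source, target, 1.0));
                else
                    synapse.Weight += 1.0;           // strengthen on repetition
            }
        }
    }
}
```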
To provide insight into the model and its workings, dynamic visualisation was added. This shows how the network is trained, and how the neurons and synapses are activated in turn. The key stages a neuron goes through as it is activated, with their corresponding colours, are: activation (red), absolute refraction (blue), and relative refraction (green).
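Each neuron in the visualisation cycles through these stages over time. The sketch below shows one way to express that cycle as a state machine; the tick-based timing and stage durations are my own assumptions, chosen only to mirror the colour scheme above:

```csharp
// Stages a neuron passes through after firing, matching the colours
// used in the visualisation. The tick-based timing is an assumption
// made for illustration; the real model uses its own spiking dynamics.
public enum NeuronStage
{
    Resting,
    Activated,            // drawn red
    AbsoluteRefraction,   // drawn blue: cannot fire at all
    RelativeRefraction    // drawn green: needs stronger input to fire
}

public class SpikingState
{
    public NeuronStage Stage { get; private set; } = NeuronStage.Resting;
    private int _ticksInStage;

    // Illustrative durations, in simulation ticks.
    private const int ActiveTicks = 1, AbsoluteTicks = 2, RelativeTicks = 3;

    public void Fire() { Stage = NeuronStage.Activated; _ticksInStage = 0; }

    public void Tick()
    {
        _ticksInStage++;
        switch (Stage)
        {
            case NeuronStage.Activated when _ticksInStage >= ActiveTicks:
                Stage = NeuronStage.AbsoluteRefraction; _ticksInStage = 0; break;
            case NeuronStage.AbsoluteRefraction when _ticksInStage >= AbsoluteTicks:
                Stage = NeuronStage.RelativeRefraction; _ticksInStage = 0; break;
            case NeuronStage.RelativeRefraction when _ticksInStage >= RelativeTicks:
                Stage = NeuronStage.Resting; _ticksInStage = 0; break;
        }
    }
}
```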
Taking DeepMind's mission statement as an example, we train the model, which forms the network and determines the synaptic weights and efficiencies. Note that stopwords are removed. Next we apply the simple query: "DeepMind". From the 123-word input (70 words without stopwords), the model produces the following 17-word serialised output:
"[deepmind] world leader artificial intelligence research application multiplier most positive impact human important ingenuity widely beneficial scientific advances"
From the resulting output we can see that the model gives a generalised summary in the context of both the input data and our search query.
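Putting the pieces together, a query works by activating the neuron for each query term and letting activation spread along the weighted synapses, with the most strongly activated neurons read out as the response. The sketch below, building on the classes above, uses a deliberately simplified one-hop spreading rule; the exact spreading and serialisation mechanism of the C# implementation is not shown here:

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified query: activate the neuron for each query term, spread
// activation one synapse hop along weighted connections, then read
// out the most strongly activated neurons as the response.
public static class Query
{
    public static IEnumerable<string> Ask(AssociativeNetwork net, string[] terms, int topN)
    {
        var activation = new Dictionary<Neuron, double>();
        foreach (var term in terms)
        {
            var start = net.GetOrAdd(term);
            foreach (var syn in start.Outgoing)
            {
                activation.TryGetValue(syn.Target, out var current);
                activation[syn.Target] = current + syn.Weight;
            }
        }
        return activation.OrderByDescending(kv => kv.Value)
                         .Take(topN)
                         .Select(kv => kv.Key.Label);
    }
}
```

Running `Ask` over a trained network with the term "deepmind" would then yield its highest-weighted associates, in the spirit of the 17-word output above.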
To be continued...