David Rawlinson

AGI Conference 2017

We attended the 2017 10th Conference on Artificial General Intelligence, held in our hometown of Melbourne, Australia! Excitingly, IJCAI 2017 is also in Melbourne this week, and ICML 2017 was held in Sydney this year. In particular, the “Architectures for Generality and Autonomy” workshop may be of interest…

Region-Layer Experiments

Typical results from our experiments: some active cells in layer 3 of a three-layer network, transformed back into the input pixels they represent. Red pixels are positive weights and blue pixels are negative weights; absence of colour indicates neutral weighting (ambiguity). The white pixels are the input…
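The colour mapping described above can be sketched in a few lines. This is a hypothetical illustration only: the function name and the exact scaling are assumptions based on the description, not the code actually used in the experiments.

```python
import numpy as np

def weights_to_rgb(weights, shape):
    """Map a cell's input weights back onto the input pixel grid.

    Positive weights are drawn in red, negative weights in blue, and
    near-zero (neutral/ambiguous) weights are left uncoloured (black).
    """
    w = np.asarray(weights, dtype=float).reshape(shape)
    scale = np.abs(w).max() or 1.0              # avoid division by zero
    img = np.zeros(shape + (3,))                # H x W x RGB in [0, 1]
    img[..., 0] = np.clip(w, 0, None) / scale   # red channel  = positive weights
    img[..., 2] = np.clip(-w, 0, None) / scale  # blue channel = negative weights
    return img

# Example: a 2x2 receptive field with one positive and one negative weight
img = weights_to_rgb([0.8, -0.4, 0.0, 0.0], (2, 2))
```

Normalising by the largest absolute weight keeps both channels in [0, 1] regardless of the network's weight scale.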

What’s after AlphaGo?

What’s AlphaGo? AlphaGo is a system that can play Go at least as well as the best humans. Go was widely cited as the hardest remaining board game at which humans could still beat machines, so this is a big deal. AlphaGo has just defeated a top-ranked human expert. AlphaGo…

New HTM paper – “Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex”

The artificial neuron model used by Jeff Hawkins and Subutai Ahmad in their new paper (image reproduced from their paper, cropped). Their neuron model is inspired by the pyramidal cells found in neocortex layers 2/3 and 5. It has been several years since Jeff Hawkins and Numenta published the…

How to build a General Intelligence: What we think we already know

Authors: D Rawlinson and G Kowadlo. This is the first of three articles detailing our latest thinking on general intelligence: a one-size-fits-all algorithm that, like people, is able to learn to function effectively in almost any environment. This differs from most Artificial Intelligence (AI), which is designed by people…

“Quantum computing” via Sparse distributed coding?

An interesting article by Gerard Rinkus comparing the qualities of sparse distributed representations and quantum computing. In effect, he argues that because a sparse distributed representation can simultaneously represent multiple states, you get an effect similar to a quantum superposition. The article was originally titled “sparse distributed coding via quantum computing”, but…