Competitive learning is a branch of unsupervised learning that was popular a long, long time ago, in the 1990s. Older readers may remember the days before widespread use of GSM mobile phones, and before Google won the search engine wars! Although competitive learning is now rarely used, it is worth… Read More » Convolutional Competitive Learning vs. Sparse Autoencoders (1/2)
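For readers who have never met the idea, classic winner-take-all competitive learning fits in a few lines. The sketch below is a minimal illustration, not code from the post; the function name, learning rate, and toy data are my own:

```python
import numpy as np

def competitive_learning(data, n_units=4, lr=0.1, epochs=10, seed=0):
    """Winner-take-all competitive learning: for each input, the
    unit whose weight vector is closest 'wins' and moves a little
    towards that input. Other units are unchanged."""
    rng = np.random.default_rng(seed)
    # Initialise weights to randomly chosen data samples.
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for _ in range(epochs):
        for x in data:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            w[winner] += lr * (x - w[winner])  # only the winner learns
    return w

# Two well-separated clusters; with 2 units, each should settle
# near one cluster centre (approx. (0,0) and (5,5)).
data = np.vstack([np.random.default_rng(1).normal(0, 0.1, (50, 2)),
                  np.random.default_rng(2).normal(5, 0.1, (50, 2))])
w = competitive_learning(data, n_units=2)
```

The "only the winner learns" rule is what makes it *competitive*: units specialise on different regions of the input space without any labels.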
We’ve just uploaded a spin-off research paper to arXiv titled “Sparse Unsupervised Capsules Generalize Better”. So what’s it all about? Capsule Networks: You may have heard of Capsule Networks already – if not, have a read of one of these blog articles (here, here, here, or here (EM routing)), or watch this video… Read More » Sparse Unsupervised Capsules Generalize Better
We attended the Tenth Conference on Artificial General Intelligence (AGI 2017), held in our hometown of Melbourne, Australia! Excitingly, the IJCAI 2017 conference is also in Melbourne this week, and ICML 2017 was in Sydney this year. In particular, the “Architectures for Generality and Autonomy” workshop may be of interest… Read More » AGI Conference 2017
Today’s post tries to reconcile the theoretical concept of Predictive Coding with the unusual structure and connectivity of pyramidal cells in the neocortex. Pyramidal neurons: Pyramidal neurons are interesting because they are one of the most common neuron types in the computational layers of the neocortex. This almost certainly means… Read More » Pyramidal Neurons and Predictive Coding
Typical results from our experiments: some active cells in layer 3 of a 3-layer network, transformed back into the input pixels they represent. The red pixels are positive weights and the blue pixels are negative weights; absence of colour indicates neutral weighting (ambiguity). The white pixels are the input… Read More » Region-Layer Experiments
Figure 1: The Region-Layer component. The upper surface in the figure is the Region-Layer, which consists of Cells (small rectangles) grouped into Columns. Within each Column, only a few cells are active at any time. The output of the Region-Layer is the activity of the Cells. Columns in the Region-Layer… Read More » The Region-Layer: A building block for AGI
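The "only a few cells active per Column" constraint can be sketched as a local top-k selection within each column. The function below is a hypothetical illustration of that constraint under assumed sizes, not the actual Region-Layer implementation:

```python
import numpy as np

def column_sparsify(activity, n_columns, k=1):
    """Keep only the k most active cells in each column and zero
    the rest (a simple local winner-take-all). `activity` is a flat
    vector of cell activations; each column holds an equal share."""
    cells = activity.reshape(n_columns, -1)
    out = np.zeros_like(cells)
    # Indices of the top-k cells in each column (argsort is ascending).
    top = np.argsort(cells, axis=1)[:, -k:]
    rows = np.arange(n_columns)[:, None]
    out[rows, top] = cells[rows, top]
    return out.reshape(activity.shape)

a = np.array([0.2, 0.9, 0.1, 0.4,   # column 0
              0.7, 0.3, 0.8, 0.1])  # column 1
# With k=1, only the single strongest cell per column (0.9 and 0.8)
# survives; everything else is zeroed.
sparse = column_sparsify(a, n_columns=2, k=1)
```

The result is a sparse activity pattern in which each column "votes" with only its strongest cells, which is the flavour of sparsity the figure describes.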
What’s AlphaGo? AlphaGo is a system that can play Go at least as well as the best humans. Go was widely cited as the hardest (and only remaining) game at which humans could beat machines, so this is a big deal. AlphaGo has just defeated a top-ranked human expert. AlphaGo… Read More » What’s after AlphaGo?
The artificial neuron model used by Jeff Hawkins and Subutai Ahmad in their new paper (image reproduced from their paper, and cropped). Their neuron model is inspired by the pyramidal cells found in neocortex layers 2/3 and 5. It has been several years since Jeff Hawkins and Numenta published the… Read More » New HTM paper – “Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex”
Authors: D Rawlinson and G Kowadlo This is the first of three articles detailing our latest thinking on general intelligence: a one-size-fits-all algorithm that, like people, is able to learn how to function effectively in almost any environment. This differs from most Artificial Intelligence (AI), which is designed by people… Read More » How to build a General Intelligence: What we think we already know
An interesting article by Gerard Rinkus comparing the qualities of sparse distributed representations and quantum computing. In effect, he argues that because distributed representations can simultaneously represent multiple states, you get an effect similar to a quantum superposition. The article was originally titled “sparse distributed coding via quantum computing”, but… Read More » “Quantum computing” via Sparse distributed coding?
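A toy illustration of the superposition argument: a single sparse binary vector formed as the union of several stored codes still matches each of them strongly by bit overlap, while unrelated codes barely match at all. The encoding below is my own minimal example, not Rinkus's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 20  # 1000 bits, only 20 active per code: very sparse

def random_sdr():
    """A random sparse binary code with exactly k active bits."""
    v = np.zeros(n, dtype=bool)
    v[rng.choice(n, k, replace=False)] = True
    return v

codes = {name: random_sdr() for name in "ABCD"}

# One state vector that is the union (superposition) of codes A and B.
state = codes["A"] | codes["B"]

# Overlap (shared active bits) with each stored code: A and B match
# with all k of their bits; C and D overlap only by chance (~ k*|state|/n).
overlaps = {name: int(np.sum(state & c)) for name, c in codes.items()}
```

Because the codes are sparse, the union stays compact and the "contained" codes remain cleanly recoverable by overlap, which is the sense in which one sparse vector represents multiple states at once.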