We’ve just uploaded a spin-off research paper to arXiv titled “Sparse Unsupervised Capsules Generalize Better”. So what’s it all about? Capsule Networks You may have heard of Capsule Networks already – if not, have a read of one of these blog articles (here, here, here, or here (EM routing)), watch this video,… Read More »Sparse Unsupervised Capsules Generalize Better
Datasets are an integral part of an ML engineer’s toolkit. We recently compiled useful information about a range of these well-known datasets. It’s all in one place, and hopefully useful to others as well.
ML Today Today’s Machine Learning has demonstrated unprecedented performance in what seems like every application thrown at it. Almost all of this success has been based on advanced memory systems that can learn to recognise an input based on a large number of training examples. This is the equivalent of memory… Read More »The case for Episodic Memory in Machine Learning
2018 is a fresh new year and an exciting milestone for Project AGI. Dave and I have been discussing, dreaming about, playing around with, and striving towards general-purpose AI for over 6 years. It started with musings on the algorithmic underpinnings of consciousness and the nature of intelligence. We quickly… Read More »2018 a Milestone for Project AGI
There are plenty of established machine learning frameworks out there, and new frameworks are popping up frequently to address specific niches. We were interested in examining whether one of these frameworks fits into our workflow. I surveyed the most popular frameworks and aim to provide a helpful comparative analysis.
SVHN is a relatively new and popular dataset, a natural next step from MNIST and a complement to other popular computer vision datasets. This is an overview of the common preprocessing techniques used and the best performance benchmarks, as well as a look at the state-of-the-art neural network architectures used.
New approaches to Deep Networks – Capsules (Hinton), HTM (Numenta), Sparsey (Neurithmic Systems) and RCN (Vicarious)
Reproduced left to right from [8,10,1] Within a five-day span in October, four papers came out that take a significantly different approach to hierarchical AI networks. They are all inspired by biological principles to varying degrees. It’s exciting to see different ways of thinking. Particularly at a time… Read More »New approaches to Deep Networks – Capsules (Hinton), HTM (Numenta), Sparsey (Neurithmic Systems) and RCN (Vicarious)
Today’s blog post is about Reinforcement Learning (RL), a concept that is very relevant to Artificial General Intelligence. The goal of RL is to create an agent that can learn to behave optimally in an environment by observing the consequences – rewards – of its own actions. By incorporating deep… Read More »New Ideas in Reinforcement Learning
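The learning loop described above – an agent acting, observing rewards, and improving its behaviour – can be sketched with a minimal tabular Q-learning example. This is not from the post itself, just an illustrative toy: the environment (a short chain where only the rightmost state gives reward), the hyperparameters, and the function name are all assumptions chosen for brevity.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain environment.

    The agent starts in state 0; action 0 moves left, action 1 moves
    right, and reaching the last state ends the episode with reward 1.
    """
    rng = random.Random(seed)
    # Q-table: one (left, right) value pair per state.
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit the current estimate,
            # occasionally explore a random action.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: move the estimate towards the observed
            # reward plus the discounted value of the next state.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_chain()
```

After training, the "right" action should have the higher value in every non-terminal state, i.e. the agent has learned the optimal policy purely from reward feedback. Deep RL replaces the table `q` with a neural network, but the update rule is the same in spirit.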
This month’s reading list has two parts: a non-Reinforcement Learning list, and a Reinforcement Learning list. Since our next blog post will be on Reinforcement Learning, readers might like to refer to our RL reading list separately. Non-Reinforcement Learning reading list A Framework for searching for General Artificial Intelligence Authors:… Read More »Reading list – October 2017