As AI/ML researchers, we have naturally pondered the risks of AI. We have even written about it. But what might surprise you is that the risks that keep AI/ML folk awake at night aren’t the ones you hear about in the media. We’re not worried about runaway “paperclip maximizers” or “Skynet”-style machine… Read More » AI is already harming us – but not the way you think
Adaptive optimization methods, such as Adam and Adagrad, maintain statistics over time about the variables and gradients (e.g. moments) that affect the learning rate. These statistics won’t be very accurate when working with sparse tensors, where most elements are zero or near zero. We investigated the effects… Read More » Optimization using Adam on Sparse Tensors
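To make the issue concrete, here is a minimal sketch of the Adam update rule (a hypothetical, standalone NumPy version, not the TensorFlow optimizer we actually tested). Notice that the moment estimates m and v decay on every step for every element, even those whose gradient is zero, which is exactly what skews the statistics on sparse tensors.

```python
import numpy as np

# Minimal sketch of an Adam step (illustrative only, not the TF optimizer).
def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g          # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * g ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# A mostly-zero gradient still updates m and v for every element:
w, m, v = np.zeros(5), np.zeros(5), np.zeros(5)
g = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # sparse gradient
w, m, v = adam_step(w, g, m, v, t=1)
```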
We recently talked about Capsules networks and equivariance. NB: If you’re not familiar with Capsules networks, read this first. Our primary objective with Capsules networks is to exploit their enhanced generalization abilities. However, what we’ve found instead raises new questions about how generalization can be measured and whether Capsules networks are… Read More » Predictive Capsules Networks – Research update
We recently published two new ML/neuroscience research projects as part of our Request for Research (RFR) series with WBAI. They’re fascinating topics that arose through our relationship with our advisor Elkhonon Goldberg of the Luria Neuroscience Institute.
It’s such a joy to be able to test an idea: to go straight to the idea without wrestling with the tools. We recently developed an experimental setup which, so far, looks like it will do just that. I’m excited about it and hope it can help you too, so here it is. We’ll go through why we created another framework, and how each module in the experimental setup works.
We are exploring the nature of equivariance, a concept that is now closely associated with the capsules network architecture (see the key papers by Sabour et al. and Hinton et al.). Machine learning representations that capture equivariance must learn the way that patterns in the input vary together, in addition to statistical clusters in… Read More » Understanding Equivariance
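As a toy illustration of the related (and simpler) property of translation equivariance, the sketch below checks that a convolution commutes with shifting its input: shifting first and then convolving gives the same feature map as convolving first and then shifting. This is an illustrative NumPy/SciPy example only, not code from our experiments.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernel = rng.random((3, 3))

def shift_down(x, k=1):
    """Shift an array down by k rows, zero-filling the top."""
    out = np.zeros_like(x)
    out[k:] = x[:-k]
    return out

a = shift_down(convolve2d(image, kernel, mode="same"))   # convolve, then shift
b = convolve2d(shift_down(image), kernel, mode="same")   # shift, then convolve

# Equal away from the boundary rows affected by zero padding:
print(np.allclose(a[2:-2], b[2:-2]))  # True
```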
Over the last few years, there have been several breakthroughs and exciting new research directions in Reinforcement Learning, Hippocampus-Inspired Architectures, Attention and Few-Shot Learning. There has been a move towards multi-component, heterogeneous, stateful architectures, many guided by ideas from the cognitive sciences. Google DeepMind and Google Brain are leading the… Read More » Exciting New Directions in ML/AI
This is the second part of our comparison between convolutional competitive learning and convolutional or fully-connected sparse autoencoders. To understand our motivation for this comparison, have a look at the first article. We decided to compare two specific algorithms that tick most of the boxes on our list of required features: K-Sparse autoencoders, and… Read More » Convolutional Competitive Learning vs. Sparse Autoencoders (2/2)
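For readers unfamiliar with K-Sparse autoencoders (Makhzani & Frey), the core trick is easy to sketch: after encoding, keep only the k largest hidden activations per example and zero out the rest, then reconstruct from that sparse code. The toy NumPy version below is illustrative only, not the implementation we benchmarked.

```python
import numpy as np

def k_sparse(h, k):
    """Zero all but the k largest activations in each row of h."""
    idx = np.argsort(h, axis=1)[:, :-k]          # indices of the smaller units
    out = h.copy()
    np.put_along_axis(out, idx, 0.0, axis=1)
    return out

rng = np.random.default_rng(0)
x = rng.random((4, 16))                          # batch of inputs
W = rng.standard_normal((16, 32)) * 0.1          # encoder weights
h = k_sparse(x @ W, k=5)                         # sparse hidden code
x_hat = h @ W.T                                  # tied-weight reconstruction
```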
Competitive learning is a branch of unsupervised learning that was popular a long, long time ago, in the 1990s. Older readers may remember the days before widespread use of GSM mobile phones and before Google won the search-engine wars! Although competitive learning is now rarely used, it is worth… Read More » Convolutional Competitive Learning vs. Sparse Autoencoders (1/2)
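If the term is new to you, the winner-take-all update at the heart of classic competitive learning fits in a few lines: the unit whose weight vector is closest to the input wins, and only the winner moves toward the input. This is a hedged, illustrative NumPy sketch of the basic (non-convolutional) algorithm, not the code used in this comparison.

```python
import numpy as np

def competitive_step(W, x, lr=0.1):
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # closest unit wins
    W[winner] += lr * (x - W[winner])                  # move winner toward x
    return W

rng = np.random.default_rng(0)
W = rng.random((4, 2))            # 4 competing units, 2-D inputs
for x in rng.random((100, 2)):    # unlabelled data stream
    W = competitive_step(W, x)
# W's rows now approximate cluster centres of the input distribution.
```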
Eager Execution is an imperative, object-oriented and more Pythonic way of using TensorFlow. It provides a flexible machine learning environment for research and experimentation in which operations are evaluated immediately and return concrete values, instead of constructing a computational graph to be executed later.
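A quick taste of what that means in practice (in TensorFlow 2.x eager execution is the default; in 1.x it had to be switched on with tf.enable_eager_execution()):

```python
import tensorflow as tf

# Operations run immediately and return concrete values you can
# inspect with ordinary Python; no session or graph is needed.
x = tf.constant([[2.0, 3.0]])
y = x * x + 1.0
print(y.numpy())   # -> [[ 5. 10.]]
```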