Artificial-intelligence-related sparse coding on D-Wave quantum computer hardware

Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.

Sparse coding is a very interesting idea that D-Wave Systems has been experimenting with on their adiabatic quantum computer. Sparse coding is a way to find ‘maximally repeating patterns’ in data and use them as a basis for representing that data.
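To make the idea concrete, here is a minimal sketch of the inference half of sparse coding: given a fixed dictionary of patterns, find a sparse set of coefficients that reconstructs an input. This uses ISTA (iterative soft-thresholding) to solve the L1-regularized least squares problem mentioned in the abstract above; the dictionary size, data, and solver here are illustrative assumptions, not D-Wave's implementation.

```python
import numpy as np

def ista(D, x, lam=0.01, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA
    (iterative soft-thresholding): a gradient step on the quadratic
    term followed by a shrinkage step that drives coefficients to zero."""
    L = np.linalg.norm(D, ord=2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the quadratic term
        z = a - g / L                      # plain gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

# Toy example: x is an exact combination of 2 of 8 dictionary atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
a_true = np.zeros(8)
a_true[[1, 5]] = [1.0, -0.8]
x = D @ a_true
a_hat = ista(D, x)                         # recovers a sparse code close to a_true
```

Full sparse coding alternates a step like this (fit sparse codes with the dictionary fixed) with a dictionary-update step (fit the dictionary with the codes fixed).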

Sparse coding is probably related to how human perception and cognition function.

Unsupervised Feature Learning and Deep Learning

Sparse coding requires data. You can think of ‘data’ as a (usually large) set of objects, where the objects each can be represented by a list of real numbers.
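For example, a grayscale image is one such object: its pixel intensities, read off row by row, form the list of real numbers. A tiny illustration (the image values here are made up):

```python
import numpy as np

# A tiny 4x4 "grayscale image": pixel intensities in [0, 1].
image = np.array([
    [0.0, 0.1, 0.1, 0.0],
    [0.2, 0.9, 0.8, 0.1],
    [0.2, 0.8, 0.9, 0.1],
    [0.0, 0.1, 0.1, 0.0],
])

# One data object = one flat list of real numbers (here, 16 of them).
vector = image.flatten()
print(vector.shape)   # (16,)
```

A dataset is then just a large collection of such vectors, one per object.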

D-Wave has worked out how to run this on a problem with 60,000 images, condensing each image to about 30 bits of information. A follow-up article from D-Wave will describe how to do more with their 512-qubit system on this larger problem.
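The article does not say how each image gets condensed to roughly 30 bits. One plausible reading is a binary sparse code: learn a dictionary of about 30 patterns, and represent each image by which patterns are switched on. The sketch below uses a greedy, matching-pursuit-style encoder to pick the active patterns; the dictionary size, sparsity level, and encoder are all illustrative assumptions, not D-Wave's quantum method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 30 dictionary atoms over 28x28 images, so each
# image's code is a 30-bit vector (atom j is either "on" or "off").
n_pixels, n_atoms = 28 * 28, 30
D = rng.standard_normal((n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms

def encode_greedy(x, D, k=4):
    """Greedily switch on the k atoms most correlated with the residual
    (an illustrative classical encoder, not D-Wave's quantum solver)."""
    code = np.zeros(D.shape[1], dtype=bool)
    r = x.copy()
    for _ in range(k):
        corr = np.abs(D.T @ r)
        corr[code] = -1.0                  # never re-pick an atom
        j = np.argmax(corr)
        code[j] = True
        r = r - (D[:, j] @ r) * D[:, j]    # remove that atom's contribution
    return code

x = rng.standard_normal(n_pixels)          # stand-in for one image vector
code = encode_greedy(x, D)
print(code.sum())                          # 4 active atoms out of 30
```

Storing only the on/off pattern is what makes the representation so compact: 30 booleans per image instead of 784 real-valued pixels.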
