Hyper Efficient Analog AI Chip

Imec’s Analog in-Memory Computing (AiMC) architecture is a new AI chip design optimized to perform deep neural network calculations in the analog domain, directly inside the memory array. Achieving record-high energy efficiency of up to 2,900 TOPS/W, the accelerator is a key enabler of inference at the edge on low-power devices. The privacy, security, and latency benefits of this new technology will have an impact on AI applications in a wide range of edge devices, from smart speakers to self-driving vehicles.

This technology should be able to evolve toward 10,000 TOPS/W.
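For a sense of scale, a TOPS/W figure converts directly into energy per operation. Here is a minimal back-of-the-envelope sketch using only the headline numbers above:

```python
def joules_per_op(tops_per_watt: float) -> float:
    # 1 TOPS/W = 1e12 operations per joule, so energy per op is the reciprocal.
    return 1.0 / (tops_per_watt * 1e12)

for efficiency in (2_900, 10_000):  # today's record and the projected target
    print(f"{efficiency:>6} TOPS/W -> {joules_per_op(efficiency) * 1e15:.2f} fJ per operation")
```

At 2,900 TOPS/W each operation costs roughly 0.34 fJ; the 10,000 TOPS/W target would push that to 0.1 fJ.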

19 thoughts on “Hyper Efficient Analog AI Chip”

  1. To put this in perspective: if you read the IEEE article in my other post, this is very efficient energy-wise but not so great performance-wise in chip area and total operations per second. It’s kind of an “add-on” you’d want to make to your system’s memory. Compared to a self-driving computer, say, you’d get a 10% boost in total compute performance at essentially “free” energy cost, as opposed to the 1,500-watt peak that computer might normally draw (rough numbers sketched below). Maybe not worth it, but maybe it is a breakthrough, especially if you can keep your TOPS needs low on average so they only spike above that “free” baseline, say, 30% of the time. For a solar-powered IoT device, though, it may be the only way to implement anything of use.
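    A rough sketch of that tradeoff, using the comment’s own hypothetical numbers (the 300 TOPS digital baseline is an assumption for illustration, not a spec from the article):

    ```python
    # Hypotheticals: a 1,500 W peak self-driving computer plus an AiMC
    # "add-on" contributing a 10% throughput boost at 2,900 TOPS/W.
    baseline_watts = 1_500.0
    baseline_tops = 300.0              # assumed digital throughput
    aimc_tops = 0.10 * baseline_tops   # the "10% boost" from the comment
    aimc_watts = aimc_tops / 2_900.0   # power the add-on needs at 2,900 TOPS/W

    print(f"add-on: {aimc_tops:.0f} TOPS for ~{aimc_watts * 1e3:.0f} mW "
          f"next to a {baseline_watts:.0f} W digital peak")
    ```

    Thirty extra TOPS for about 10 mW really is “free” next to a 1,500 W budget.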

  2. Now I’m interested in someone doing a remake of Jurassic Park, but on another planet.

    The giant cephalopods ruled the planet, until they were wiped out by what scientists believe was a huge solar flare.

    But trapped in the ice core are some frozen hypersquids, and we can extract their QNA genetic material, and that of the creatures they had recently fed on…

  3. You are quite right. They talk about 1 pA per cell and sub-1 W. Even with a billion cells, that would still just make 1 mW… which would make it very expensive… 3 TOPS per chip… not very impressive from a cost perspective…
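    Spelling out that arithmetic (a sketch of the comment’s numbers; the ~1 V across the array is my assumption, not stated in the comment):

    ```python
    cells = 1e9                    # "a billion cells"
    current_a = cells * 1e-12     # 1 pA per cell -> 1 mA total
    power_w = current_a * 1.0     # assume ~1 V across the array -> ~1 mW
    tops = power_w * 2_900        # at 2,900 TOPS/W

    print(f"~{current_a * 1e3:.0f} mA, ~{power_w * 1e3:.0f} mW, ~{tops:.1f} TOPS per chip")
    ```

    Roughly 3 TOPS per chip, which is where the cost concern comes from.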

  4. Original samples of the novel coronavirus out of Wuhan, China, were a variation that scientists now call the “D” clade. Before March 1, more than 90% of viral samples taken from patients were from this D variation. Over the course of March, G began to predominate. This mutation is caused by the swap of an adenine (A) nucleotide for a guanine (G) nucleotide at a particular spot in the coronavirus genome. It always appears alongside three other mutations that similarly swap one building block of RNA for another. (The letters in RNA help code for the proteins the virus makes once inside a cell.) The G variant represented 67% of global samples taken in March, and 78% of those taken between April 1 and May 18. During this time, the locus of the outbreaks shifted away from China into Europe and the United States.

  5. China might have been wrong about COVID-19… they got lucky and contained the virus before it mutated into a more transmissible form in the United States… therefore, the newer COVID-19 mutation could come back around to China and be unstoppable the next time around. It seems obvious that where the virus first jumps from animal to human is where it is weakest, not yet evolved for a maximum infection rate because it hasn’t fully adapted to the human genome… of course, viral evolution is much faster than animal evolution, with the virus miscopying itself and creating a new strain roughly every 6 months to a year…

  6. That’s really a bunch of unknown factors merged into one.

    And those are more pertinent to us, given we already have a technological civilization.

  7. Yeah, you don’t need to break out the exact reasons why a civilization collapsed for the Drake Equation. All you need to do is account for collapse, regardless of the reason.

  8. I believe that the unspoken truth of the comedy movie Idiocracy is that humans came to rely on bad, poorly designed, and poorly implemented AI.

  9. Out of somewhat dark humor I’m curious as to the number of civilizations that have fallen due to attempts to genetically re-engineer dinosaurs, lol.

  10. There are some theories that the human brain may well operate using quantum states at a molecular/atomic level.

    Matthew Fisher, a physicist at the University of California, Santa Barbara, raised eyebrows late last year when he published a paper in Annals of Physics proposing that the nuclear spins of phosphorus atoms could serve as rudimentary “qubits” in the brain — which would essentially enable the brain to function like a quantum computer.

  11. They will not bring back the raptors. The prehistoric woolly walrus and mammoths always appeared benign in the Saturday morning TV shows I saw as a kid, so don’t worry.

    Everyone should keep in mind that things such as the Drake equation, the Fermi paradox, etc. are all arguments from ignorance: they’re the easiest way to come up with a provisional outlook on something when you’re too lazy or cheap to go take a proper look.

  12. If I read the announcement correctly, they are still using digital SRAM for state storage and just doing their vector multiplies in analog? Not a lot of detail.
    In fact, they don’t really say how many watts this test chip dissipates – it may have only a tiny amount of hardware that, if SCALED to 1 W, would be able to do 2,900 TOPS.
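    As a mental model of that split (a minimal sketch, not Imec’s actual design: digitally stored low-precision weights, an analog multiply-accumulate modeled as noise on the column sums, then an ADC step):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Weights held digitally (ternary here -- the precision is an assumption).
    weights = rng.integers(-1, 2, size=(64, 128))  # 64 outputs x 128 inputs
    x = rng.integers(0, 2, size=128)               # binary input activations

    ideal = weights @ x                                       # what a digital MAC computes
    analog = ideal + rng.normal(scale=0.5, size=ideal.shape)  # analog summation noise
    digitized = np.round(analog).astype(int)                  # per-column ADC

    print("max |ADC error| vs digital:", np.max(np.abs(digitized - ideal)))
    ```

    In this picture the SRAM holds exact digital weights and only the summation and read-out are analog, which matches the commenter’s reading of the announcement.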

  13. The Drake equation DOES have a collapse-for-unspecified-reasons term.
    That covers bad AI, nuclear war, inability to wear masks during plagues, social destruction by social media, genetic re-engineering of dinosaurs, zombie apocalypse… everything.
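    For reference, the standard form of the equation is

    ```latex
    N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
    ```

    and every collapse mechanism, whatever the cause, just shortens L, the average lifetime of a detectable civilization.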

  14. I have often wondered if an AI term should be added to the Drake Equation. How many intelligent civilizations have been reduced to nothing due to poor AI integration?

  15. I’d say this is no less reproducible than training a digital NN; only the noise is probably higher. The ADCs shouldn’t be a big deal, since generally you have an output layer that just weighs the various previous layers and picks one, so it’s more of a comparator than an ADC.
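    A toy illustration of the comparator point (the scores and noise level are made up, nothing from the article): if the last layer only has to pick the largest of a few scores, modest analog noise almost never flips the decision.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    scores = np.array([0.1, 0.7, 3.2, 0.4])  # hypothetical final-layer scores

    # Re-run the same inference many times with additive analog noise and count
    # how often the winning class (the comparator's decision) changes.
    flips = sum(
        np.argmax(scores + rng.normal(scale=0.3, size=scores.shape)) != np.argmax(scores)
        for _ in range(10_000)
    )
    print(f"decision flipped in {flips} of 10,000 noisy runs")
    ```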

  16. A lot of question marks. Does this mean that inference results are non-reproducible? And if you do training on a problem, do you need a lot of ADCs to get the values “out” of the system in a sensible form? Is the training reproducible?

    Having said that, 3,000 TOPS per W is really good…
