AI Inception for Lower Costs and Compressed Models

The Stanford Alpaca project demonstrated using a larger, more expensive AI model to train a smaller, cheaper one. The cheaper model performed as well as, and in some cases better than, the more expensive model. The expensive model generated vast amounts of high-quality training data to improve the smaller model: Alpaca fine-tuned Meta's 7-billion-parameter LLaMA on roughly 52,000 instruction-following examples generated by OpenAI's text-davinci-003, for a few hundred dollars of compute. This lowered the cost of training by about one thousand times. This could be a form of AI compression. A smaller AI model could, for example, use twenty times fewer parameters and fit onto far cheaper hardware.
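Roughly, the recipe looks like the sketch below: the expensive teacher model bootstraps new instruction/response pairs from a handful of human-written seed tasks, and the small student model is fine-tuned on the result. This is only a minimal illustration of the idea; `query_teacher`, `finetune`, and `SEED_TASKS` are hypothetical placeholders standing in for a paid LLM API call, a supervised training loop, and the seed dataset.

```python
# Sketch of Alpaca-style distillation: a large "teacher" model generates
# training examples and a small "student" model is fine-tuned on them.
import json
import random

# Hypothetical: in practice this is ~175 human-written seed examples.
SEED_TASKS = [
    {"instruction": "Explain photosynthesis in one sentence.",
     "output": "Plants convert sunlight, water, and CO2 into sugar and oxygen."},
]

def query_teacher(prompt: str) -> str:
    """Hypothetical call to the large, expensive teacher model
    (Alpaca used OpenAI's text-davinci-003)."""
    raise NotImplementedError

def generate_training_data(n_examples: int) -> list[dict]:
    """Self-instruct-style bootstrapping: show the teacher a few seed
    tasks and ask it to produce new instruction/response pairs."""
    data = []
    while len(data) < n_examples:
        shots = random.sample(SEED_TASKS, k=min(3, len(SEED_TASKS)))
        prompt = (
            "Write a new instruction and a high-quality response, as JSON "
            "with keys 'instruction' and 'output'.\n"
            + "\n".join(json.dumps(s) for s in shots)
        )
        try:
            data.append(json.loads(query_teacher(prompt)))
        except json.JSONDecodeError:
            continue  # discard malformed teacher generations
    return data

def finetune(student_checkpoint: str, dataset: list[dict]) -> None:
    """Hypothetical supervised fine-tuning of the small student model
    (Alpaca fine-tuned LLaMA 7B on ~52,000 generated examples)."""
    raise NotImplementedError

if __name__ == "__main__":
    dataset = generate_training_data(52_000)
    finetune("llama-7b", dataset)
```

The hardware win follows from simple arithmetic: at 16-bit precision a 7-billion-parameter model needs roughly 14 GB just for weights, versus roughly 350 GB for a 175-billion-parameter model, which is what lets the distilled student run on a single consumer-grade GPU.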

This might allow superior Tesla FSD (Full Self-Driving) performance on Hardware 3, delaying the need for customers to upgrade to costlier Hardware 4 or Hardware 5 to achieve acceptable robotaxi performance.

3 thoughts on “AI Inception for Lower Costs and Compressed Models”

  1. AI running on your own personal hardware is the only hope of retaining control over your personal information. And it's not a strong hope – too many powerful companies want it, and there are too many ways that it can leak and be gathered or simply inferred by their AIs. Still, demonstrating that useful, if not terribly 'knowledgeable', AI can run on cheap hardware is a start.

    • Control over personal information? Sure there will be control, just not yours. The recent end-run legislative attacks on end-to-end messaging encryption lay the foundation for on-device content scanning. Why centralize an information control system when you can distribute self-censorship, one device at a time? Opaque AIs with unknown training sets running on your smartphone are the endgame. They'll push it as enhanced CSAM blockers because "think of the children!" or such drivel. Apple once tried to push on-device CSAM scanning using perceptual hash technology, because they saw the writing on the wall and wanted to be an early collaborator earning favorable treatment and profits, rather than a later victim torn apart by the government.
