Tesla Virtual AI Cloud Computer

Elon Musk indicates that the FSD computers in all Tesla cars could be used to run AI models. This would let Tesla car owners share in the revenue of a Virtual Cloud Computer (VCC). The VCC would be a compute version of the Powerwall Virtual Power Plant (VPP). Through California's Emergency Load Reduction Program (ELRP), homeowners receive $2.00 for every additional kWh their Powerwall delivers during an event.

In Texas, Tesla Electric members who meet the eligibility criteria automatically become part of the Virtual Power Plant. While participating, your Powerwall will be dispatched when the grid needs support. For your Powerwall's contribution, you earn $10 per Powerwall on your monthly electric bill, in addition to the monthly Sellback Credits earned for energy you send back to the grid. You do not have to compromise your energy security to participate.
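To make the two payout structures concrete, here is a minimal sketch of their arithmetic. The event count and kWh-per-event figures are illustrative assumptions, not published numbers.

```python
# Rough sketch of the two Powerwall payout structures described above.
# ELRP event counts and kWh-per-event are illustrative assumptions.

def elrp_earnings(events_per_year: int, kwh_per_event: float) -> float:
    """California ELRP: $2.00 per additional kWh delivered during events."""
    return 2.00 * events_per_year * kwh_per_event

def texas_vpp_credit(powerwalls: int) -> float:
    """Tesla Electric VPP in Texas: $10 per Powerwall per month."""
    return 10.00 * 12 * powerwalls

# Assuming 20 events per year at 5 kWh each:
print(elrp_earnings(20, 5.0))   # $200.00 per year
print(texas_vpp_credit(2))      # $240.00 per year for two Powerwalls
```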

Cern Basher calculates that Tesla HW3 car owners could make about $400 per year participating in a VCC, and about $1,000 per year with HW4.

Making $1,000 per year would reduce the cost of ownership by about 10-12%.

I think the value could be even higher if xAI is more successful and higher-value AI workloads run on the VCC.

Tesla currently has a global fleet of about 5 million cars.
Ten million cars, at roughly $400 each, could make about $4 billion per year with the VCC.
A hundred million cars by 2030 (most with more powerful and more profitable HW4, HW5 and HW6 chips) could generate about $40-100 billion of revenue each year.
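A minimal sketch of that fleet arithmetic, using the per-car estimates above as the low ($400, HW3) and high ($1,000, HW4 and beyond) ends:

```python
# Fleet-revenue arithmetic for the VCC scenarios above.
# Per-car figures are the estimates cited in the text.

PER_CAR_USD = {"HW3": 400, "HW4+": 1000}

def fleet_revenue(cars: float, per_car_usd: float) -> float:
    """Annual revenue if every car earns per_car_usd per year."""
    return cars * per_car_usd

for fleet in (10e6, 100e6):
    low = fleet_revenue(fleet, PER_CAR_USD["HW3"])
    high = fleet_revenue(fleet, PER_CAR_USD["HW4+"])
    print(f"{fleet/1e6:.0f}M cars: ${low/1e9:.0f}B-${high/1e9:.0f}B per year")

# 10M cars:  $4B-$10B per year
# 100M cars: $40B-$100B per year
```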

Tesla Making Profitable Assets for Customers

Powerwalls and solar panels for VPPs (Virtual Power Plants)

Megapacks generating profits for utilities with Autobidder

Cars' FSD chips powering a VCC (Virtual Cloud Computer)

Teslabots will provide profitable work and will also earn VCC and VPP money.

3 thoughts on “Tesla Virtual AI Cloud Computer”

  1. AI processing isn't just crunching numbers like SETI or crypto mining; it also involves shunting huge amounts of data around. A Starlink connection wouldn't be up to the job.

    Each car, or node, would only be able to attack its own very small and discrete part of the problem. That's the main reason distributed computing has never taken off; it's only for limited use cases, and certainly not AI language models.

    • Inference is a pre-trained NN executing a prompt. With multimodal models, that might include sound, images, text files or relatively small data files. The only data that has to be uploaded or downloaded is the prompt and the response to it. LTE is easily capable of that, and every Tesla already has it. Gen 2 Starlink would let every Tesla use its LTE modem to connect anywhere with a view of the sky. Teslas are mobile devices with 4G but much more powerful inference hardware than most other devices.
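    A minimal sketch of that bandwidth claim, with assumed figures (roughly 4 bytes per token of text and a conservative 20 Mbps LTE link):

    ```python
    # How long a text prompt plus response takes over LTE.
    # Bytes-per-token and link speed are rough assumptions.

    BYTES_PER_TOKEN = 4        # ~4 bytes of UTF-8 text per token (assumed)
    LTE_MBPS = 20.0            # conservative real-world LTE throughput (assumed)

    def transfer_time_ms(tokens: int, mbps: float = LTE_MBPS) -> float:
        bits = tokens * BYTES_PER_TOKEN * 8
        return bits / (mbps * 1e6) * 1000

    # A 2,000-token prompt plus a 1,000-token response:
    print(f"{transfer_time_ms(3000):.1f} ms")  # ~4.8 ms on the wire
    ```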

  2. FSD car computers are optimized for inference, not training.
    Training is extremely compute-intensive, and that is where the market is.

    There are several distributed platforms, often mixed with blockchains, that provide that service. You buy some tokens (crypto) and state the parameters for the training job you want processed. The system allocates suitable hardware among thousands of distributed providers, who get paid in tokens. The job is sent out, processed and sent back. The concept is tried and tested, and works a bit like crypto mining, except the compute power is hopefully used for something more productive.
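    Purely as an illustration of that flow, here is a sketch of such a marketplace's client side. All names (MarketClient, TrainingJob, submit_job) are hypothetical, not any real platform's API:

    ```python
    # Hypothetical token-based training marketplace, client side only.
    # Illustrates the buy-tokens / submit-job / escrow flow described above.

    from dataclasses import dataclass

    @dataclass
    class TrainingJob:
        model_spec: str        # what to train, e.g. "resnet50-finetune"
        dataset_uri: str       # where the training data lives
        token_budget: int      # tokens the buyer will spend on this job

    class MarketClient:
        def __init__(self, wallet_tokens: int):
            self.wallet_tokens = wallet_tokens

        def submit_job(self, job: TrainingJob) -> str:
            """Escrow tokens; the network matches providers and returns a job id."""
            if job.token_budget > self.wallet_tokens:
                raise ValueError("insufficient tokens")
            self.wallet_tokens -= job.token_budget  # held until the job completes
            return "job-0001"  # placeholder id from the matching layer

    client = MarketClient(wallet_tokens=500)
    job_id = client.submit_job(TrainingJob("resnet50-finetune", "s3://bucket/data", 300))
    print(job_id, client.wallet_tokens)  # job-0001 200
    ```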

    However, the FSD computers can't handle a training workload, for many reasons.
    I doubt there is a market for inference: the whole point of AI inference is that it's lightweight and energy-efficient (cheap) to run once a model is trained (a rough energy estimate follows at the end of this comment).

    Just like us humans: we spend decades learning, training and educating ourselves. Once done, we only need some food and water once in a while to solve almost any problem quickly.
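    A rough sketch of how cheap inference is to run on car hardware. The ~100 W draw for an FSD-class computer and the $0.15/kWh electricity price are assumptions for illustration:

    ```python
    # Annual electricity cost of running an FSD-class computer 24/7
    # as an inference node. Power draw and price are assumptions.

    POWER_WATTS = 100.0        # assumed draw of an FSD-class board under load
    USD_PER_KWH = 0.15         # assumed residential electricity price

    kwh_per_year = POWER_WATTS / 1000 * 24 * 365
    cost = kwh_per_year * USD_PER_KWH
    print(f"~{kwh_per_year:.0f} kWh/yr -> ~${cost:.0f}/yr in electricity")
    # ~876 kWh/yr -> ~$131/yr, well under the ~$400/yr HW3 revenue estimate above
    ```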
