Microsoft CTO Kevin Scott says that AI large language models are still scaling exponentially. It takes 6-24 months to reach each successive scaling node, but the scaling is still working.
AI inference is becoming a larger workload than AI training.
The data used for AI training is different from the reference data an AI consults to generate specific answers. There is a lot of work going into determining which training data is most effective, and research in this area guides what data should be synthesized for future training runs. Quality training data is an important factor.
There will be business models built around access to the best-quality proprietary data.
Companies that are further along in their AI journey find that many tasks reach an 80-20 or even 98-2 point: it is difficult to close the last gap to complete automation or complete trust.
Kevin Scott points out that supplementing the current frontier model is not the only way to close that gap. The frontier models keep getting better and less fragile. Do not get trapped locking into the current model; architect your systems so that you can swap in the next, better model.
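One common way to follow that advice is to put an abstraction layer between application code and any one model. This is a minimal sketch, assuming hypothetical model names and a stubbed-out completion interface (none of this is a real vendor API); the point is that upgrading to a better model becomes a configuration change rather than a rewrite.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelClient:
    """Thin wrapper so application code never depends on one provider."""
    name: str
    complete: Callable[[str], str]  # prompt -> completion

def make_registry() -> dict[str, ModelClient]:
    # Stub "models" standing in for real providers. When a better frontier
    # model ships, only this registry changes, not the calling code.
    return {
        "frontier-v1": ModelClient("frontier-v1", lambda p: f"[v1] answer to: {p}"),
        "frontier-v2": ModelClient("frontier-v2", lambda p: f"[v2] answer to: {p}"),
    }

def answer(registry: dict[str, ModelClient], model_name: str, prompt: str) -> str:
    # The model is selected by name (e.g. from a config file), so swapping
    # models requires no changes to the application logic.
    return registry[model_name].complete(prompt)

registry = make_registry()
print(answer(registry, "frontier-v2", "Summarize the report"))
```

The design choice here is simply indirection: callers see one stable `answer` function, and the binding to a specific model lives in one replaceable place.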

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
I am fast coming to the conclusion that instead of one large multi-modal AI, the way to go is several smaller LLMs trained on different data that then cross-collaborate.
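The idea above can be sketched as a simple router-plus-specialists loop. This is a hypothetical illustration, not a real system: the specialist names, the keyword routing, and the merge step are all assumptions standing in for smaller LLMs trained on different data.

```python
# Stub "specialist models", each standing in for a smaller LLM trained
# on its own domain data. All names and answers are illustrative.
SPECIALISTS = {
    "math": lambda q: "math-answer",
    "code": lambda q: "code-answer",
    "general": lambda q: "general-answer",
}

def route(question: str) -> list[str]:
    # Naive keyword routing; a real system might use a classifier model
    # to decide which specialists should see the question.
    picks = [name for name in ("math", "code") if name in question.lower()]
    return picks or ["general"]

def collaborate(question: str) -> str:
    # Each selected specialist answers, then the answers are merged.
    # A real system might have the models critique each other's drafts
    # instead of a simple join.
    answers = [SPECIALISTS[name](question) for name in route(question)]
    return " | ".join(answers)

print(collaborate("Please review this code and the math"))
```

Even this toy version shows the appeal: each specialist stays small and focused, and the cross-collaboration happens in a cheap coordination layer rather than inside one giant model.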
I am also following types of AI that are not LLMs or chatbots. AIGO is an example of an advanced AI that is not LLM based. It is more of a personal assistant than an answer machine.