Meta is ranked 23rd on the LLM leaderboards and is behind several Chinese open-source LLMs.
Meta Chief AI Scientist Yann LeCun says AGI cannot be reached by scaling up LLMs: bigger models and more data alone cannot cross the gap to true intelligence. He explains what he believes will move the field forward instead.
Meta's LLMs are getting beaten by DeepSeek, Hunyuan, and Qwen (Alibaba), which are spending less money. Meta's Llama model ranks 23rd on the list of LLM models, behind several Google Gemini models as well as OpenAI, Anthropic, xAI, DeepSeek, Qwen, and Hunyuan models.
Meta is shaking up its AI team amid delays and problems with Meta AI. CNBC's Deirdre Bosa discussed reports of Meta's AI team restructuring: the group is splitting into AI research and AI products divisions.




Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
Humans couldn’t solve protein folding despite trying hard for 30 years, but AlphaFold did. It’s a neural network AI but not an LLM. The current wave of neural network AIs scores much higher across the full range of tests and benchmarks, including solving problems they have never come across, better than most humans could for any given problem, or than any human could across the whole range.
LeCun isn’t citing any evidence for his opinion that scaling isn’t continuing to work, other than his team falling behind.
No further breakthroughs are needed for the current AI models to advance themselves or replace most human jobs. For that, just rearranging teams of existing AIs to cover a few limits and gaps would be enough. Critics say models still hallucinate sometimes, but so do humans, and we lie, commit fraud, and have malign intentions too. Other AIs can act as checks on this, just as humans are not helpless in dealing with such limits in other humans.
Seems reasonable. While the current state of AI is indeed awesome, having used it over the past couple of years I’m aware the results are mostly a page of collated results from Stack Overflow or Wikipedia. True intelligence is solving a problem you’ve not come across before, and LLMs are not the tool to do this with.