The word "grok" was coined by Robert Heinlein in his novel Stranger in a Strange Land, where it is used by a character raised on Mars.
To grok something means to understand it fully and intuitively.
Here is a preview of the voice mode, which is expected to be released in about one week. It features voice inflection and tone.
NEWS: Here is a sneak peek at xAI’s new upcoming voice mode. pic.twitter.com/uYYpKMdnxr
— Sawyer Merritt (@SawyerMerritt) February 18, 2025
xAI started with 8K GPUs and has since scaled well past 100K GPUs. The first 100K GPUs were installed in 122 days.
A further 100K GPUs were added in 92 days, bringing the cluster to roughly 200K GPUs.
Grok 3 completed pre-training in early January 2025 and is still improving.
It has already reached a score of about 1400 on the LMArena (Chatbot Arena) leaderboard, which should put it in the top spot.
In the demo, Grok 3 wrote a game that combined elements of two existing video games.
The Big Brain mode, which spends extra compute on reasoning, was used to handle the combined-game task.
There is a beta version of the Grok 3 reasoning model, along with a smaller Grok 3 mini reasoning model.
xAI is continuing to improve the model's knowledge after pre-training.
To show the models are not overfitting to existing benchmarks, xAI also evaluated them on new tests created only in the last few days.
Work continues on math and competitive coding, and Grok can catch and correct its own mistakes as it reasons.
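As a rough, hypothetical illustration of that self-correction pattern, here is a minimal Python sketch of a generate/verify/revise loop, with a simple integer root-finding problem standing in for the model's reasoning. It is not how Grok is implemented; propose() stands in for answer generation and verify() for the self-check step, which in a reasoning model happen inside the chain of thought rather than in external code.

```python
# Toy sketch of a generate -> verify -> revise loop, the basic shape of
# test-time self-correction: propose an answer, check it, and use the
# feedback to fix the next attempt. NOT an xAI API; a real system would
# use the model itself for the "propose" and "verify" steps.

def f(x: int) -> int:
    """Problem: find an integer root of f(x) = x^3 - x - 6."""
    return x**3 - x - 6

def propose(low: int, high: int) -> int:
    """'Generate' step: guess the midpoint of the current search range."""
    return (low + high) // 2

def verify(x: int) -> int:
    """'Check own work' step: plug the candidate back into the equation.
    Returns 0 if correct, otherwise the sign of the error."""
    value = f(x)
    return 0 if value == 0 else (1 if value > 0 else -1)

def solve(low: int = -10, high: int = 10, max_rounds: int = 50) -> int | None:
    for _ in range(max_rounds):
        guess = propose(low, high)
        error = verify(guess)
        if error == 0:
            return guess          # the check confirms the answer
        if error > 0:
            high = guess - 1      # guess was too high; revise downward
        else:
            low = guess + 1       # guess was too low; revise upward
    return None

print(solve())  # 2, since 2**3 - 2 - 6 == 0
```

A reasoning model does the analogous thing inside its chain of thought: propose a step, check it, and revise before committing to a final answer.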

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
The reasoning and coding abilities are stunning considering that a single short prompt can produce orbital calculations and an iterative-type game. It makes me wonder whether, with a larger model instance (a very large input token count in the multi-millions), an instance could be challenged to create a new, more efficient code implementation of learning.
Very large input token counts require a lot more memory. However, if you have a cluster of 200K GPUs to play with and can shave off, say, 5,000 for one single instance to work on a problem….
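For a concrete sense of the kind of orbital calculation a single short prompt can produce, here is a minimal, self-contained Python sketch that computes an idealized Hohmann transfer from Earth's orbit to Mars's orbit, assuming circular, coplanar orbits. It is an illustration of the category of task, not actual Grok 3 output.

```python
import math

# Idealized Hohmann transfer between Earth's and Mars's orbits around the Sun,
# assuming circular, coplanar orbits. Constants are standard published values.
MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
R_EARTH = 1.496e11          # Earth orbital radius (1 AU), m
R_MARS = 2.279e11           # Mars mean orbital radius (~1.524 AU), m

def hohmann(mu: float, r1: float, r2: float):
    """Return (delta_v1, delta_v2, transfer_time) for a Hohmann transfer
    between circular orbits of radius r1 and r2 about a body with parameter mu."""
    a = (r1 + r2) / 2.0                              # transfer-ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                          # circular speed at r1
    v2 = math.sqrt(mu / r2)                          # circular speed at r2
    v_peri = math.sqrt(mu * (2.0 / r1 - 1.0 / a))    # transfer speed at departure
    v_apo = math.sqrt(mu * (2.0 / r2 - 1.0 / a))     # transfer speed at arrival
    dv1 = v_peri - v1                                # burn to enter the transfer orbit
    dv2 = v2 - v_apo                                 # burn to circularize at r2
    t_transfer = math.pi * math.sqrt(a**3 / mu)      # half the transfer-orbit period
    return dv1, dv2, t_transfer

dv1, dv2, t = hohmann(MU_SUN, R_EARTH, R_MARS)
print(f"Departure burn: {dv1/1000:.2f} km/s")   # ~2.94 km/s
print(f"Arrival burn:   {dv2/1000:.2f} km/s")   # ~2.65 km/s
print(f"Transfer time:  {t/86400:.0f} days")    # ~259 days
```

Running it prints about 2.9 km/s and 2.7 km/s for the two burns and roughly 259 days of transfer time, in line with textbook values for this simplified case.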