Reka Core Makes World-Class AI With 20 Developers

Reka Core is a new LLM that is competitive with OpenAI's GPT-4 and Anthropic's Claude 3. It can identify an African elephant in a picture by looking at its ears.

The model was created by a team of twenty people. Reka, a California-based AI startup, appears ready to go head-to-head with the likes of Google and Microsoft, having earlier rolled out its smaller multimodal and multilingual language model, Reka Flash. The company raised a US$58 million funding round in June 2023. It was founded in 2022 by former researchers from Google DeepMind and Meta, including Yi Tay, a Singaporean who serves as Reka's chief scientist. Reka is expected to raise another round of funding soon, and with results like these it could command a multi-billion-dollar valuation.

Reka Flash has 21 billion parameters and was trained on text in over 32 languages. According to Reka's evaluations, it outperforms larger models such as Llama 2, Grok-1, and GPT-3.5 in reasoning, code generation, and question answering.

Core is competitive with models from OpenAI, Anthropic, and Google across key industry-accepted evaluation metrics. Given its footprint and performance, Core delivers outsized value on a total-cost-of-ownership basis. The combination of Core's capabilities and its deployment flexibility unlocks vast new use cases.

Core is comparable to GPT-4V on MMMU, outperforms Claude 3 Opus on our multimodal human evaluation conducted by an independent third party, and surpasses Gemini Ultra on video tasks. On language tasks, Core is competitive with other frontier models on well-established benchmarks.

It can identify a grotto in Lebanon from a picture, whereas GPT-4 and Claude 3 could not.

Capabilities
Multimodal (image and video) understanding. Core is not just a frontier large language model. It has powerful contextualized understanding of images, videos, and audio and is one of only two commercially available comprehensive multimodal solutions.

128K context window. Core can ingest far more information and recall it precisely and accurately.

Reasoning. Core has superb reasoning abilities (including language and math), making it suitable for complex tasks that require sophisticated analysis.

Coding and agentic workflow. Core is a top-tier code generator. Its coding ability, when combined with other capabilities, can empower agentic workflows.

Multilingual. Core was pretrained on textual data from 32 languages. It is fluent in English as well as several Asian and European languages.

Deployment Flexibility. Core, like our other models, is available via API, on-premises, or on-device to satisfy the deployment constraints of our customers and partners.
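As a concrete illustration of the API deployment option, here is a minimal sketch of how a customer might send a combined image-and-text prompt to a hosted multimodal model over HTTP. The endpoint URL, model name, payload fields, and environment variable are illustrative assumptions for this sketch, not Reka's documented interface.

```python
# Hypothetical sketch of calling a hosted multimodal model over HTTP.
# The endpoint, payload schema, and model name below are illustrative
# assumptions, not Reka's documented API.
import os
import requests

API_URL = "https://api.example.com/v1/chat"            # placeholder endpoint
API_KEY = os.environ.get("EXAMPLE_API_KEY", "")        # assumed auth scheme

payload = {
    "model": "core",  # illustrative model identifier
    "messages": [
        {
            "role": "user",
            "content": "What animal is in this photo, and how can you tell?",
            "media_url": "https://example.com/elephant.jpg",  # image input
        }
    ],
    "max_tokens": 256,
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```

In practice, the request and response schemas would follow the provider's actual API reference; the point is simply that text and media inputs travel together in a single call, which is what enables the multimodal use cases described above.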

While we release a first version today, we expect Core, along with our other models, to continue to break performance barriers as it undergoes further training. Check out our technical report and example outputs for more information.

Reka partners
In less than a year, Reka has become one of only two developers providing models that allow for comprehensive multimodal input. Its three models allow image, video, and audio input in addition to text. This enables broader and differentiated customer use cases for industries including e-commerce, social media, digital content and video games, healthcare, and robotics, to name a few.

Our partners are a crucial part of delivering on our mission to build frontier multimodal models that benefit humanity. We are proud to count among them leading global technology platforms and government organizations such as Snowflake, Oracle, and AI Singapore. By democratizing access to multimodal technology, they enable customers, organizations, and individuals around the world to benefit from and build with Reka models.