GPT5 Will Be Much Smarter and Better in All Tasks than GPT4

OpenAI has likely started the final training run for GPT5. An earlier, smaller run of a test version of GPT5 gave Sam Altman one of his top four moments of feeling a leap in AI capability.

GPT5 will likely include the ability to explain all of its inference steps in plain English, and to run roughly 10,000 validation and verification passes over those steps to produce better and more reliable answers.
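OpenAI has not published how this multi-run verification would work. One common way to picture it is self-consistency sampling: generate many independent reasoning chains and keep the majority answer. The sketch below is purely illustrative; the `sample_reasoning_chain` stub simulates a noisy solver rather than calling any real model.

```python
import random
from collections import Counter

def sample_reasoning_chain(question, rng):
    """Hypothetical stand-in for one model inference pass.

    A real system would generate a chain of thought and a final
    answer; here we simulate a solver that is right 70% of the
    time, just to show the voting mechanics.
    """
    correct = "42"
    return correct if rng.random() < 0.7 else str(rng.randint(0, 99))

def self_consistency(question, n_runs=1000, seed=0):
    """Run many independent passes and keep the majority answer."""
    rng = random.Random(seed)
    answers = Counter(sample_reasoning_chain(question, rng)
                      for _ in range(n_runs))
    answer, votes = answers.most_common(1)[0]
    return answer, votes / n_runs
```

Even a solver that is wrong 30% of the time converges on the right answer under majority voting, because the wrong answers are scattered while the right one repeats.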

Sam Altman is having close discussions with the UAE.

Altman’s plans for the AI chip sector would involve researching, developing, and manufacturing Tensor Processing Units (TPUs). Unlike GPUs, TPUs are designed from the ground up to handle machine learning workloads.

In 2019, OpenAI penned a deal with Rain AI, a San Francisco startup building chips designed to replicate the way the human brain processes information. Altman had led a Rain seed round a year earlier. Rain AI plans to ship its first hardware in October 2024. Rain's NPUs (analog neuromorphic processing units) could yield 100 times more computing power for AI training, with greater energy efficiency, than GPUs. Nvidia's AI GPUs are improving by 2-10 times every 6 months, however, so GPU capabilities are a fast-moving target.

Rain’s innovative neuromorphic processing unit (NPU) chips are designed to emulate the intricacies of the human brain.

OpenAI is working with Apple’s former chief design officer, Jony Ive, to design a new consumer hardware device that would function as a kind of post-smartphone AI interface.

Digital In-Memory Compute
AI workloads have extraordinary compute and memory demands, and they are often limited by legacy computer architectures. Rain AI is pioneering the Digital In-Memory Computing (D-IMC) paradigm to address these inefficiencies in AI processing, data movement and data storage.

Unlike traditional In-Memory Computing designs, Rain AI’s proprietary D-IMC cores are scalable to high-volume production and support both training and inference. When combined with Rain AI’s proprietary quantization algorithms, the accelerator maintains FP32 accuracy.

Rain AI’s block brain floating point scheme ensures no accuracy loss compared to FP32. The numerical formats are co-designed at the circuit level with the D-IMC core, leveraging the immense performance gains of optimized 4-bit and 8-bit matrix multiplication. This flexible approach ensures broad applicability across diverse networks, setting a new standard in AI efficiency.
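Rain AI's exact "block brain floating point" format is proprietary, but the general block floating point technique it builds on (one shared exponent per block of values, with low-bit integer mantissas per element) can be sketched as follows. The 4-bit mantissa width and the helper names are illustrative assumptions, not Rain AI's actual design.

```python
import math

def quantize_block_fp(values, mantissa_bits=4):
    """Block floating point: one shared exponent for the whole block,
    plus a low-bit signed integer mantissa per element.

    Illustrative sketch only; Rain AI's real format is not public.
    """
    max_abs = max(abs(v) for v in values)
    if max_abs == 0:
        return [0] * len(values), 0
    # Shared exponent chosen so the largest value fits the mantissa range:
    # math.frexp gives max_abs = m * 2**exp with 0.5 <= m < 1.
    shared_exp = math.frexp(max_abs)[1]
    scale = 2.0 ** (shared_exp - mantissa_bits)
    qmax = 2 ** mantissa_bits - 1
    mantissas = [max(-qmax, min(qmax, round(v / scale))) for v in values]
    return mantissas, shared_exp

def dequantize_block_fp(mantissas, shared_exp, mantissa_bits=4):
    """Reconstruct approximate floats from the block representation."""
    scale = 2.0 ** (shared_exp - mantissa_bits)
    return [m * scale for m in mantissas]
```

The appeal for hardware is that the expensive inner loop becomes pure low-bit integer multiply-accumulate, with the shared exponent applied once per block.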

AI accelerators often fail to compile workloads due to a lack of hardware support. Rain AI harnesses the power of the RISC-V ISA, giving AI developers unparalleled flexibility to implement any operator and compile any model. Rain has developed a proprietary interconnect between the RISC-V and D-IMC cores, offering superior performance through a balanced pipeline.
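A minimal sketch of the fallback idea described above: operators that the D-IMC matmul cores support are lowered to them, and everything else compiles to the general-purpose RISC-V cores, so no model fails to compile. The function and operator names here are hypothetical; Rain AI's actual compiler is not public.

```python
def lower_model(ops, dimc_supported):
    """Assign each operator in a model to an execution target.

    Hypothetical sketch: supported ops go to the D-IMC cores,
    everything else falls back to the RISC-V cores, so any
    operator can be compiled.
    """
    plan = []
    for op in ops:
        target = "D-IMC" if op in dimc_supported else "RISC-V"
        plan.append((op, target))
    return plan
```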

On-device fine-tuning
AI models often fail upon deployment due to the inevitable mismatch between training and deployment environments. Fine-tuning solves this problem but requires devices to support high-performance training. Rain AI is co-designing fine-tuning algorithms (e.g., LoRA) with hardware to facilitate efficient real-time training.

Result: improves AI accuracy by more than 10% in realistic deployment environments.
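LoRA itself is public: the pretrained weight matrix W stays frozen while a small low-rank update A·B is trained, shrinking the trainable parameter count from d_out·d_in to r·(d_out + d_in). The plain-Python sketch below shows only that idea; how Rain AI maps it onto its hardware is proprietary and not shown here.

```python
import random

def matmul(A, B):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update A @ B.

    Minimal sketch of the LoRA technique; only A and B would be
    updated during on-device fine-tuning, W never changes.
    """
    def __init__(self, W, rank=2, alpha=1.0, seed=0):
        rng = random.Random(seed)
        d_out, d_in = len(W), len(W[0])
        self.W = W  # frozen pretrained weights
        # A starts small-random and B starts at zero, so the initial
        # update A @ B is zero and the layer begins unchanged.
        self.A = [[rng.gauss(0, 0.01) for _ in range(rank)]
                  for _ in range(d_out)]
        self.B = [[0.0] * d_in for _ in range(rank)]
        self.scale = alpha / rank

    def effective_weight(self):
        """W + scale * (A @ B), the weight actually applied at inference."""
        AB = matmul(self.A, self.B)
        return [[w + self.scale * ab for w, ab in zip(w_row, ab_row)]
                for w_row, ab_row in zip(self.W, AB)]
```

Because B is initialized to zero, fine-tuning starts from the pretrained behavior and only gradually deviates, which is what makes this safe to run in the field.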

1 thought on “GPT5 Will Be Much Smarter and Better in All Tasks than GPT4”

  1. “chips designed to replicate the way the human brain processes information”

    Or is it “replicate the way WE THINK the human brain processes information”

    How much do we really know about that?

    Are “Neural Networks” just a guess about that?
