Jo De Boeck, Executive Vice President and Chief Strategy Officer at imec and KU Leuven (Leuven, Belgium), spoke about the EU Chips Act and the roadmaps and challenges the EU sees for future computing. He noted that the next-generation AI model, GPT-4, will likely be 500 times larger than OpenAI's GPT-3 (the basis for ChatGPT).
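To put the "500 times larger" claim in concrete terms, here is the rough arithmetic, assuming it refers to parameter count. GPT-3's ~175 billion parameters is a published figure; the 500x multiplier is the talk's projection, not a disclosed spec.

```python
# Rough arithmetic behind the "500x larger" claim (parameter-count reading).
gpt3_params = 175e9          # GPT-3: ~175 billion parameters (published)
scale_factor = 500           # projection cited in the talk

projected = gpt3_params * scale_factor
print(f"Projected size: {projected / 1e12:.1f} trillion parameters")
# -> Projected size: 87.5 trillion parameters
```

That would be roughly 87.5 trillion parameters, which is part of why the claim drew skepticism in the comments below.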
Brian Wang is a futurist thought leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 science news blog. It covers many disruptive technologies and trends, including space, robotics, artificial intelligence, medicine, anti-aging biotechnology, and nanotechnology.
Known for identifying cutting-edge technologies, he is currently a co-founder of a startup and a fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an angel investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker and a Singularity University speaker, and a guest on numerous radio shows and podcasts. He is open to public speaking and advising engagements.
For those unaware, the second version of GAA on the IEEE/imec roadmap, at the bottom and obscured by the image label (great quality control there, Brian 😂), is likely to be the forksheet FET.
The forksheet is the natural follow-up to the nanosheet: it folds the performance benefits of earlier GAA designs into a more compact device, with some extra performance tricks for superior scaling across the board.
Nanosheet areal density is actually worse than FinFET at the same pitch – a nanosheet design at the same pitch would perform better but take up more area. That is probably why TSMC is waiting for a smaller pitch before transitioning away from FinFET, so customer designs do not balloon in size too much.
With just five years projected on imec's roadmap, I imagine we will start to hear about TSMC's plans for the forksheet within a year or two at most.
GPT-4 used a lot more compute (it finished training last year) but is likely not much bigger than GPT-3. We may already be seeing GPT-4 in Bing's Sydney. An analysis of what is known and what is predicted, based on the state of the art in LLM training optimisation:
https://www.lesswrong.com/posts/qdStMFDMrWAnTqNWL/gpt-4-predictions
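The "more compute but not much bigger" point rests on scaling-law arithmetic of the kind that analysis uses: training compute is commonly approximated as C ≈ 6·N·D FLOPs (N parameters, D training tokens), and Chinchilla-style compute-optimal training spends extra compute on more tokens rather than more parameters. A minimal sketch, where the GPT-3 figures are published and the "GPT-4-scale" numbers are purely hypothetical illustrations:

```python
# Scaling-law sketch: training compute C ~= 6 * N * D FLOPs,
# where N = parameter count and D = training tokens.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# GPT-3: ~175B parameters trained on ~300B tokens (published figures).
gpt3 = training_flops(175e9, 300e9)

# Hypothetical GPT-4-scale run (illustrative numbers, NOT disclosed figures):
# a similar-order parameter count, but trained Chinchilla-style on far more
# tokens (~20 tokens per parameter) -- much more compute, not a much bigger model.
gpt4_guess = training_flops(350e9, 7e12)

print(f"GPT-3 estimate:   {gpt3:.2e} FLOPs")
print(f"Hypothetical run: {gpt4_guess:.2e} FLOPs ({gpt4_guess / gpt3:.0f}x the compute)")
```

With these assumed numbers the hypothetical run uses roughly 47x GPT-3's training compute while only doubling the parameter count, which is the shape of the argument the linked post makes.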
L5 autonomy is theoretically possible today with a mobile-phone CPU and V2X. Keep in mind that power consumption scales with driving speed. The claimed 33% energy penalty is a number pulled out of thin air, not an innovation constraint. All current autonomous tech is hobbled by its siloed, V2X-free approach, which inherently drives an exponential increase in the computing power required.