AMD makes first 7-nanometer CPU and GPU with performance competitive with Nvidia

Advanced Micro Devices launched its first 7-nm CPU and GPU, aimed at the lucrative data-center market. The working chips deliver performance comparable to Intel’s 14-nm Xeon and Nvidia’s 12-nm Volta.

AMD is back from the near-dead. It is competitive with both Nvidia and Intel, and it stands to gain significant market share.

A single 7-nm Epyc x86 processor narrowly beats a system with two Intel Skylake Xeons in a rendering job. AMD benchmarks have the 7-nm Vega GPU on par with an Nvidia V100 in inference tasks.

The existing 14-nm Epyc, launched in May 2017, boosted AMD’s negligible 0.5% share of x86 servers to 1.5%. With its customer relationships now back on track, the 7-nm version could push AMD up to “high single digits” in x86 server market share by mid-2019.

The Zen-based x86 chips have boosted AMD’s share of overall microprocessor units to 9.23% in the second quarter of this year.

AMD continued its creative use of packaging to deliver a lower-cost Epyc. A single module includes up to eight 7-nm processor dies linked over AMD’s Infinity Fabric to a single 14-nm I/O chip containing the memory controller. The approach extends the 14-nm Epyc, which uses four dies on a single package.
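
The core-count arithmetic behind that chiplet layout can be sketched in a few lines. The cores-per-die figure below is an illustrative assumption, not something the article states:

```python
# Illustrative model of the Epyc module described above: up to eight
# 7-nm compute dies linked over Infinity Fabric to one 14-nm I/O chip.
CORES_PER_DIE = 8  # assumption for illustration; not stated in the article

def package_cores(compute_dies: int) -> int:
    """Total cores in a package with the given number of compute dies."""
    if not 1 <= compute_dies <= 8:
        raise ValueError("the module holds up to eight compute dies")
    return compute_dies * CORES_PER_DIE

for n in (4, 8):  # 14-nm Epyc used four dies; the 7-nm part allows eight
    print(f"{n} dies -> {package_cores(n)} cores")
```

Under that assumption, doubling the die count from four to eight takes the package from 32 to 64 cores without ever fabricating one large, low-yield die.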

AMD is providing a head-to-head alternative to Nvidia’s Volta for machine learning and commercial graphics.

The 7-nm AMD Vega has 13.2 billion transistors. AMD said Vega delivers 25% more performance than the previous 14-nm chip. The high-end MI60 version for GPU computing has 64 compute units, 4,096 stream processors, and up to 32 GB of HBM2 memory, as well as support for PCIe Gen 4.

The AMD chip delivers within 7% of an Nvidia Volta’s performance with less than half the die area.
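
The performance-per-area implication of that claim can be worked out directly. The die areas below are assumed figures for illustration only; the article states only “within 7%” and “less than half the die area”:

```python
# Rough perf-per-area comparison implied by the claim above.
vega20_area_mm2 = 331.0  # assumed 7-nm Vega die area (illustrative)
v100_area_mm2 = 815.0    # assumed 12-nm Volta die area (illustrative)

relative_perf = 0.93                        # "within 7%" of Volta
relative_area = vega20_area_mm2 / v100_area_mm2
perf_per_area = relative_perf / relative_area

print(f"area ratio:    {relative_area:.2f}")   # ~0.41, under half
print(f"perf per area: {perf_per_area:.2f}x")  # ~2.29x in Vega's favor
```

Even if the exact areas differ, the shape of the result holds: roughly equal performance in under half the silicon implies more than twice the performance per unit of die area, which is where the 7-nm process pays off in cost.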

Vega will deliver 29.5 TFLOPS of FP16 throughput for AI training. In inference jobs, it can hit 59 TOPS on 8-bit integer and 118 TOPS on 4-bit integer tasks.
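
Notice the pattern in those inference figures: throughput doubles each time the operand width halves. A minimal sketch of that scaling, assuming operations per second scale inversely with bit width:

```python
# The quoted numbers follow a simple rule: 29.5 TFLOPS FP16 -> 59 TOPS
# INT8 -> 118 TOPS INT4, i.e. throughput doubles as precision halves.
BASE_FP16_TFLOPS = 29.5

def tops_at(bits: int, base: float = BASE_FP16_TFLOPS,
            base_bits: int = 16) -> float:
    """Throughput assuming ops/s scales inversely with operand width."""
    return base * (base_bits / bits)

print(tops_at(8))   # 59.0  (INT8 TOPS)
print(tops_at(4))   # 118.0 (INT4 TOPS)
```

This inverse scaling is typical of GPUs that pack multiple narrow integer operations into each wide arithmetic unit, which is why low-precision inference gets such a large headline number.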

AMD claims that its 7-nm x86 chips will beat Intel’s 10-nm versions — now expected late next year.

11 thoughts on “AMD makes first 7 nanometer CPU and GPU and performance competitive with Nvidia”

  1. These CPUs are for servers, machine learning, and workstations, so I’m not sure why you are involving your casual Joe. The power efficiency and the computing power alone are huge. What took you 200 m² 1–2 years ago with 18-core Xeons would take five times less space, use less power, require less maintenance, and overall would be cheaper with the new 64-core AMD CPUs. With the modern tendency for everything to be virtualized, the 7-nm CPUs/GPUs and whatever will surprise us in the coming years are more than welcome.

  3. Thing is… tho’ it is rather banal to poo-pooh AMD’s 7-nanometer ‘node’ (it isn’t really a 7-nm node if we use the criteria of the early 2000s, when actual line widths were the metric), having entries so early in the 2018–2019 cycle means that AMD will be able to amortize its very high 7-nm learning curve into mass-production expertise and incremental improvements for some time to come.

    Indeed: I expect them to be using 7 nm through 2019, 2020, and well into 2021, if only for their bargain-level chips by then.

    Thing is, on the exponential-growth curve AKA Moore’s Law, things are continuing to grow as predicted. This is a good thing. Much of what takes considerable time for the “ordinary mortals” amongst us is pretty unexciting stuff.

    And it turns out that that unexciting stuff really isn’t materially impacted by massive parallelism. Unfortunately.

    I, for instance, am still using a 2-core laptop as my primary computer. It does all sorts of things well, but that’s a function (mostly) of it having a fast, reliable SSD. Sure, here at VUUKLE, because of the insane overhead of Vuukle on Chrome (and worse on Safari), my po’ lil’ MacBook Air CPU fan is constantly whirring away (dâhmn, it is annoying).

    And because of VUUKLE’s annoying overhead, my memory budget is shot. 4 GB just ain’t cutting it. It used to, not 6 months ago. Easily. Handily.

    Moreover, I’m no slouch as a computer user either. I use the whole Adobe Creative Suite from time to time, but I also write Perl code — every day — to do all manner of computational simulations. I use ‘R’ too for some stuff — it’s a great, if opaque, language — and from time to time, I still write a bit of C code. For stuff that REALLY needs to be fast in order not to waste days of computation time.

    But I am NOT running Bitcoin mining in the background. I’m not doing protein folding. I’m not even able to competently run GPU-intensive games; I’m rather too old now to care about them. And yet I acknowledge that the younger crowd (perhaps the majority) is definitely addicted to constantly multitasking the same things, watching “video in a window”, and having wide panorama displays and secondaries too. And “getting into” a lot of cool multithread-adaptive software.

    Some people — surprisingly few though — even do 3D rendering, Ray Tracing; some for architectural design, some for engineering, some even for sophisticated mathematical analysis. Financial Monte Carlo analysis.

    Yet, most of that — really — doesn’t depend much on having 16, 24, 32 or 48 cores working “under the hood” for the power user.

    Maybe someday soon, it will.
    A.I.

    That’s my bet.
    Of when 100+ core 4 nanometer CPUs will be needed in your average professional laptop.

    Luckily, Moore’s Law seems to predict that it’ll be here.
    In less than 7 years.

    2025.

    Just saying,
    GoatGuy

