ASML 1 to 2 Nanometer Chips Will Power Next Generation Technological Revolution

EUV lithography is enabling another 5-to-10-times reduction in chip line geometry. This will extend Moore’s law, improve processing speed and component density, and reduce the energy used.

ASML is a world leader in lithography equipment. ASML’s Q3 2019 net sales came in at EUR 3 billion.

This will power 5G connectivity, Artificial Intelligence, Autonomous Driving, Big Data, and Emerging Memory.

EUV lithography requires a laser to hit 50,000 droplets of tin per second, and each droplet has to be hit twice.
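
As a back-of-the-envelope reading of those figures, here is a minimal sketch in Python; the 50,000 droplets per second and the two hits per droplet are the numbers from the paragraph above, and the rest is plain arithmetic.

```python
# Rough arithmetic for the EUV light-source figures quoted above.
droplets_per_second = 50_000   # tin droplets hit per second (from the article)
pulses_per_droplet = 2         # each droplet has to be hit twice (from the article)

laser_pulses_per_second = droplets_per_second * pulses_per_droplet
droplets_per_hour = droplets_per_second * 3600

print(f"Laser pulses per second: {laser_pulses_per_second:,}")    # 100,000
print(f"Tin droplets consumed per hour: {droplets_per_hour:,}")   # 180,000,000
```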

The previous immersion and quadruple-patterning technology was reaching its economic limits at about 7 nanometers. EUV is taking over at the 7 nanometer layers and enabling improved performance and economics down to 2 nanometers, and possibly to 1 nanometer.

32 thoughts on “ASML 1 to 2 Nanometer Chips Will Power Next Generation Technological Revolution”

  1. They are claiming 2 to 1 nanometers in the transistor plane by stacking the atoms in the z direction… really it’s still 10 nanometers if you look at the vertical size of the transistor features… You could say they are squeezing a little more life out of Moore’s law by making the silicon thicker.

  2. … if memory clock-rate also increases
    … if bus clock-rate also increases
    … if mass-storage I/O rate also increases
    … if network I/O also increases

    In proportion.

    Otherwise, the machine’s Amdahl’s-law (abstracted) contention calculus doesn’t scale linearly with the CPU clock.

    Just Saying,
    GoatGuy ✓

  3. There is a point of diminishing returns on increasing the number of cores. See Amdahl’s law. With an increase in clock speed all of those cores become even faster.
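
    Amdahl’s law makes the diminishing-returns point concrete. Here is a minimal sketch (my own illustration; the 95% parallel fraction is just an assumed example):

    ```python
    # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
    # parallelizable fraction of the work and n is the number of cores.
    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # Diminishing returns from adding cores (assumed p = 0.95):
    for cores in (2, 8, 64, 1024):
        print(cores, round(amdahl_speedup(0.95, cores), 2))
    # 2 -> 1.9, 8 -> 5.93, 64 -> 15.42, 1024 -> 19.64, capped near 1/(1-0.95) = 20

    # A clock-speed increase, by contrast, speeds up both the serial and the
    # parallel portions, so it multiplies whatever speedup the cores deliver.
    ```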

  4. The node-size name is simply a marketing name. It does not represent any dimension of the transistor that you can measure. You should not interpret 7 nm as something being literally 7 nm wide. It is meant to convey that 7 nm is better than 10 nm or 14 nm, just as “iPhone 8” is meant to convey that it is newer and better than the iPhone 7.

  5. You are also definitely right. But of late I’ve noticed that almost all of the ‘tough computing’ problems that computer scientists have algorithmically addressed … seem to have perfectly workable “parallel scaling” topologies. All the way up to the super-duper-crazy-big computers having a million-plus cores.

  6. When writable CD-ROMs became available, people assumed they would last a very long time. Five or ten years later, those people discovered that those disks were unreadable. I was one of those people. Some people, not me, invested a great effort into storing their valuable data on recordable optical media. They learnt their lesson. In time, the big boys will learn theirs. For me, there are Hitachi Ultrastar HDDs (the last of them, anyway), then a vast and stupid chasm in the reliable-storage market, then LTO tapes on the other side of that chasm. There is nothing else for reliable long-term storage, excluding the write-only media (such as optical, after a not-so-long while).

  7. While it usually isn’t good to bet against tape (allegations that Amazon’s cloud cold-storage service is a small army of Iron Mountain monkeys riding bicycles to fetch tape in those cave bunkers notwithstanding), a lot of the big boys are moving to Blu-ray-based media (WORM as well as moderate RW), though that’s usually pseudo-nearline.

  8. In many ways you can’t reach a singularity.

    A tech singularity is defined as that point where tech is advancing so fast that you can’t predict what happens next.
    But AT THE TIME you can, because it’s only a little bit more advanced than now.

    It may be unpredictable to people a few decades earlier, but we can still predict next year (at least tech wise, politics is another matter).

    If we look at previous tech singularities (the rise of agriculture or the industrial revolution): yes, a hunter-gatherer trying to predict what a farming-based empire could do, or (much better documented) people in the 1700s trying to predict the year 2000, were completely missing the point (see Adam Smith, for example). But attempts by someone in 1800 to predict 1810 were still OK.

    The singularities are really only clear in retrospect. Though it was fairly clear by say the early 1800s that their ability to predict tech and economy a hundred years in advance had just gone out the window. Much as we can’t.

    So a singularity is at best a situation where the reliable horizon shrinks right down. But it never reaches zero, if only because your prediction tech increases too.

  9. “The latest I know is WO something optical on inorganic (silica) medium…”

    “I expect no less from digital storage retention, or it is written in the sand”

    So the alternatives are writing in sand or writing on sand. Gotcha.

  10. Aren’t there issues with building things that small other than the lithography? For instance, reliability and power loss?

  11. Tape is alive and well (the latest LTO-8 is the 8th generation of tape, 12 TB), but it is a small market with rather costly and scarce drives. Even tapes are scarce sometimes. Same with HDDs! If all known-bads are excluded, there is exactly one-half of a manufacturer left in the world: that part of Western Digital that used to be Hitachi. Datacenter stats provide objective proof of that dire state of storage affairs.
    Write-once is permanently ‘near implementation’. There used to be WO SD cards – gone. WO optical – never made it to market. The latest I know is WO something optical on an inorganic (silica) medium, with centuries or millennia of retention. I do not see it available. Actually, I am on a permanent lookout for a tape solution, but every two years they change generation (LTO-8 is the 8th), and compatibility is maintained only between two or so (LTO-7 and LTO-8), so anything on LTO-6 from five years ago is in a precarious state: as long as the drive works, driver compatibility holds, formats are supported, and an increasing factor of luck. I can accept the point that ‘life is change’, but there are paper books with triple-digit retention in libraries, and I expect no less from digital storage retention, or it is written in the sand. That is why I do not entrust any valuable data to NAND, and spend a pretty penny on the last of Hitachi’s HDD masterpieces, for as long as they last. That is also a precarious state. The progress towards write-only memory is self-limiting, but now mandatory.

  12. It’s true. Some kind of phase-change memory is really the answer. Even if ‘phase change’ is ‘write-once’ in nature. It might not satisfy many applications, especially databases, but it certainly satisfies store-and-forget-until-needed purposes. I mean, who wouldn’t be happy with a Star Trek “data cube” holding exabytes of ‘whatever’, essentially forever and without the possibility of losing anything (except through physical destruction, of course)?

    Every personal computer for sure — even one’s smart phone — could benefit greatly from a huge amount of write-once file-store memory. Not to replace conventional read-and-write-until-it-wears-out NAND or other tunneling-electrons memory; that kind is awesome for the present uses. Nor, might I add, for the even higher read-write expectations that remain (but not for long, I’m gathering) best addressed by rotating magnetic disk memory. It’s a hierarchy.

    The only tech which really isn’t likely to see a resurgence is tape. It potentially has bits-in-bulk advantages over almost all others, especially in the simplicity of bulking the medium, but not in the read-write electronics. But even there, I think it is more likely also to benefit from ‘write once’ tech. Itsy-bitsy laser pits blasted into a thin opti-magnetic film.

    Just Saying,
    GoatGuy ✓

  13. You’re right, except, well… OK, it definitely applies to a class of problems that are “embarrassingly parallel,” like situations where each core/worker can grab its own chunk of data, be it spatially or temporally. But there are definitely problems, entire classes thereof, where you can’t get your speed-up so easily, if at all. So yeah, the issue of frequency is still pertinent (a toy sketch follows after the link below).

    https://cs.stackexchange.com/questions/19643/which-algorithms-can-not-be-parallelized
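
    To make the distinction concrete, here is a toy sketch (my own hypothetical example, not taken from the linked thread): a per-element map that parallelizes trivially, next to a recurrence in which every step depends on the previous result, so extra cores do not help and only a faster clock does.

    ```python
    # Embarrassingly parallel map vs. an inherently sequential recurrence.
    from multiprocessing import Pool

    def square(x: int) -> int:
        return x * x                 # independent per element: parallelizes trivially

    def iterated(x: float, steps: int) -> float:
        for _ in range(steps):       # each step needs the previous step's result,
            x = 3.9 * x * (1.0 - x)  # so there is no parallel shortcut
        return x

    if __name__ == "__main__":
        with Pool(4) as pool:
            print(pool.map(square, range(8)))  # scales with core count
        print(iterated(0.5, 1_000_000))        # bound by single-core speed
    ```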

  14. Our world is still run by many single-threaded tasks, so I wouldn’t dismiss high frequencies yet. But as you rightly point out, we are copying the brain – lots of processors at lower speeds.

  15. Ah… that was supremely short-sighted, good fellow. 

    IF we were talking about computers (as in prior to 1998) having only 1 CPU, 1 ‘math coprocessor’ (by then in the CPU and not separate as was the way of the 1970s and 1980s), no graphics processors, you of course then would be kind-of-right.  

    The internal instruction-parallelism opportunities continue to improve IPC (instructions per clock), so comparing the at most ¼ IPC of the 8080 or 8086 or 80286 to today’s 4.5 to 5 IPC is fairly inaccurate too. 5 GHz × 4.5 IPC = 22.5 equivalent 1-IPC gigahertz (see the arithmetic sketch at the end of this comment).

    So even there things are improving. Moreover, when I use my parallel-processing libraries to compute something REALLY BIG on a modern system with 8+ cores, guess what … the program completes in between ⅙ and ⅐ the time of a single core system.

    I’m looking forward to eventually building a 2-processor AMD Threadripper box having 128 cores and 512 GB of memory. I expect runtimes of at most ¹⁄₁₀₀ that of a simple 1-core system of comparable clock frequency. And probably better.

    Just saying, Mark.  
    If it were GHz and nothing else, we’d have capped out 8+ years ago.

    GoatGuy ✓
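
    The 22.5 figure above is just clock × IPC. A quick sketch of that arithmetic (the ¼ IPC and 4.5 IPC values are the ones quoted in the comment; the 5 MHz clock for the old part is an assumed example):

    ```python
    # Effective single-thread throughput ~ clock frequency x instructions per clock (IPC).
    def effective_gips(clock_ghz: float, ipc: float) -> float:
        """Billions of instructions retired per second on one core."""
        return clock_ghz * ipc

    modern = effective_gips(clock_ghz=5.0, ipc=4.5)    # 22.5 "equivalent 1-IPC gigahertz"
    old = effective_gips(clock_ghz=0.005, ipc=0.25)    # assumed ~5 MHz, 1/4 IPC (8086-class)

    print(modern)        # 22.5
    print(modern / old)  # ~18,000x raw single-core difference
    ```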

  16. Moore’s law is not the end… they can still make larger and larger integrated circuits even when Moore’s law ends… one possibility is to keep growing the transistor area per generation and reducing power, until the point that everybody gets a supercomputer on their desktop… today we have a tiny slice of silicon of maybe an inch in our computer… after a number of years of doubling the area, everybody ends up with a continuous slice of silicon of 8 inches in their laptop with 10,000 CPUs in it…
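
    As a rough check of the numbers in that scenario, here is a sketch using only the figures from the comment; the constant-density assumption is mine.

    ```python
    import math

    # "a tiny slice of silicon of maybe an inch" growing to "8 inches" of silicon.
    today_side_in = 1.0
    future_side_in = 8.0

    area_ratio = (future_side_in / today_side_in) ** 2   # 64x the area
    doublings = math.log2(area_ratio)                    # 6 doublings of area
    print(area_ratio, doublings)

    # At constant transistor density, ~64x the area hosts ~64x as many identical
    # cores, so the 10,000-CPU figure also assumes denser many-core designs.
    ```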

  17. in 10 years, they will need to rename Moore’s law to “Moore’s law of exponential decay for transistor shrinkage”…

  18. Intel is just behind because TSMC secretly decided to skip 10 nm and go straight to 7 nm… because they know the end of shrinking is coming anyway… their 10 nm is really 7 nm with the transistor parameters blown up… Intel, on the other hand, doesn’t want to move too fast… they invented this whole Moore’s-law idea to spread things out more so they would make more money off it in the long run… they probably could have skipped a few generations along the way, but why do it when you can make more money selling incremental improvements instead of giant leaps? Would you rather replace your PC 4 times or once? If I’m Intel, I would prefer you to replace your PC 4 times with small incremental steps instead of one giant-leap upgrade… it’s the old tortoise-and-the-hare battle…

  19. If it’s so good, why does Intel have so much trouble with production of 10 nm processors? Couldn’t Intel just buy the technology from ASML and ship 10 nm chips?

  20. The cerebellum contains a supermajority of the brain’s neurons, yet it is the tiny little guy least associated with intelligence. If you’re serious about intelligence, you need to stop obsessing about components and start thinking about interconnects.

  21. a silicon atom is 2.22 angstroms wide… that’s 0.22 nanometers per atom… so if the transistor gate width is 2 nm, that is 2 nm / 0.22 nm = 9 atoms wide in 2024! the current state of the art in 2019 at 7 nm gives a transistor gate width of about 32 atoms.
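
    The same division, spelled out as a minimal sketch that takes the nanometer figures at face value (even though, per comment 4, node names are marketing labels rather than measured gate widths):

    ```python
    # How many silicon atoms would span a feature of the quoted size.
    SI_ATOM_NM = 0.222            # ~2.22 angstroms per silicon atom, as quoted above

    def atoms_across(feature_nm: float) -> int:
        return round(feature_nm / SI_ATOM_NM)

    print(atoms_across(7.0))      # ~32 atoms for a "7 nm" feature (2019)
    print(atoms_across(2.0))      # ~9 atoms for a "2 nm" feature
    ```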

  22. they are going to be building these things out of quarks in 10 years if they keep going at the present rate…

  23. I wonder if NAND will have more than 10 write cycles at 1 nm. It used to be 100k long ago, then it was 1k, now it is a few hundred or less. Also, what kind of retention time would the “storage memory” have in a CMOS device where electrons can tunnel in, out, or through it as if the dielectric were not even there? Also, its radiation sensitivity will be such that a memory chip may be a fairly good and very cheap muon detector, and a spectacular neutron detector. That is very bad, by the way – muon flux is mostly stopped only at about 1 km under rock, and even now a single neutron capture can flip tens of bits in memory and cause thousands of bit errors in logic. Reliability of such electronics may turn out to be sufficient only for teenager gadgets. Any control system would simply not be in control of anything, not even its own power consumption. Where is mass-produced MRAM after all these years?
