Taiwan Semiconductors Future Lithography

TSMC’s 5nm process is in mass production, and its 3nm process will enter production in 2022. The more advanced 2nm process has also reportedly made significant progress and will start mass production around 2023 to 2024.

TSMC expects risk trial production yield to reach 90% in the second half of 2023. The 5nm and 3nm processes use FinFET transistors, while the 2nm process uses a new multi-bridge channel field-effect transistor (MBCFET) architecture.

TSMC plans to switch to GAAFET (gate-all-around) transistors for 2nm chips. A FinFET gate does not surround the channel on all sides, whereas a GAA gate wraps around the channel completely. The latter approach makes current leakage almost negligible.

TSMC’s N5 node can use EUV lithography on up to 14 layers. The 3nm process node could deliver up to a 15% performance increase at the same transistor count and power as 5nm, and up to a 30% reduction in power use (at the same clock speeds and complexity).
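Read loosely, those claims are relative multipliers against N5. Below is a minimal sketch in Python that simply encodes the quoted figures; the baselines are normalized placeholders, not measured chip specs.

```python
# N3-vs-N5 claims from the article, expressed as relative multipliers.
# Baselines are normalized to 1.0; these are illustrative, not chip specs.

N5_PERF = 1.0    # normalized N5 performance
N5_POWER = 1.0   # normalized N5 power draw

# Same transistor count and power budget: up to ~15% more performance.
n3_perf_iso_power = N5_PERF * 1.15

# Same clock speed and complexity: up to ~30% less power.
n3_power_iso_perf = N5_POWER * (1 - 0.30)

print(f"N3 performance at iso-power: {n3_perf_iso_power:.2f}x N5")   # 1.15x
print(f"N3 power at iso-performance: {n3_power_iso_perf:.2f}x N5")   # 0.70x
```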

Dutch lithography company ASML says that at 3nm, EUV lithography can be used on more than 20 layers.

Intel is lagging TSMC in reducing transistor size. Intel has published a roadmap that reaches 1.4 nanometers in 2029.

SOURCES: Technews Taiwan, Intel, Phonearena
Written By Brian Wang, Nextbigfuture.com

10 thoughts on “Taiwan Semiconductors Future Lithography”

  1. Smaller nodes used to mean more transistors per unit area, with reduced energy consumption and even increased clock speeds. Since the nanometer figures here mean different things for different manufacturers, we will have to wait for the actual dies to see the results.

  2. "In Moore We Trust"
    That graph apparently represents getting to smaller and smaller nodes (1.4 nm by 2030).
    Other than the obvious reduced wattage (assumed), it never says what advances we obtain from getting smaller nodes.

    Higher clock speeds?
    More threads?
    The ability to read minds?

    Just smaller nodes for the sake of smaller nodes.

  3. Right.  
    Exactly. 

    The 'cores' are quite a bit more 'purpose-built' than general-purpose CPU cores, but still … the point is that a MIX of computing elements may well spell the future of computing.  

    My own 'pet technology wish' was the same kind of purpose-built cores, but embedded around the edges of otherwise normal DRAM chips. 8 bits only, and purpose-built to look for patterns. Patterns are awesomely powerful ideas in/for the AI algorithm space.  

    Being a lifelong, competent computer scientist, I certainly recognize that conventional computers can be 'kind of, sort of' coërced into very-large-data, very generalized pattern searching that is somewhat efficient. Indexed-by-key patterns, held in memory, can be searched millions of times faster than linear string searching (see the sketch after this comment). But the set-up and maintenance computing cost is very high, and not 'profitable' (from a CPU-cycles point of view) unless used massively, over and over again. One-offs are a føøl's game. 

    Said grains-of-salt embedded in DRAM chips could search ALL memory in essentially constant time. For anything. No matter how complex. Now … THAT could be a game changer.  

    Add more DRAM chips?  No problem… constant time remains constant time.  

    ⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
    ⋅-=≡ GoatGuy ✓ ≡=-⋅
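    A minimal sketch of that indexed-lookup trade-off, with records and keys invented purely for illustration: a linear scan pays O(n) on every query, while a pre-built index pays its set-up cost once and then answers repeated lookups in roughly constant time.

    ```python
    # Toy illustration: linear substring scanning versus a pre-built
    # in-memory index. Records and keys are made up for this example.
    from collections import defaultdict

    records = [f"record-{i}: status={'ok' if i % 7 else 'fail'}"
               for i in range(100_000)]

    def linear_find(needle):
        """O(n) per query, no set-up cost."""
        return [r for r in records if needle in r]

    # Build the index once: O(n) set-up, then each lookup is ~O(1).
    index = defaultdict(list)
    for r in records:
        key = r.split("status=")[1]   # index each record by its status field
        index[key].append(r)

    def indexed_find(status):
        return index[status]

    # One-off queries never amortize the build cost; repeated queries do.
    assert linear_find("status=fail") == indexed_find("fail")
    ```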

  4. And then there are video cards with GPUs. For example, the Gigabyte GeForce RTX 3090 has 10,496 GPU cores and 24 GB of RAM, clocked at 1.725 GHz.
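    Those figures imply a theoretical peak in the tens of TFLOPS. A back-of-envelope check in Python, assuming the conventional 2 FLOPs per core per clock (one fused multiply-add); real sustained throughput is lower than this peak:

    ```python
    # Back-of-envelope check on the RTX 3090 figures above.
    cuda_cores = 10_496
    clock_ghz = 1.725
    flops_per_core_per_clock = 2   # one FMA = two floating-point operations

    peak_tflops = cuda_cores * clock_ghz * flops_per_core_per_clock / 1000
    print(f"Theoretical FP32 peak: {peak_tflops:.1f} TFLOPS")   # ~36.2 TFLOPS
    ```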

  5. Amdahl's law would only apply to a single task. There are quite a few reasons that it's useful to have spare processor cores to handle multiple tasks at the same time.

  6. Certainly at some level (Amdahl's Law notwithstanding) parallelism doesn't scale linearly with added parallel functional units … except for certain (economically important) super-parallel algorithms (see the sketch after this comment).  

    Put differently: I've forgotten how old you are, but at least in year 2000, the 'big big push' was for servers (and my workstation!) to have 2 to 4 CPUs instead of a single one. Much work had been done 'proving' the edges-and-trusses of Amdahl's Law; much had been written about the ultimate limit of parallelism. Diminishing returns and all that. 

    Yet, funny thing is that here we are in 2020, and anyone with $5,000 to $10,000 can buy an EPYC tea plate processor with 64 cores and 128 simultaneous threads. And motherboards that take 2 of them.  

    Point is, back in 2000, it was largely held that parallelism above 4 to 8 'cores per box' had reached the point of futility. More wouldn't significantly improve computing speed.  

    What gives?

    Well, rather incredible amounts of 1st, 2nd and 3rd level cache. That, and equally incredible block-burst RAM memory bandwidth. Between the two, 64 cores ÷ 128 threads can achieve something like 60% of theoretical peak for nearly random code, and up to 95% of TP for small-algorithm computationally intense, parallel-friendly workloads.  

    What then is Amdahl's limit?

    Dunno.

    I think I'll live long enough to see 1024 core chips.  
    And 64 to 256 core monsters in my workstation.

    ⋅-⋅-⋅ Just saying, ⋅-⋅-⋅
    ⋅-=≡ GoatGuy ✓ ≡=-⋅
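    What the math says, at least: a minimal sketch of Amdahl's law in Python, where the parallel fractions p are illustrative rather than measured.

    ```python
    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
    # parallelizable fraction of the workload.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.50, 0.90, 0.99):
        for n in (4, 64, 128, 1024):
            print(f"p={p:.2f}, {n:4d} cores -> {amdahl_speedup(p, n):6.1f}x")

    # p=0.50 tops out near 2x no matter how many cores you add;
    # p=0.99 reaches ~91x at 1024 cores -- which is why 'super-parallel'
    # workloads keep scaling while mostly-serial code plateaus early.
    ```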

  7. It's time to forget about density and focus on speed. Amdahl's law says there is a diminishing return on parallelizing code. CMOS has reached EOL.

  8. Seems like nobody expects to hit a wall in the foreseeable future, but the entire advanced industry is concentrated in Taiwan. Hopefully TSMC will get fabs built elsewhere so a single black swan event can’t cripple it.

  9. Intel is lagging TSMC in reducing transistor size. Intel has published a roadmap that reaches 1.4 nanometers in 2029.
    -> I can also publish a roadmap.
    Intel has lost all its credibility in the past 10 years.

    Just saying
