The Sunday of IEDM is always two full-day short courses: one on the future of memory technology, one on the future of logic technology. This year the logic course was titled Boosting Performance, Ensuring Reliability, Managing Variation in Sub-5nm CMOS. I have to admit I can think of alternatives for how to spend my weekend, but I was there the whole day.

Andy Wei of TechInsights pointed out that there is a sort of Moore's Law for IEDM short course topics:

2010: 15nm CMOS
2011: Beyond 14nm
2012: Post 14nm
2013: 10nm and 7nm
2014: 7nm
2015: Emerging 5nm
2016: 5nm Options
2017: Sub-5nm

I covered the 2015 short course here in IEDM Examines Options for 5nm...Academics and Industry Examine the Options and the 2016 one in IEDM: The Big Decisions for 5nm. I guess when I cover IEDM 2018, the short course will be on the key decisions for 3nm.

Transistors

I won't try to cover an entire day's course (there were about 350 slides presented). There were two big presentations, on transistor reliability (from Intel) and interconnect reliability (from GLOBALFOUNDRIES), that I will mostly skip over. Unless you are a device physicist (and I'm the first to admit I'm not), you basically want the process people to fix what they can, and then the EDA tools to take account of degradation over the lifetime of the chips. Of course, with automotive being a big driver (!), reliability of advanced processes has become a much more visible issue than in the past.

Gen Tsutsui (who works in IBM Research, not the army) opened with a table showing the general areas for consideration on the transistor side:

Fin: fin pitch, fin height; stacked nanosheet, stacked nanowire; SiGe/Ge FinFET; fin doping
Gate spacer/epi: low-k (air gap) spacer; S/D epi strain; junction abruptness, epi doping, thermal budget; POC strain from dielectric
RMG (replacement metal gate): gate stack engineering
Trench silicide: RhoC (contact resistivity), contact area
MOL, BEOL: barrier metal thinning, low-resistance materials

In the old days, transistor performance improved through the scaling itself, at constant power density (Dennard scaling). But that ended around 90nm, and specific performance boosters had to be introduced: external stressors, high-k metal gate, FinFETs.
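As a reminder of what scaling at constant power density actually meant, here is the classic Dennard arithmetic (my sketch, not from the course slides): shrink all dimensions and the supply voltage by a factor κ > 1 and everything improves together.

```latex
% Classic Dennard scaling: dimensions (L, W, t_ox) and supply voltage V
% all shrink by 1/kappa; current I then scales as 1/kappa and the clock
% frequency f can rise by kappa.
\begin{align*}
  C &\propto \frac{WL}{t_{ox}} \to \frac{1}{\kappa}
      & \text{(gate capacitance)} \\
  \tau &\propto \frac{CV}{I} \to \frac{1}{\kappa}
      & \text{(gate delay: faster)} \\
  P &\propto C V^2 f \to \frac{1}{\kappa^2}
      & \text{(power per transistor)} \\
  \frac{P}{A} &\propto \frac{1/\kappa^2}{1/\kappa^2} \to 1
      & \text{(power density: constant)}
\end{align*}
```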
The big question is what comes next. The obvious thing to try with the fins is to scale further on height and pitch. But that means contact resistance goes up, and S/D epi resistance increases. The second thing to consider is a nanowire gate-all-around FET, but AC performance is worse with just a single nanowire: two or three wires are required, and it seems to be best if they are flattened (not circular), the so-called silicon nanosheets. The third possibility is Si/SiGe CMOS (where the nFET is a Si fin and the pFET is SiGe). All three approaches are looking at an AC performance increase of 10-20% (maybe more for nanosheets). But all three have manufacturing challenges such as channel strain retention and resistance reduction.

Interconnect

Zsolt Tokei of imec led the interconnect section. One big trend in interconnect is that a technology convergence is starting between interconnect for logic and for memory. Traditionally, memory interconnect has been "cheap": low manufacturing cost but poor performance, especially limited current. But emerging memories require higher current, so memory is increasingly switching to copper (which logic has had for nearly twenty years) with similar manufacturing details. New materials (particularly cobalt and ruthenium) are emerging. Both logic and memory are challenged by time-dependent dielectric breakdown (TDDB).

Another trend is adding more and more functionality into the stack, such as MEMS, capacitors and other passives, and memory (RRAM, MRAM, eDRAM). The trend towards neuromorphic computing (in this context, just meaning that compute elements and memory elements are not on separate die) is also pushing towards a "universal" device and interconnect.

To keep on Moore's Law, BEOL metal pitch has to scale to meet the 50% area shrink (roughly 0.7X in linear dimensions per node). In fact, if the FEOL scaling slows down, then metal has to be squeezed even tighter to compensate. But the costs rise steeply due to multiple patterning. To get logic density up, there need to be scaling boosters such as super-vias, contact over active gate, buried power rails, or complementary FETs (where the P and N transistors are stacked).

One of the big challenges in interconnect is the liners of vias. To a first approximation, these have a fixed thickness, so as vias shrink, the fraction of the via cross-section that is copper goes down and the resistance goes up. Also, the bottom of the via has a layer of the barrier that the current has to flow through, and this becomes one of the limiting factors. Another challenge is to create a fully self-aligned via (that is, a via aligned to both the metal underneath and the metal on top, so that there is not a really bad worst-case resistance when there is misalignment).
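To see how quickly a fixed-thickness liner eats a shrinking via, here is a back-of-the-envelope Python sketch. The numbers are made up for illustration (a square via cross-section and a 2nm conformal liner), not figures from the talk:

```python
# Back-of-envelope: how a fixed-thickness barrier/liner eats into a
# shrinking via. Illustrative assumptions only, not imec data.

def copper_area_fraction(via_cd_nm: float, liner_nm: float = 2.0) -> float:
    """Fraction of a square via cross-section that is actually copper,
    given a conformal liner of fixed thickness on the sidewalls."""
    cu_cd = via_cd_nm - 2 * liner_nm  # liner on both sides
    if cu_cd <= 0:
        return 0.0  # via is all liner: effectively open
    return (cu_cd / via_cd_nm) ** 2

for cd in (40, 30, 20, 14, 10):
    f = copper_area_fraction(cd)
    # Resistance scales roughly with 1/(copper area), so the relative
    # penalty versus an ideal unlined via is about 1/f (the series
    # resistance of the barrier at the via bottom makes it worse still).
    penalty = (1 / f) if f else float("inf")
    print(f"{cd:>3} nm via: {f:5.1%} copper, ~{penalty:4.1f}x resistance")
```

At 40nm the copper still fills about 80% of the via; at 10nm it is down to about a third, which is why cobalt and ruthenium, which need no separate liner/barrier, start to win despite their higher bulk resistivity.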
To summarize:

- Memory and logic interconnect are converging
- Scaling boosters are required, along with aggressive pitch scaling (20nm and beyond)
- Conductors transition from Cu and W towards alternative metals (cobalt and ruthenium), which do not require their own liner/barrier
- New opportunities (at the system level, and perhaps with devices) will drive different interconnect strategies

Later, Cathryn Christiansen of GLOBALFOUNDRIES pointed out the three big trends in scaling: resistance goes up, time-dependent dielectric breakdown lifetimes go down, and electromigration lifetimes go down. All bad. Her summary is that we need to develop boosters for reliability, not just for performance and yield, to get a process with adequate product lifetimes.

Design Technology Co-Optimization

Andy Wei of TechInsights talked about DTCO beyond 5nm. He started by pointing out that all the talk of Moore's Law being dead seems to be exaggerated; scaling seems to be going faster than ever, "racing to the end of the CMOS roadmap". One particular race he discussed is the race to level 5 automation for cars. Other edge devices, not just cars, need to get smarter. As Andy put it, "multiple color/style offerings is typically a feature of a very mature and non-innovative technology."

In the past, the benefits of CMOS scaling were big improvements in all the metrics we care about (50% area, 40% power, 15% performance improvement, 35% cost reduction), meaning that if scaling is possible it has to be done, since it is impossible to compete with a competitor who is at the next node (this assumes high volume, though, since the ROI of any NRE only becomes positive at high enough volume; see the sketch at the end of this post). In that era, what counted as DTCO (which Andy says is a fancy name for good engineering practice) was backroom meetings between the lithographers and the design manual (DRC decks, SPICE decks) team. The designers would basically keep the same architecture and adjust from one generation to the next. The litho teams didn't have much of a clue about cell design. That has changed.

One key decision about any process is the balance between FEOL and BEOL, which involves looking at actual placed-and-routed libraries (and memory scaling and analog scaling) to evaluate alternatives. Features of the process, like buried power rails, interact heavily with cell design. The number of tracks in a library interacts with the number of fins and with reliability considerations.

A big barrier is the 40nm pitch barrier, where the current generation of processes is. LELE can still print reasonable variable pitch, and SADP can still print reasonable variable pitch for power rails and the power grid. Beyond this, design has to get more restrictive and/or EDA gets a lot more complex. EUV will (eventually) help somewhat, but it is already too late. So a foundry 5nm process needs to look like this:

- Bi-directional EUV usage is not scalable without higher numerical aperture
- Need SADP/SAQP line/cut and SADP/SAQP trench/block with multi-LE cut and via
- Continue fin SAQP and depopulation
- Gate SADP will slip to two cut masks
- SAQP in M2 and Mx means innovations in power rails
- Avoid quadruple-patterned vias
- Allow later EUV replacement of multi-LE cuts and vias

And after 5nm? All the buzzwords are here: III-V materials (old school), TFETs (underpowered), carbon nanotubes, directed self-assembly (DSA), quantum computing.

Andy's summary:

- DTCO is good engineering practice and, on silicon, it translates to balancing performance/power of transistors and interconnect, area efficiency, cost of the technology, and time to market
- Some amazing technologies are currently being squeezed out in 14nm and 10nm, with 7nm right around the corner, and 5nm based on nanowire technology
- Sub-5nm must fit within the CMOS manufacturing and design infrastructure; DTCO will assure it is not revolutionary
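Coming back to Andy's point that node migration only pays off at volume: a toy cost model makes the crossover explicit. All the numbers here are invented for illustration (they are not his figures), but the shape of the argument is the same.

```python
# Toy model of the volume argument: the next node only wins once its
# much larger NRE (masks, design, IP ports) is amortized over enough
# units. All numbers are made-up placeholders.

def unit_cost(nre_dollars: float, cost_per_die: float, volume: int) -> float:
    """Effective cost per die once NRE is amortized over the volume."""
    return cost_per_die + nre_dollars / volume

OLD_NODE = {"nre": 5e6, "die": 10.00}   # cheap to start, pricier per die
NEW_NODE = {"nre": 30e6, "die": 6.50}   # ~35% cheaper die, much higher NRE

for volume in (1e5, 1e6, 1e7, 1e8):
    old = unit_cost(OLD_NODE["nre"], OLD_NODE["die"], int(volume))
    new = unit_cost(NEW_NODE["nre"], NEW_NODE["die"], int(volume))
    winner = "new node" if new < old else "old node"
    print(f"volume {volume:>12,.0f}: old ${old:7.2f}  new ${new:7.2f}  -> {winner}")
```

With these placeholder numbers the new node loses badly at 100K units and wins decisively at 100M, which is exactly why high-volume products cannot afford to skip a node that their competitors take.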
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.