The Designer's Guide Community Forum
https://designers-guide.org/forum/YaBB.pl
Design >> Analog Design >> Design flow for high yield
https://designers-guide.org/forum/YaBB.pl?num=1361286950

Message started by weber8722 on Feb 19th, 2013, 7:15am

Title: Design flow for high yield
Post by weber8722 on Feb 19th, 2013, 7:15am

Hi,

A customer pointed me to this nice book chapter: http://www.edn.com/ContentEETimes/Documents/Bailey/Variation-aware%20Ch4b.pdf

A fairly standard flow for analog/RF/MS design is to check the circuit at PVT corners and for statistical behavior (using MC). Of course, doing a full MC run at each PVT corner can be very time-consuming, so as a first step, to get an overall performance spread, I usually combine the PVT worst case with a +-3-sigma spread obtained from an MC run (e.g. 200 runs, mismatch only) - for each performance. For example:

The Voffset spec is 10mV.
So I run MC mismatch at typical and pick the worst sample as a statistical corner (giving maybe +5mV). Then I really run simulations on this sample (plus the nominal one without mismatch) across PVT (maybe combined with MC process) and get e.g. a simulated worst-case Voffset of 9mV - including mismatch and corners. If the MC count is not too small and the picked MC sample is close to the 3-sigma point, then the obtained 9mV should also be close to the 3-sigma point, so overall I am in spec on Voffset with 3-sigma yield.
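To make this concrete, here is a minimal Python sketch of the "pick the worst MC sample as statistical corner" step (the Voffset numbers below are made up; in practice they come from the simulator's mismatch-only MC output):

    import numpy as np

    rng = np.random.default_rng(1)

    # hypothetical Voffset results of a 200-sample mismatch-only MC run at typical PVT [V]
    voffset = rng.normal(loc=0.0, scale=1.7e-3, size=200)

    spec_limit = 10e-3                      # |Voffset| spec: 10 mV
    worst_idx = np.argmax(np.abs(voffset))  # sample with the largest |Voffset|
    mu, sigma = voffset.mean(), voffset.std(ddof=1)
    sigma_level = (voffset[worst_idx] - mu) / sigma

    print(f"worst sample #{worst_idx}: {voffset[worst_idx]*1e3:+.2f} mV "
          f"(about {sigma_level:+.2f} sigma from the MC mean)")
    print(f"margin to the 10 mV spec: {(spec_limit - abs(voffset[worst_idx]))*1e3:.2f} mV")
    # this sample's mismatch parameter set would then be re-simulated across the
    # PVT corners, together with the nominal (no-mismatch) case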

However, this book gives a benchmark (without any details, unfortunately) indicating that "my" algorithm FAILS very often - and their "new" algorithm is much better. I wonder why? Of course there could be special cases like non-Gaussian distributions, or extremely large supply or temperature ranges, but does anybody have real examples? The book also mentions that many things become more difficult if we have many specs, but isn't that a big advantage of plain MC, compared to all kinds of sensitivity-based, trickier algorithms?

What do you think?

Bye Stephan

Title: Re: Design flow for high yield
Post by Frank Wiedmann on Feb 20th, 2013, 4:35am

I guess the problem with your approach is that you can't be sure to hit a 3-sigma point for any given specification with a relatively low number of Monte-Carlo simulations (as shown by the benchmark). The corner extraction method proposed in the book seems to be much more efficient in this respect. By the way, some more material from this book is available at http://www.edn.com/electronics-blogs/practical-chip-design/4405439/Book-excerpt--Variation-Aware-Design-of-Custom-Integrated-Circuits---Final.

Title: Re: Design flow for high yield
Post by Trent McConaghy on Feb 21st, 2013, 1:54pm

Hello Stephan and Frank,

Thank you for your interest in 3-sigma corner extraction, described in the book excerpt linked above. I am the lead author of that book (http://www.amazon.com/Variation-Aware-Design-Custom-Integrated-Circuits/dp/146142268X ).

Very often, the aim is to meet specs, or get acceptable performances, despite process variation. Implicitly, the "despite process variation" means that overall yield should end up at 3 sigma. Many high-level flows might address this aim. One flow is to use PVT corners. However, it’s not accurate because it ignores mismatch variation, and has a poor model of global variation. Designers could traditionally get away with this flow because mismatch was smaller in older processes. Another high level flow is to run Monte Carlo (MC) on each candidate design, which is accurate but too slow. Another flow is to use sensitivity analysis, but that scales poorly to large circuits and has accuracy issues.

What we really want is a high-level flow that is simultaneously fast, accurate, and scalable. Imagine if we had accurate statistical corners, such that when the circuit meets specs on those corners, overall yield is 3 sigma (e.g. as measured by MC). The high-level flow would consist of (1) extracting the corners (2) designing on the corners (3) verifying, and if needed, re-looping for a final tuning. This is a “sigma-driven corners flow”.

In this flow, each step needs to be fast, accurate, and scalable:

  • Step (1) Extracting corners. This can be fast, accurate, and scalable; but it's easy to get wrong. I'll describe more below.
  • Step (2) Designing on corners. It's fast because the designer only needs to simulate a handful of corners for each new candidate design (new sizes & biases). It's accurate as long as the corners are accurate. It's scalable because being corner-based is independent of the size of the circuit.
  • Step (3) Verifying. This step is important in case the corners lose some accuracy as the design is changed. It's a "safety" check. In this step, if the circuit meets 3-sigma overall yield with confidence, great! If not, then one can extract new corners, tweak the design, and do a final verify.

Let's discuss step (1) of the “sigma-driven corners” flow more. Consider one approach: draw some MC samples, then pick the worst case. If you solve on that worst-case, will it give a 3-sigma (99.73%) yield? It depends "if the picked MC sample is close to the 3-sigma point" as Stephan described it. He’s right (in the case of 1 spec).

So, how close can those MC samples get?

  • Let’s say you took 30 MC samples, and picked the worst case. Since picking worst case is different from finding statistical bounds, it could be way off, as Fig 4.17 of the book excerpt shows. For example, Fig. 4.17's far left box plot shows that 50% of the extracted corners were in the range between 80.0% and 97.5%. That is, if you changed the circuit to meet spec on such a corner, your yield might only come out at 80.0% or 97.5%; even though you’d wanted yield to come out at 99.73% (for 3-sigma). So it’s too inaccurate for 30 MC samples. Fig 4.17 shows how it’s inaccurate even with 200 MC samples.
  • Let’s say you took 1,000,000 MC samples, giving many samples near 3-sigma, and picked the sample closest to the 3-sigma point. This will give a fairly accurate corner (if 1 output, though not >1, more on this later). But of course it’s too slow.
  • The minimum number of samples to start to get close to 3-sigma is 1000-2000. However, that’s still a lot of simulations just to get corners. And what about when you have >1 spec? The only way that a 3-sigma corner for each spec will lead to an overall yield of 3 sigma is if exactly the same MC samples fail for each of the outputs. For example, solving on two 3-sigma corners with different failing MC samples per spec leads to an overall yield of (0.9973 * 0.9973) = 0.99461 = 99.46% yield = 2.783 sigma. With five such corners, it leads to an overall yield of 0.9973^5 = 98.66% yield = 2.47 sigma (see the worked example after this list).
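The sigma/yield arithmetic in the last bullet is easy to reproduce; here is a short Python check (plain scipy, two-sided sigma convention, assuming the failing samples of the specs are statistically independent):

    from scipy.stats import norm

    def sigma_to_yield(sigma):
        """Two-sided yield for a given sigma level."""
        return norm.cdf(sigma) - norm.cdf(-sigma)

    def yield_to_sigma(y):
        """Two-sided sigma level for a given yield."""
        return norm.ppf(0.5 * (1.0 + y))

    y_per_spec = sigma_to_yield(3.0)              # 0.9973 per spec
    for n_specs in (1, 2, 5):
        y_total = y_per_spec ** n_specs           # different samples fail per spec
        print(f"{n_specs} spec(s) at 3 sigma each -> overall yield "
              f"{y_total:.4%} = {yield_to_sigma(y_total):.2f} sigma")
    # 2 specs -> ~99.46% (~2.78 sigma), 5 specs -> ~98.66% (~2.47 sigma)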

The cool thing is, there is a way to extract statistical corners such that when you solve on them, the circuit ends up with 3-sigma overall yield. The approach leverages MC samples, but with just the right bit of additional computation afterwards to target the performance distributions’ statistical bounds. It works on >1 outputs at once, on non-Gaussian PDFs, and includes environmental conditions. It typically uses about 100 simulations to get accurate corners, even on extremely large circuits. Pages 82-83 of the book excerpt give details on how it’s done, and why it’s fast, accurate, and scalable. Pages 83-85 give benchmark setup details and benchmark result numbers.

This corner extraction approach fits into the context of the “sigma-driven corners” flow described earlier, which enables engineers to design for high performance and yield, despite large process variation. This approach has been successfully used in industrial design settings for several years now. Here’s an example from a couple years ago: http://www.deepchip.com/items/0492-05.html. Due to that success, the “sigma-driven corners” flow has become a standard flow for many leading-edge teams, and adoption continues to grow.

Thanks again for your interest in the 3-sigma corner extraction approach. I’d be happy to address any further questions or concerns.

Kind regards,

Trent McConaghy
Co-founder & CTO
Solido Design Automation

Title: Re: Design flow for high yield
Post by weber8722 on Feb 25th, 2013, 12:59am

Hi,

thanks for the fast replies. Let me state clearly: of course PVT analysis is not enough. That is why I always also add statistical corners, to see the full picture!

You claim this:
....as Fig 4.17 of the book excerpt shows. For example, Fig. 4.17's far left box plot shows that 50% of the extracted corners were in the range between 80.0% and 97.5%. That is, if you changed the circuit to meet spec on such a corner, your yield might only come out at 80.0% or 97.5%; even though you’d wanted yield to come out at 99.73% (for 3-sigma). So it’s too inaccurate for 30 MC samples. Fig 4.17 shows how it’s inaccurate even with 200 MC samples.


My comment: If you are at 50% and want to go to 99.73%, you probably need to do it in multiple steps! No designer or algorithm in the world can do it in one step! The first step is to analyse: check whether your circuit is sensitive to T, Vdd, load, process, mismatch, etc., and by how much. Then improve the worst contributors, e.g.:

if Vdd, PSRR, ... is a problem, improve e.g. your current sources, add cascodes, etc.
if mismatch is a problem, make critical elements larger
if TC is a problem, change the bias scheme, etc.

This way you will end up at maybe 90% yield - which is easy to prove with a small MC run. Next look at worst-case deterministic AND statistical corners, and tweak your circuit to get maybe 95% yield.
Then run MC with maybe 50 runs (of course you can use 150 if simulation time is low anyway) to get mean and sigma reasonably accurate, within a few percent. There is usually no need to know them much more accurately, because you would be relying far too much on the modeling, including the statistical modeling!
Then pick e.g. some worst-case samples, like two giving -9mV and +10mV. There is no real problem if these values are at 3 sigma or 2.8 sigma or 3.2 sigma!! And if you have multiple goals - as usual - it makes the verification and design even more robust.
All in all, taking multiple steps also means that the designer learns a lot about the technology and his design - a great side effect!!

You also claim:
Let’s say you took 1,000,000 MC samples, giving many samples near 3-sigma, and picked the sample closest to the 3-sigma point. This will give a fairly accurate corner (if 1 output, though not >1, more on this later). But of course it’s too slow.

Why use 1M samples for 3-sigma at all?? Usually you already get samples between 2.7 and 3.3 sigma from much shorter MC runs, like 50-100!! Even if you only got a 2.6-sigma sample, you could either scale up the excursions of the statistical variables or just pick it and add some margin (like 0.4 sigma) to the spec.

So all in all I would say a slightly improved MC, like low-discrepancy sampling, would help most, and is numerically much less risky than most other faster algorithms I have seen (although some algorithms nicely collect ideas/steps a designer or factory test engineer would apply manually, like stopping the analysis of a sample if quick-to-measure specs are already violated!!). And existing flows are quite practical already, so I expect no miracles (besides getting faster computers).

You also claim:
The minimum number of samples to start to get close to 3-sigma is 1000-2000. However, that’s still a lot of simulations just to get corners. And what about when you have >1 spec?

One huge advantage of MC is that it keeps full speed (although it is not fast) if we have multiple specs; MC even becomes slightly faster in the multiple-spec case (less netlisting & randomization overhead)! BUT the usual "fast" high-yield estimation algorithms often need near-linearly more time if they have to treat N specs!
Also, MC is able to give much more design insight, like offering correlations, QQ plots, etc. - just looking at sigma and yield is not enough! Yield is - if you think about it - a rather crude metric! For instance, it does not take into account whether a spec is missed by 1mV or by 100mV!!

BTW, what I do not like so much in the Solido book is the benchmark. They use quite trivial blocks like an opamp, a bandgap and "others" - without any details. I wonder what would happen in more non-trivial cases like a flash ADC or a segmented DAC? On these, some high-yield estimation algorithms work quite badly, because the number of highly relevant statistical variables is very large (like >2000 even for a 6-bit flash-ADC front-end in a simpler, older PDK).

Bye Stephan

Title: Re: Design flow for high yield
Post by weber8722 on Feb 25th, 2013, 8:48am

Hi Trent,

What I wonder about most is this statement in the book:

"MC always needs 1,400–5,000 samples for accurate 3-sigma verification."


I think this is because of making far too limited use of your MC results!!

If you run e.g. an MC analysis with only 100 samples, find the yield is 100% (no fails), and you have e.g. a mean of 0V and a sigma of 10mV with a spec limit of 60mV, you would have an estimated Cpk of 2 (=6 sigma). Of course there is some uncertainty (in the mean, sigma, and thus Cpk), but not such a huge uncertainty that being below 3 sigma is realistic at all!!
If you repeat the MC-100 run several times, you get an estimate of the accuracy of the Cpk estimation, and it will clearly be much smaller than +-1 (or +-3 sigma).
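Here is a small Python experiment along these lines (Gaussian data assumed, numbers as in the example above): it repeats the 100-sample run many times and looks at the scatter of the estimated Cpk.

    import numpy as np

    rng = np.random.default_rng(0)
    usl, mu_true, sigma_true = 60e-3, 0.0, 10e-3   # upper spec limit, true mean/sigma
    n_samples, n_repeats = 100, 1000

    cpk = np.empty(n_repeats)
    for i in range(n_repeats):
        x = rng.normal(mu_true, sigma_true, n_samples)
        cpk[i] = (usl - x.mean()) / (3.0 * x.std(ddof=1))

    lo, hi = np.percentile(cpk, [2.5, 97.5])
    print(f"Cpk from 100 samples: median {np.median(cpk):.2f}, "
          f"95% of runs within [{lo:.2f}, {hi:.2f}]")
    # even at the low end of this interval the estimated Cpk stays far above 1
    # (the 3-sigma level) - provided the Gaussian assumption holds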

This is a simple example, showing that you do not need so many samples for a very frequently needed 3-sigma yield check.

If I looked only at the yield (pass/fail) results, I would indeed need many more samples, but it would be unwise not to look at the whole picture.

I also wonder why yield is treated as such a holy grail at all. If you talk with the customer, you will probably find that he can relax the spec a bit anyway.
Or you have to design a sub-block whose specs can easily be traded against the specs of other blocks. Many designers start designs without clear specs and testbenches anyway. So better strive for "no surprises", i.e. a robust design - know and control your means and sigmas.

Bye Stephan

Title: Re: Design flow for high yield
Post by loose-electron on Feb 25th, 2013, 2:09pm

Before you go too crazy with all the statistical analysis, ask yourself a few simple questions:

How dependent are you on

absolute values
matching
process variance
power sensitivity
Capacitance variance (all levels)

Maybe I have been in this business too long but when I do chip reviews, the things that kill yield are usually pretty obvious.

Title: Re: Design flow for high yield
Post by Lex on Feb 26th, 2013, 1:19am

Hey Stephan

Coming back to your initial question, I think, just like Frank, that you need some confidence that you have found the correct 3-sigma point. And actually the box plot in Figure 4.17 shows what practically happens for 30 repetitions of an MC simulation with N = 30, 65, 100, 200 samples. Cold numbers that speak for themselves, I'd say.

Somehow I think you are aware of this, since you wrote "If the MC count is not too small and the picked MC sample is close to the 3-sigma point", so there is some inspection by you as the designer which will probably up that yield a bit - but hey, it is difficult to put a reliable number on that. (Actually this is also discussed in the book.)

I'd say try to replicate Fig. 4.17 yourself: set up a simulation where you extract the WC quite a few times, get the spread, and see for yourself how that compares with the results of a large-sample MC analysis.

Title: Re: Design flow for high yield
Post by weber8722 on Feb 26th, 2013, 6:12am


Lex wrote on Feb 26th, 2013, 1:19am:
.... figure 4.17 shows what practically happens for a run of 30x(MC simulation of N=30,65,100,200 samples). Cold numbers that speak for themselves, I'd say.



Yes, I will do these experiments! But remember: Fig. 4.17 only takes the yield into account - which is a poor use of the valuable simulation data. Also, 30 samples are indeed not much, but with 65 it already looks promising (in this case). And if you use not (slow) random MC but an enhanced MC method like LDS, you will be quite happy. I hope to present some data/pictures soon.

Bye Stephan

Title: Re: Design flow for high yield
Post by weber8722 on Feb 27th, 2013, 5:58am


Frank Wiedmann wrote on Feb 20th, 2013, 4:35am:
I guess the problem with your approach is that you can't be sure to hit a 3-sigma point for any given specification with a relatively low number of Monte-Carlo simulations (as shown by the benchmark).



Frank, you are fully right, and it brings us to one key problem of what a designer can do :): If you cannot be sure, you need to add a margin, e.g. taking not a 3-sigma sample to prove 3-sigma yield (at a certain confidence level), but a 4-sigma sample!
Of course this "magic" margin depends on how big your MC sample count was! For 100 samples that margin should be approx. 1 sigma. For 25 samples it would be approx. 2 sigma, which is often a bit too much to over-design by. However, for 400 MC samples it would be a margin of approx. 0.5 sigma, so very practical, especially if you are aware that the transistor models and their statistical modeling are by far not 3-sigma accurate under all conditions and for all characteristics.
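A quick way to get a feeling for these margins is a small simulation: for a Gaussian output and a single spec, check where the worst of N MC samples typically lands in sigma terms (the gap to 3 sigma hints at the needed margin; statistical confidence adds on top of that):

    import numpy as np

    rng = np.random.default_rng(42)
    n_repeats = 20000

    for n in (25, 100, 400):
        worst = np.max(np.abs(rng.standard_normal((n_repeats, n))), axis=1)
        p10, p50, p90 = np.percentile(worst, [10, 50, 90])
        print(f"N={n:4d}: worst |sample| typically at {p50:.2f} sigma "
              f"(10%..90% range {p10:.2f}..{p90:.2f})")
    # roughly: N=25 -> ~2.2 sigma, N=100 -> ~2.7 sigma, N=400 -> ~3.1 sigma, so the
    # shorter the run, the larger the "magic margin" has to be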

Bye Stephan

Title: Re: Design flow for high yield
Post by weber8722 on Feb 27th, 2013, 6:01am


loose-electron wrote on Feb 25th, 2013, 2:09pm:
Before you go too crazy with all the statistical analysis, ask yourself a few simple questions:

How dependent are you on

absolute values
matching
process variance
power sensitivity
Capacitance variance (all levels)


Fully right; before putting everything together in a big PVT+MC analysis, it is best to check each effect individually and focus on the most critical one - it might be temperature, supply, mismatch, absolute process values, etc.

Title: Re: Design flow for high yield
Post by Frank Wiedmann on Feb 27th, 2013, 6:49am


weber8722 wrote on Feb 27th, 2013, 5:58am:
Frank, you are fully right, and it brings us to one key problem of what a designer can do :): If you cannot be sure, you need to add a margin, e.g. taking not a 3-sigma sample to prove 3-sigma yield (at a certain confidence level), but a 4-sigma sample!
Of course this "magic" margin depends on how big your MC sample count was! For 100 samples that margin should be approx. 1 sigma. For 25 samples it would be approx. 2 sigma, which is often a bit too much to over-design by. However, for 400 MC samples it would be a margin of approx. 0.5 sigma, so very practical, especially if you are aware that the transistor models and their statistical modeling are by far not 3-sigma accurate under all conditions and for all characteristics.

I'm not sure how you got your numbers, but in the book, there is Fig. 4.27 that shows the average number of samples required to verify a design to 3 sigma with 95% statistical confidence as a function of the actual sigma level of the design. According to this graph, if your design has an actual sigma level of 3.7, you need 1500 samples on average to verify it to 3 sigma.
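A rough way to reproduce the spirit of that figure (not its exact numbers) is to simulate the pass/fail verification with a Clopper-Pearson lower confidence bound on the yield; the sketch below assumes a simple sequential stopping rule and a single pass/fail criterion:

    import numpy as np
    from scipy.stats import beta, norm

    def yield_lower_bound(n_pass, n_fail, conf=0.95):
        """One-sided lower Clopper-Pearson bound on the pass probability."""
        if n_fail == 0:
            return (1.0 - conf) ** (1.0 / n_pass)   # closed form for zero failures
        return beta.ppf(1.0 - conf, n_pass, n_fail + 1)

    target_yield = norm.cdf(3.0) - norm.cdf(-3.0)   # 99.73% (two-sided 3 sigma)
    rng = np.random.default_rng(7)

    for true_sigma in (3.3, 3.7, 4.5):
        true_yield = norm.cdf(true_sigma) - norm.cdf(-true_sigma)
        counts = []
        for _ in range(100):                        # repeat the experiment
            n_pass = n_fail = 0
            while yield_lower_bound(max(n_pass, 1), n_fail) < target_yield:
                if rng.random() < true_yield:
                    n_pass += 1
                else:
                    n_fail += 1
                if n_pass + n_fail > 20000:         # safety cap
                    break
            counts.append(n_pass + n_fail)
        print(f"true sigma {true_sigma}: ~{int(np.mean(counts))} samples on average "
              f"to verify 3 sigma at 95% confidence")

With zero failures the bound reaches 99.73% after roughly 1100 samples; every failure observed along the way pushes the required count up, which is how runs of a few thousand samples come about for designs that are only slightly better than 3 sigma.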

Title: Re: Design flow for high yield
Post by Trent McConaghy on Feb 27th, 2013, 3:17pm

Hi Lex and Jerry,

Thanks for your comments.

> [Lex wrote]  the box plot in figure 4.17 shows what practically happens ... Cold numbers that speak for themselves, I'd say.
Agreed:)

> [Jerry wrote] Before you go too crazy with all the statistical analysis, ask ... How dependent are you on: absolute values, matching, process variance, power sensitivity, capacitance variance
> Maybe I have been in this business too long but when I do chip reviews, the things that kill yield are usually pretty obvious.

Agreed. Each of those questions is about statistical process variation. If you know for sure that FF/SS corners are adequate, then statistical analysis isn’t needed.

No one wants to "go crazy" with statistical analysis. The ideal is to keep designing as before, with corners, with minimal statistics. What's cool is that this is possible, with the sigma-driven corners flow: push a button, and get "true" 3-sigma corners that represent the circuit's performance bounds. Then the designer can focus his valuable time designing against these true corners, leveraging all his experience and insight. At the end, push a button to verify that the circuit is ok for yield (and if needed, do a quick re-loop). The first and the last steps are as short and painless as possible, so that the designer can spend his time on what matters - the design itself.

A good MC-based tool will report the effect of each global process variable, local process variable, and environmental variable with respect to each output. This can come for free from the MC run.

Kind regards,

Trent

Title: Re: Design flow for high yield
Post by Trent McConaghy on Feb 27th, 2013, 3:19pm

Hi Stephan,

Thanks for your comments.

> If you are at 50% and want to go to 99.73%, you probably need to do it in multiple steps! No designer or algorithm in the world can do it in one step! The first step is to analyse: check whether your circuit is sensitive to T, Vdd, load, process, mismatch, etc., and by how much. Then improve the worst contributors, e.g.:
> if Vdd, PSRR, ... is a problem, improve e.g. your current sources, add cascodes, etc.
> if mismatch is a problem, make critical elements larger ..

Agreed. A sigma-driven corners flow allows for re-loops if the target yield is not hit in the first round. The MC tool will give insight into impacts (see above), and the designer can leverage his experience and intuition to improve the design (see above).

> Check if your circuit is sensitive to T, Vdd, load, process, mismatch, etc. and how much ... Then improve the worst contributors ... This way you will end up at maybe 90% yield - which is easy to prove with a small MC run. Next look at worst-case deterministic AND statistical corners, and tweak your circuit to get maybe 95% yield. Then run MC ... get mean and sigma ... then pick e.g. some worst-case samples

One could certainly use such an ad hoc method. But with the right MC tool, the designer can get (a) straight to designing on appropriate corners, (b) with impacts for free, and (c) a repeatable, non-ad-hoc flow that both beginners and experts can use.

> Usually you get samples between 2.7 to 3.3 sigma already from a much much shorter MC runs, like 50-100!!

Here’s the math. A (one-tailed) sigma of 3.0 is a probability of failure of 1.35e-3 (yield of 99.86%). This means you need 1/1.35e-3 = 740 MC samples to get a single failure, on average. For a reasonably confident yield estimate, you’ll need >5 failures. In the book, when I say 1400-5000 samples, that is based on verifying 3-sigma yield to 95% confidence (under a pass/fail distribution). The actual number of samples depends on how close the circuit is actually to 3.0 sigma. (Section 4.6.1 of the book. Thanks to Frank for pointing this one out already.)

> If you run e.g. an MC analysis with 100 samples only, find yield is 100% (no fails) and you have e.g. a mean of 0V and a sigma of 10mV and your spec limit is 60mV, you would have an estimated Cpk of 2 (=6sigma). Of course, there is some uncertainty (on mean, sigma, thus Cpk), but not a so huge uncertainty, that being below 3 sigma is realistic at all!!
> If you would repeat the MC-100 runs several times, you can get an estimation on the accuracy of the Cpk estimation, but it will clearly much smaller than +1 (or +3sigma).

If one estimates the distribution from mean and standard deviation (such as for Cpk), then the implicit assumption is that the distribution is Gaussian. If you are willing to assume this, then I agree with you, it is possible to have fewer samples. But if the distribution is not Gaussian (e.g. long-tailed, bimodal) then you will be drawing false conclusions about your circuit.  Nonlinearities / non-idealities in circuits lead to non-Gaussian distributions. The book gives examples of non-Gaussian distributions in Figs. 4.1 (VCO of PLL), 4.29 (folded cascode amp with gain boosting), 5.1 (bitcell), 5.2 (sense amp), 5.32 (flip flop), 5.40 (flip flop), and 5.45 (DRAM bit slice).

> why looking to the yield is such a holy grail at all? ... can relax the spec a bit anyway.
There is always a tradeoff between yield and specs, of course. When process variation matters, you can't design for one and ignore the other. Approaching the problem in a true-corners fashion enables the designer to deal with variation, without “going crazy” on statistical analysis.

> One huge advantage of MC is that it keeps full speed [if] multiple specs...
> MC is able to give much more design insights, like offering correlations, QQ plots, etc. ...

Thanks for arguing the case for MC. I agree! MC scales very well, because its accuracy is independent of dimensionality. As I discuss earlier, and in detail in the book, MC plays a key role in corner extraction (but not the only role), statistical verification, and gaining insight.

> an enhanced MC method like LDS you will be quite happy... I hope to present some data/pictures soon.
Low-discrepancy sampling (LDS) methods can help, but they are not a panacea.

If one wants 3-sigma corners using the worst-case sample, LDS does not somehow make picking worst-case a statistical decision. Therefore a worst-case approach can still be way off.

For 3-sigma yield verification, the benefits of LDS are modest. Here's why. Recall that 3 sigma has 1 failure in 740, on average. In a "perfect" sampler you would get exactly one failure on every run of 740 samples; in "less-perfect" samplers you might get 0, 1, 2, or more failures on each run. You still need 1400-5000 samples to start to get enough failures to have high statistical confidence. (Sec 4.5.9.)

Section 4.5 and Appendix 4.B of the book describe LDS in detail. Since off-the-shelf LDS approaches scale poorly beyond 10-50 dimensions, we developed a scalable LDS technique. Sections 4.5.7 and 4.5.8 show benchmarks with thousands of variables. Since 2009, LDS has been the default way that MC samples are drawn in Solido tools. (Pseudo-random sampling is still available, of course.)

Kind regards,

Trent


Title: Re: Design flow for high yield
Post by weber8722 on Apr 11th, 2013, 5:17am

Hi Trent,

thanks for your detailed response!!! :) I have to admit, I have learned a lot about statistics in the last few months.

On getting 3-sigma samples, I was indeed a bit too optimistic with 100 samples. But with n=512 the mean of the spread (max-min) is really almost exactly 6.12*sigma, i.e. more than +-3 (often you would even get one sample at +3 sigma and another at -3 - or beyond), so the chance to get at least a 2.7-sigma sample is really quite high even for e.g. 128 samples, like >=50%. Although there is indeed no guarantee!
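A quick numpy check of these numbers (Gaussian data assumed):

    import numpy as np

    rng = np.random.default_rng(3)
    n_repeats = 20000

    for n in (128, 512):
        x = rng.standard_normal((n_repeats, n))
        mean_range = (x.max(axis=1) - x.min(axis=1)).mean()
        p_27 = (np.abs(x).max(axis=1) >= 2.7).mean()
        print(f"n={n:4d}: mean spread (max-min) = {mean_range:.2f} sigma, "
              f"P(at least one sample beyond 2.7 sigma) = {p_27:.0%}")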

On yield estimation: that is indeed a problem of the counting method Y = npass/ntotal - you need failing samples!! However, if the distribution is near-Gaussian, then of course the Cpk is a better indicator, i.e. it gives you a feeling for the yield earlier. Also, confidence intervals for Cpk are well known, and tighter than for the counting method, simply because it exploits more information than just pass vs. fail.

On Cpk and non-Gaussian distributions: indeed the Cpk can be misleading - it can be too optimistic or too pessimistic - but it is quite easy to correct the Cpk by taking the skew and kurtosis of the distribution into account. I wonder why the scientific papers on this are not better known to engineers. With a full data analysis you can really get a lot out of a moderate MC run for your design.
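One possible correction of this kind (just a sketch, not necessarily the method from the papers I mean): estimate the 0.135% / 50% / 99.865% quantiles from the sample skew and excess kurtosis via a Cornish-Fisher expansion and form a percentile-based (Clements-style) Cpk from them.

    import numpy as np
    from scipy.stats import norm, skew, kurtosis

    def cornish_fisher_quantile(x, q):
        """Approximate the q-quantile of the data's distribution from its first
        four sample moments (Cornish-Fisher expansion)."""
        z = norm.ppf(q)
        s = skew(x)
        k = kurtosis(x)                  # excess kurtosis
        w = (z + (z**2 - 1) * s / 6
               + (z**3 - 3*z) * k / 24
               - (2*z**3 - 5*z) * s**2 / 36)
        return x.mean() + x.std(ddof=1) * w

    def cpk_percentile(x, lsl, usl):
        """Clements-style Cpk using Cornish-Fisher quantile estimates."""
        p_lo = cornish_fisher_quantile(x, 0.00135)
        med  = cornish_fisher_quantile(x, 0.5)
        p_hi = cornish_fisher_quantile(x, 0.99865)
        return min((usl - med) / (p_hi - med), (med - lsl) / (med - p_lo))

    # example on a skewed, hypothetical MC output (e.g. a settling time in seconds)
    rng = np.random.default_rng(5)
    x = 1e-3 * rng.lognormal(mean=0.0, sigma=0.4, size=300)
    print("classic Cpk   :", (4e-3 - x.mean()) / (3 * x.std(ddof=1)))
    print("corrected Cpk :", cpk_percentile(x, lsl=0.0, usl=4e-3))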

On LHS+LDS: I still have to gain experience; sometimes they give a good speedup, sometimes only 1.5X or so - hard to predict and to trust. In recent papers I have seen that some authors use LDS/LHS with variable clustering to address the dimension problem quite nicely.

In Cadence Virtuoso, LDS is announced for mmsim12 and ic616 - I need to wait some weeks.

Bye Stephan

Title: Re: Design flow for high yield
Post by Lex on Apr 12th, 2013, 2:58am

Hey Stephan

Intrigued by this thread, I did a 15,000-sample MC run on an LDO circuit of mine (the number 15,000 was chosen because it fit into one complete night of simulation).

What I observed was that most of my performance parameters were not Gaussian. Only the current consumption and the offset voltage were linear in a quantile plot and hence Gaussian (for the chosen sample size). But, for example, startup time, PSRR, output noise, UGF, DC loop gain and phase margin were not linear (in a quantile plot) and hence not Gaussian.
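For reference, this kind of quantile-plot check takes only a few lines in Python (the data below are only placeholders for exported MC results):

    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(9)
    outputs = {
        "offset_voltage": rng.normal(0.0, 1e-3, 15000),           # roughly Gaussian
        "startup_time":   1e-6 * rng.lognormal(0.0, 0.5, 15000),  # skewed, non-Gaussian
    }

    fig, axes = plt.subplots(1, len(outputs), figsize=(8, 3))
    for ax, (name, x) in zip(axes, outputs.items()):
        (osm, osr), (slope, intercept, r) = stats.probplot(x, dist="norm", plot=ax)
        ax.set_title(f"{name}  (r = {r:.4f})")  # r close to 1 -> nearly a straight line
    fig.tight_layout()
    plt.show()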

And all that fiddling around with a Gaussian distribution is very nice, but it cannot handle multimodal behavior of your circuit, and is therefore insufficient for high-yield analysis.

Title: Re: Design flow for high yield
Post by love_analog on Jul 20th, 2014, 8:19am

Hi Stephan/All
Re-activating this thread since I am still confused. Sorry!

This is how I do "practical" design. Let's say our offset spec is 10mV.
I will take the worst-case PVT corner and run MC on that. If you have designed circuits, you know what the offset depends upon, so you know what the worst-case corner is.
I will run as many MC samples as needed until my offset doesn't change appreciably. So for instance, I run 200 samples and see my max offset is 4mV at the slow corner. I run 400 samples and see a max offset of 6mV. I run 500 samples and see it at 6.2mV. I will basically say I am done, since I am far from the spec.
If I am close to the spec (say the spec was 6.5mV), I will increase device sizes to get some more margin. Engineering judgement (no science).

Now what is wrong with this approach ?

Title: Re: Design flow for high yield
Post by loose-electron on Jul 21st, 2014, 11:20am

One of the things I find interesting about this discussion is that everyone is focusing on statistical model distribution.

If you want good yields you need to deal with the extremes of your process and find the capability for the circuit architecture to compensate beyond those extremes and still work within desired specs.

When you fix those items within the circuit architecture (think gain margin, offset trimmers, noise margin, capacitors sized beyond the expected process extremes, etc.) you will have a design that is not dependent on process variance but has the necessary circuit elements to deal with it. It grows the chip a bit, but if you've got an ADC with poor yield sitting in the middle of some huge SOC device, growing the ADC a little is better than tossing the huge SOC away because you decided not to include offset alignment circuits, or sized your sampling capacitors on the edge of your kT/C requirements.

Think big picture.

If your ADC/DAC/PLL/Whatever is a small part of the whole chip, growing that small part so that you get good overall yield is a no brainer.

Yield on final silicon (high volume) is what it is all about.  


Title: Re: Design flow for high yield
Post by love_analog on Jul 22nd, 2014, 7:42pm

Jeff
Sorry. I don't understand your response.

We are trying to design an ADC (say) with high yield so you don't have to throw your SoC away.
Are you saying forget about sizing - just make it huge so that you don't have to worry about meeting spec?


Title: Re: Design flow for high yield
Post by carlgrace on Jul 22nd, 2014, 10:45pm

He's not saying "make it huge", he's saying put in enough design margin (where that can be making devices larger but also putting in "chicken bits" such as offset aligners, bias programmability and the like) that the chip will still work even with worst case process.

Like Jerry says, it is usually obvious where the danger zones are in a given design.  Add a bit of extra margin to those areas and sleep better.

Title: Re: Design flow for high yield
Post by loose-electron on Jul 24th, 2014, 6:55pm


carlgrace wrote on Jul 22nd, 2014, 10:45pm:
He's not saying "make it huge", he's saying put in enough design margin (where that can be making devices larger but also putting in "chicken bits" such as offset aligners, bias programmability and the like) that the chip will still work even with worst case process.

Like Jerry says, it is usually obvious where the danger zones are in a given design.  Add a bit of extra margin to those areas and sleep better.


Agreed!

Let's take the case of some big SOC where the ADC/DAC/PLL/whatever occupies 5% of the chip area and the other 95% of the chip is a big pile of digital Verilog.

2 things will kill this IC:

1 - defect density statistics in the digital parts.
2 - analog parts not operating within spec (noise, offset, whatever)

Defect density is a pure statistical silicon yield issue and you cannot do anything about it. (That's a foundry problem.)

Put design margin into the analog part in the form of noise margins in the design and architecture items to deal with gain and offset limitations.

At the end of the day, to put that design margin in, you have probably grown to 7% of the chip area and upped the current a little bit (think noise margin and impedance scaling).

But again, for the sake of argument, let's say your yield at final test goes from 70% to 95%, and that last 5% of loss is due to wafer defects and the chip being so big.

Who is Jeff? 8-)

Title: Re: Design flow for high yield
Post by Lex on Jul 29th, 2014, 7:53am

Adding circuitry not only increases area but also design time, verification time, debug time, etc. In general it costs money and time (and time can again be expressed in money).

To justify the cost of the added circuits, one should present figures for the increase in reliability. Everybody knows that doing a good job with alignment circuits etc. is likely to improve reliability, but the question is: by how much? Knowing those statistical distributions, we can actually quantify this and justify it.

E.g. the following discussion could be the result:
Designer: "With the alignment circuits, our reliability can go from 3.2 sigma to 4.6 sigma, but it costs us X1 amount of area, X2 amount of design time and X3 amount of debug time."
Manager: 3.2 to 4.6 sigma increases the number of good devices by X4, but costs me X5. Okay, after calculation that's a profit of X6. Sure, designer, go ahead, make me some nice alignment circuits =)
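The sigma-to-yield part of that calculation is straightforward; for example (the volume figure below is made up):

    from scipy.stats import norm

    def two_sided_yield(sigma):
        return norm.cdf(sigma) - norm.cdf(-sigma)

    volume = 1_000_000                   # hypothetical production volume
    for s in (3.2, 4.6):
        y = two_sided_yield(s)
        print(f"{s} sigma: yield {y:.5%}, about {volume * (1 - y):,.0f} "
              f"failing parts out of {volume:,}")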

Title: Re: Design flow for high yield
Post by loose-electron on Jul 30th, 2014, 1:40pm


Lex wrote on Jul 29th, 2014, 7:53am:
Adding circuitry not only increases area but also design time, verification time, debug time, etc. In general it costs money and time (and time can again be expressed in money).

To justify the cost of the added circuits, one should present figures for the increase in reliability. Everybody knows that doing a good job with alignment circuits etc. is likely to improve reliability, but the question is: by how much? Knowing those statistical distributions, we can actually quantify this and justify it.

E.g. the following discussion could be the result:
Designer: "With the alignment circuits, our reliability can go from 3.2 sigma to 4.6 sigma, but it costs us X1 amount of area, X2 amount of design time and X3 amount of debug time."
Manager: 3.2 to 4.6 sigma increases the number of good devices by X4, but costs me X5. Okay, after calculation that's a profit of X6. Sure, designer, go ahead, make me some nice alignment circuits =)


If you go that path the product will be obsolete before you are out the door with it.

Frequently the design cycle is happening before silicon actually exists. I was doing designs on 45nm when the actual existence of any real 45nm wafers was about 6 months in the future.

That's how this business works...

Title: Re: Design flow for high yield
Post by Lex on Aug 1st, 2014, 8:56am


loose-electron wrote on Jul 30th, 2014, 1:40pm:

Lex wrote on Jul 29th, 2014, 7:53am:
...


If you go that path the product will be obsolete before you are out the door with it.

Frequently the design cycle is happening before silicon actually exists. I was doing designs on 45nm when the actual existence of any real 45nm wafers was about 6 months in the future.

That's how this business works...


That argument is pretty weak. Models are never a fixed thing. Even 'old' PDKs get updates. It does not make statistical simulations irrelevant.

The point is that with this kind of software, quantification of the justification comes within reach. To me that's pretty powerful, especially when pushing the boundaries.

Title: Re: Design flow for high yield
Post by loose-electron on Aug 1st, 2014, 5:44pm

How good are these statistical models you are using folks?

Results are only as good as the model being used.


Title: Re: Design flow for high yield
Post by loose-electron on Aug 1st, 2014, 6:03pm


Quote:
That argument is pretty weak. Models are never a fixed thing. Even 'old' PDKs get updates. It does not make statistical simulations irrelevant.

The point is that with this kind of software, quantification of the justification comes within reach. To me that's pretty powerful, especially when pushing the boundaries.



The difference between our perspectives on this is pretty evident. The difference is that I have been on the team for a number of different foundry processes, done model development, parasitic extraction and device validation, and been responsible for correlation and tracking issues.

I never said they were irrelevant, but the analogy here is doing 8-place-resolution math when your input data is only good to 3 places, and actually believing there is some significance in those last 5 decimal places.

Oh, and tweaking the PDK is generally a last-resort effort. When possible you tweak the process to match the models already out there. Why? Let's say you are TSMC with 10,000 designers running silicon on your foundry line. Major changes to the silicon you put out leave a lot of designs no longer yielding as originally designed. You try to keep the R/square, oxide thickness (C/square) and similar aligned with the nominal model, to keep all those designs coming through the fab functional.

Consequently you adjust foundry parameters to stay aligned with the nominal model. PDK tweaks do happen, but the first path of choice is something you as an end user are never even aware of.

Title: Re: Design flow for high yield
Post by Lex on Aug 4th, 2014, 5:05am

Maybe I'm misunderstanding you, but the way I interpret your comment is: we'll do what we can to improve yield qualitatively (e.g. some growing here and there, alignment here and there, etc.) and then just shoot and hope for the best (yield).

The problem I have with that is that it sounds like a 'slippery slope'. Growing some circuit is fine, but where do you stop? The same goes for alignment circuits etc. At some point I'd ask myself: is it a matter of diminishing returns, or does the yield still improve significantly? Then comes the logical (to me at least) question: how can I verify whether it still makes sense?

I share your skepticism about models, simulated accuracy etc., and I think it is important. But I don't share the opinion that their results are lost in the noise. For example, a simple corner in an ADC like SF, Cmax is also a high-sigma event, but I bet you still take it into consideration, don't you?

Title: Re: Design flow for high yield
Post by loose-electron on Aug 4th, 2014, 2:20pm

I never said it was irrelevant, and I never said put trim and alignment on everything.

However, as a designer you are going to be able to figure out where matching, gain variance and other things are critical to making things work as desired.

Those are the places where you need to provide adjustment capability.

Don't needlessly add things to adjust items that are not important.

However, if you've got something that needs a certain noise performance, putting 3dB of margin in there would be wise; or if the gain of a circuit needs to be good to 0.5dB and the resistor-ratio matching variance will only get you to 0.8dB, you need to deal with it.

Title: Re: Design flow for high yield
Post by carlgrace on Aug 5th, 2014, 5:56pm

Contrary to Lex's obvious assumption I think it needs to be said that trim circuits can save design time and money.

I used to work at a well-known, large analog IC manufacturer and we always added trim bits because we knew that the "corners" were mostly hogwash.  

With TSMC, we almost always got very close to nominal, no worries.  Every once in a while we got some screwy wafers.  I evaluated dies from different wafer runs for a communications SOC I was working on and while I saw a bit of variation I didn't see anything to convince me that there were any Gaussian processes at work there.

I think the idea that you can add trimming bits and go from 2.8sigma yield to 4.2 sigma yield at 90% is a meaningless statement devoid of any purpose.

Want to design a successful chip cheaply on an accelerated schedule?  Design for nominal, put in some margin so it doesn't fail at the most likely corners, and put in some trim bits that can be set during wafer sort or through SPI/I2C.

Also, I'm very interested in your last sentence, Lex.  Have you ever actually seen a wafer exhibit anything close to a SF corner?  Can you suggest any physical effect during wafer fab that could yield such a unicorn?

Title: Re: Design flow for high yield
Post by Lex on Aug 6th, 2014, 8:53am

Hey Carl,

It happened to me a couple of times that a single parameter went out of spec. Poly resistance and the accuracy of one metal layer were bad. Needless to say, most of the time things hovered around typical. Also good experience with TSMC here. I share your skepticism on the corners.

Concerning the unicorn, you had probably better ask your fab guys, but for SF, my suggestion would be that the devices have different doping and also a different response to stress and radiation. Regarding this 'differential corner', EOL effects can be interesting as well.

When working on image sensors, I'm positive you can be convinced that Gaussian processes are at work :). Since there are thousands of ADCs on board, in that regard, 2.8 sigma is 1 poor ADC in 200, while 4.2 sigma is 1 poor ADC in 37000. I2C/SPI trimming works only on the whole array of ADCs, so it will help you with die/lot corners, but not with each ADC individually. Statistics have some meaning here, I'd say.
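For illustration, a tiny script with a made-up ADC count, assuming every ADC must meet spec for the chip to be good:

    from scipy.stats import norm

    n_adcs = 4000                            # hypothetical column ADCs per image sensor
    for s in (2.8, 4.2):
        p_bad = 2 * norm.sf(s)               # per-ADC failure probability (two-sided)
        chip_yield = (1 - p_bad) ** n_adcs   # all ADCs must be within spec
        print(f"{s} sigma: 1 poor ADC in {1/p_bad:,.0f}; "
              f"chip yield with {n_adcs} ADCs ~ {chip_yield:.1%}")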

In your case, having a single ADC on a chip with individual trim bits, there is obviously a lot of coverage. 'How much?' is apparently the forbidden question here... then just hope that it is good enough that it doesn't pop up after ramp-up.

Title: Re: Design flow for high yield
Post by loose-electron on Aug 6th, 2014, 12:34pm

Image sensors are really not a good place to look at this. When you look at the output of an image sensor, nothing lines up until you run a software image calibration (white-black and the Bayer color balancing).

Consequently with image sensors you are putting a LOT of calibration and alignment into the game, as a function of the image processing software.

So Lex, sorry, but you've got the trim and calibration system there already.

Title: Re: Design flow for high yield
Post by Lex on Aug 13th, 2014, 2:46am

Sigh... the point was not whether to implement it or not. It was about justification by quantification.

Sure, we have alignment circuits on chip. We can turn them on/off. We know the costs in terms of performance, but also the benefits. There have been cases where total performance was better with them turned off - that's why it is verified both on and off.

May I ask what kind of Cpk you guys got back from your designs/chips?

The Designer's Guide Community Forum » Powered by YaBB 2.2.2!
YaBB © 2000-2008. All Rights Reserved.