The Designer's Guide Community
Forum
Design flow for high yield
weber8722
Design flow for high yield
Feb 19th, 2013, 7:15am
 
Hi,

A customer pointed me to this nice book chapter: http://www.edn.com/ContentEETimes/Documents/Bailey/Variation-aware%20Ch4b.pdf

A quite normal flow for analog/RF/MS design is to check the circuit at PVT corners and for statistical behavior (using Monte Carlo). Of course, doing a full MC run at each PVT corner can be very time-consuming, so to get an overall performance spread in a first step, I usually combine the PVT worst case with a +-3-sigma spread obtained from an MC run (e.g. using 200 runs, mismatch only) - for each performance. For example:

The Voffset spec is 10mV.
So I run MC mismatch at typical and pick the worst sample as a statistical corner (maybe giving +5mV). Then I really run the simulation on this corner (plus just the nominal one without mismatch) across PVT (maybe combined with MC process) and get e.g. a simulated worst-case Voffset of 9mV - including mismatch and corners. If the MC count is not too small and the picked MC sample is close to the 3-sigma point, then the obtained 9mV should also be close to the 3-sigma point, so overall I am in spec on Voffset with 3-sigma yield.

However, this book presents a benchmark (without any details, unfortunately) indicating that "my" algorithm FAILS very often - and that their "new" algorithm is much better. I wonder why? Of course there can be special situations like non-Gaussian distributions, or extremely large supply or temperature ranges, but does anybody have real examples? The book also mentions that many things become more difficult when there are many specs, but isn't that a big advantage of plain MC, compared to all the trickier, sensitivity-based algorithms?

What do you think?

Bye Stephan
Frank Wiedmann
Re: Design flow for high yield
Reply #1 - Feb 20th, 2013, 4:35am
 
I guess the problem with your approach is that you can't be sure to hit a 3-sigma point for any given specification with a relatively low number of Monte-Carlo simulations (as shown by the benchmark). The corner extraction method proposed in the book seems to be much more efficient in this respect. By the way, some more material from this book is available at http://www.edn.com/electronics-blogs/practical-chip-design/4405439/Book-excerpt-....
Trent McConaghy
Re: Design flow for high yield
Reply #2 - Feb 21st, 2013, 1:54pm
 
Hello Stephan and Frank,

Thank you for your interest in 3-sigma corner extraction, described in the book excerpt linked above. I am the lead author of that book (http://www.amazon.com/Variation-Aware-Design-Custom-Integrated-Circuits/dp/14614... ).

Very often, the aim is to meet specs, or get acceptable performances, despite process variation. Implicitly, the "despite process variation" means that overall yield should end up at 3 sigma. Many high-level flows might address this aim. One flow is to use PVT corners. However, it’s not accurate because it ignores mismatch variation, and has a poor model of global variation. Designers could traditionally get away with this flow because mismatch was smaller in older processes. Another high level flow is to run Monte Carlo (MC) on each candidate design, which is accurate but too slow. Another flow is to use sensitivity analysis, but that scales poorly to large circuits and has accuracy issues.

What we really want is a high-level flow that is simultaneously fast, accurate, and scalable. Imagine if we had accurate statistical corners, such that when the circuit meets specs on those corners, overall yield is 3 sigma (e.g. as measured by MC). The high-level flow would consist of (1) extracting the corners (2) designing on the corners (3) verifying, and if needed, re-looping for a final tuning. This is a “sigma-driven corners flow”.

In this flow, each step needs to be fast, accurate, and scalable:
  • Step (1) Extracting corners. This can be fast, accurate, and scalable; but it's easy to get wrong. I'll describe more below.
  • Step (2) Designing on corners. It's fast because the designer only needs to simulate a handful of corners for each new candidate design (new sizes & biases). It's accurate as long as the corners are accurate. It's scalable because being corner-based is independent of the size of the circuit.
  • Step (3) Verifying. This step is important in case the corners lose some accuracy as the design is changed. It's a "safety" check. In this step, if the circuit meets 3-sigma overall yield with confidence, great! If not, then one can extract new corners, tweak the design, and do a final verify.

Let's discuss step (1) of the “sigma-driven corners” flow more. Consider one approach: draw some MC samples, then pick the worst case. If you solve on that worst-case, will it give a 3-sigma (99.73%) yield? It depends "if the picked MC sample is close to the 3-sigma point" as Stephan described it. He’s right (in the case of 1 spec).

So, how close can those MC samples get?
  • Let’s say you took 30 MC samples, and picked the worst case. Since picking worst case is different from finding statistical bounds, it could be way off, as Fig 4.17 of the book excerpt shows. For example, Fig. 4.17's far left box plot shows that 50% of the extracted corners were in the range between 80.0% and 97.5%. That is, if you changed the circuit to meet spec on such a corner, your yield might only come out at 80.0% or 97.5%; even though you’d wanted yield to come out at 99.73% (for 3-sigma). So it’s too inaccurate for 30 MC samples. Fig 4.17 shows how it’s inaccurate even with 200 MC samples.
  • Let’s say you took 1,000,000 MC samples, giving many samples near 3-sigma, and picked the sample closest to the 3-sigma point. This will give a fairly accurate corner (if 1 output, though not >1, more on this later). But of course it’s too slow.
  • The minimum number of samples to start to get close to 3-sigma is 1000-2000. However, that’s still a lot of simulations just to get corners. And what about when you have >1 spec? The only way that a 3-sigma corner for each spec will lead to an overall yield of 3 sigma is if exactly the same MC samples fail for each of the outputs. For example, solving on two 3-sigma corners with different failing MC samples per spec leads to an overall yield of (0.9973 * 0.9973) = 0.99461 = 99.46% yield = 2.783 sigma. On five corners, it leads to an overall yield of 0.9973^5 = 98.6% yield = 2.47 sigma.

The cool thing is, there is a way to extract statistical corners such that when you solve on them, the circuit ends up with 3-sigma overall yield. The approach leverages MC samples, but with just the right bit of additional computation afterwards to target the performance distributions’ statistical bounds. It works on >1 outputs at once, on non-Gaussian PDFs, and includes environmental conditions. It typically uses about 100 simulations to get accurate corners, even on extremely large circuits. Pages 82-83 of the book excerpt give details on how it’s done, and why it’s fast, accurate, and scalable. Pages 83-85 give benchmark setup details and benchmark result numbers.

This corner extraction approach fits into the context of the “sigma-driven corners” flow described earlier, which enables engineers to design for high performance and yield, despite large process variation. This approach has been successfully used in industrial design settings for several years now. Here’s an example from a couple years ago: http://www.deepchip.com/items/0492-05.html. Due to that success, the “sigma-driven corners” flow has become a standard flow for many leading-edge teams, and adoption continues to grow.

Thanks again for your interest in the 3-sigma corner extraction approach. I’d be happy to address any further questions or concerns.

Kind regards,

Trent McConaghy
Co-founder & CTO
Solido Design Automation
weber8722
Re: Design flow for high yield
Reply #3 - Feb 25th, 2013, 12:59am
 
Hi,

thanks for the fast replies. Let me state clearly: of course PVT analysis is not enough. So I always also add statistical corners to it, to see the full picture!

You claim this:
....as Fig 4.17 of the book excerpt shows. For example, Fig. 4.17's far left box plot shows that 50% of the extracted corners were in the range between 80.0% and 97.5%. That is, if you changed the circuit to meet spec on such a corner, your yield might only come out at 80.0% or 97.5%; even though you’d wanted yield to come out at 99.73% (for 3-sigma). So it’s too inaccurate for 30 MC samples. Fig 4.17 shows how it’s inaccurate even with 200 MC samples.


My comment: If you are at 50% and want to go to 99.73%, you probably need to do it in multiple steps! No designer or algorithm in the world can do it in one step! The first step is to analyse: check if your circuit is sensitive to T, Vdd, load, process, mismatch, etc., and by how much. Then improve on the worst things, like:

if Vdd sensitivity, PSRR, etc. is a problem, improve e.g. your current sources, add cascodes, etc.
if mismatch is a problem, make critical elements larger
if TC is a problem, change the bias scheme, etc.

This way you will end up at maybe 90% yield - which is easy to prove with a small MC run. Next look at worst-case deterministic AND statistical corners, and tweak your circuit to maybe get 95% yield.
Then run MC with maybe 50 runs (of course you can use 150 if simulation time is low anyway) to get mean and sigma quite accurately, within a few percent. There is usually no need to know these much more accurately, because you would be relying far too much on modeling, incl. statistical modeling!
Then pick e.g. some worst-case samples, like two giving -9mV and +10mV. There is no real problem if these values are at 3 sigma or 2.8 sigma or 3.2 sigma!! And if you have multiple goals - as usual - it makes the verification and design even more robust.
All in all, taking multiple steps also means that the designer learns a lot about the technology and his design - a great side-effect!!

You also claim:
Let’s say you took 1,000,000 MC samples, giving many samples near 3-sigma, and picked the sample closest to the 3-sigma point. This will give a fairly accurate corner (if 1 output, though not >1, more on this later). But of course it’s too slow.

Why use 1M samples for 3-sigma at all?? Usually you get samples between 2.7 and 3.3 sigma already from much, much shorter MC runs, like 50-100!! Even if you only got a 2.6-sigma sample, you could either scale up the elongations of the statistical variables or just pick it and add some margin (like 0.4 sigma) to the spec.

So all in all I would say a slightly improved MC, like low-discrepancy sampling, would help most, and is numerically much less risky than most other faster algorithms I have seen (although some algorithms nicely collect the ideas/steps a designer or factory test engineer would apply manually, like stopping the analysis of a sample early if quick-to-measure specs are already violated!!). And existing flows are quite practical already, so I expect no wonders (besides getting faster computers).

You also claim:
The minimum number of samples to start to get close to 3-sigma is 1000-2000. However, that’s still a lot of simulations just to get corners. And what about when you have >1 spec?

One huge advantage of MC is that it keeps full speed (although it is not fast) if we have multiple specs; MC even becomes slightly faster in the multiple-spec case (less netlisting & randomization overhead)! BUT the usual "fast" high-yield estimation algorithms often need near-linearly more time to treat N specs!
Also, MC is able to give much more design insight, offering correlations, QQ plots, etc. Just looking at sigma and yield is not enough! Yield is - if you think about it - a quite stupid thing! For instance, it does not take into account whether a spec is violated by 1mV or by 100mV!!

BTW, what I do not like so much in the Solido book is the benchmark. They use quite trivial blocks like an opamp, a bandgap and "others" - without any details. I wonder what would happen in more non-trivial cases like a flash ADC or a segmented DAC? On these, some high-yield estimation algorithms work quite badly, because the number of highly relevant statistical variables is very big (like >2000 even for a 6-bit flash-ADC front-end in a simpler, older PDK).

Bye Stephan
weber8722
Re: Design flow for high yield
Reply #4 - Feb 25th, 2013, 8:48am
 
Hi Trent,

what I wonder about most is this statement in the book:

"MC always needs 1,400–5,000 samples for accurate 3-r verification."


I think this comes from making far too limited use of the MC results!!

If you run e.g. an MC analysis with only 100 samples, find the yield is 100% (no fails), and you have e.g. a mean of 0V and a sigma of 10mV with a spec limit of 60mV, you get an estimated Cpk of 2 (= 6 sigma). Of course there is some uncertainty (on mean, sigma, and thus Cpk), but not such a huge uncertainty that being below 3 sigma is realistic at all!!
If you repeat the MC-100 runs several times, you can get an estimate of the accuracy of the Cpk estimation, but it will clearly be much smaller than +-1 (or +-3 sigma).
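
As a quick illustration of this point, a minimal Python sketch with the hypothetical numbers above (100 samples, mean 0V, sigma 10mV, spec limit 60mV), assuming a Gaussian distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
usl = 60e-3                    # spec limit: 60 mV
true_mu, true_sd = 0.0, 10e-3  # assumed "true" circuit behavior

def cpk(samples, usl):
    # One-sided Cpk for an upper spec limit.
    return (usl - samples.mean()) / (3.0 * samples.std(ddof=1))

# Repeat the 100-sample MC run many times to see how much Cpk scatters.
cpks = np.array([cpk(rng.normal(true_mu, true_sd, 100), usl)
                 for _ in range(1000)])
print(f"Cpk estimate: mean {cpks.mean():.2f}, std {cpks.std():.2f}")
# Typically mean ~2.0 with std ~0.15: nowhere near dropping below
# Cpk = 1 (the 3-sigma limit), even with only 100 samples.
```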

This is a simple example, showing that you do not need so many samples for a very frequently needed 3-sigma yield check.

If I looked only at the yield results, I would indeed need many more samples, but it would be stupid not to look at the whole picture.

I also wonder why looking at the yield is such a holy grail at all? If you talk with the customer, you will probably find that he can relax the spec a bit anyway.
Or you have to design a subblock whose specs can easily be traded against the specs of other blocks. So many designers start designs without clear specs and testbenches. Better to strive for "no surprises", i.e. a robust design: know and control your means and sigmas.

Bye Stephan
loose-electron
Re: Design flow for high yield
Reply #5 - Feb 25th, 2013, 2:09pm
 
Before you go too crazy with all the statistical analysis, ask yourself a few simple questions:

How dependent are you on:

absolute values
matching
process variance
power sensitivity
capacitance variance (all levels)

Maybe I have been in this business too long but when I do chip reviews, the things that kill yield are usually pretty obvious.

Jerry Twomey, www.effectiveelectrons.com

Lex
Re: Design flow for high yield
Reply #6 - Feb 26th, 2013, 1:19am
 
Hey Stephan

Coming back to your initial question: just like Frank, I think that you need some confidence that you have found the correct 3-sigma point. And actually the box plot in figure 4.17 shows what practically happens for a run of 30x (MC simulation of N = 30, 65, 100, 200 samples). Cold numbers that speak for themselves, I'd say.

Somehow I think you are aware of this, since you wrote "If the MC count is not too small and the picked MC sample is close to the 3-sigma point", so there is some inspection by you as designer which will probably up that yield a bit - but hey, it is difficult to put a reliable number on that. (Actually this is also discussed in the book.)

I'd say try to replicate 4.17 yourself: set up a simulation where you extract the worst case quite a few times, get the spread, and see for yourself how that compares with the results of a large-sample MC analysis.
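
For example, a minimal Python sketch of such an experiment (a synthetic Gaussian performance instead of a real circuit, so the true yield of each extracted corner is known exactly):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
runs = 30  # repeat the corner extraction 30x, as in Fig. 4.17

for n in (30, 65, 100, 200):
    yields = []
    for _ in range(runs):
        worst = np.abs(rng.standard_normal(n)).max()  # worst-of-N "corner"
        # If the design is tuned to just meet spec at this corner,
        # the true (two-sided) yield it achieves is:
        yields.append(norm.cdf(worst) - norm.cdf(-worst))
    q25, q50, q75 = np.percentile(yields, [25, 50, 75])
    print(f"N={n:3d}: median yield {q50:.4f}, IQR [{q25:.4f}, {q75:.4f}]")
# The spread around the 0.9973 target only tightens slowly with N.
```
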
weber8722
Re: Design flow for high yield
Reply #7 - Feb 26th, 2013, 6:12am
 
Lex wrote on Feb 26th, 2013, 1:19am:
.... figure 4.17 shows what practically happens for a run of 30x (MC simulation of N = 30, 65, 100, 200 samples). Cold numbers that speak for themselves, I'd say.



Yes, I will do these experiments! But remember: 4.17 takes only the yield into account - which is a poor use of the valuable simulation data. Also, 30 samples are indeed not much, but with 65 it already looks promising (in this case). And if you use not (slow) random MC but an enhanced MC method like LDS, you will be quite happy. I hope to present some data/pictures soon.

Bye Stephan
weber8722
Re: Design flow for high yield
Reply #8 - Feb 27th, 2013, 5:58am
 
Frank Wiedmann wrote on Feb 20th, 2013, 4:35am:
I guess the problem with your approach is that you can't be sure to hit a 3-sigma point for any given specification with a relatively low number of Monte-Carlo simulations (as shown by the benchmark).



Frank, you are fully right, and it brings us to one key problem of what a designer can do :) : if you cannot be sure, you need to add a margin - for example, taking not a 3-sigma sample to prove 3-sigma yield (with a certain confidence level), but e.g. a 4-sigma sample!
Of course this "magic" margin depends on how big your MC sample count was! For 100 samples that margin should be approx. 1 sigma. For 25 samples it would be approx. 2 sigma, thus often a bit too much to over-design by! However, for 400 MC samples it would be a margin of approx. 0.5 sigma, so very practicable, especially if you are aware of the fact that the transistor models and their statistical modeling are by far not 3-sigma accurate in all conditions and characteristics.
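
These margins can be cross-checked with simple order statistics. A sketch, assuming a Gaussian performance and a one-sided spec: the sigma level that the worst of N samples exceeds with 95% probability follows from P(max >= x) = 1 - Phi(x)^N:

```python
from scipy.stats import norm

def worst_sample_sigma(n, confidence=0.95):
    # Largest x such that the worst of n Gaussian samples lies at or
    # beyond x sigma with the given probability: 1 - Phi(x)**n = confidence.
    return norm.ppf((1.0 - confidence) ** (1.0 / n))

for n in (25, 100, 400, 2000):
    print(f"n={n:4d}: worst sample beyond {worst_sample_sigma(n):.2f} sigma "
          "with 95% confidence")
# n=25 -> ~1.2, n=100 -> ~1.9, n=400 -> ~2.4, n=2000 -> ~3.0 sigma;
# i.e. bounding the 3-sigma point with the worst of 100 samples indeed
# needs a margin on the order of 1 sigma.
```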

Bye Stephan
weber8722

Re: Design flow for high yield
Reply #9 - Feb 27th, 2013, 6:01am
 
loose-electron wrote on Feb 25th, 2013, 2:09pm:
Before you go too crazy with all the statistical analysis, ask yourself a few simple questions:

How dependent are you on:

absolute values
matching
process variance
power sensitivity
capacitance variance (all levels)


Fully right. Before putting it all together in a big PVT+MC analysis, it is best to check each effect individually and focus on the most critical one - it might be temperature, supply, mismatch, absolute process values, etc.
Frank Wiedmann
Re: Design flow for high yield
Reply #10 - Feb 27th, 2013, 6:49am
 
weber8722 wrote on Feb 27th, 2013, 5:58am:
Frank, you are fully right, and it brings us to one key problem of what a designer can do :) : if you cannot be sure, you need to add a margin - for example, taking not a 3-sigma sample to prove 3-sigma yield (with a certain confidence level), but e.g. a 4-sigma sample!
Of course this "magic" margin depends on how big your MC sample count was! For 100 samples that margin should be approx. 1 sigma. For 25 samples it would be approx. 2 sigma, thus often a bit too much to over-design by! However, for 400 MC samples it would be a margin of approx. 0.5 sigma, so very practicable, especially if you are aware of the fact that the transistor models and their statistical modeling are by far not 3-sigma accurate in all conditions and characteristics.

I'm not sure how you got your numbers, but the book's Fig. 4.27 shows the average number of samples required to verify a design to 3 sigma with 95% statistical confidence, as a function of the actual sigma level of the design. According to this graph, if your design has an actual sigma level of 3.7, you need 1500 samples on average to verify it to 3 sigma.
Trent McConaghy

Re: Design flow for high yield
Reply #11 - Feb 27th, 2013, 3:17pm
 
Hi Lex and Jerry,

Thanks for your comments.

> [Lex wrote]  the box plot in figure 4.17 shows what practically happens ... Cold numbers that speak for themselves, I'd say.
Agreed:)

> [Jerry wrote] Before you go too crazy with all the statistical analysis, ask ... How dependent are you on: absolute values, matching, process variance, power sensitivity, capacitance variance
> Maybe I have been in this business too long but when I do chip reviews, the things that kill yield are usually pretty obvious.

Agreed. Each of those questions is about statistical process variation. If you know for sure that FF/SS corners are adequate, then statistical analysis isn’t needed.

No one wants to "go crazy" with statistical analysis. The ideal is to keep designing as before, with corners, with minimal statistics. What's cool is that this is possible, with the sigma-driven corners flow: push a button, and get "true" 3-sigma corners that represent the circuit's performance bounds. Then the designer can focus his valuable time designing against these true corners, leveraging all his experience and insight. At the end, push a button to verify that the circuit is ok for yield (and if needed, do a quick re-loop). The first and the last steps are as short and painless as possible, so that the designer can spend his time on what matters - the design itself.

A good MC-based tool will report the effect of each global process variable, local process variable, and environmental variable with respect to each output. This can come for free from the MC run.

Kind regards,

Trent
Trent McConaghy

Re: Design flow for high yield
Reply #12 - Feb 27th, 2013, 3:19pm
 
Hi Stephan,

Thanks for your comments.

> If you are at 50% and want to go to 99.73%, you probably need to do it in multiple steps! No designer or algorithm in the world can do it in one step! The first step is to analyse: check if your circuit is sensitive to T, Vdd, load, process, mismatch, etc., and by how much. Then improve on the worst things, like:
> if Vdd sensitivity, PSRR, etc. is a problem, improve e.g. your current sources, add cascodes, etc.
> if mismatch is a problem, make critical elements larger ..

Agreed. A sigma-driven corners flow allows for re-loops if the target yield is not hit in the first round. The MC tool will give insight into impacts (see above), and the designer can leverage his experience and intuition to improve the design (see above).

> Check if your circuit is sensitive to T, Vdd, load, process, mismatch, etc. and how much ... Then improve on the worst things ... This way you will end up at maybe 90% yield - which is easy to prove with a small MC run. Next look at worst-case deterministic AND statistical corners, and tweak your circuit to maybe get 95% yield. Then run MC ... get mean and sigma ... then pick e.g. some worst-case samples

One can certainly use such an ad-hoc method if one wishes. But with the right MC tool, the designer gets (a) straight to designing on appropriate corners, (b) impact information for free, and (c) a repeatable, non-ad-hoc flow that both beginners and experts can use.

> Usually you get samples between 2.7 and 3.3 sigma already from much, much shorter MC runs, like 50-100!!

Here’s the math. A (one-tailed) sigma of 3.0 is a probability of failure of 1.35e-3 (a yield of 99.865%). This means you need 1/1.35e-3 = 740 MC samples to get a single failure, on average. For a reasonably confident yield estimate, you’ll need >5 failures. In the book, when I say 1,400-5,000 samples, that is based on verifying 3-sigma yield to 95% confidence (under a pass/fail distribution). The actual number of samples depends on how close the circuit actually is to 3.0 sigma. (Section 4.6.1 of the book. Thanks to Frank for pointing this out already.)
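
For reference, a minimal Python sketch of that confidence arithmetic, using the standard Clopper-Pearson upper bound (the book's exact setup may differ in detail): given N samples and k observed failures, the 95% upper bound on the failure probability is a Beta quantile:

```python
from scipy.stats import beta

def pfail_upper(n, k, conf=0.95):
    """Clopper-Pearson upper confidence bound on the failure probability."""
    return beta.ppf(conf, k + 1, n - k)

target = 1.35e-3  # one-tailed 3-sigma failure probability
for n, k in [(100, 0), (740, 1), (2250, 0), (5000, 2)]:
    ub = pfail_upper(n, k)
    verdict = "proves" if ub < target else "cannot prove"
    print(f"N={n:5d}, fails={k}: upper bound {ub:.2e} -> {verdict} 3 sigma")
# Even a failure-free run needs ~2200 samples before the bound drops
# below 1.35e-3, in the same ballpark as the 1,400-5,000 range above.
```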

> If you run e.g. an MC analysis with only 100 samples, find the yield is 100% (no fails), and you have e.g. a mean of 0V and a sigma of 10mV with a spec limit of 60mV, you get an estimated Cpk of 2 (= 6 sigma). Of course there is some uncertainty (on mean, sigma, and thus Cpk), but not such a huge uncertainty that being below 3 sigma is realistic at all!!
> If you repeat the MC-100 runs several times, you can get an estimate of the accuracy of the Cpk estimation, but it will clearly be much smaller than +-1 (or +-3 sigma).

If one estimates the distribution from mean and standard deviation (such as for Cpk), then the implicit assumption is that the distribution is Gaussian. If you are willing to assume this, then I agree with you, it is possible to have fewer samples. But if the distribution is not Gaussian (e.g. long-tailed, bimodal) then you will be drawing false conclusions about your circuit.  Nonlinearities / non-idealities in circuits lead to non-Gaussian distributions. The book gives examples of non-Gaussian distributions in Figs. 4.1 (VCO of PLL), 4.29 (folded cascode amp with gain boosting), 5.1 (bitcell), 5.2 (sense amp), 5.32 (flip flop), 5.40 (flip flop), and 5.45 (DRAM bit slice).

> why looking to the yield is such a holy grail at all? ... can relax the spec a bit anyway.
There is always a tradeoff between yield and specs, of course. When process variation matters, you can't design for one and ignore the other. Approaching the problem in a true-corners fashion enables the designer to deal with variation, without “going crazy” on statistical analysis.

> One huge advantage of MC is that it keeps full speed [if] multiple specs...
> MC is able to give much more design insights, like offering correlations, QQ plots, etc. ...

Thanks for arguing the case for MC. I agree! MC scales very well, because its accuracy is independent of dimensionality. As I discuss earlier, and in detail in the book, MC plays a key role in corner extraction (but not the only role), statistical verification, and gaining insight.

> an enhanced MC method like LDS you will be quite happy... I hope to present some data/pictures soon.
Low-discrepancy sampling (LDS) methods can help, but they are not a panacea.

If one wants 3-sigma corners using the worst-case sample, LDS does not somehow make picking worst-case a statistical decision. Therefore a worst-case approach can still be way off.

For 3-sigma yield verification, the benefits of LDS are modest. Here's why. Recall that 3 sigma has 1 failure in 740, on average. In a "perfect" sampler you would get exactly one failure on every run of 740 samples; in "less-perfect" samplers you might get 0, 1, 2, or more failures on each run. You still need 1400-5000 samples to start to get enough failures to have high statistical confidence. (Sec 4.5.9.)

Section 4.5 and Appendix 4.B of the book describe LDS in detail. Since off-the-shelf LDS approaches scale poorly beyond 10-50 dimensions, we developed a scalable LDS technique. Sections 4.5.7 and 4.5.8 show benchmarks with thousands of variables. Since 2009, LDS is the default way that MC samples are drawn in Solido tools. (Pseudo-random sampling is still available, of course.)

Kind regards,

Trent

weber8722

Re: Design flow for high yield
Reply #13 - Apr 11th, 2013, 5:17am
 
Hi Trent,

thanks for your detailed response!!! I have to admit, I learned a lot about statistics in the last months.

On getting 3-sigma samples: I was indeed a bit too optimistic with 100 samples. But with n=512 the mean of the spread (max-min) is really near-exactly 6.12*sigma, i.e. more than +-3 (often you even get one sample at +3 sigma and another at -3, or beyond), so the chance to get e.g. at least a 2.7-sigma sample is really quite high even for e.g. 128 samples, like >=50%. Although there is indeed no guarantee!

On yield estimation: that is indeed a problem of the counting method Y = npass/ntotal - you need fail samples!! However, if the distribution is near-Gaussian, then of course the Cpk is a better indicator, i.e. it gives you a feeling for the yield earlier. Also, confidence intervals for Cpk are well-known, and tighter than for the counting method - just because it exploits more information than just pass vs. fail.

On Cpk and non-Gaussian distributions: indeed the Cpk can be misleading - it can be too optimistic or too pessimistic - but it is quite easy to correct the Cpk by taking the skew and kurtosis of the distribution into account. I wonder why the scientific papers on this are not better known to engineers. With a full data analysis you can really get a lot out of a moderate MC run for your design.
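
One classical recipe for such a correction is the third-order Cornish-Fisher expansion, which adjusts a Gaussian quantile using sample skew and excess kurtosis. A minimal sketch (one of several published approaches, and only trustworthy for moderate skew/kurtosis):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def cornish_fisher_z(z, s, k):
    """Adjust Gaussian quantile z for skewness s and excess kurtosis k."""
    return (z
            + (z**2 - 1) * s / 6
            + (z**3 - 3*z) * k / 24
            - (2*z**3 - 5*z) * s**2 / 36)

def corrected_upper_3sigma(samples):
    """Skew/kurtosis-corrected estimate of the +3-sigma point."""
    s, k = skew(samples), kurtosis(samples)  # kurtosis() returns excess
    return samples.mean() + cornish_fisher_z(3.0, s, k) * samples.std(ddof=1)

# Long-tailed example: the corrected +3-sigma point lands well above
# the naive mean + 3*std, as it should for a right-skewed distribution.
rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=0.3, size=500)
print(x.mean() + 3 * x.std(ddof=1), corrected_upper_3sigma(x))
```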

On LHS+LDS: I still have to gain experience; sometimes they give a good speedup, sometimes only 1.5X or so - hard to predict and to trust. In the newest papers I have seen, some authors use LDS/LHS with variable clustering to address the dimension problem quite nicely.

In Cadence Virtuoso, LDS is announced for mmsim12 and ic616 - I need to wait some weeks.

Bye Stephan
Lex
Eindhoven, Holland
Re: Design flow for high yield
Reply #14 - Apr 12th, 2013, 2:58am
 
Hey Stephan

Intrigued by this thread, I did a 15,000-sample MC run on an LDO circuit of mine. (The number 15,000 was chosen because it fit into a complete night of running.)

What I observed was that most of my performance parameters were not Gaussian. Only the current consumption and the offset voltage were linear in a quantile plot and hence Gaussian (for the chosen sample size). But, for example, startup time, PSRR, output noise, UGF, DC loop gain and phase margin were not linear (in a quantile plot) and hence not Gaussian.
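
For anyone who wants to repeat this kind of check, a minimal sketch of the quantile-plot test (on synthetic data standing in for real simulation results):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(3)
offset_like  = rng.normal(0.0, 10e-3, 15000)   # Gaussian, e.g. offset voltage
startup_like = rng.lognormal(0.0, 0.4, 15000)  # long-tailed, e.g. startup time

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, data, title in [(axes[0], offset_like, "offset-like (straight line)"),
                        (axes[1], startup_like, "startup-like (curved)")]:
    stats.probplot(data, dist="norm", plot=ax)  # straight line <=> Gaussian
    ax.set_title(title)
plt.tight_layout()
plt.show()
```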

And all that fiddling around with a Gaussian distribution is very nice, but it cannot handle multimodal behavior of your circuit, and is therefore insufficient for high-yield analysis.