Hi Stephan,
Thanks for your comments.
> If you are at 50% and want to go to 99.73%, you probably need to do it in multiple steps! No designer or algorithm in the world can do it in one step! First step is to analyse: check if your circuit is sensitive to T, Vdd, load, process, mismatch, etc. and how much. Then improve on the worst things, like:
> - if Vdd, PSRR, ... is a problem, improve e.g. your current sources, add cascodes, etc.
> - if mismatch is a problem, make critical elements larger ...
Agreed. A sigma-driven corners flow allows for re-loops if the target yield is not hit in the first round. The MC tool will give insight into impacts (see above), and the designer can leverage his experience and intuition to improve the design (see above).
> Check if your circuit is sensitive to T, Vdd, load, process, mismatch, etc. and how much ... Then improve on the worst things ... This way you will end up in maybe 90% yield - which is easy to prove with a small MC run. Next look at worst-case deterministic AND statistical corners, and tweak your circuit to maybe get 95% yield. Then run MC ... get mean and sigma ... then pick e.g. some worst-case samples.
One could certainly use such an ad-hoc method. But with the right MC tool, the designer can (a) go straight to designing on appropriate corners, (b) get impact estimates for free, and (c) follow a repeatable, non-ad-hoc flow that both beginners and experts can use.
> Usually you get samples between 2.7 and 3.3 sigma already from much, much shorter MC runs, like 50-100!!
Here’s the math. A (one-tailed) sigma of 3.0 corresponds to a probability of failure of 1.35e-3 (a yield of 99.86%). This means you need 1/1.35e-3 ≈ 740 MC samples to get a single failure, on average. For a reasonably confident yield estimate, you’ll need >5 failures. In the book, when I say 1400-5000 samples, that is based on verifying 3-sigma yield to 95% confidence (modeling pass/fail outcomes as a binomial distribution). The actual number of samples depends on how close the circuit actually is to 3.0 sigma. (Section 4.6.1 of the book. Thanks to Frank for pointing this one out already.)
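To make those numbers concrete, here is a minimal sketch (mine, not from the book) that reproduces the arithmetic with scipy; the confidence interval is a Clopper-Pearson binomial bound, which is one reasonable choice among several:

```python
# Minimal sketch: the 3-sigma sample-count arithmetic, using scipy.
# Assumes a one-tailed spec and a binomial pass/fail model.
from scipy import stats

p_fail = stats.norm.sf(3.0)   # one-tailed P(fail) at 3 sigma ~= 1.35e-3
print(1 - p_fail)             # yield ~= 99.865%
print(1 / p_fail)             # ~= 741 samples per failure, on average

# With n samples and k observed failures, a 95% Clopper-Pearson
# interval on P(fail) shows why several failures are needed:
n, k = 5000, 7                # ~7 failures expected in 5000 samples at 3 sigma
ci = stats.binomtest(k, n).proportion_ci(confidence_level=0.95)
print(ci.low, ci.high)        # still a fairly wide interval around k/n
```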
> If you run e.g. an MC analysis with 100 samples only, find yield is 100% (no fails), and you have e.g. a mean of 0V, a sigma of 10mV, and your spec limit is 60mV, you would have an estimated Cpk of 2 (= 6 sigma). Of course, there is some uncertainty (on mean, sigma, thus Cpk), but not such a huge uncertainty that being below 3 sigma is realistic at all!!
> If you repeat the MC-100 runs several times, you can get an estimate of the accuracy of the Cpk estimation, but it will clearly be much smaller than +1 (or +3 sigma).
If one estimates the distribution from its mean and standard deviation (as Cpk does), then the implicit assumption is that the distribution is Gaussian. If you are willing to assume this, then I agree: fewer samples are possible. But if the distribution is not Gaussian (e.g. long-tailed, bimodal), then you will draw false conclusions about your circuit. Nonlinearities and non-idealities in circuits lead to non-Gaussian distributions. The book gives examples of non-Gaussian distributions in Figs. 4.1 (VCO of a PLL), 4.29 (folded-cascode amp with gain boosting), 5.1 (bitcell), 5.2 (sense amp), 5.32 (flip-flop), 5.40 (flip-flop), and 5.45 (DRAM bit slice).
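As an illustration (a toy example of mine, not from the book), here is how a long-tailed distribution can fool a Cpk estimate built from 100 samples:

```python
# Toy example: Cpk from 100 samples vs. the true failure rate
# when the underlying distribution is long-tailed (Student's t, 3 dof).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
usl = 60e-3                    # upper spec limit: 60 mV, as in the quote
scale = 6e-3                   # gives sigma ~10 mV (std of t(3) is sqrt(3))

samples = scale * stats.t(df=3).rvs(100, random_state=rng)
cpk = (usl - samples.mean()) / (3 * samples.std(ddof=1))
print(cpk)                     # often ~2, i.e. "6 sigma" if Gaussian were true

p_fail = stats.t(df=3).sf(usl / scale)   # true one-tailed failure probability
print(p_fail, stats.norm.isf(p_fail))    # ~1e-3, i.e. only ~3 sigma in truth
```

A Cpk of 2 implies a failure rate near 1e-9 under the Gaussian assumption; the long-tailed distribution's true rate here is about a million times worse.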
> why is looking at the yield such a holy grail at all? ... can relax the spec a bit anyway.

There is always a tradeoff between yield and specs, of course. When process variation matters, you can't design for one and ignore the other. Approaching the problem in a true-corners fashion enables the designer to deal with variation without “going crazy” on statistical analysis.
> One huge advantage of MC is that it keeps full speed [if] multiple specs...
> MC is able to give much more design insights, like offering correlations, QQ plots, etc. ...
Thanks for arguing the case for MC. I agree! MC scales very well, because its accuracy is independent of dimensionality. As I discussed above, and in detail in the book, MC plays a key role in corner extraction (though not the only role), in statistical verification, and in gaining insight.
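A quick toy demonstration of that dimensionality point (my construction, not from the book): the standard error of an MC yield estimate depends only on the sample count and failure rate, never on the number of process variables:

```python
# Toy demo: MC estimate accuracy is independent of dimensionality.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
for dim in (2, 100, 5000):
    # Toy "circuit": fails when the mean of dim Gaussian process
    # variables exceeds a threshold chosen for ~1% failure rate.
    x = rng.standard_normal((n, dim))
    fails = x.mean(axis=1) > 2.326 / np.sqrt(dim)
    p = fails.mean()
    stderr = np.sqrt(p * (1 - p) / n)  # same formula at every dim
    print(dim, p, stderr)
```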
> an enhanced MC method like LDS you will be quite happy... I hope to present some data/pictures soon.

Low-discrepancy sampling (LDS) methods can help, but they are not a panacea.
If one extracts 3-sigma corners by taking the worst-case sample, LDS does not somehow make picking the worst case a statistically sound decision. A worst-case approach can still be way off.
For 3-sigma yield verification, the benefits of LDS are modest. Here's why. Recall that 3 sigma means 1 failure per 740 samples, on average. A "perfect" sampler would give exactly one failure on every run of 740 samples; "less-perfect" samplers might give 0, 1, 2, or more failures per run. Either way, you still need 1400-5000 samples to get enough failures for high statistical confidence. (Section 4.5.9.)
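To see the failure-count statistics under plain pseudo-random MC, here is a small simulation (mine, for illustration): even when P(fail) is exactly 1/740, a 740-sample run rarely gives exactly one failure.

```python
# Simulation: failures per 740-sample MC run when P(fail) = 1/740 exactly.
# Counts follow ~Poisson(1): ~37% of runs see zero failures.
import numpy as np

rng = np.random.default_rng(2)
counts = rng.binomial(740, 1 / 740, size=100_000)  # failures per run
for k in range(5):
    print(k, (counts == k).mean())  # ~0.37, 0.37, 0.18, 0.06, 0.015
```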
Section 4.5 and Appendix 4.B of the book describe LDS in detail. Since off-the-shelf LDS approaches scale poorly beyond 10-50 dimensions, we developed a scalable LDS technique; Sections 4.5.7 and 4.5.8 show benchmarks with thousands of variables. Since 2009, LDS has been the default way that MC samples are drawn in Solido tools. (Pseudo-random sampling is still available, of course.)
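For readers who want to experiment with off-the-shelf LDS, scipy ships a Sobol generator; this is a minimal sketch of the general idea (Solido's scalable technique itself is not public, so nothing here represents it):

```python
# Minimal sketch: low-discrepancy (Sobol) sampling of Gaussian
# process variables, using scipy's off-the-shelf generator.
from scipy import stats
from scipy.stats import qmc

dim = 20                               # number of statistical variables
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
u = sobol.random_base2(m=10)           # 2**10 = 1024 points in (0, 1)^dim
x = stats.norm.ppf(u)                  # map to standard-normal samples

# These points cover the variable space more evenly than pseudo-random
# ones, reducing the variance of yield/mean/sigma estimates.
print(x.shape)                         # (1024, 20)
```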
Kind regards,
Trent