The Designer's Guide Community Forum
https://designers-guide.org/forum/YaBB.pl
Simulators >> Circuit Simulators >> Monte Carlo
https://designers-guide.org/forum/YaBB.pl?num=1207036402

Message started by emad on Apr 1st, 2008, 12:53am

Title: Monte Carlo
Post by emad on Apr 1st, 2008, 12:53am

Hello All:

Is the "process only" option of Monte Carlo simulation a replacement for traditional process corners simulation? If corners are at the 3 sigma deviations from nominal, wouldn't a process-only Monte Carlo simulation spare a corner analysis?

Of course Monte Carlo takes a lot longer, but at least for small critical circuits, a Monte Carlo run would assure the designer that the circuit works between corners as well as at them.

Another question: does anybody know whether the difference between the "process only" and "process and mismatch" options, for a single transistor's Vt, would be exactly Avt/sqrt(WL), or is it weighted in some other manner?

Emad


Title: Re: Monte Carlo
Post by ACWWong on Apr 1st, 2008, 2:51am


emad wrote on Apr 1st, 2008, 12:53am:
Is the "process only" option of Monte Carlo simulation a replacement for traditional process corners simulation? If corners are at the 3 sigma deviations from nominal, wouldn't a process-only Monte Carlo simulation spare a corner analysis?

Yes, I suppose it could replace process-corner simulation, provided a sufficient number of MC runs is undertaken. Latin hypercube sampling should allow better process coverage in fewer runs.


emad wrote on Apr 1st, 2008, 12:53am:
Another question is whether anybody knows if the difference between "process only" and "process and mismatch" options of a single transistor vt would be exactly Avt/sqrt(WL) or would it be weighed in a different manner?

Avt/sqrt(WL) is the standard deviation of the threshold offset voltage between two transistors, i.e. mismatch, not process. "Process" means all threshold voltages shift together due to process effects such as tox and doping variations etc., and it does not use the mismatch equation you quote. For small transistors in deep-submicron technologies, mismatch is just as important as (if not more so than) process variation.

hope this helps
aw
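
The relation quoted above is the Pelgrom mismatch model, sigma(delta Vt) = Avt/sqrt(W*L). A minimal Python sketch of it follows; the Avt value and device dimensions are made-up illustrations, not numbers from any real PDK:

import math

# Pelgrom mismatch model: the sigma of the Vt DIFFERENCE between two
# identical, closely spaced transistors scales as Avt/sqrt(W*L).
Avt = 5e-9           # mismatch coefficient, 5 mV*um expressed in V*m (assumed)
W, L = 1e-6, 0.5e-6  # device width and length in meters (assumed)

sigma_dVt = Avt / math.sqrt(W * L)
print(f"sigma(delta Vt) = {sigma_dVt * 1e3:.2f} mV")   # ~7.1 mV

# Under the common convention that each device carries an independent offset,
# the per-device sigma is the pair value divided by sqrt(2):
print(f"sigma(Vt, one device) = {sigma_dVt / math.sqrt(2) * 1e3:.2f} mV")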

Title: Re: Monte Carlo
Post by emad on Apr 2nd, 2008, 1:21am

Hello AW

Thanks for your reply. It was helpful.

However, if you run an MC simulation on a single transistor with "process + mismatch", you'll find that the spread is larger than when running in process-only mode. My understanding is that the model has separate sets of parameters for process and mismatch. The process model considers variability that is a function of geometry. Accordingly, if you run an MC simulation on two resistors of the same size and geometry and choose the process-only option, the two resistors will come back perfectly matched in every simulation run. If you choose different geometries, the resistors will differ, because underetching, for example, varies with geometry.

I tend to believe that if you run mismatch-only on a single device, the parameters of the device are varied with the predicted spread, even though there is no other device to measure its mismatch against.

Do I make sense? Am I totally off?

Thanks

Emad

Title: Re: Monte Carlo
Post by ACWWong on Apr 2nd, 2008, 3:11am

Yes, for a single device, process + mismatch will give a larger spread than process alone; if the device is big, the difference will be small. This is because "process" shifts the Vth of every device to a new mean value according to the fast/slow skew: different device types may shift by different amounts, but identical devices shift by the same amount. On top of that shift, an additional offset from the new mean is added due to mismatch, and that offset is based on the individual device size. That said, two devices of a given size will show a delta Vth between them that fits A/sqrt(WL) in a "mismatch" run at any skewed "process".

hope this helps,

aw
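
The behaviour described in the last two posts, including the two-resistor observation, can be mimicked with a toy Monte Carlo in Python; all the sigmas below are illustrative assumptions, not PDK values:

import numpy as np

rng = np.random.default_rng(0)
N = 10000                  # Monte Carlo runs
VT_NOM = 0.45              # nominal Vt in volts (assumed)
SIGMA_PROC = 0.030         # die-to-die "process" sigma (assumed)
SIGMA_MIS = 0.007          # per-device "mismatch" sigma, ~Avt/sqrt(2*W*L)

# "Process only": identical devices get the SAME shift in every run.
proc = rng.normal(0.0, SIGMA_PROC, N)
vt1_proc = VT_NOM + proc
vt2_proc = VT_NOM + proc                   # equal to vt1_proc, run by run

# "Process + mismatch": same shared shift plus independent per-device offsets.
vt1_pm = VT_NOM + proc + rng.normal(0.0, SIGMA_MIS, N)
vt2_pm = VT_NOM + proc + rng.normal(0.0, SIGMA_MIS, N)

print("single-device spread, process only :", vt1_proc.std())
print("single-device spread, proc+mismatch:", vt1_pm.std())   # slightly larger
print("pair delta-Vt spread               :", (vt1_pm - vt2_pm).std())
# The delta-Vt spread is ~sqrt(2)*SIGMA_MIS and is independent of the process
# skew, matching the A/sqrt(WL) behaviour at any corner.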



Title: Re: Monte Carlo
Post by vivkr on Apr 2nd, 2008, 6:41am

Hi Emad, Alan,

Just a couple of points from my experience:

1. While the "process only" option does allow you to do a kind of corner analysis, you may not be able to vary
many other parameters. I believe you can only vary one other parameter, such as supply or temperature. Of
course, if you can do some scripting, then maybe you can get around this.

2. Normally, the process variation is modelled as a uniform distribution. If you are only worried about whether
your device works at the various possible points, this is no problem, but if you are trying to mimic a real
process, you should ask yourself whether this is a realistic scenario. I would expect the process corner itself
to vary in a somewhat Gaussian manner around the typical process point (see the sketch after this post).

3. Good coverage of worst-case corners is usually achieved by using corner analysis as opposed to Monte Carlo
with the process option. I think it highly unlikely that a design which passes all corners fails somewhere in between.

4. Latin hypercube sampling may help in generating better worst-case scenarios in Monte Carlo analysis, but only
if it is actually implemented. I once checked it in my technology and found that the fab had used the same model
for Latin hypercube sampling as for the conventional Monte Carlo analysis. So it is just a dummy switch in my design kit.

5. One reason I use the "mismatch only" option is to do Monte Carlo analysis at the worst-case corner for my
design. This gives you some idea about the spread. Of course, this is a worst-worst case. I would not take a
3-sigma guardband around my worst-case corner, for instance, especially if that corner is one that does not
occur with high probability, such as (strong PMOS, weak NMOS).

Regards
Vivek
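
As a sketch of Vivek's point 2, the snippet below contrasts a uniform process-parameter distribution (with its limits at the 3-sigma corners) against a Gaussian one; the numbers are purely illustrative:

import numpy as np

rng = np.random.default_rng(1)
N = 100_000
sigma = 1.0

gauss = rng.normal(0.0, sigma, N)
unif = rng.uniform(-3 * sigma, 3 * sigma, N)  # flat between the 3-sigma corners

# Fraction of samples landing beyond 2 sigma:
print("P(|x| > 2*sigma), Gaussian:", np.mean(np.abs(gauss) > 2 * sigma))  # ~4.6%
print("P(|x| > 2*sigma), uniform :", np.mean(np.abs(unif) > 2 * sigma))   # ~33%
# A uniform model spends far more of its runs near the corners than a real,
# roughly Gaussian process would, so it over-weights extreme skews.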

Title: Re: Monte Carlo
Post by emad on Apr 2nd, 2008, 1:46pm

Hello Vivek and Alan:

Thanks a bunch for the replies.

Vivek:

I have one point to differ with you on: for linear circuits, I would agree that it is highly unlikely that a circuit which passes at corners would fail in between. However, I believe this is a possible scenario in switching circuits where race conditions can happen, for example in frequency dividers. Another example is multi-domain clocking, which happens often in PLL calibration blocks and, more recently, in ADPLLs.

I have one question for both Alan and you: you mentioned hypercube sampling, and in fact I'm unable to find any option in the ADE MC setup that lets me choose a sampling method. Where do you turn that switch on?

Thanks again.

Emad

Title: Re: Monte Carlo
Post by sheldon on Apr 3rd, 2008, 8:35pm

Emad,

  Some additional comments:

1) LHS is only supported in recent versions of Spectre and IC5141; I forget which
    release. Contact your local support, they would know.

2) In good statistical models, the distributions are chosen to match the effect being
    modeled. Usually parameters have Gaussian distributions, though some parameters
    use other distributions; for example, Beta is often modeled with a log-normal
    distribution (see the sketch after this post).

3) I would kind of go the other way on circuits passing corners and failing Monte Carlo.
   It seems to me that unless the definition of corners is fairly precise, this will happen
   fairly often. That is, the digital concept of corners, Fast/Typical/Slow, does not make
   sense in the analog world. I think this is what Vivek is indirectly referring to when
   he talks about worst-case corners. Corners need to account for the impact of process
   variation on the analog characteristics of transistors.

4) From the design methodology viewpoint, I don't think that corner analysis and
    Monte Carlo analysis are "competitors"; they complement each other.

   If you are interested in verifying that your design meets specification, then corner
   analysis is the appropriate tool for the job. If a corner fails and you want to assess
   the impact of the failure, then Monte Carlo is useful. It will give you insight into
   the yield impact of the failure.

   If you are designing a circuit, then Monte Carlo analysis is good. It provides insight
   into the relationship between design parameters and specifications (correlation plots)
   and into the design margin: how tight the distribution is and whether it is centered
   properly. This information is difficult to extract from corner analysis.

   The challenge is, as the number of process corners increases, whether it is more
   efficient to use corner analysis or to just run Monte Carlo on everything. My feeling
   is that the answer to that question is specific to your project.

   Last point: there is a lot you can do with statistical analysis if you have good
   models. The challenge has always been, particularly for CMOS processes, getting
   good models, that is, models that accurately reflect the process distributions.

                                                                                        Best Regards,

                                                                                            Sheldon
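
As a quick illustration of Sheldon's point 2, the snippet below samples a Gaussian Vt alongside a log-normally distributed Beta; the parameter values are assumptions, not numbers from any real model:

import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Vt: roughly symmetric about nominal, so a Gaussian fits (values assumed).
vt0 = rng.normal(0.45, 0.03, N)

# Beta (current factor): strictly positive and skewed, so a log-normal
# multiplier around a nominal value is a common choice (values assumed).
beta = 200e-6 * rng.lognormal(mean=0.0, sigma=0.1, size=N)

print("Vt   mean / sigma:", vt0.mean(), vt0.std())
print("Beta mean / min  :", beta.mean(), beta.min())  # min stays above zero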

Title: Re: Monte Carlo
Post by Geoffrey_Coram on Apr 4th, 2008, 7:26am


sheldon wrote on Apr 3rd, 2008, 8:35pm:
   Last point: there is a lot you can do with statistical analysis if you have good
   models. The challenge has always been, particularly for CMOS processes, getting
   good models, that is, models that accurately reflect the process distributions.


Indeed.  The foundries like to give you scrap limits -- if the parameters are outside, you don't have to pay for the wafer.  But these are usually much wider than the actual process distribution (because they don't want to scrap), and their legal department doesn't want you to have the actual distributions, lest you try not to pay if the wafers end up outside the recent historical 3-sigma points.

To be fair, it's probably also difficult to actually measure the data.

Title: Re: Monte Carlo
Post by emad on Apr 4th, 2008, 10:09am

I think for larger volumes you may be able to get more info about the statistics of each fab. For a given technology node, your chip can be manufactured in multiple fabs owned by the same foundry. The statistics you receive are typically wider than the statistics of any individual fab, since they represent the aggregate spread of all fabs combined. Therefore your circuit is doomed to be over-designed.

Emad

Title: Re: Monte Carlo
Post by kDaniu on Apr 23rd, 2008, 12:52am

Corners (worst case, e.g. with SS transistors, low temperature, etc.) show you a possible low level of performance.
MC simulation has a different purpose: the robustness of your circuit against transistor variation. In particular, process variations occur from die to die and from chip to chip, while mismatch variations are smaller than process variations and describe the transistor-parameter variation inside a single chip. For some applications this kind of variation (mismatch) is the more important one...

kDaniu

Title: Re: Monte Carlo
Post by jshi on May 11th, 2008, 10:17pm

Hi, Gurus:

I am new to this forum. This is a great website for designers.

I have questions about the Spectre simulator with respect to simulating metastable nodes in a circuit. The circuit is a sub-circuit of a larger one; basically it is a NOR-based latch. Suppose the inputs are A and B, and the respective outputs are X and Y. When A and B are both at 1 at the same time, this sets both X and Y to 0. Now suppose both A and B change to 0 at the SAME time: the outputs X and Y are now at a metastable point, and X/Y can be 0/1 or 1/0 after the metastable condition is resolved. I understand we should avoid metastability by all means in design; however, in this case metastability is used.

Now my question is: in a single Spectre transient simulation, can Spectre show both outputs with equal probability? It seems to me Spectre didn't do that.

Then I also tried Monte Carlo simulation. I am using the TSMC 65LP process with the BSIM4.4 model. With both process variation and mismatch turned on, the probability of node X being "1" versus being "0" is 4 to 1; it is NOT 1 to 1 as I had expected. Is this expected? Has anyone used the TSMC 65LP process and run Monte Carlo sims?

Thanks in advance for your reply.

Best Regards,

Jeff

Title: Re: Monte Carlo
Post by jshi on May 28th, 2008, 12:29pm

Hi, I was able to show metastability when both inputs are EXACTLY the same. However, any delay difference of even 1 ps causes Spectre not to show metastability, whereas in real life the latch would still be in a metastable state, i.e. the output can be 1 or 0 with EQUAL probability. Does anybody have experience with this? Thanks in advance for your reply,

Jeff
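
A toy numeric model of the resolution behaviour, with all numbers assumed: a latch amplifies its initial differential voltage roughly as v(t) = v0*exp(t/tau), so in a noiseless transient only the sign of v0 matters, and a fixed input skew resolves the same way every time. Mismatch shifts v0 randomly, and any systematic asymmetry (layout, unequal loads) can bias the split away from 50/50:

import numpy as np

rng = np.random.default_rng(3)
N = 10000

V_SKEW = 2e-3      # fixed offset equivalent to a small input skew (assumed)
SIGMA_MIS = 5e-3   # mismatch-equivalent input offset sigma (assumed)

# Initial differential voltage at the latch nodes when the inputs release:
v0 = V_SKEW + rng.normal(0.0, SIGMA_MIS, N)

# Exponential regeneration only amplifies v0, so sign(v0) decides the outcome;
# this is why a deterministic simulator never shows a probabilistic result.
p_one = np.mean(v0 > 0)
print("P(X resolves to 1):", p_one)   # ~0.66 here, not 0.5

# With V_SKEW = 0 and symmetric mismatch this would be ~0.5; a systematic
# offset is one plausible reason for a 4:1 split in a process+mismatch MC.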


