The Designer's Guide Community
Forum
Simulating Data Converter Performance
sheldon
Community Fellow
Posts: 751

Simulating Data Converter Performance
Apr 11th, 2005, 8:06am
 
Greetings,

  I have been simulating a high dynamic range A/D Converter and had some questions/observations that I would like to share.

  The following analysis was performed using the standard Cadence tools: ADE, Spectre, and Calculator.

1) Philosophical question:
    It seems like Spectre's Fourier Integral based analysis
    is the best method for simulating the dynamic
    performance of D/A Converters. Since the FFT samples
    the D/A Converter output, it is difficult to evaluate the
    effect of glitch impulse on the output spectrum. Because
    the Fourier Integral evaluates the entire data set, it
    is more accurate. Is this observation correct?

2)  When simulating the dynamic response of a D/A
    Converter (SFDR, THD, SINAD, etc.), the output response
    tends to suffer from the "picket fence" effect, that is,
    energy tends to be concentrated in the harmonics
    of the clock and input frequencies. This phenomenon
    occurs even if the normal precautions are taken, that
    is, the input and clock frequencies are non-harmonically
    related. Is there a method to eliminate this phenomenon
    from simulations?
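(A quick way to see why the frequency ratio matters is this Python sketch, a toy model of an ideal sampler and not any simulator output: the number of distinct input phases the sampler ever visits is just the denominator of fin/fs in lowest terms.)

```python
from fractions import Fraction

def distinct_phases(fin_over_fs: Fraction) -> int:
    """Number of distinct input phases an ideal sampler visits.

    Samples land at phases k*(fin/fs) mod 1, so the pattern repeats
    with a period equal to the denominator of fin/fs in lowest terms.
    """
    return fin_over_fs.denominator

# fin = 10 MHz, fs = 100 MHz: only 10 phases get exercised -> picket fence
coherent = distinct_phases(Fraction(10, 100))
# fin = (3/256)*fs: all 256 sample phases are distinct
spread = distinct_phases(Fraction(3, 256))
print(coherent, spread)
```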

3) For high dynamic range Data Converters, the
   interpolation error is an issue. Strobing the output
   helps somewhat, as does setting maxstep to less than
   1/2 the FFT step size. Are there any other approaches
   that can be used to improve the accuracy of the FFT?

4) A question on the difference between Nyquist-rate
   ADCs and Delta-Sigma ADCs. When using synchronous,
   non-coherent sampling to simulate the SFDR/THD of a
   Nyquist-rate ADC, the FFT "Rectangular" window has
   more dynamic range than the "Hanning" window.
   However, for Delta-Sigma Modulators, the "Rectangular"
   window gives inconsistent results, that is, the SFDR/
   THD are a function of the data window used for the FFT,
   while the "Hanning" window gives consistent results.

   Is this difference because Nyquist-rate converters are
   "memoryless" while Delta-Sigma Modulators have
   "memory"? Since the state of a Delta-Sigma Modulator
   depends on the previous state of the system, there is
   effective spectral leakage in the DSM that does not
   occur for Nyquist-rate ADCs.

5) Some questions on the Fourier component from
   analogLib and Spectre Fourier analysis:
   a) When using Fourier analysis, the fundamental
        frequency is defined relative to the last simulation
        time point. If the simulation stop time is greater
        than 1/(fundamental frequency), then the analysis
        covers only the final period, so the initial time
        points, and hence start-up effects, are ignored.
        Is this correct?
    b) Is there a parameter, equivalent to refnormharm,
        for the numerator of the Fourier Integral? When
        testing Data Converters, the reference tone is
        seldom at the fundamental frequency.
    c) When simulating a Delta-Sigma Modulator, what
        is the appropriate type of interpolation to use?
        Since there are a large number of points and
        the curve is highly nonlinear, that is, a stream
        of 1s and 0s, is linear or quadratic interpolation
        better?

        Looking forward to your comments.

                                                         Best Regards,

                                                            Sheldon
Ken Kundert
Global Moderator
Posts: 2384
Silicon Valley
Re: Simulating Data Converter Performance
Reply #1 - Apr 13th, 2005, 11:08am
 
Sheldon,
I'll try to respond to your questions and comments as best I can.

1) Spectre's Fourier Integral based approach is the only one that is not based on the DFT, and so there is no implied sampling process. Whenever there is a sampling process, the signal at times in between the sample points is ignored. Whether this is desirable depends on how the overall system responds to your signal. If the signal is observed by a discrete-time circuit, then sampling is preferred because it naturally ignores the signal in between the sample points, as your overall system would. With DACs, which generally drive continuous-time "interpolation" filters, sampling is undesirable because the glitches and other artifacts that occur in between the sample points are observed by the filter and do affect the overall system performance. It is possible to accurately use sampling in these systems, but the sampling rate must be very fast so as to be above the Nyquist frequency for the highest frequency signal present in the continuous-time signal. Such a high sample frequency tends to sharply increase simulation and analysis time.

2) I don't understand the issue you are concerned about in this question. It seems to me that the energy in the signals should be concentrated at the clock and input frequencies (and their harmonics and intermodulation product frequencies). Why should it not be?

3) Strobing eliminates interpolation error by forcing the simulator to place solution points at the DFT sample points. However, there are other error mechanisms present. You should remember that the DFT naturally allows you to resolve very small signals in the presence of very large signals. Transient analysis does not naturally render small signals accurately when large signals are present, and the error generated as a result of the large signals often ends up at the same frequency as the small signal of interest. It generally takes considerable skill, patience, and luck to get good results when doing a Fourier analysis of transient results. One thing that really helps is arranging for all timesteps to be exactly equally spaced, each using the same integration method. So when using Fourier analysis, I will use method=gear2only or method=traponly. I also set lteratio large and instead control the time step with maxstep (to choose maxstep, I would likely run a transient analysis with lteratio set normally, look for the smallest timestep the simulator used, and then use that for maxstep).
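(The interpolation error being discussed can be reproduced outside any simulator. The Python toy model below, my own construction rather than a Spectre workflow, samples a pure sine at irregular "solver" time points, linearly interpolates back onto a uniform DFT grid, and compares the resulting noise floor against exact uniform samples.)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
cycles = 7                     # coherent: exactly 7 cycles in the record
f = cycles / N

t_uniform = np.arange(N)
# irregular "solver" time points, on average 4x denser than the DFT grid
t_sim = np.sort(rng.uniform(0.0, N, 4 * N))
x_interp = np.interp(t_uniform, t_sim, np.sin(2 * np.pi * f * t_sim))
x_exact = np.sin(2 * np.pi * f * t_uniform)

def floor_db(x):
    """Largest non-signal bin relative to the signal bin, in dB."""
    X = np.abs(np.fft.rfft(x))
    X /= X[cycles]
    return 20 * np.log10(np.delete(X, cycles).max())

# exact samples give a numerical-noise floor; interpolated samples
# raise the floor by many tens of dB
print(floor_db(x_exact), floor_db(x_interp))
```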

These responses are taking quite a bit of time to formulate. Let me give you these responses now, and try to come back to the other two in a bit.

-Ken
sheldon
Community Fellow
Posts: 751

Re: Simulating Data Converter Performance
Reply #2 - Apr 22nd, 2005, 4:31am
 
Ken,

   About #2

If you use coherent sampling with an ADC, for example,
fs=100M and fin=10M, then only 10 states get exercised
and the FFT shows the picket fence effect. That is, it
looks as if you are viewing the "real" spectrum through a
picket fence. There are tones at 10M, 20M, 30M, ... and
nothing else. The solution for an ADC is to use
non-coherent sampling, for example, fs=100MHz and
fin=(3/256)*fs for a 256-point FFT.
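(This ADC behavior can be reproduced with an ideal quantizer in a few lines of Python. This is my own toy model, not simulator output, and I use fin = fs/8 rather than fs/10 so that the harmonically related case fits an integer number of cycles into the 256-point record.)

```python
import numpy as np

fs, N, bits = 100e6, 256, 8
n = np.arange(N)

def occupied_bins(fin):
    """Count FFT bins of an ideal ADC output holding significant energy."""
    x = np.sin(2 * np.pi * (fin / fs) * n)
    full_scale = 2**(bits - 1) - 1
    q = np.round(x * full_scale) / full_scale        # ideal quantizer
    X = np.abs(np.fft.rfft(q))
    return int(np.sum(X > 1e-9 * X.max()))

few = occupied_bins(fs / 8)          # harmonically related: picket fence
many = occupied_bins(3 / 256 * fs)   # fin = (3/256)*fs: noise spreads out
print(few, many)
```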

The problem is that when simulating a D/A Converter,
non-coherent sampling does not seem to work, that
is, the FFT always shows the picket fence effect even
when the input and sample frequencies are correctly
defined.

Have you ever run across this effect and do you have
any insight into how to eliminate it?

There seem to be two possible approaches:
1) Include the effect of DAC clock jitter in the simulation
2) Increase the number of points in the FFT to be
   greater than the number of DAC codes.

#1 seems to be the more reasonable approach, #2 is
painful because of the simulation time required.

                                              Best Regards,

                                                 Sheldon
Ken Kundert
Global Moderator
Posts: 2384
Silicon Valley
Re: Simulating Data Converter Performance
Reply #3 - May 1st, 2005, 12:07pm
 
Sheldon,
    I do not have an explanation for what you are seeing, but I am also not sure I completely understand the situation. Let me explain it back to you and you can tell me if I have it right.

First consider the ADC. Here you clock it at 100MHz and drive its input with a 10MHz signal. You see responses only at 10MHz and its harmonics. That is because the clock frequency sets the sampling frequency, and hence the Nyquist rate. And with the 100MHz sampling frequency being a multiple of the 10MHz input frequency, the harmonics of the input will always alias down to harmonics of the input. This is the picket fence phenomenon you refer to. No matter how many samples you use (no matter what your sampling interval), you only see energy at the input frequency and its harmonics.

If you were instead to drive the input at 9MHz or 11MHz, then the beat frequency between the input and the sampling frequency would be 1MHz, and you would see the very high frequency harmonics alias down on a 1MHz interval rather than a 10MHz interval. So for example, the 11th harmonic of a 9MHz input signal would be at 99MHz, which would alias to 1MHz.
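(The folding arithmetic in that example is easy to check mechanically. Here is a small Python helper, my own naming, that folds any frequency into the first Nyquist zone.)

```python
fs = 100e6   # converter clock frequency

def alias(f, fs=fs):
    """Fold frequency f into the first Nyquist zone [0, fs/2]."""
    f = f % fs
    return min(f, fs - f)

# 10 MHz input: every harmonic folds back onto the 10 MHz grid
print(sorted({alias(k * 10e6) / 1e6 for k in range(1, 20)}))
# 9 MHz input: harmonics fold onto a 1 MHz grid instead; for example
# the 11th harmonic at 99 MHz aliases down to 1 MHz
print(sorted({alias(k * 9e6) / 1e6 for k in range(1, 20)}))
```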

The thing I don't understand is why you have substantial amounts of energy at the high harmonics. Why is the aliasing significant?

All of the above should also be true for DACs, but again, why would you have substantial energy at the high harmonics?

-Ken
vivkr
Community Fellow
Posts: 780

Re: Simulating Data Converter Performance
Reply #4 - Dec 25th, 2005, 12:37am
 
Hi Sheldon,

Regarding point 4 in your original post: the difference between the effects of the Rectangular window and the Hanning window when applied to Nyquist-rate and oversampled delta-sigma converters is due more to the nature of the spectrum to which they are applied.

The rectangular window (in frequency domain) rolls off very slowly, and hence permits leakage of out-of-band signals into the signal band. While the problem is less severe in Nyquist-rate converters, in delta-sigma converters, the out-of-band shaped quantization noise has a considerable amount of energy concentrated in it, and even the least bit of leakage can ruin your FFT.

The only way to improve results with a rectangular window is to keep increasing the number of points to get a measly 3 dB improvement per octave. A Hanning window, on the other hand, has very low leakage and gives much better FFT results for a smaller number of points.
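(The leakage difference is easy to demonstrate numerically. This Python sketch, my own toy example rather than anything converter-specific, places a tone midway between two FFT bins, the worst case for leakage, and compares how much energy each window spills into bins far from the tone.)

```python
import numpy as np

N = 256
n = np.arange(N)
# worst case for leakage: tone midway between two bins (20.5 cycles)
x = np.sin(2 * np.pi * 20.5 * n / N)

def far_floor_db(w):
    """Largest bin well away from the tone, relative to the peak, in dB."""
    X = np.abs(np.fft.rfft(x * w))
    X /= X.max()
    return 20 * np.log10(X[60:].max())

rect = far_floor_db(np.ones(N))    # rectangular: slow sidelobe roll-off
hann = far_floor_db(np.hanning(N)) # Hanning: far lower leakage floor
print(rect, hann)
```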

Your argument about the "memory" inherent in the state of these converters is correct to a certain extent. However, it is the nonlinearity introduced by the quantizer, coupled with this memory, that causes much of the misery. If you removed the quantizer, leaving behind a discrete-time filter, the effect would not be as severe.

Essentially, all that "random" switching between the 1s and 0s causes the problem. I would conjecture that the output of a delta-sigma ADC with a multi-bit quantizer would be a bit better behaved, although there too, the Hanning window is superior.

For more, please refer to Appendix A of "Understanding Delta-Sigma Data Converters" by Schreier & Temes.

Regards
Vivek
sheldon
Community Fellow
Posts: 751

Re: Simulating Data Converter Performance
Reply #5 - Jan 2nd, 2006, 5:56pm
 
Vivek,

   Thanks for the suggestion, I will check out the reference.  

                                                   Best Regards,

                                                      Sheldon
Copyright 2002-2024 Designer’s Guide Consulting, Inc. Designer’s Guide® is a registered trademark of Designer’s Guide Consulting, Inc. All rights reserved. Send comments or questions to editor@designers-guide.org. Consider submitting a paper or model.