The Designer's Guide Community
Forum
MDAC settling time (Read 6482 times)
aaron_do
Senior Fellow
MDAC settling time
Dec 01st, 2013, 10:38pm
 
Hi all,


for a pipelined ADC, people talk about the settling time requirements of the MDAC. For instance, it may be required to settle to within 10b accuracy.

For 40nm CMOS, I have noticed that the gate leakage of my OTA is not trivial, and this is making it difficult to really see the settling time of the OTA, i.e. if the output never reaches a final value, how can we know when it is within the 10b-accurate window? Does anybody have an idea to get around this?

One thing which I think might be a good compromise (and maybe more meaningful in my opinion) is to use "sampled PAC" analysis from spectrerf. I can then look at the "10b accurate bandwidth" of my MDAC. Can anybody confirm whether this is a good design method?
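For reference, here is a quick back-of-the-envelope in Python (my own sketch, assuming pure single-pole linear settling with no slewing, and an illustrative 200 MS/s clock with half the period available for amplification) for turning an N-bit settling spec into a required closed-loop bandwidth:

```python
import math

def required_bandwidth(n_bits, t_settle):
    """Closed-loop -3 dB bandwidth needed to settle to 1/2 LSB at
    n_bits resolution within t_settle, assuming a single-pole
    (purely linear, no slewing) response."""
    n_tau = (n_bits + 1) * math.log(2)   # time constants for 2^-(N+1) error
    tau = t_settle / n_tau               # largest allowed time constant
    return 1.0 / (2 * math.pi * tau)     # f_3dB = 1/(2*pi*tau)

# Illustrative: 10-bit settling in half of a 200 MS/s clock period
f3db = required_bandwidth(10, 0.5 / 200e6)
print(f"required closed-loop bandwidth ~ {f3db / 1e6:.0f} MHz")
```

Note this only bounds the linear settling; slewing and the feedback factor of the MDAC would tighten it further.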


thanks,
Aaron
there is no energy in matter other than that received from the environment - Nikola Tesla
aaron_do
Re: MDAC settling time
Reply #1 - Dec 2nd, 2013, 11:29pm
 
In case anybody was wondering, settling time and bandwidth are different specs. The step response seems to be more related to slow inputs, which need to settle accurately within the sampling period (half of it, anyway). The bandwidth seems to be related to fast inputs, which need to be accurately tracked: the RC of the S/H causes attenuation and phase shift at high frequencies.
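The RC attenuation and phase shift at high frequencies can be put in numbers quickly (first-order RC assumed; the 100 MHz input and 1 GHz tracking bandwidth are just illustrative):

```python
import math

def rc_tracking_error(f_in, f3db):
    """Gain droop and phase lag of a first-order RC track network
    at input frequency f_in, given its -3 dB bandwidth f3db."""
    x = f_in / f3db
    gain = 1.0 / math.sqrt(1.0 + x * x)   # |H(jf)| of a single pole
    phase = math.degrees(math.atan(x))    # phase lag in degrees
    return gain, phase

# Illustrative: 100 MHz input, 1 GHz tracking bandwidth
g, p = rc_tracking_error(100e6, 1e9)
print(f"gain = {g:.4f}, phase lag = {p:.2f} deg")
```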

So I guess since I'm designing for a communications system, I should be more worried about IM3.


Aaron
carlgrace
Senior Member
Berkeley, CA
Re: MDAC settling time
Reply #2 - Dec 25th, 2013, 9:05pm
 
A well-designed pipelined ADC essentially operates on DC samples, so the transient settling time is everything.  I would focus on transient analysis to find out what your MDAC is capable of.  Keep in mind that if you are calibrating for gain errors, then linear settling errors will be calibrated out as well; you will then be limited by settling nonlinearity due to slewing.
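To illustrate the slewing point, here is a sketch (idealized hard slew limit followed by single-pole linear settling; all numbers hypothetical) of how slewing eats into the time available for accurate settling:

```python
import math

def settle_time(v_step, slew_rate, tau, n_bits):
    """Total settling time for a step of v_step volts: an idealized
    hard slew-limited phase, then single-pole linear settling to a
    1/2-LSB error at n_bits resolution."""
    v_err = v_step * 2.0 ** -(n_bits + 1)  # absolute 1/2-LSB error target
    v_lin = slew_rate * tau                # residual error when slewing ends
    if v_step > v_lin:                     # large step: slews first
        t_slew = (v_step - v_lin) / slew_rate
        t_lin = tau * math.log(v_lin / v_err)
    else:                                  # small step: never slews
        t_slew = 0.0
        t_lin = tau * math.log(v_step / v_err)
    return t_slew + t_lin

# Hypothetical numbers: 0.5 V step, 500 V/us slew rate, 0.3 ns time constant
t = settle_time(0.5, 500e6, 0.3e-9, 10)
print(f"total settling time ~ {t * 1e9:.2f} ns")
```

The real amplifier transitions gradually between the two regimes, so this underestimates slightly, but it shows why the slew-limited portion matters at 10-bit targets.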

In fact, the RC of the front-end S/H will most likely set your high-frequency bandwidth limit.  When the S/H starts significantly phase-shifting the signal you will get distortion.  Check the resistance of your input sampling switch and see how it varies with the input voltage.

In a previous question you asked about removing the input S/H.  Keep in mind that relying on redundancy to fix sampling skew only works well at low frequencies.  At higher frequencies it becomes increasingly difficult to match the amplifier and the sub-ADC path and you will quickly eat up all your correction range with aperture errors.  So be careful.
aaron_do
Re: MDAC settling time
Reply #3 - Jan 5th, 2014, 5:37pm
 
Hi carlgrace,


Quote:
Keep in mind that relying on redundancy to fix sampling skew only works well at low frequencies.  At higher frequencies it becomes increasingly difficult to match the amplifier and the sub-ADC path and you will quickly eat up all your correction range with aperture errors.


thanks for the tip. Assuming my comparator threshold is well defined, this problem would only occur if my signal could change fast enough to eat up my entire redundancy overhead within the time difference between the subADC comparison instant and the MDAC hold instant right? So as a back-of-the-envelope calculation, for a 1.5b subADC,

Vref · sin(2π · f_nyquist · t_error) = Vref/8
t_error ≈ 0.02/f_nyquist

which works out to 0.2ns for a 200MHz sampling rate. Does that sound right?
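Solving the sine exactly, rather than with a small-angle shortcut, gives essentially the same answer (this just re-checks the arithmetic, taking f_nyquist as half the 200 MHz sampling rate):

```python
import math

fs = 200e6                                   # sampling rate from the post
f_nyq = fs / 2                               # Nyquist input frequency
# Solve Vref*sin(2*pi*f_nyq*t_err) = Vref/8 exactly for t_err
t_err = math.asin(1.0 / 8.0) / (2 * math.pi * f_nyq)
print(f"t_err ~ {t_err * 1e12:.0f} ps")      # close to 0.02/f_nyq = 200 ps
```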

There are several publications where the subADC threshold is modulated with a PN sequence as part of a background calibration. I suppose this would make the problem worse?

So if I want to go the matching route, I need to make sure that the subADC comparator decision instant and the MDAC hold instant is well aligned right? And if I want to avoid matching, I need a front-end S/H. In your experience, is this the main source of high-frequency error?
carlgrace
Re: MDAC settling time
Reply #4 - Jan 6th, 2014, 9:36am
 
aaron_do wrote on Jan 5th, 2014, 5:37pm:
Assuming my comparator threshold is well defined, this problem would only occur if my signal could change fast enough to eat up my entire redundancy overhead within the time difference between the subADC comparison instant and the MDAC hold instant right? [...] which works out to 0.2ns for a 200MHz sampling rate. Does that sound right?


That sounds about right.  As you can see the matching of the MDAC and subADC paths becomes pretty tight at high frequency.

You're also correct that a background calibration like you described would make this problem worse.  That's because these algorithms depend on using the correction range to inject their test signals.  I imagine it is somewhat involved to specify the comparator offset in this case.

If you have a front-end SHA then this matching issue goes away.  But you have to pay with a lot of power.

However, keep in mind that if you eliminate the SHA you will increase the required power in the comparators because you will need to keep the offset down.  Dynamic offset typically dominates the offset of a CMOS comparator in deep submicron technology.  It is very difficult to simulate accurately so you often over-design with a class-A preamp and dedicated sampling caps for the subADC.  It's a delicate tradeoff to be sure.

I'm not sure what you mean by "main source of high-frequency error".  What error specifically are you talking about?  Distortion in the sampling switch limits the SFDR at high frequency.  Incomplete MDAC settling limits the SNDR.  If you don't have a SHA my guess is mismatch between the MDAC and subADC would limit your sampling frequency.  

If you're going for high accuracy too, don't forget jitter in your sampling clock can also limit your sampling rate.  Pipelined ADCs are fun.
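On the jitter point: for a full-scale sine input, the standard jitter-limited SNR ceiling is SNR = -20·log10(2π·f_in·σ_jitter). A quick way to evaluate it (the 100 MHz / 1 ps rms numbers are illustrative only):

```python
import math

def jitter_limited_snr(f_in, sigma_j):
    """SNR ceiling set by rms sampling-clock jitter sigma_j for a
    full-scale sine input at frequency f_in."""
    return -20.0 * math.log10(2 * math.pi * f_in * sigma_j)

# Illustrative: 100 MHz input, 1 ps rms jitter
print(f"jitter-limited SNR ~ {jitter_limited_snr(100e6, 1e-12):.1f} dB")
```

At around 64 dB for these numbers, that is already close to the 10-bit ideal SNR, which shows why the sampling clock matters as much as the MDAC.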
aaron_do

Re: MDAC settling time
Reply #5 - Jan 6th, 2014, 5:49pm
 
Hi carlgrace,


thanks for all the help.

Quote:
I'm not sure what you mean by "main source of high-frequency error".


I was thinking that there was one problem that's much worse than all the others, but clearly it's not gonna be that easy.
RobG
Community Fellow
Bozeman, MT
Re: MDAC settling time
Reply #6 - Jan 12th, 2014, 7:32pm
 
aaron_do wrote on Dec 1st, 2013, 10:38pm:
For 40nm CMOS, I have noticed that the gate leakage of my OTA is not trivial, and this is making it difficult to really see what is the settling time of the OTA. [...] Anybody have any idea to get around this?


Are you talking about gate leakage from the input pair? I would think that error would be common-mode. I didn't see any gate-leakage effects in my 40 nm pipelined design.

By the way, if the output doesn't settle, it doesn't matter whether it's from slow response time or gate leakage - it is in error!



aaron_do

Re: MDAC settling time
Reply #7 - Jan 13th, 2014, 12:24am
 
Hi RobG,


Quote:
Are you talking gate leakage from the input pair?


yes I was. Actually it was really a first-pass design, and after some modifications, the leakage was much smaller. I'm kind of looking at SAR ADCs for now, but if I go back to pipelined, then I'll look into the issue again.


thanks,
Aaron
Copyright 2002-2024 Designer’s Guide Consulting, Inc. Designer’s Guide® is a registered trademark of Designer’s Guide Consulting, Inc. All rights reserved. Send comments or questions to editor@designers-guide.org. Consider submitting a paper or model.