Hi Reiner,
Thanks! Your advice closely matches our own thinking. The IIR model has been suggested in many papers, and the voltage-dependent parasitic capacitance in the switch is also something we are quite concerned about. High source impedance does indeed limit the switch size. However, the source impedance is even higher than our estimate because of the LC filter, and unfortunately we do not use a reset phase for the sampling cap.
We find it hard to model the commercial LC filter since it is a black box to us. Is it practical to first sweep the output impedance of the filter and then build a Verilog-A model from that data? Or is there a simple empirical LC model we could use for reference?
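As a starting point before measuring the real part, a simple empirical stand-in is a low-order LC ladder whose output impedance you can evaluate analytically. The sketch below computes the output impedance of a C-L-C pi filter driven from a source resistance; all component values here are hypothetical placeholders, not the actual commercial filter.

```python
import numpy as np

def lc_pi_output_impedance(f, L=100e-9, C1=10e-12, C2=10e-12, Rs=50.0):
    """Output impedance vs. frequency of a C1-L-C2 pi filter driven from
    a source resistance Rs. Component values are illustrative only."""
    s = 1j * 2 * np.pi * np.asarray(f, dtype=float)
    z1 = 1.0 / (1.0 / Rs + s * C1)    # looking back: Rs in parallel with C1
    z2 = z1 + s * L                   # plus the series inductor
    return 1.0 / (1.0 / z2 + s * C2)  # all in parallel with C2

freqs = np.logspace(6, 9, 301)        # 1 MHz .. 1 GHz sweep
zout = lc_pi_output_impedance(freqs)
```

Once you have a measured (or simulated) impedance sweep of the real filter, you can fit the element values of such a ladder to it, or tabulate the sweep directly in a Verilog-A model.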
rf-design wrote on Mar 9th, 2010, 1:32am: I think modelling could mean two approaches.
1. Modelling the charge kickback effect of the sampling cap
2. IIR discrete time modeling of this effect for the total transfer function
The first type of modelling is simple. Use an ideal switch that is periodically connected to the LC filter output. The interesting question is what happens to the sampling cap in the discharge phase: is the cap discharged to zero, to a quantized value, or to something else? That determines the amount of charge flowing back into the LC filter, and ultimately the IIR effect.
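A minimal sketch of that first approach: treat one sampling event as instantaneous charge sharing between the sampling cap and an effective capacitance at the filter output. All values below are hypothetical, and the starting voltage on the sampling cap encodes the discharge assumption being discussed.

```python
# Ideal-switch charge-kickback sketch (hypothetical values).
Cs = 1e-12     # sampling cap
Cf = 20e-12    # effective capacitance seen at the LC filter output
v_src = 1.0    # filter output voltage just before the switch closes
v_prev = 0.0   # cap voltage before sampling: 0.0 models a full reset;
               # use the previous held sample for the no-reset case

# Charge sharing when the ideal switch closes:
v_new = (Cf * v_src + Cs * v_prev) / (Cf + Cs)
q_kickback = Cs * (v_new - v_prev)  # charge pulled out of the filter node
```

Sweeping `v_prev` over the reset / quantized / held-sample cases shows directly how the discharge assumption changes the kickback charge, which is the quantity that drives the IIR effect on the filter.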
The IIR modelling assumes a linear effect of the previous samples on the current sample, so it acts like a discrete-time filter on the total response. The alias rejection is not affected by this, so if you can compensate for, or tolerate, this linear filter effect you can avoid the buffer. But there are also the voltage-dependent capacitances of the switch and the switch parasitics. These lead to nonlinear effects if the source impedance is not very low and does not have long time constants.
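The linear memory effect described above can be sketched as a one-pole discrete-time IIR filter, where each new sample retains a fraction of the previously held value. The mapping `eps = exp(-T/tau)` to a track time `T` and settling constant `tau` is an illustrative assumption, not a fitted model.

```python
def sample_with_memory(x, eps):
    """One-pole IIR model of incomplete settling: each sample keeps a
    fraction eps of the previous held value (0 <= eps < 1).
    y[n] = (1 - eps) * x[n] + eps * y[n-1]"""
    y, y_prev = [], 0.0
    for xn in x:
        y_prev = (1.0 - eps) * xn + eps * y_prev
        y.append(y_prev)
    return y

# A DC input still settles to the input value (unity DC gain), which is
# why this effect reshapes the in-band response but not the alias rejection
# set by the continuous-time LC filter.
out = sample_with_memory([1.0] * 50, eps=0.2)
```

Since the model is linear and time-invariant in the sample domain, its effect can in principle be equalized by a digital filter with the inverse response, which is the compensation option mentioned above.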
In high-resolution, high-speed pipeline ADCs the input buffer often limits the performance. Some designs stack a SiGe BiCMOS buffer die on top of the CMOS ADC die.