vivkr
Hi,
We know that high-order delta-sigma modulators with a single-bit quantizer have limited overload capability: a large enough input can destabilize them. Typically, one does not expect better than about -6 dBFS peak input before overload.
However, if one uses a CIFF (cascade-of-integrators, feedforward) structure, one has the option of scaling the integrator outputs so that the last integrator saturates first and the first integrator saturates last (or not at all). So, for a given signal level (say -6 dBFS), the first integrator has the smallest swing, the next one a bit more, the next one a bit more, and so on.
Now, if one adds signal limiting here, which arises naturally from the limited output swing of real integrators, then one can see that the modulator degrades gracefully, dropping from 4th order progressively to 2nd order, and maybe even to a 1st-order modulator, as the input signal level is raised. All this is known.
Now, I am able to build a model in MATLAB with all this and see no instability even when I exceed the modulator's full scale, in fact by a large factor, say 10x and more, simply because my integrators are defined to clip beyond full scale.
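For concreteness, here is a minimal sketch of the kind of model I mean (in Python/NumPy rather than my actual MATLAB code; the stage gains and feedforward coefficients are just illustrative placeholders roughly shaped like a max-flat NTF design, not a careful synthesis):

```python
import numpy as np

def simulate_ciff_dsm(u, c=(0.25, 0.25, 0.25, 0.25),
                      a=(3.16, 4.8, 4.0, 1.4), xmax=1.0):
    """4th-order CIFF delta-sigma modulator with a 1-bit quantizer.

    c    : per-stage integrator gains (illustrative values, chosen so the
           first integrator swings least and clips last, if at all)
    a    : feedforward coefficients into the quantizer (illustrative)
    xmax : hard clip level, modelling the finite swing of a real integrator
    Returns the +/-1 bitstream and the state history (for histograms).
    """
    x = np.zeros(4)            # integrator states
    v = np.empty(len(u))       # quantizer output bitstream
    X = np.empty((len(u), 4))  # state history
    vq = 1.0                   # previous decision (one-sample loop delay)
    for n, un in enumerate(u):
        # CIFF: only the first integrator sees the feedback signal
        x[0] += c[0] * (un - vq)
        x[1] += c[1] * x[0]
        x[2] += c[2] * x[1]
        x[3] += c[3] * x[2]
        # instantaneous clipping -- the states can never diverge
        np.clip(x, -xmax, xmax, out=x)
        # feedforward summation and single-bit quantization
        y = a[0]*x[0] + a[1]*x[1] + a[2]*x[2] + a[3]*x[3]
        vq = 1.0 if y >= 0 else -1.0
        v[n] = vq
        X[n] = x
    return v, X
```

With a small input the bitstream average tracks the input, while for a grossly overloaded input (several times full scale) the states simply park at the clip level and the output saturates, but nothing diverges, which is exactly the behaviour I am describing.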
So, where is the catch? Why do we speak of limited input-handling capability for single-bit, higher-order modulators when it is possible to use the feedforward scheme to bypass this limitation? Naturally, my model is very simple: sinewave inputs, and the clipping is instantaneous with no long recovery time from clipping (which there usually would be in reality). But the principle seems to work.
Am I missing something?
I can see the effect of the clipping on the state histograms: the last integrator operates under clipped conditions practically all the time, yet the SQNR is good and the states all stay within reasonable levels.
Thanks, Vivek