The Designer's Guide Community
Forum
CDR modeling for performance prediction and comparison
KangSub
New Member, Posts: 9

CDR modeling for performance prediction and comparison
May 3rd, 2010, 8:45pm
 
Hi, I'm KangSub.

I have been studying how to estimate and compare the performance of CDRs at the design stage.

I read Kundert's recent paper, "Verification of Bit-Error Rate in Bang-Bang Clock and Data Recovery Circuits" (2009). You can find this paper on the main page.

In that paper, the phase-domain model is used to calculate the jitter transfer. As a result, the jitter transfer is equivalent to the transfer function of the phase-domain model (page 16, eq. (20)).

In my opinion, however, the only jitter included in the phase-domain model is the random jitter (RJ) component (PD noise and VCO phase noise). The deterministic jitter (DJ) component seems to be missing.

Can anybody explain this, or offer another opinion?

Thanks.
Ken Kundert
Global Moderator, Posts: 2384, Silicon Valley
Re: CDR modeling for performance prediction and comparison
Reply #1 - May 3rd, 2010, 11:10pm
 
Random jitter (RJ) and deterministic jitter (DJ) are handled separately. DJ is computed with a simple transient analysis; RJ is computed using the phase-domain model. The two come together by way of Tslack when computing the bit error rate (BER).
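
As a rough illustration of how Tslack ties the two together (my own toy sketch, not code from the paper): let the peak-to-peak DJ eat into the eye, and let the remaining slack absorb the Gaussian RJ tail.

```python
import math

def ber_estimate(ui, dj_pp, rj_sigma):
    """Toy BER estimate combining DJ and RJ through the timing slack.

    Illustrative only: the peak-to-peak DJ is assumed to simply
    shrink the slack on each side of the sampling instant, and the
    leftover slack must absorb the Gaussian RJ tail.  All times in
    seconds.
    """
    t_slack = ui / 2.0 - dj_pp / 2.0   # slack left after DJ
    if t_slack <= 0:
        return 1.0                     # DJ alone closes the eye
    q = t_slack / (math.sqrt(2.0) * rj_sigma)
    return 0.5 * math.erfc(q)          # Gaussian tail beyond the slack
```

With a 100 ps UI, 20 ps of DJ, and 5 ps RMS RJ this gives a vanishingly small BER; the exact bookkeeping (one tail versus two, dual-Dirac DJ, and so on) depends on the jitter model used.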

-Ken
KangSub
New Member, Posts: 9

Re: CDR modeling for performance prediction and comparison
Reply #2 - May 4th, 2010, 1:27am
 
Thanks for your very quick reply, Mr. Kundert.

I already understand the point you mention above.

To calculate the BER, we have to evaluate the DJ by simulating the voltage-domain model (behavioral or transistor-level circuits) for about 1000 cycles to build a jitter (DJ) histogram of the recovered clock. We also have to find the RJ by simulating the phase-domain model, which includes the random noise (jitter) components of the PD and VCO. We can then calculate the BER from the standard deviation of the RJ, the peak-to-peak value of the DJ, and 1 UI. That isn't wrong, is it?
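
The post-processing I have in mind for the DJ step looks something like this (a hypothetical helper; the edge timestamps would come from the voltage-domain transient simulation):

```python
def dj_peak_to_peak(edge_times, period):
    """Peak-to-peak DJ of a recovered clock from rising-edge times.

    Hypothetical post-processing: compare each measured edge of the
    (noise-free) transient simulation against an ideal clock anchored
    at the first edge, then take the spread of the deviations.
    """
    t0 = edge_times[0]
    jitter = [t - (t0 + n * period) for n, t in enumerate(edge_times)]
    return max(jitter) - min(jitter)
```

Anchoring the ideal clock at the first edge is crude; in practice one would fit the frequency and offset by least squares before taking the spread.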

As you pointed out, RJ and DJ are treated separately.

And that is exactly the point of my question.

The phase-domain model includes only the RJ component, not the DJ component.

To calculate the jitter transfer, I think both RJ and DJ have to be included.

Therefore, it seems incorrect to use only the phase-domain model (which includes only the RJ component) to evaluate the jitter transfer.

That is my question.

Thank you very much for your attention.

-KangSub
Ken Kundert
Global Moderator, Posts: 2384, Silicon Valley
Re: CDR modeling for performance prediction and comparison
Reply #3 - May 4th, 2010, 11:32am
 
Okay, I think I understand your point now. It is a good question, one that I have not spent enough time thinking about. My assumption was that the DJ is not strongly affected by the sinusoidal jitter applied to the input. Making that assumption allowed me to use small-signal analysis. If my assumption is wrong, then we would need to apply sinusoidal jitter of a particular amplitude when performing a jitter transfer test, and I don't believe the spec for jitter transfer gives an amplitude for the sinusoidal jitter, only its frequency.
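
To be concrete about what the small-signal assumption buys you (a toy second-order loop of my own, not the model from the paper): under it, the jitter transfer is just the closed-loop transfer function, evaluated once per frequency, with no amplitude dependence at all.

```python
import cmath

def jitter_transfer(f, k, tau_z):
    """|H(j*2*pi*f)| of a toy charge-pump PLL phase-domain model.

    Open loop G(s) = k*(1 + s*tau_z)/s**2 (integrator plus a
    stabilizing zero); closed loop H = G/(1 + G).  The parameter
    values used below are made up for illustration.
    """
    s = 2j * cmath.pi * f
    g = k * (1 + s * tau_z) / s**2
    return abs(g / (1 + g))
```

As expected of a linear model, the result is unity well inside the loop bandwidth and rolls off above it, regardless of how large the applied sinusoidal jitter is.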

-Ken
KangSub
New Member, Posts: 9

Re: CDR modeling for performance prediction and comparison
Reply #4 - May 10th, 2010, 3:26am
 
Sorry for my late reply.

I can understand your assumption that the DJ is not strongly affected by the sinusoidal jitter applied to the input, because the DJ arises from the system architecture and operation.

However, I cannot understand your last sentence. Generally, to perform a jitter transfer test, do we have to apply sinusoidal jitter of a particular amplitude and frequency and sweep them? I do not see the relationship between your assumption, the jitter transfer test, and the spec for jitter transfer.

Could you please explain it more simply?

- KangSub
Ken Kundert
Global Moderator, Posts: 2384, Silicon Valley
Re: CDR modeling for performance prediction and comparison
Reply #5 - May 10th, 2010, 9:15am
 
I just took a look at the Telcordia spec and found the following:
Quote:
For a system with a linear jitter transfer function, jitter transfer measurements can
be made (and identical results can be obtained) using sinusoidal jitter applied to the
input signal at any level up to the jitter tolerance level for that interface and that
specific jitter frequency. However, SONET systems typically do not have linear
jitter transfer functions (both by design and due to inherent factors such as the
limited number of stuff opportunity bits available in the asynchronous DSn to VT or
STS SPE mappings), and therefore the results obtained in any jitter transfer tests
are likely to depend on the particular input amplitudes used. In general, the primary
purpose of the jitter transfer requirements is to prevent performance degradations
by limiting the accumulation of jitter through a series of systems such that it does
not exceed the network interface jitter requirements (or the jitter tolerance of any
of the NEs involved). Thus, it is more important that a system meet the jitter
transfer criteria for relatively high input jitter amplitudes (e.g., amplitudes close to
the network interface jitter or jitter tolerance limits) than for very low input
amplitudes. Therefore, for testing the conformance of a system to the jitter transfer
requirements in this document (e.g., to R5-236 [338] or R5-237 [339]), the input
jitter amplitude range is limited to 0.1 to 1.0 times the amplitude given by the
appropriate jitter tolerance mask. (That is, the jitter transferred through the system
must be under the jitter transfer mask for any input jitter amplitude within this
range, but is not required to be under the jitter transfer mask for input amplitudes
outside of the range.)


This suggests that jitter transfer is not a simple linear transfer function, as I had assumed. Predicting jitter transfer with simulation would then involve running a series of transient simulations in which sinusoidal jitter is applied to the input of the CDR over a range of frequencies, with an input amplitude of 0.1 to 1.0 times the jitter tolerance mask, to determine the DJ; following that with corresponding phase-domain model simulations to determine the RJ; and finally combining the two to find the jitter transfer.
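
The kind of sweep I mean might look like this (entirely a toy: a crude bang-bang loop with a made-up gain, not the model from the paper, and with the tolerance mask reduced to a single amplitude number):

```python
import math

def bb_output_jitter(amp, freq, kbb=0.01, n=20000):
    """Peak output jitter (in UI) of a crude bang-bang loop tracking
    sinusoidal input jitter of amplitude `amp` UI at `freq` cycles
    per loop update.  Purely illustrative.
    """
    phi_out, out = 0.0, []
    for k in range(n):
        phi_in = amp * math.sin(2 * math.pi * freq * k)
        # Bang-bang phase detector: step by +/- kbb UI per update.
        phi_out += kbb if phi_in > phi_out else -kbb
        out.append(phi_out)
    settled = out[n // 2:]            # discard the settling transient
    return (max(settled) - min(settled)) / 2.0

def transfer_sweep(freqs, amps):
    """Tabulate jitter gain over frequency and input amplitude,
    e.g. amplitudes spanning 0.1 to 1.0 of the tolerance mask."""
    return {(f, a): bb_output_jitter(a, f) / a
            for f in freqs for a in amps}
```

Because the loop slew-rate limits at kbb UI per update, the measured gain drops as the input amplitude grows, which is exactly the amplitude dependence the Telcordia text warns about.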

I have never done these measurements, so I have no idea how much the DJ changes as you vary the amplitude of the input jitter. Do you have any experience with this? Is it a significant effect?

-Ken
love_analog
Senior Member, Posts: 101

Re: CDR modeling for performance prediction and comparison
Reply #6 - Jul 29th, 2010, 7:36am
 
I always thought DJ was more of a power-supply thing. Because of that, we typically simulate the DJ with power-supply noise on the PLL/CDR supply. We don't do any frequency-shaping analysis on it like we do for RJ; we've never had to.
loveanalog.blogspot.com
The Power of Analog
Copyright 2002-2024 Designer’s Guide Consulting, Inc. Designer’s Guide® is a registered trademark of Designer’s Guide Consulting, Inc. All rights reserved. Send comments or questions to editor@designers-guide.org. Consider submitting a paper or model.