Hi everybody,
I recently designed and measured a direct conversion downconverter consisting of an LNA (single-ended input and output), a mixer (single-ended input, differential output), and a differential-to-single-ended converter for the measurement.
Since the architecture is direct conversion, the overall noise figure in the Spectre simulation is set to NFdsb. According to textbooks, and consistent with simulations comparing single- and double-sideband NFs, the following relation holds:
NFssb = NFdsb + 3 dB
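For reference, the +3 dB offset is just 10·log10(2) when the textbook convention F_ssb = 2·F_dsb is expressed in linear noise factors. A quick numerical sketch (the 10 dB DSB noise figure is a hypothetical value, not from my design):

```python
import math

# Textbook convention: the SSB noise factor is twice the DSB one
# (F_ssb = 2 * F_dsb), since only one sideband carries signal while
# noise is converted from both sidebands.
nf_dsb_db = 10.0                  # hypothetical DSB noise figure in dB
f_dsb = 10 ** (nf_dsb_db / 10)    # convert dB noise figure to linear noise factor
f_ssb = 2 * f_dsb                 # apply the factor-of-2 relation
nf_ssb_db = 10 * math.log10(f_ssb)

# The dB difference is 10*log10(2), about 3.01 dB
print(round(nf_ssb_db - nf_dsb_db, 2))  # → 3.01
```

This only illustrates the unit conversion behind the "+3 dB" rule of thumb; it does not resolve which convention the instrument applies.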
However, during the measurement, the Agilent spectrum analyzer I used offers both SSB and DSB options, but they show the opposite dependence. According to the Agilent application note on the Y-factor method (
http://cp.literature.agilent.com/litweb/pdf/5952-3706E.pdf), page 23:
“In the simplest case, the SSB noise figure will be a factor of 2 (3.0dB) lower than the DSB measurement.”
This confuses me. Can anyone explain why the textbook relation appears to be the opposite of the instrument's behavior?
Which option should I choose to compare the simulation with the measurement?
Thanks in advance for the help.