Jess Chen wrote on Mar 2nd, 2006, 3:11pm:
Steven,
You need to back up a bit for me. What two methods are you talking about? And are you working with OFDM preambles?
As for choice in algorithms, my personal belief is that the choice of method is primarily driven by:
1. when in the design flow the problem is identified and given attention and
2. whether the baseband and RF teams work at the same company.
For example, if the problem is identified late in the design and the RF and baseband chips are being designed together, the solutions tend to involve the microprocessor and some complex algorithms. However, if the problem is recognized early, and/or the product is just an RF chip, the solutions tend to be more circuit oriented and the computational algorithm tends to be much simpler. My experience is with the former. Our original thought was to calibrate in the factory. After we saw how expensive that was, we enlisted help from the microprocessor to implement an automatic calibration/compensation scheme. Another driving force is schedule. We had access to some potentially more robust but also more complex algorithms, but we lacked the time to pursue them.
-Jess
Hi Jess,
I am sorry I didn't express myself clearly enough. The two methods I referred to are:
1. Using a dedicated calibration stage to perform IQ correction. This method is not OFDM-specific. One reference is
"Circuits and algorithms for wireless communications", Messerschmitt et al. During idle time, an image tone generator produces a calibration signal (see the sketch after this list).
2. For OFDM, the one I am looking at is
"Compensation schemes and performance analysis of IQ imbalances in OFDM receivers", Tarighat et al., IEEE Trans. on Signal Processing, Aug. 2005, in which there is no dedicated calibration stage. I am not sure whether it works only on the preamble, though.
Maybe what I described should be categorized in the former case you pointed out: automatic calibration with the help of the microprocessor. One thing I don't quite understand: when the problem is recognized early, the solution would be simple. Simple in the sense that the calibration can be embedded into the chip? It seems some popular algorithms would be used in this case, right?
What I think is that for a wireless channel the calibration keeps running, because the mismatches are random but (assumed) slowly varying. So the calibration runs either on the microprocessor side or on the chip side. Am I right? A sketch of one such always-on adaptive scheme is below.
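To make the always-running idea concrete, here is a minimal sketch of one well-known blind adaptive compensator (the circularity-based LMS scheme; this is not necessarily what Tarighat et al. do). It drives the compensated output y = r - w*conj(r) toward properness, E[y^2] = 0, with the update w <- w + mu*y^2, so the tap w tracks a slowly varying beta/alpha*. The mismatch values and step size are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
g, phi = 1.03, np.deg2rad(2.0)              # assumed slowly varying mismatch
alpha = 0.5 * (1 + g * np.exp(-1j * phi))
beta  = 0.5 * (1 - g * np.exp( 1j * phi))

mu, w = 1e-3, 0j                            # LMS step size and correction tap
for _ in range(20000):
    s = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    r = alpha * s + beta * np.conj(s)       # received sample with IQ imbalance
    y = r - w * np.conj(r)                  # compensated sample
    w = w + mu * y * y                      # push E[y^2] toward 0 (properness)

print("estimated beta/alpha*:", w)
print("true      beta/alpha*:", beta / np.conj(alpha))

Because the update uses only received data, it can run continuously, either in hardware or on the microprocessor, which matches the always-on tracking you describe.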
Thanks,
Steven