The Designer's Guide Community Forum
Help with ∆Vbe based Temperature Sensor Implementation
saket
Jun 11th, 2013, 11:35am
 
Hi,

I am specifically referring to the paper "A Switched-Current, Switched-Capacitor Temperature Sensor in 0.6um CMOS."

In this paper, the author mentions "offset" subtraction to bring the ∆Vbe voltage within the range of the ADC. This is done by subtracting a fixed DC voltage tapped from a reference.

There are a few issues with this approach. First, there will be glitches if there is any skew between the clocks (very likely), and second, by adding a tap from a reference, a non-ideality is introduced into the ∆Vbe equation.

The author also uses an SAR ADC and no anti-aliasing filters. This is a bit of a concern, as anti-aliasing filters are normally required in sampled-data systems. The SAR ADC will work with a voltage reference (derived from a bandgap), so that introduces an error as well. Is this the reason why Sigma-Delta modulators are used in temperature sensors? Can SAR ADCs be used instead?

Finally, can someone suggest other ways of converting the ∆Vbe value to fit the ADC's range? Also, please comment on the need for an anti-aliasing filter.

Thanks in advance!
saket
Reply #1 - Jun 13th, 2013, 8:19am
 
Another question related to temperature sensors: why is the low-pass filter frequency in most remote temperature sensors set to 65 kHz?

For example: ADT7485A, NCT65, ADT7467, etc.
ywguo
Reply #2 - Jun 14th, 2013, 8:47am
 
Hi Saket,

Why do you worry about glitches? That is a discrete-time system.

How is the non-ideality introduced in the ΔVBE equation?

If there is significant interference in your system, you need an anti-aliasing filter. I don't know whether there was an anti-aliasing filter in Tuthill's work, because he didn't mention it.

Quote:
Is this the reason why Sigma-Delta Modulators are used in Temperature Sensors? Can SAR ADCs be used instead?

I don't understand your question. The author did use an SAR ADC instead of a ΣΔ modulator for his temperature sensor.

Best Regards,
Yawei
saket
Reply #3 - Jun 15th, 2013, 6:19am
 
ywguo wrote on Jun 14th, 2013, 8:47am: [quoted above]


Hi Yawei, thanks for your response. For a while I thought I wasn't going to get any. :)

The reason I worry about glitches is that I may end up using an SAR ADC, and glitches at the output could lead to erroneous results, although it can be argued that if enough time is given for the output to settle, it should be OK.

More worrisome is the absence of an anti-aliasing filter. The wideband thermal noise will alias back in sampled-data systems. Given that ∆Vbe increases by only 200-300 µV per kelvin, this noise could be an issue. That is why I asked about the need for anti-aliasing filters.
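
As a quick sanity check on that number (my own back-of-the-envelope calculation, assuming an ideal exponential BJT and a simple two-current ∆Vbe, so it may not match Tuthill's switched scheme exactly), the sensitivity is (k/q)*ln(N):

Code:
import math

k_over_q = 8.617e-5  # V/K, Boltzmann constant over electron charge

# dVbe = (kT/q)*ln(N) for a current (or emitter-area) ratio N,
# so the sensitivity is d(dVbe)/dT = (k/q)*ln(N)
for N in (8, 10, 20):
    print(f"N = {N:3d}: {k_over_q * math.log(N) * 1e6:6.1f} uV/K")

# N =   8:  179.2 uV/K
# N =  10:  198.4 uV/K
# N =  20:  258.1 uV/K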

The non-ideality is introduced when an offset voltage is subtracted from the amplified ∆Vbe. Since the offset voltage comes from the bandgap, the amplified output will contain the bandgap error.
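
To put a rough number on that (purely illustrative values, not taken from the paper): whatever error sits on the subtracted offset, referred back to the ∆Vbe node, divides by the ∆Vbe sensitivity and shows up directly as a temperature error.

Code:
# Illustrative only -- how an error in the subtracted (bandgap-derived) offset
# maps to a temperature error. All numbers below are assumptions, not the paper's.
sens = 258e-6      # V/K, dVbe sensitivity for a 20:1 ratio
gain = 10.0        # assumed gain ahead of the offset subtraction
v_off_err = 5e-3   # assumed 5 mV absolute error in the subtracted offset

temp_err = (v_off_err / gain) / sens   # refer to the dVbe node, then to degrees
print(f"{temp_err:.2f} K of error from the offset alone")   # ~1.94 K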

Finally, I wanted some comments on the use of the ADC. Usually a ΣΔ ADC is used in temperature sensors. Tuthill's paper mentions an SAR ADC. What is the advantage that a ΣΔ ADC has over other architectures? Why is it the preferred choice for temperature sensors?

Also, on a related note, are there any continuous-time implementations of a ΔVbe-based temperature sensor? Put another way, can the noise and bandgap errors be kept within limits to get ±2 degrees of precision?
ywguo
Reply #4 - Jun 15th, 2013, 9:40pm
 
Most answers to your questions depend on your requirements. You mentioned ±2 degrees of precision. What is the range of the operating temperature?

Have you done any calculations? I don't think the PTAT voltage increases by as little as 200-300 µV per kelvin if your temperature sensor has an architecture similar to Tuthill's.


Best Regards,
Yawei
saket
Reply #5 - Jun 17th, 2013, 3:28pm
 
ywguo wrote on Jun 15th, 2013, 9:40pm: [quoted above]

Hi Yawei,

Many thanks again for your response. The ±2 degrees of precision is required over -40 to +95 °C. One-point calibration is allowed.

I forgot to mention -- and it's completely my fault -- but the idea is to sense temperature in a remote location. That means I can't use two BJTs, only one (due to cost). This implies that I can't use Tuthill's switching scheme (which is patented anyway), and my ΔVbe will only increase by 200-300 µV per kelvin -- that is, with a current ratio of 20·I vs. just I. I could increase the ratio, but since ΔVbe grows only logarithmically with it, I can't really push the sensitivity up toward the mV-per-degree level.
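
Just to show why increasing the ratio doesn't buy much (same ideal-BJT assumption as before), the sensitivity only grows with the log of the ratio:

Code:
import math

k_over_q = 8.617e-5  # V/K

# d(dVbe)/dT = (k/q)*ln(N)
for N in (20, 100, 1000):
    print(f"N = {N:4d}: {k_over_q * math.log(N) * 1e6:5.0f} uV/K")
# N =   20:   258 uV/K
# N =  100:   397 uV/K
# N = 1000:   595 uV/K

# ratio needed for 1 mV/K of sensitivity -- clearly impractical
print(math.exp(1e-3 / k_over_q))   # ~1.1e5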

I think, and I'm probably asking a rhetorical question here, that this necessitates the use of some form of auto-zeroing or chopper stabilization to remove the amplifier offset. If either of these techniques is used, I don't think I can use a simple resistor-based non-inverting amplifier to amplify the signal.

The other thing is that with auto-zeroing, the thermal noise folds back into the band of interest, so the only option is chopping, together with a switched-capacitor implementation that reduces the noise by increasing the size of the sampling capacitor.
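
For the sampling capacitor, the kT/C limit already gives a feel for the sizes involved (my own target numbers, assuming the ~258 µV/K sensitivity from above and ignoring any noise-folding factors):

Code:
k = 1.38e-23   # J/K, Boltzmann constant
T = 300.0      # K
sens = 258e-6  # V/K, dVbe sensitivity (20:1 current ratio assumed)

# kT/C noise: vn_rms = sqrt(kT/C); pick an rms noise target in degrees
# and solve for the minimum sampling capacitor
for target_K in (0.5, 0.2, 0.1):
    vn = target_K * sens           # volts rms allowed at the dVbe node
    C = k * T / vn**2
    print(f"{target_K:3.1f} K rms -> C >= {C * 1e12:5.2f} pF")
# 0.5 K rms -> C >=  0.25 pF
# 0.2 K rms -> C >=  1.55 pF
# 0.1 K rms -> C >=  6.22 pF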

Is there any other way to implement a remote temperature sensor using a ΔVbe scheme?



saket
Reply #6 - Jun 18th, 2013, 12:49am
 
Here's a conceptual representation from one of the data sheets I mentioned earlier.
[Attachment: temp_sensor_concept.png]
RobG
Reply #7 - Jul 6th, 2013, 3:39pm
 
A couple of things. Bakker turned his thesis into a book. If it is the one I'm thinking of (which had a picture of his son on the cover), it could be very helpful.

http://books.google.com/books?id=TTw7udu3EqIC&pg=PA63&lpg=PA63&dq=bakker+thesis+...

Somewhere in there he describes a clever trick: if you put a temperature coefficient into the bandgap reference, it will remove the curvature from the temperature measurement.

I do not see why subtracting a constant voltage would cause an error.

Check the Westwick and Gilbert patents in the references of a paper I wrote a while back using a switched current to generate a reference with a single diode (http://web.mit.edu/Magic/Public/papers/01661769.pdf). It was a constant voltage and different from what you need, but you might be able to see one way to do it if you read between the lines. If not, at least glance at the references.

Delta-sigmas are used because they can be very accurate and the noise is oversampled. Temperature changes so slowly that you don't need a fast sample rate. You can also set up a DSM loop where you sample the PTAT signal but subtract off a Vbe + PTAT voltage derived from the same bipolar, instead of creating a separate bandgap reference.
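
Here is a very rough behavioral sketch of that kind of loop (a toy first-order model with made-up device numbers -- the Vbe slope, current ratio, and gain alpha are all assumptions, not anyone's actual design). Each cycle it integrates the PTAT term and subtracts the Vbe + alpha*dVbe "reference" only when the output bit is one, so the ones density settles to alpha*dVbe/(Vbe + alpha*dVbe), which is nearly PTAT:

Code:
import math

k_over_q = 8.617e-5   # V/K, Boltzmann constant over electron charge
N = 8                 # bias-current ratio used to generate dVbe (assumed)
alpha = 11            # gain chosen so Vbe + alpha*dVbe is roughly 1.2 V (assumed)

def vbe(T):
    # crude Vbe model: about 0.6 V at 300 K with a -2 mV/K slope (assumed)
    return 0.6 - 2e-3 * (T - 300.0)

def dvbe(T):
    return k_over_q * T * math.log(N)

def convert(T, cycles=4096):
    acc, ones = 0.0, 0
    for _ in range(cycles):
        bs = 1 if acc >= 0 else 0   # comparator decision
        ones += bs
        # charge balancing: always integrate the PTAT term, subtract the
        # (Vbe + alpha*dVbe) "reference" only when the bit is one
        acc += alpha * dvbe(T) - bs * (vbe(T) + alpha * dvbe(T))
    return ones / cycles            # mu ~ alpha*dVbe / (Vbe + alpha*dVbe)

# mu runs roughly linearly from ~0.39 at -40 C to ~0.61 at +95 C;
# a linear map (trimmed at one point) then gives degrees
for T_C in (-40, 25, 95):
    print(T_C, round(convert(T_C + 273.15), 4))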
