
[image: Table 1 from Performance Highlights of the ALMA Correlators]

The ALMA receivers use 3-bit ADCs for what would seem to be a high-dynamic-range application needing much finer quantization to get anything useful.

Then I found these sentences in the abstract of ADC bit number and input power needed, in new radio-astronomical applications:

Abstract- For the most part, so far radio astronomy observations have been performed in protected frequency bands, reserved by ITU for scientific purposes. This means that, ideally, only the amplified equivalent system noise is present at the end of the receiver chain (i.e. the ADC input). So, typically, only a few bits are necessary to describe the signal (VLBI signals are digitised with only 2 bits), but today astronomers, in order to get more sensitivity and to boldly observe where no one has observed before, would like to study the radio sky even outside the protected bands...

And I even found a 1-bit ADC in Performance Measurements of 8-Gsps 1-bit ADCs Developed for Wideband Radio Astronomical Observations.

I think I am just missing something obvious, but I can't understand how a measurement requiring high dynamic range gets by using few-bit ADCs.

edit: Is it possible that the actual conversion of analog to digital is done to a far higher precision than suggested by the number of bits?

uhoh
  • As a casual non-astronomer enthusiast, I have no clue what this is asking. But +1 for the nerdiest & most impressively complex question I've ever seen on here. – iMerchant Aug 25 '16 at 05:29
  • I'm not qualified but I suspect there's some Delta Sigma conversion (or similar) going on. A 1-bit ADC (really just a comparator) can be used at very high frequencies on an integrated signal, to give a high speed bitstream. (Instead of a much slower set of multi-bit samples.) Then the proportion of 1s in the bitstream indirectly gives you the analogue level. (I guess their 3-bit converter mentioned is some more exotic version of the common 1-bit method.) – Andy Aug 25 '16 at 11:58
  • So that's what a $\Sigma \Delta$ ADC is - thanks! That's starting to make a little sense. I think the baseband is 0-2GHz (or 2-4GHz - it may be shifted up somewhere anyway, it's 2GHz of bandwidth), and the sample rate is only twice that - 4G samples/sec - so it's not oversampling enough for a simple $\Sigma \Delta$ but maybe that's where the 3 bits come in. – uhoh Aug 25 '16 at 12:21
  • @Andy I've added a bounty. – uhoh Mar 14 '17 at 06:08
  • What you are probably seeing here is the ADCs being used in a pipeline. You can do very high speed conversion by using pipeline ADCs. Here you pass your signal to a number of low-bit ADCs that do fast comparisons like in a sieve. In its most simplistic incarnation, each ADC is 1-bit and is a simple comparator, so the first one looks at it and says "is it greater or less than this" and passes it on to the next comparator. – Dave Oct 14 '18 at 16:23

4 Answers


It is wasteful to sample with many bits because the signal-to-noise ratio at the ADC of a radio telescope is typically << 1, so using many bits would just be resolving noise. (An exception is when there is strong radio-frequency interference that needs to be resolved, but this is not a big problem for ALMA thanks to its location and observing frequencies.)

High dynamic range measurements arise after averaging together many samples (or correlations of samples), which boosts the SNR to a meaningful level.

Using very few bits at the ADC does introduce quantization noise that reduces the efficiency of the instrument, but 3 bits is enough to achieve 96% efficiency [1].

[1] Convenient formulas for quantization efficiency
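The averaging argument can be sketched numerically. The following is a toy correlator, not ALMA's actual signal chain: two noise-dominated "antenna" voltages share a faint common signal, and the correlation survives 3-bit (8-level) quantization almost unchanged. The step of ~0.586 σ is roughly the optimal uniform step for a Gaussian input; all other numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
rho = 0.05                     # true (weak) correlation between the two antennas

# Two noise-dominated antenna voltages sharing a faint common signal
s = rng.standard_normal(N)
x = np.sqrt(1 - rho) * rng.standard_normal(N) + np.sqrt(rho) * s
y = np.sqrt(1 - rho) * rng.standard_normal(N) + np.sqrt(rho) * s

def quantize_3bit(v, step=0.586):
    """8-level mid-riser quantizer; step ~0.586 sigma is roughly the
    optimal uniform step for Gaussian input."""
    return (np.clip(np.floor(v / step), -4, 3) + 0.5) * step

r_analog = np.mean(x * y)                               # ideal correlator
r_quant = np.mean(quantize_3bit(x) * quantize_3bit(y))  # 3-bit correlator

print(f"analog: {r_analog:.4f}   3-bit: {r_quant:.4f}")
```

The quantized correlator recovers essentially the same correlation; what the coarse quantization costs is a few percent in the signal-to-noise of the estimate, which is where the ~96% efficiency figure comes from.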

Ben Barsdell
  • Hey thank you for your attention to my long-lost question! Can you expand your explanation a bit so that I and other readers will be able to understand it better? I'll read the link about loss of efficiency due to quantization noise, but I can't stop worrying about possible loss of information or signal distortion due to quantization noise. Is there a simple way to understand why this doesn't introduce some kind of problem? As other systems use even 1-bit ADCs, there's something I'm totally missing here. Thanks!! – uhoh Dec 05 '16 at 00:08
  • Your linked article (Thompson 2007) mentions "...Radio Research Laboratory Report 51 of Harvard University, dated 1943, at which time it was classified." I looked here thinking that an early report might contain some basic insight, but it seems it's still unavailable! – uhoh Dec 05 '16 at 00:32

The resolution of an ADC is inversely related to its conversion speed: getting more bits requires the signal to travel through more circuitry, which takes time. This is why high-quality audio ADCs with 18 or 20 bit resolution operate at sample rates in the kHz range, where each conversion can take on the order of a millisecond. At 4 GS/s you only have 250 picoseconds at your disposal, so you can only get 3 bits (and only 1 bit at 8 GS/s).

how a measurement requiring high dynamic range gets by using few-bit ADCs?

This depends on the nature of the measurement, but the typical solution is to make successive measurements and calculate the average.
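As an illustration of that averaging (a generic sketch, not how any particular telescope does it): even a 1-bit ADC, which records only the sign of signal plus noise, lets you recover an analog level far more precisely than one bit would suggest once enough samples are averaged, because the noise itself acts as dither. The level 0.137 below is arbitrary.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
true_level = 0.137     # unknown analog DC level, in units of the noise rms

# A 1-bit ADC is just a comparator: it records only the sign of (signal + noise)
N = 1_000_000
bits = (true_level + rng.standard_normal(N)) > 0

# The noise acts as natural dither: P(bit = 1) = Phi(true_level), so invert it
estimate = NormalDist().inv_cdf(bits.mean())
print(f"estimated level: {estimate:.4f}")   # close to 0.137 from 1-bit samples
```

The precision of the estimate grows with the number of averaged samples, which is exactly the trade a correlator makes: coarse per-sample quantization, enormous sample counts.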

Dmitry Grigoryev
  • Thanks but I need something a little more specific than "it depends on the nature of the measurement." We know the nature of the measurement here. I don't need a full blown analysis, but some kind of mathematical outline of how a few-bit ADC can make high dynamic range measurements necessary to see a weak source in the presence of many strong radio sources. 3-bit, 2-bit, 1-bit??? – uhoh Aug 25 '16 at 11:28

Intuitively you think of quantization as something that discards information. That may be true in the end, but it is not a useful way to look at it. Think of it the other way around: quantization adds an error signal. If you know what this error signal looks like, you have the opportunity to analyze how digital processing transforms the error, whether it ends up interfering with your desired signal, and how much that interference will be.

ALMA is a phased array; it gets its precision from the correlation of phases of multiple receivers (likewise, phase is typically more important than amplitude in recent modulation schemes). The error function for phase is typically a sawtooth, as the phasor (of a theoretically clean signal) rotates. How the function looks exactly, and what its fundamental frequency is, depends on properties of the ADC (and sometimes on AGC settings). The error-signal frequency will be n times the received frequency, with n=12 or n=8 being typical values. I would have to look into the details of ALMA; I'm not familiar with this one.

Now consider how this error function is sampled. There is no way to attenuate it before sampling, so aliased images of harmonics of this sawtooth end up in your digital data. You can calculate where these harmonics are and how strong they are. And you can shift them by altering the sampling rate (for a given fixed signal frequency). If you want to observe a certain bandwidth and you optimize the sampling rate, you may find that you have, e.g., the 11th harmonic (with amplitude 1/11) somewhere in your signal, but you can avoid all the lower (and stronger) harmonics.

Investing in more bits of quantization reduces the amplitude of the errors while raising the fundamental frequency of the error function at the same time. You may find that the contribution of quantization errors is already of the same magnitude as other noise sources, so there is not much to gain in overall system performance. This is typically the case for direct-sequence spread-spectrum applications like GNSS systems.
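The harmonic structure of the quantization error is easy to reproduce. In this sketch (a generic 3-bit quantizer with illustrative parameters, not ALMA's) the error of a pure tone is deterministic and periodic with the tone, so its spectrum consists of discrete harmonics of the tone frequency rather than a flat noise floor:

```python
import numpy as np

N, f0 = 1024, 16                     # samples, tone frequency in FFT bins
x = np.sin(2 * np.pi * f0 * np.arange(N) / N + 0.3)   # small phase offset keeps
                                                      # samples off quantizer steps
# 3-bit (8-level) mid-riser quantizer spanning full scale +/- 1
step = 2.0 / 8
q = (np.clip(np.floor(x / step), -4, 3) + 0.5) * step

err = q - x                          # the deterministic quantization-error signal
spec = np.abs(np.fft.rfft(err)) / N

# err repeats once per tone cycle, so all of its spectral content sits at exact
# multiples of f0 -- discrete harmonics, not white noise
hot = np.flatnonzero(spec > 1e-6)
print(hot[:5])                       # first few harmonic bins, all multiples of 16
```

With real noisy inputs the noise dithers these harmonics into a smoother error floor, which is part of why few-bit quantization is so benign for noise-dominated signals.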

Andreas
  • I would want to move this question over to the dsp site, but I do not have enough reputation in astronomy to even suggest that (except commenting my own answer). – Andreas Sep 22 '16 at 12:31
  • Why don't I ask a somewhat different question there, but please do not move this question! Radio astronomy has some practical aspects that are specific to generating quantitative images of extended radio sources through Earth's atmosphere (see for example). There is no "modulation scheme" here, and amplitude and phase are both important! – uhoh Sep 22 '16 at 13:02
  • @uhoh Sorry, I was looking at this problem from my perspective too much. This question is of course about astronomy, though it has some relations to signal processing. The methodology for looking at quantization applies nevertheless. When it comes to amplitude, the integration of data from several antennas gives you more precision than just 3 bits. And I would think that power can be averaged over time, because there is no temporal structure in the observed signal. This too will add precision. – Andreas Sep 22 '16 at 16:27
  • I've added a bounty - have another go? – uhoh Mar 14 '17 at 06:07
  • Your answer is starting to make some sense to me finally; it's written way over my head, but l've recently inhaled some helium (i.e. been working on an unrelated DFT problem) and your sampling lecture is gaining traction in my gray matter. – uhoh Nov 04 '20 at 04:54

I found an authoritative document stating that the ADCs are indeed only 3 bits. See the ALMA Technical Handbook, https://almascience.nrao.edu/documents-and-tools/cycle7/alma-technical-handbook/view .

From Chapter 5.6.1:

A digitizer adds quantization noise to its input analog signal, with a consequent signal-to-noise reduction or sensitivity loss. The ALMA digitizer employs 3-bit (8-level) quantization, and additional re-quantization processes are applied in the correlators.

One could ask this question a different way: "What would the benefit be of adding additional bits (beyond 3) to ALMA's ADCs?" You don't get much higher sensitivity, since the 3-bit ADCs are already ~96% efficient (as noted in Ben Barsdell's excellent answer). You don't get better angular resolution, since the angular resolution in interferometry is a function of the signal wavelength, distance to the emission source, and antenna location geometry (greater distances between antenna pairs increase angular resolution). On the other hand, you do get considerable additional computational load. The one good thing you get from adding bits to your ADC is that you can pick up a fainter signal in the presence of noise that would normally saturate your ADC. Hence the statement made by ALMA that they want to observe in frequency bands that aren't protected.

I agree that it's unintuitive that 3-bit ADCs are sufficient for such an incredible instrument as ALMA. But remember that Nyquist says you may have more data than you think you do:

A bandlimited continuous-time signal can be sampled and perfectly reconstructed from its samples if the waveform is sampled at more than twice its highest frequency component.

ALMA can sample at the Nyquist rate (for most radio telescopes this is set to 2.1x the upper end of the observing frequency window) or at twice the Nyquist rate. The digitized data is the raw data and doesn't look like it carries any information. But after the digitized data is run through an FFT, you get a spectrogram, and there is a wealth of information that was in the raw data. Radio astronomers almost never look at the raw data. The spectrogram gives them the RF signature and emitted power.
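A minimal sketch of that idea (all parameters illustrative, not ALMA's actual pipeline): a tone sitting well below the noise is quantized to 3 bits, yet after power spectra of many segments are averaged, the spectral line stands out clearly.

```python
import numpy as np

rng = np.random.default_rng(2)
n_seg, seg_len = 2000, 256
f_bin, amp = 40, 0.2            # spectral-line bin and amplitude (illustrative);
                                # the tone sits well below the unit-rms noise

def quantize_3bit(v, step=0.586):
    # 8-level mid-riser quantizer, step chosen for roughly unit-rms input
    return (np.clip(np.floor(v / step), -4, 3) + 0.5) * step

t = np.arange(seg_len)
avg = np.zeros(seg_len // 2 + 1)
for _ in range(n_seg):
    noise = rng.standard_normal(seg_len)
    tone = amp * np.sin(2 * np.pi * f_bin * t / seg_len + rng.uniform(0, 2 * np.pi))
    avg += np.abs(np.fft.rfft(quantize_3bit(noise + tone))) ** 2
avg /= n_seg

peak = int(np.argmax(avg[1:])) + 1   # skip the DC bin
print(peak)                          # the spectral line emerges at the tone bin
```

Each individual 3-bit segment looks like garbage; the line only appears because thousands of spectra are averaged, which is the same reason few-bit sampling works for the telescope.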

When I observed on the GBT, we were looking for gas clouds of formaldehyde near the center of the Milky Way. When cosmic formaldehyde gets dense enough, it starts to absorb the CMB. We could see the dips in the spectrogram corresponding to the RF quantum shifts in the molecules. Dense formaldehyde clouds are a sign of early star formation. Fun stuff.

Would reproducing a giant emitting Mona Lisa in space with a MATLAB radio-telescope simulator with a low-bit ADC convince you?

ALMA has low dynamic range over a single set of observations. So you can observe and detect faint radio emissions (like phosphine on Venus) with sensitivities in the microJansky range, but when you observe and detect powerful radio emissions (like solar radio flares) ALMA’s sensitivities need to be set in the megaJansky range. https://en.wikipedia.org/wiki/Jansky

An astronomer who is privileged enough to use ALMA has to set the sensitivity of the telescope prior to observing. If they set the sensitivity too high, they will saturate the ADCs and not get any usable data. If they set the sensitivity too low, they won’t detect the signal they are looking for! ALMA provides a calculator to help the astronomers: https://almascience.eso.org/proposing/sensitivity-calculator . Note the astronomer can choose sensitivity units from microJanskys to degrees Kelvin (which is about a megaJansky).

The typical way to change the sensitivity of a radio telescope is through the use of an attenuator https://en.wikipedia.org/wiki/Attenuator_(electronics) . If the signal you are observing is saturating your ADCs, you turn up the attenuator until the whole signal waveform is contained. For solar observations, they built specialized attenuators for ALMA, described here: https://digitalcommons.njit.edu/cgi/viewcontent.cgi?article=1223&context=theses .
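As a back-of-the-envelope sketch of how such an attenuator setting might be chosen (the voltages and the target loading below are purely illustrative, not ALMA's actual figures):

```python
import math

def attenuation_db(input_rms_v, target_rms_v):
    """Attenuation (dB) needed to bring the receiver output's rms voltage
    down to the digitizer's preferred loading (illustrative helper)."""
    return 20 * math.log10(input_rms_v / target_rms_v)

# A bright source drives the IF to 0.5 V rms, but suppose the digitizer is
# happiest near 0.05 V rms; then you would dial in:
print(attenuation_db(0.5, 0.05))   # 20 dB of attenuation
```

The attenuator doesn't add information; it just re-centers the signal's statistics on the few quantizer levels available, so the whole waveform is contained without saturation.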

Because ALMA has low dynamic range for a specified sensitivity, astronomers observing faint signals need to do so when there are no stronger emitters at the same frequency in the same part of the sky. If ALMA had high dynamic range, when Venus passed in front of the sun, perhaps an astronomer would be able to observe the sun’s radio emissions at the same time as observing phosphine radio emissions from Venus that were 12 orders of magnitude less powerful. For now, however, astronomers observing for phosphine on Venus would be well advised to do so at night when there are no other stars or planets nearby!

Finally to answer the title question, ALMA's ADCs are only 3 bits because ALMA does not require a high dynamic range. Instead, astronomers must correctly configure the telescope sensitivity to observe and detect the signals they are interested in.

Connor Garcia
  • I'm not sure this is the full story. In order to reconstruct an accurate map of sources throughout a 2D field of view at the same time from interferometry based on phase, and to reproduce subtle, continuous variations in intensity, 3 bit resolution does not seem to be enough to me. I think that the ADCs are producing much higher resolution than 8 levels through some signal processing tricks, possibly including things mentioned in comments under the question. – uhoh Nov 03 '20 at 23:08
  • @uhoh I tried to add a comment, but it was too long. Is it poor form to add some notes to my answer above? – Connor Garcia Nov 04 '20 at 03:15
  • Oh it is wonderful form to edit and improve your or any other Stack Exchange post any time! The only thing to avoid is altering a question after answers have been posted in a way that affects existing answers. But editing answers is welcomed and encouraged. We all work together to make a Stack Exchange site a collection of good answers to on-topic questions. Go for it! :-) – uhoh Nov 04 '20 at 04:50
  • @uhoh I will edit and improve my above answer, but it is going to take some time. Hopefully under a week. Your question is an excellent question, very hard to answer well. – Connor Garcia Nov 04 '20 at 05:57
  • Thanks for your interest and diligence! There's no rush, it's been here more than four years already. – uhoh Nov 04 '20 at 06:05
  • @uhoh I added quite a bit more to my answer above. It's unintuitive, but 3-bit sampling is plenty. The signal processing 'tricks' you are talking about might just be Nyquist rate sampling and an FFT, which I've described. Of course, I see the signal processing as just more algorithms rather than as tricks. If you want to understand creation of Mosaics, I suggest reading chapters 3.5 (Fields-of-view and Mosaics) and 3.6 (Spatial Filtering) from the Alma Tech Manual link above. – Connor Garcia Nov 08 '20 at 19:21
  • @uhoh: 3 bits is plenty as long as there is no radio interference (unlikely at ALMA wavelengths) in the band. The above answer however is not quite right. I doubt ALMA is changing its "sensitivity" (whatever that means). The signal power at the digitisers (i.e. the ADCs) needs to be at a specified (voltage) power level. So if a bright source is observed, or the weather is rubbish, more attenuation needs to be added to lower the power level received by the digitiser. – Chris Feb 17 '22 at 03:50
  • @Chris okay to "Why are the ALMA receivers' ADCs only 3-bits?" I can certainly believe that "3 bits is plenty" is not false since that's what's used in this example, but to "Why...?" I need more than that. I'm used to 12 to 24 bit ADCs depending on application, 3 bits just seems low to perform interferometry with a useful dynamic range. Is there some source or some explanation that explains why 3 bits is sufficient? – uhoh Feb 17 '22 at 04:51
  • @Chris The sensitivity of a receiver can be thought of as the minimum signal strength necessary to detect. As one adds attenuation, the sensitivity goes down. Follow the links in my answer for more info. – Connor Garcia Feb 17 '22 at 17:07
  • @conner: The "sensitivity" does not change with attenuation. The attenuation is purely to get the distribution of the digital samples in the optimum range for SNR (and dynamic range if that needs to be considered). – Chris Feb 18 '22 at 05:11
  • @uhoh: 20 years ago we did interferometry with 1 bit - 3 bits is luxury! – Chris Feb 18 '22 at 05:12
  • @Chris An answer to "How to do astronomical interferometry with a 1-bit ADC?" would also be most welcome! It's the "How" part that I'm still not getting. Maybe it's obvious to everyone else, but it's still not to me. – uhoh Feb 18 '22 at 05:31