We receive information about celestial bodies via electromagnetic radiation, making observations in the optical, radio, UV, X-ray and gamma-ray ranges of the electromagnetic spectrum. The concept of “radiation intensity” (sometimes called “specific intensity” or “surface brightness”) is used in the theory of macroscopic EMR transfer for a quantitative understanding of the processes occurring in astrophysical sources. Let's imagine a small detector area $dA$ in space filled with radiation from different sources. The orientation of this area is characterized by the normal vector $\vec{n}$ to its surface. The radiation intensity in a given direction is the energy passing per unit time through a unit area perpendicular to that direction, per elementary solid angle $d\Omega$ and per frequency interval $d\nu$ (or per wavelength interval $d\lambda$).
It turns out that if the angle between the normal $\vec{n}$ and the selected direction of observation is $\theta$, then $I_\nu=\frac{dE}{\cos\theta \, dA \, dt \, d\nu \, d\Omega}$ or $I_\lambda=\frac{dE}{\cos\theta \, dA \, dt \, d\lambda \, d\Omega}$, where $dE$ is the energy received during the time $dt$.
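To make the bookkeeping concrete, here is a minimal sketch (not from the original answer) that just evaluates this defining formula for made-up detector readings; the conversion at the end uses the standard relation $|d\nu| = (c/\lambda^2)\,|d\lambda|$ between the two spectral forms:

```python
# Minimal sketch: I_nu = dE / (cos(theta) dA dt dnu dOmega).
# All numbers below are hypothetical, chosen only for illustration.
import math

def specific_intensity(dE, theta, dA, dt, dnu, dOmega):
    """I_nu in erg s^-1 cm^-2 Hz^-1 sr^-1 (if inputs are in CGS)."""
    return dE / (math.cos(theta) * dA * dt * dnu * dOmega)

# Hypothetical measurement: 1e-5 erg collected at normal incidence
# on a 1 cm^2 pixel over 1 s, in a 1e6 Hz band, from a 1e-10 sr patch.
I_nu = specific_intensity(dE=1e-5, theta=0.0, dA=1.0, dt=1.0,
                          dnu=1e6, dOmega=1e-10)

# Standard conversion between the spectral forms: I_lambda = (c / lambda^2) * I_nu.
c = 2.998e10   # speed of light, cm/s
lam = 5e-5     # 500 nm expressed in cm
I_lambda = I_nu * c / lam**2
```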
Using these formulas, one important property of radiation intensity can be revealed: it does not depend on the distance between the source and the observer. This is because the flux received from the source falls off as $1/D^2$, but the solid angle under which the source is seen decreases according to the same law, so their ratio, the intensity, stays constant.
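A quick numerical illustration of this compensation, with arbitrary round numbers of my own choosing: the received flux is diluted as $1/D^2$, the solid angle $\Omega = \pi R^2/D^2$ shrinks by the same law, and their ratio comes out identical at every distance.

```python
# Sketch (assumptions mine): a uniform disk of radius R with surface flux
# F_surface. At distance D the received flux scales as 1/D^2, and so does
# the solid angle Omega = pi R^2 / D^2, so I = F / Omega is constant.
import math

R, F_surface = 7e10, 6.3e7   # cm, erg s^-1 cm^-2 (made-up round numbers)
for D in (1e13, 1e14, 1e15):
    F = F_surface * (R / D) ** 2       # inverse-square dilution of flux
    Omega = math.pi * R ** 2 / D ** 2  # solid angle of the disk
    print(D, F / Omega)                # -> the same intensity at every distance
```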
It is further assumed that the radiation from the source is isotropic (although, as far as I understand, for real sources this is a rather rough approximation). For isotropic radiation, integrating $I_{\nu,\lambda}\cos\theta$ over the outward hemisphere gives the flux $F_{\nu,\lambda}=\pi I_{\nu,\lambda}$.
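Here is a small numerical sanity check of that hemisphere integral (my sketch, using scipy), confirming that the geometric factor really is $\pi$:

```python
# Sketch: check that the flux through a surface element from an isotropic
# intensity field is F = pi * I, i.e. that the integral of cos(theta) dOmega
# over the outward hemisphere equals pi.
import math
from scipy.integrate import dblquad

# dOmega = sin(theta) dtheta dphi; integrand = cos(theta) sin(theta)
val, err = dblquad(lambda th, ph: math.cos(th) * math.sin(th),
                   0.0, 2.0 * math.pi,   # phi over the full circle
                   0.0, math.pi / 2.0)   # theta over the outward hemisphere
print(val, math.pi)  # ~3.14159... in both cases
```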
It is precisely the point-like nature of the source and the isotropy of its radiation that are used to demonstrate that astronomers (using telescopes or other instruments) can only record the radiation flux, not the intensity.
Let's imagine a spherically symmetric source (a “star”) of radius $R$ at a distance $D$. Due to isotropy, the star will be seen as a disk of uniform brightness. The directly measured radiation flux from this star is by definition $F_{\nu,detect}=I_{\nu,detect}\,\Omega$, where $I_{\nu,detect}$ is the radiation intensity at the detector and $\Omega= \pi \frac{R^2}{D^2}$ is the (small but finite) solid angle under which the source is seen. Taking the flux per unit surface of the source for isotropic intensity, $F_{\nu,source}=\pi I_{\nu,source}$, and neglecting absorption (i.e. $I_{\nu,detect}=I_{\nu,source}$), we find for the measured quantity $F_{\nu,detect}=\left(\frac{R}{D}\right)^2 F_{\nu,source}$.
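As a sanity check of this dilution formula (my own numbers, not part of the original answer): taking the Sun's surface flux to be $\sigma T_{eff}^4$ and diluting it by $(R/D)^2$ lands close to the measured solar constant of about $1361\ \mathrm{W/m^2}$:

```python
# Sketch: F_detect = (R/D)^2 * F_source, applied to the Sun with standard
# (assumed) parameters; the result should be near the solar constant.
sigma = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
T_eff = 5772.0     # K, solar effective temperature
R = 6.957e8        # m, solar radius
D = 1.496e11       # m, 1 au

F_source = sigma * T_eff ** 4        # ~6.3e7 W/m^2 at the solar surface
F_detect = (R / D) ** 2 * F_source   # ~1.36e3 W/m^2 at Earth
print(F_detect)
```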
For a point source, the factor in parentheses is $\ll 1$ and is a priori unknown. The transition from the directly measured quantity $F_{\nu,detect}$ to the intensity $I_{\nu,detect}$ would be possible only if the angular size $\frac{R}{D}$ of the source were known, i.e. if the source were resolved rather than perceived as a point.
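Conversely, here is a sketch of that recovery for a resolved source, again with the Sun's (assumed) parameters: dividing the measured flux by the measured solid angle returns the intensity.

```python
# Sketch: if the source is resolved, so its angular radius theta_s = R/D is
# measured, the intensity follows from the flux as I = F / Omega with
# Omega = pi * theta_s^2. Numbers are illustrative only.
import math

F_detect = 1.36e3             # measured flux, W/m^2 (bolometric, for brevity)
theta_s = 6.957e8 / 1.496e11  # angular radius in radians (the Sun again)
Omega = math.pi * theta_s ** 2
I_detect = F_detect / Omega   # -> F_source/pi = sigma*T^4/pi, ~2.0e7 W m^-2 sr^-1
```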
Thus, as far as I understand, such assumptions (isotropy, point-like size, etc.) make it possible to demonstrate both of the things you asked about at once. But I don’t know how this would look once all sorts of effects (anisotropy, absorption) are taken into account.