
I followed this thread on perturbation of the Mandelbrot set iterations: Perturbation of Mandelbrot set fractal

I was wondering to what accuracy these different variables need to be calculated (high precision, or just ordinary floating point?), i.e.

  1. the original values of the reference point iterations, $X_n$ in the link
  2. the $A$, $B$ and $C$ coefficients
  3. the approximated (perturbed) point

What do you do if the perturbed point requires more iterations than the reference orbit provides? Do you always have to pick a reference point that takes more iterations (before escaping) than the points you want to approximate?

Thanks

gornvix

1 Answer

  1. compute at high precision; store a low-precision copy for the series and the pixels
  2. low precision
  3. low precision (this is the main point of the technique)
  4. pick another reference (this also applies to "glitches")
  5. pretty much; there are a few iterations of leeway, but most interesting images have boundary points, so high-iteration references are possible to find (the escape iteration count can be made arbitrarily high close to the boundary)
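The scheme in points 1–4 can be sketched in a few lines of Python. This is an illustrative sketch only: in a real renderer the reference orbit $X_n$ is computed in arbitrary precision (here ordinary doubles stand in for it), and the function names (`reference_orbit`, `perturbed_escape`) are invented for this example.

```python
def reference_orbit(c_ref, max_iter):
    """Iterate X_{n+1} = X_n^2 + c_ref (point 1).
    In a real renderer this runs at high precision; a low-precision
    copy of each X_n is stored for use by the perturbed pixels."""
    orbit, x = [], 0j
    for _ in range(max_iter):
        orbit.append(complex(x))        # the stored low-precision copy
        x = x * x + c_ref
        if abs(x) > 2:                  # reference escaped
            break
    return orbit

def perturbed_escape(orbit, dc, max_iter):
    """Low-precision perturbation iteration (point 3):
        D_{n+1} = 2 X_n D_n + D_n^2 + dc,   with dc = c - c_ref."""
    d = 0j
    for n in range(min(max_iter, len(orbit))):
        x = orbit[n]
        if abs(x + d) > 2:              # escape test on the reconstructed point X_n + D_n
            return n
        d = 2 * x * d + d * d + dc
    return None                         # pixel outlived the reference: pick another reference (point 4)

def direct_escape(c, max_iter):
    """Plain double-precision iteration, for checking the sketch."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return None

orbit = reference_orbit(-1 + 0j, 1000)  # c_ref = -1 is periodic, so the orbit never escapes
```

Choosing a non-escaping (periodic) reference like $c_{\mathrm{ref}}=-1$ gives an orbit as long as the iteration limit, so every nearby pixel can be iterated to completion; when a pixel outlives a finite reference orbit, the sketch returns `None`, which is exactly the "pick another reference" case.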
Claude
  • With answer 3, if you don't use high precision, how do you discern between points that are close together in a deep plot? Or do you just use a floating-point value for the real (and imaginary) perturbation ($\delta$ in the link), and this suffices? – gornvix Sep 08 '20 at 01:29
  • If you know $p,q$ to, say, 10 digits, but the first 7 digits are the same, you only need 3 digits to represent the difference between $p$ and $q$. – Claude Sep 08 '20 at 01:44
  • But what about when $\Delta c\sim 10^{-1200}$ for a really deep plot? That offset is not representable in double FP. One could introduce a scaling parameter $\rho$ and set $\Delta c=\rho s$ and $\Delta z_n=\rho w_n$. How then to compute $\Delta z_{n+1}$, or here $w_{n+1}=2z_nw_n+\rho w_n^2+s$? In the reduction to double precision the second term would fall away. Is that admissible, or is it caught and compensated in some way? One would then have to reconnect the scales to compute $|z_n+\rho w_n|>2$. To get non-trivial results from this, one would have to adapt $\rho$ dynamically. – Lutz Lehmann Sep 12 '20 at 07:38
  • @LutzLehmann One can use a number type with low precision but higher range (e.g. a double-precision float as mantissa paired with an int32 exponent, using the ldexp/frexp functions to keep the mantissa small). – Claude Sep 12 '20 at 13:18
  • @LutzLehmann about admissibility, there are methods to check for "glitches" where the dynamics of the reference differ too much from the dynamics of the perturbed point, and pick a better reference if necessary. – Claude Sep 15 '20 at 12:57
  • Thank you, I have now seen your contributions in forums and blogs on this topic. As I understand it now, using series expansion and careful estimation of the error allows one to skip the $\Delta z$ iteration up to iteration counts where the virtual or sparsely sampled $\Delta z$ array has "accumulated" enough "magnitude" that it can be faithfully represented in floating point in the majority of the image. This is all in float; only the iteration of the series coefficients needs some larger-exponent, small-mantissa data type. This setup holds for zoom factors up to $10^{600}$. – Lutz Lehmann Sep 15 '20 at 13:44
  • @LutzLehmann The reduction to double precision of $w_{n+1} = 2 z_n w_n + \rho w_n^2 + s$ does make the second term fall away, but this does not matter when $z_n$ is large (which is most of the iterations). One can analyse the reference orbit and see when $z_n$ is small, when it is necessary to do full-range iterations at those $n$ (and adapt $\rho$ afterwards). Reconnecting the scales for $|z_n + \rho w_n| > 2$ is not problematic in the same way as far as I can tell. – Claude May 05 '21 at 12:13
  • @LutzLehmann This rescaling algorithm is also described here: https://fractalforums.org/programming/11/memory-bandwidth-trade-offs-for-perturbation-rendering/3717/msg23497#msg23497 – Claude May 05 '21 at 12:16
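The extended-range number type described in the comments above (a double mantissa paired with a separate integer exponent, kept normalised with frexp/ldexp, often called "floatexp" in fractal software) can be sketched as follows. The class name and its methods are illustrative, not any particular library's API.

```python
import math

class FloatExp:
    """value = m * 2**e, with m renormalised into [0.5, 1) by frexp
    so the double mantissa never under- or overflows on its own."""

    def __init__(self, mantissa, exponent=0):
        if mantissa == 0.0:
            self.m, self.e = 0.0, 0
        else:
            m, e = math.frexp(mantissa)     # m in [0.5, 1), mantissa == m * 2**e
            self.m, self.e = m, e + exponent

    def mul(self, other):
        # multiply mantissas, add exponents, renormalise
        return FloatExp(self.m * other.m, self.e + other.e)

    def add(self, other):
        # align to the larger exponent; a vastly smaller operand
        # underflows to 0.0 inside ldexp and drops out, as it should
        if self.m == 0.0:
            return other
        if other.m == 0.0:
            return self
        big, small = (self, other) if self.e >= other.e else (other, self)
        return FloatExp(big.m + math.ldexp(small.m, small.e - big.e), big.e)

    def to_float(self):
        # may underflow to 0.0 when the value is outside double range
        return math.ldexp(self.m, self.e)
```

Multiplication stays cheap (one double multiply plus integer exponent bookkeeping), which is what makes such a type suitable for carrying quantities like $\Delta c\sim 10^{-1200}$ or the series coefficients far below double range.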
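The rescaled iteration $w_{n+1}=2z_nw_n+\rho w_n^2+s$ from the comments can likewise be sketched. Here $\rho=2^{-40}$ is deliberately chosen small but still representable, purely so that the rescaled and unscaled iterations can both be run in doubles and compared; at a real depth like $10^{-1200}$ the unscaled column would underflow and $\rho$ itself would be carried in an extended-range type. The function name is invented for this sketch.

```python
def rescaled_orbit(ref, s, rho, n_iter):
    """Iterate w_{n+1} = 2 z_n w_n + rho*w_n^2 + s  (where dz_n = rho*w_n
    and dc = rho*s), alongside the unscaled dz iteration for comparison."""
    w, dz = 0j, 0j
    for n in range(n_iter):
        z = ref[n]
        w = 2 * z * w + rho * (w * w) + s    # rho*w^2: the term that flushes to 0 when rho underflows
        dz = 2 * z * dz + dz * dz + rho * s  # unscaled iteration, only feasible while rho*s is representable
    return rho * w, dz

# reference orbit for c_ref = -1 (period 2): 0, -1, 0, -1, ...
ref = [0j if n % 2 == 0 else -1 + 0j for n in range(64)]
scaled, direct = rescaled_orbit(ref, 1 + 0j, 2.0 ** -40, 64)
```

With this $\rho$ the two computations agree to near machine precision, which illustrates why the rescaling is admissible as long as the $\rho w_n^2$ term is only negligible when $z_n$ is large, per the discussion above.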