Issue
J. Eur. Opt. Society-Rapid Publ.
Volume 21, Number 1, 2025
Using wavefronts: detection and processing
Article Number 1
Number of page(s) 11
DOI https://doi.org/10.1051/jeos/2024045
Published online 22 January 2025

© The Author(s), published by EDP Sciences, 2025

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Adaptive optics (AO) technology has spurred many advancements by enabling real-time correction of optical distortion. This has led to remarkable achievements by ground-based astronomical imaging systems [1, 2] and growing interest in ground-to-satellite (laser-based) free-space optical communication (FSOC) links [3, 4]. At the core of such links is their ability to measure wavefront (phase) distortion across transverse profiles of received laser beams, with wavefront sensors [5–8], and then compensate for this distortion with deformable mirrors [4].

Recent works on AO-augmented FSOC links often focus on the wavefront sensor, as it is a critical AO element. Such wavefront sensors must provide fast and accurate characterizations of the received laser wavefronts over a wide range of elevation angles in the sky, at all times of day, and various wavefront sensors have been developed in this effort. In the earlier literature, the curvature wavefront sensor was introduced. It measured the local wavefront curvature, the Laplacian of the wavefront surface, and the radial tilt at the aperture edge to carry out its wavefront characterization [9]. Following this, a phase-shifting phase-difference interferometer was developed. It measured four π/2 phase-stepped interferograms on a camera and used a local reconstructor to return the phase [10]. In more recent years, the Fresnel sensor was introduced. It employed near-field diffraction methods to improve wavefront detection under moderate to high turbulence conditions [11]. More recently, developments have been seen on holographic wavefront sensors, which apply holography to reconstruct the amplitude and phase [12–15]. Nonetheless, through these developments, the Shack-Hartmann wavefront sensor [16] has remained the most common sensor in use. This is because its simple operation, with the deflections of focal spots measured under a lenslet array, offers well-established processing and robust packaging. However, FSOC links developed by ourselves [17] and others [18, 19] have shown such wavefront sensing to be challenging when the atmospheric turbulence transitions from weak to strong conditions.

In this work, we consider the self-referencing interferometer (SRI) as a viable technology for wavefront sensing in weak through strong turbulence conditions [10]. The SRI wavefront sensor takes the form of a Mach-Zehnder interferometer, which splits the input beam (having distorted wavefronts) into a signal beam (with tilt applied across its wavefronts) and a reference beam (with flat wavefronts). The signal and reference beams are then overlapped as an output beam, whose interference pattern characterizes wavefront distortion across the input beam. The levels of tilt and flattening applied to the signal and reference beams dictate the performance of the SRI wavefront sensor, to a large extent, and we focus on these characteristics in the optical design. We then put forward guiding principles for the subsequent image processing. This is done to help realize an SRI wavefront sensor with functionality that enables future FSOC links.

2 Analysis and design

The analysis and design of the SRI wavefront sensor are detailed in the following subsections in terms of its optical design and image processing.

2.1 Optical design

This study makes use of our testbed, which has an AO system matched to the SRI wavefront sensor.

The AO system is shown in Figure 1a. It is seeded by a laser module (TeraXion, PS-LM-1550.12-80-06) having a wavelength of 1550 nm and an output power of 4 mW. The beam is coupled out of the laser and collimated for propagation through five relays. The relays’ entrance and exit pupils coincide with the spatial light modulator (Hamamatsu, LCOS-SLM), tip-tilt mirror (Newport, FSM-300), and deformable mirror (Boston Micromachines Corp., 18W160#046). With such a system, the spatial light modulator can compensate for static distortion from the lenses and other elements, via a calibration routine, and apply dynamic distortion to mimic the time-varying effects of turbulence. Wavefront correction is then realized by the tip-tilt mirror, for tip-tilt (low-order) modes, and the deformable mirror, for the remaining (high-order) modes. The SRI wavefront sensor is key to this correction as it characterizes the transverse phase profiles of the beam and directs their conjugates to the tip-tilt and deformable mirrors. The remainder of this work focuses on the SRI wavefront sensor, while details on the AO system can be found elsewhere [20].

Fig. 1

Schematic of the (a) AO system and (b) SRI wavefront sensor. In (a), the 1550-nm laser beam (violet) propagates through five relays, for which the spatial light modulator, tip-tilt mirror, deformable mirror, and flat mirror (FM) are within the relays’ pupil planes. In (b), the 1550-nm input beam (violet) propagates into the SRI wavefront sensor and is split by the input beamsplitter (BS) into the signal beam (blue) and reference beam (red). These beams pass through confocal lens pairs, with a pinhole aperture in the focus of the reference beam, and are then overlapped by the output beamsplitter (BS). The output beam (violet) is then resolved on the camera’s image sensor. The four dotted lines across the beams in the SRI wavefront sensor designate the input pupil plane (violet), focal plane of the signal arm (blue), focal plane of the reference arm (red), and output pupil plane (black).

The exit pupil of the AO system is matched to the input pupil of the SRI wavefront sensor shown in Figure 1b. The SRI takes the form of a Mach-Zehnder interferometer with its input beamsplitter (Thorlabs, BP108) forming signal and reference arms. There is a primary lens with a focal length of f1 = 100 mm in each arm at a distance of f1 beyond the sensor’s input pupil, and a secondary lens with a focal length of f2 = 150 mm at a distance of f1 + f2 beyond the primary lens in each arm. The SRI also has a pinhole aperture with a diameter d at a distance of f1 beyond the primary lens in the reference arm. Diameters of d = 15 and 75 μm are considered in our theoretical analyses, while a pinhole aperture (Thorlabs, P75S) with a diameter of d = 75 μm is used for the experimental analyses. Beams from the signal and reference arms are overlapped by the output beamsplitter (Thorlabs, CM1-BP3) and resolved by an infrared camera (Xenics, Cheetah F051, CL-2078) with a 20-μm pixel size. The camera’s image sensor is at a distance of f2 beyond the secondary lens. Such a system has confocal pairing of primary and secondary lenses in each arm, with an input pupil plane before the input beamsplitter, a focal plane at a distance of f1 beyond each primary lens (coplanar with the pinhole aperture in the reference arm), and an output pupil plane at a distance of f2 beyond the secondary lens (coplanar with the camera’s image sensor).
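To put these dimensions in context, the following is a minimal diffraction estimate in Python, assuming an aberration-free Gaussian input beam with the 2.5-mm radius used later in Section 3 (that radius, and the use of the simple Gaussian-focus formula, are assumptions of the sketch); it compares the focused spot in the reference arm to the two pinhole diameters.

```python
import numpy as np

# Back-of-envelope check of the reference-arm focusing, assuming an
# aberration-free Gaussian input beam (radius taken as the 2.5-mm value
# used later in Section 3; all other values from the text).
lam0 = 1550e-9        # free-space wavelength (m)
f1 = 100e-3           # primary lens focal length (m)
omega = 2.5e-3        # input beam radius at the e^-1 amplitude point (m)

# 1/e^2-intensity radius of the focused Gaussian spot in the focal plane
w_focus = lam0 * f1 / (np.pi * omega)
print(f"focal-spot radius ~ {w_focus*1e6:.1f} um (diameter ~ {2*w_focus*1e6:.1f} um)")

# Compare with the two pinhole diameters considered in the text
for d in (15e-6, 75e-6):
    print(f"pinhole d = {d*1e6:.0f} um -> d / spot diameter = {d/(2*w_focus):.2f}")
```

Under these assumptions the focal-spot diameter is roughly 40 μm, so the 15-μm pinhole clips well inside the spot (strong spatial filtering, lower transmitted power) while the 75-μm pinhole passes most of it.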

There are two key considerations in the SRI. First, the beam in the reference arm must be effectively focused through the pinhole aperture, which acts as a spatial filter and forms a reference beam with flattened wavefronts on the camera’s image sensor. However, there is a tradeoff here in that smaller aperture diameters give especially flat wavefronts on the reference beam but larger aperture diameters transmit higher powers for the reference beam. Second, the input beamsplitter must be suitably angled to apply a linear tilt on the wavefronts of the signal beam. When the signal and reference beams are overlapped/imaged on the camera, we then see the tilted signal wavefronts and flattened reference wavefronts form fringes with a fringe spacing Λ. Figure 2 shows such imaged fringe patterns for applied tilts yielding spatial pitches of Λ = 387 μm in Figure 2a, 177 μm in Figure 2b, 117 μm in Figure 2c, and 87 μm in Figure 2d. The significance of the aperture diameters and spatial pitch, together, can be understood by defining and characterizing the input, signal, reference, and output beams.

Fig. 2

Measured imaged intensity distributions of the output beam (overlapped reference and signal beams) on the camera’s image sensor as a function of the transverse dimensions xo and yo. The signal beam has varied degrees of horizontal tilt across it, yielding fringe spacings of Λ = (a) 387 μm, (b) 177 μm, (c) 117 μm, and (d) 87 μm.

The electric field of the input beam $\tilde{E}_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})$ is defined in the input pupil plane, which is denoted as a violet dotted line at the input of the SRI in Figure 1b. It consists of an input beam amplitude profile with a maximum $E_0$ and radius $\omega$, spanning out to $e^{-1}$ of the maximum, and an input beam phase profile $\phi_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})$. The electric field of the input beam can then be expressed as

$$\tilde{E}_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}}) = E_0\, e^{-(x_{\mathrm{i}}^2 + y_{\mathrm{i}}^2)/\omega^2}\, e^{\mathrm{j}\phi_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})}, \tag{1}$$

where $x_{\mathrm{i}}$ and $y_{\mathrm{i}}$ are coordinates along the horizontal and vertical dimensions, respectively.
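Equation (1) maps directly onto a sampled field. The sketch below builds such an input field; the grid size, sampling, and the example tilt phase are illustrative assumptions rather than values from the text.

```python
import numpy as np

# Minimal numerical realization of equation (1): a Gaussian amplitude of
# radius omega with an arbitrary phase profile phi_i on a square grid.
N, dx = 512, 25e-6                      # samples and grid pitch (m), assumed
omega, E0 = 2.5e-3, 1.0                 # beam radius (m) and peak amplitude
x = (np.arange(N) - N // 2) * dx
xi, yi = np.meshgrid(x, x)

phi_i = 2.0 * xi / omega                # example phase: a simple linear tilt (rad)
E_i = E0 * np.exp(-(xi**2 + yi**2) / omega**2) * np.exp(1j * phi_i)
```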

The electric field of the signal beam $\tilde{E}_{\mathrm{s}}(x_{\mathrm{f}}, y_{\mathrm{f}})$ is defined in the focal plane of the signal arm, which is denoted as a blue dotted line within this arm in Figure 1b. It consists of a focused signal beam amplitude profile $E_{\mathrm{s}}(x_{\mathrm{f}}, y_{\mathrm{f}})$ and focused signal beam phase profile $\phi_{\mathrm{s}}(x_{\mathrm{f}}, y_{\mathrm{f}})$, such that the electric field of the signal beam is

$$\tilde{E}_{\mathrm{s}}(x_{\mathrm{f}}, y_{\mathrm{f}}) = E_{\mathrm{s}}(x_{\mathrm{f}}, y_{\mathrm{f}})\, e^{\mathrm{j}\phi_{\mathrm{s}}(x_{\mathrm{f}}, y_{\mathrm{f}})} = \frac{e^{\mathrm{j}2k_0 f_1}}{\mathrm{j}\lambda_0 f_1}\,\mathcal{F}\!\left\{\tilde{E}_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})\, e^{\mathrm{j}\frac{2\pi}{(f_1/f_2)\Lambda} x_{\mathrm{i}}}\right\}\Bigg|_{\substack{u = x_{\mathrm{f}}/(\lambda_0 f_1)\\ v = y_{\mathrm{f}}/(\lambda_0 f_1)}}. \tag{2a}$$

Here, xf and yf are coordinates along the horizontal and vertical dimensions, respectively, f1 and f2 are the focal lengths of the primary lens and secondary lens, respectively, k0 = 2π/λ0 is the magnitude of the wavevector at a free-space wavelength λ0, and F { · } $ \mathcal{F}\{\cdot \}$ is the Fourier transform operator with generalized transform variables u and v. The complex exponential inside the Fourier transform’s argument is due to the aforementioned angling of the input beamsplitter, which establishes a horizontal phase shift across the transverse profile of the signal beam. Thus, we can apply this tilt at differing degrees to alter the linear phase shift across the signal beam and thereby vary the fringe spacing Λ in the output beam.

The electric field of the reference beam $\tilde{E}_{\mathrm{r}}(x_{\mathrm{f}}, y_{\mathrm{f}})$ is defined in the focal plane of the reference arm, coplanar with the pinhole aperture, as denoted by a red dotted line in Figure 1b. It consists of a focused reference beam amplitude profile $E_{\mathrm{r}}(x_{\mathrm{f}}, y_{\mathrm{f}})$ and focused reference beam phase profile $\phi_{\mathrm{r}}(x_{\mathrm{f}}, y_{\mathrm{f}})$, which give

$$\tilde{E}_{\mathrm{r}}(x_{\mathrm{f}}, y_{\mathrm{f}}) = E_{\mathrm{r}}(x_{\mathrm{f}}, y_{\mathrm{f}})\, e^{\mathrm{j}\phi_{\mathrm{r}}(x_{\mathrm{f}}, y_{\mathrm{f}})} = \frac{e^{\mathrm{j}2k_0 f_1}}{\mathrm{j}\lambda_0 f_1}\,\mathcal{F}\!\left\{\tilde{E}_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})\right\}\Bigg|_{\substack{u = x_{\mathrm{f}}/(\lambda_0 f_1)\\ v = y_{\mathrm{f}}/(\lambda_0 f_1)}}\left(\frac{1}{(\lambda_0 f_2)^2}\, p(x_{\mathrm{f}}, y_{\mathrm{f}})\right). \tag{2b}$$

The rightmost factor in parentheses characterizes the pinhole aperture in the reference focal plane by way of its transmission coefficient p(xf, yf) and the multiplicative constant 1/(λ0f2)2, where the latter constant is included to give a normalized point-spread function.

The electric field of the output beam $\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})$ is defined in the output pupil plane, coplanar with the camera’s image sensor, as denoted by a black dotted line in Figure 1b. It is formed as the superposition of the signal and reference beams’ electric fields with an amplitude profile $E_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})$ and phase profile $\phi_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})$. The electric field of this output beam can then be defined by

$$\begin{aligned}
\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}}) ={}& \frac{e^{\mathrm{j}2k_0 f_2}}{\mathrm{j}\lambda_0 f_2}\left[\mathcal{F}\{\tilde{E}_{\mathrm{s}}(x_{\mathrm{f}}, y_{\mathrm{f}})\}\big|_{\substack{u=x_{\mathrm{o}}/(\lambda_0 f_2)\\ v=y_{\mathrm{o}}/(\lambda_0 f_2)}} + \mathcal{F}\{\tilde{E}_{\mathrm{r}}(x_{\mathrm{f}}, y_{\mathrm{f}})\}\big|_{\substack{u=x_{\mathrm{o}}/(\lambda_0 f_2)\\ v=y_{\mathrm{o}}/(\lambda_0 f_2)}}\right]\\[4pt]
={}& \frac{e^{\mathrm{j}2k_0 f_2}}{\mathrm{j}\lambda_0 f_2}\left[\mathcal{F}\left\{\frac{e^{\mathrm{j}2k_0 f_1}}{\mathrm{j}\lambda_0 f_1}\mathcal{F}\left\{\tilde{E}_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})\,e^{\mathrm{j}\frac{2\pi}{(f_1/f_2)\Lambda}x_{\mathrm{i}}}\right\}\Big|_{\substack{u=x_{\mathrm{f}}/(\lambda_0 f_1)\\ v=y_{\mathrm{f}}/(\lambda_0 f_1)}}\right\}\Bigg|_{\substack{u=x_{\mathrm{o}}/(\lambda_0 f_2)\\ v=y_{\mathrm{o}}/(\lambda_0 f_2)}}\right.\\
&\left.{}+ \mathcal{F}\left\{\frac{e^{\mathrm{j}2k_0 f_1}}{\mathrm{j}\lambda_0 f_1}\mathcal{F}\left\{\tilde{E}_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})\right\}\Big|_{\substack{u=x_{\mathrm{f}}/(\lambda_0 f_1)\\ v=y_{\mathrm{f}}/(\lambda_0 f_1)}}\,\frac{1}{(\lambda_0 f_2)^2}\,p(x_{\mathrm{f}}, y_{\mathrm{f}})\right\}\Bigg|_{\substack{u=x_{\mathrm{o}}/(\lambda_0 f_2)\\ v=y_{\mathrm{o}}/(\lambda_0 f_2)}}\right]\\[4pt]
={}& -\frac{e^{\mathrm{j}2k_0(f_1+f_2)}}{\lambda_0^2 f_1 f_2}\left[(\lambda_0 f_1)^2\,\tilde{E}_{\mathrm{i}}\!\left(-\lambda_0 f_1\frac{x_{\mathrm{o}}}{\lambda_0 f_2}, -\lambda_0 f_1\frac{y_{\mathrm{o}}}{\lambda_0 f_2}\right)e^{-\mathrm{j}\frac{2\pi}{(f_1/f_2)\Lambda}\lambda_0 f_1\frac{x_{\mathrm{o}}}{\lambda_0 f_2}}\right.\\
&\left.{}+ (\lambda_0 f_1)^2\,\tilde{E}_{\mathrm{i}}\!\left(-\lambda_0 f_1\frac{x_{\mathrm{o}}}{\lambda_0 f_2}, -\lambda_0 f_1\frac{y_{\mathrm{o}}}{\lambda_0 f_2}\right)\otimes \frac{1}{(\lambda_0 f_2)^2}\,P\!\left(\frac{x_{\mathrm{o}}}{\lambda_0 f_2}, \frac{y_{\mathrm{o}}}{\lambda_0 f_2}\right)\right]\\[4pt]
={}& -\frac{e^{\mathrm{j}2k_0(f_1+f_2)}}{f_2/f_1}\left[\tilde{E}_{\mathrm{i}}\!\left(-\frac{x_{\mathrm{o}}}{f_2/f_1}, -\frac{y_{\mathrm{o}}}{f_2/f_1}\right)e^{-\mathrm{j}\frac{2\pi}{\Lambda}x_{\mathrm{o}}} + \tilde{E}_{\mathrm{i}}\!\left(-\frac{x_{\mathrm{o}}}{f_2/f_1}, -\frac{y_{\mathrm{o}}}{f_2/f_1}\right)\otimes \frac{1}{(\lambda_0 f_2)^2}\,P\!\left(\frac{x_{\mathrm{o}}}{\lambda_0 f_2}, \frac{y_{\mathrm{o}}}{\lambda_0 f_2}\right)\right],
\end{aligned} \tag{3}$$

where $x_{\mathrm{o}}$ and $y_{\mathrm{o}}$ are coordinates along the horizontal and vertical dimensions, respectively, $\otimes$ denotes the convolution operation, $P(x_{\mathrm{o}}/(\lambda_0 f_2), y_{\mathrm{o}}/(\lambda_0 f_2))/(\lambda_0 f_2)^2$ is the normalized point-spread function of the pinhole aperture, and $\Lambda$ is the fringe spacing arising along the horizontal dimension (quantifying the degree of phase tilt applied across the signal beam).

Overall, the key parameters for the design of the SRI wavefront sensor arise within the first and second terms in the final expression of equation (3), and manifest through the signal and reference beams, respectively. Namely, the tilt applied to the signal beam imparts the fringe spacing Λ on the output image, which then defines the resolution of spatial features (and the order of modes seen) in the image. At the same time, the aperturing applied to the reference beam flattens its wavefronts in the output pupil plane, which lessens distortion in the image.
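The chain of scaled Fourier transforms in equation (3) also lends itself to a direct numerical model. The sketch below propagates an assumed input field through both arms with single-FFT lens steps, applies the pinhole mask in the reference arm and the linear tilt in the signal arm, and forms the output intensity. The grid, the defocus-like test phase, and the choice of the coarsest fringe spacing from Figure 2 are assumptions made so that the modest grid resolves the fringes; this is an illustration of the model, not a reproduction of the experiment.

```python
import numpy as np

# Illustrative FFT-based model of the SRI forward path in equation (3).
N, dx = 512, 25e-6                          # grid samples and input-plane pitch (m), assumed
lam0, f1, f2 = 1550e-9, 100e-3, 150e-3      # wavelength and focal lengths (m)
omega, d_pin = 2.5e-3, 75e-6                # beam radius and pinhole diameter (m)
Lam = 387e-6                                # target fringe spacing on the sensor (m)

x = (np.arange(N) - N // 2) * dx
xi, yi = np.meshgrid(x, x)
phi_i = 3.0 * (xi**2 + yi**2) / omega**2    # example input phase (defocus-like, rad)
E_i = np.exp(-(xi**2 + yi**2) / omega**2) * np.exp(1j * phi_i)

def lens_ft(E, dx_in, f):
    """Field in the back focal plane of a lens (single-FFT Fourier-optics step)."""
    E_f = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(E))) * dx_in**2 / (1j * lam0 * f)
    dx_out = lam0 * f / (N * dx_in)         # sample pitch in the focal plane
    return E_f, dx_out

# Signal arm: linear tilt from the input beamsplitter, then two confocal lens steps.
tilt = np.exp(1j * 2 * np.pi * xi / ((f1 / f2) * Lam))
E_s_focal, dxf = lens_ft(E_i * tilt, dx, f1)
E_s_out, dxo = lens_ft(E_s_focal, dxf, f2)

# Reference arm: same lenses, but with the pinhole mask applied in the focal plane.
E_r_focal, _ = lens_ft(E_i, dx, f1)
xf = (np.arange(N) - N // 2) * dxf
xff, yff = np.meshgrid(xf, xf)
pinhole = (xff**2 + yff**2) <= (d_pin / 2) ** 2
E_r_out, _ = lens_ft(E_r_focal * pinhole, dxf, f2)

# Output intensity on the camera: interference of the two arms (cf. Fig. 2).
I_out = np.abs(E_s_out + E_r_out) ** 2
```

Under these assumptions, I_out exhibits a tilted-fringe pattern akin to Figure 2, modulated by the input phase, which is what the image processing of the next subsection operates on.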

2.2 Image processing

The optical design presented in the prior section establishes an intensity distribution on the camera’s image sensor according to $\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})\,\tilde{E}_{\mathrm{o}}^{*}(x_{\mathrm{o}}, y_{\mathrm{o}})$, where $\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})$ is the electric field of the output beam across the horizontal $x_{\mathrm{o}}$ and vertical $y_{\mathrm{o}}$ dimensions, and $*$ denotes the complex conjugate. We then process this image via Fourier fringe analysis [21] with four steps. In the first step, we apply a two-dimensional fast Fourier transform, $\mathcal{F}_{\mathrm{fft}}\{\cdot\}$, to the imaged intensity distribution to give $\mathcal{F}_{\mathrm{fft}}\{\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})\,\tilde{E}_{\mathrm{o}}^{*}(x_{\mathrm{o}}, y_{\mathrm{o}})\}$. This generates an image in reciprocal space with a large central peak at the origin, resulting from low-spatial-frequency (averaged) characteristics across the imaged intensity distribution, as well as negative and positive (side) peaks, displaced horizontally off the origin by $1/\Lambda$. The latter two peaks are due to the horizontal tilt applied to the signal beam and its resulting fringe (sinusoidal) pattern on the imaged intensity distribution. In the second step, we apply a circular reciprocal-space filter $\Phi_{\mathrm{RS}}$ to pass only the positive (side) peak. This yields the reciprocal-space distribution $\mathcal{F}_{\mathrm{fft}}\{\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})\,\tilde{E}_{\mathrm{o}}^{*}(x_{\mathrm{o}}, y_{\mathrm{o}})\}\,\Phi_{\mathrm{RS}}$, where the filter $\Phi_{\mathrm{RS}}$ has a diameter equal to the displacement between the central and side peaks, $1/\Lambda$, with unity in its interior and zero elsewhere. Such filtering passes the full wavefront characteristics across the input beam while rejecting the redundant/unnecessary phase characteristics in the negative/central peaks. In the third step, we apply a two-dimensional inverse fast Fourier transform, $\mathcal{F}_{\mathrm{fft}}^{-1}\{\cdot\}$, to the filtered output and multiply the result by the phase factor $e^{\mathrm{j}2\pi x_{\mathrm{o}}/\Lambda}$ to give $\mathcal{F}_{\mathrm{fft}}^{-1}\{\mathcal{F}_{\mathrm{fft}}\{\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})\,\tilde{E}_{\mathrm{o}}^{*}(x_{\mathrm{o}}, y_{\mathrm{o}})\}\,\Phi_{\mathrm{RS}}\}\,e^{\mathrm{j}2\pi x_{\mathrm{o}}/\Lambda}$. The phase factor here shifts the origin in reciprocal space to the centre of the positive peak and thus removes the fringe pattern that appeared in the imaged intensity distribution. In the fourth step, we compute the arctangent of the ratio of the last distribution’s imaginary component $\mathcal{I}\mathrm{m}\{\cdot\}$ and real component $\mathcal{R}\mathrm{e}\{\cdot\}$, scale the horizontal dimension by $f_1/f_2$, to undo any magnification incurred by the confocal primary and secondary lenses, and unwrap the phase. This gives an estimated beam phase profile of

$$\phi_{\mathrm{i(est)}}(x_{\mathrm{i}}, y_{\mathrm{i}}) = \mathrm{unwrap}\!\left(\arctan\!\left(\frac{\mathcal{I}\mathrm{m}\!\left\{\mathcal{F}_{\mathrm{fft}}^{-1}\!\left\{\mathcal{F}_{\mathrm{fft}}\!\left\{\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})\,\tilde{E}_{\mathrm{o}}^{*}(x_{\mathrm{o}}, y_{\mathrm{o}})\right\}\Phi_{\mathrm{RS}}\right\}e^{\mathrm{j}2\pi x_{\mathrm{o}}/\Lambda}\right\}}{\mathcal{R}\mathrm{e}\!\left\{\mathcal{F}_{\mathrm{fft}}^{-1}\!\left\{\mathcal{F}_{\mathrm{fft}}\!\left\{\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})\,\tilde{E}_{\mathrm{o}}^{*}(x_{\mathrm{o}}, y_{\mathrm{o}})\right\}\Phi_{\mathrm{RS}}\right\}e^{\mathrm{j}2\pi x_{\mathrm{o}}/\Lambda}\right\}}\right)\right), \tag{4}$$

which will ideally depict the input beam phase profile $\phi_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})$. Branch-point/phase discontinuities may arise from the unwrap{·} function here, but strategies to remove them are shown elsewhere [22–24].
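As a worked illustration of these four steps, the sketch below applies them to a synthetic fringe image. The grid, fringe spacing, and smooth test phase are assumptions of the sketch, the sign of the carrier-removal exponent follows the sign convention of the synthetic fringes defined here, and the simple axis-by-axis unwrap stands in for a full two-dimensional unwrapper.

```python
import numpy as np

# Sketch of the four-step Fourier fringe analysis of equation (4) on a
# synthetic fringe image; a measured camera frame would replace I.
N, dx = 512, 20e-6                          # pixels and pixel size (m)
Lam = 87e-6                                 # fringe spacing (m)
x = (np.arange(N) - N // 2) * dx
xo, yo = np.meshgrid(x, x)

phi_true = 1.5 * np.exp(-(xo**2 + yo**2) / (2.5e-3) ** 2)    # test phase (rad)
I = 1.0 + np.cos(2 * np.pi * xo / Lam + phi_true)            # synthetic fringes

# Step 1: 2-D FFT of the imaged intensity.
F = np.fft.fftshift(np.fft.fft2(I))
fx = np.fft.fftshift(np.fft.fftfreq(N, dx))
fxx, fyy = np.meshgrid(fx, fx)

# Step 2: circular filter of diameter 1/Lam centred on the positive side peak.
mask = (fxx - 1 / Lam) ** 2 + fyy**2 <= (0.5 / Lam) ** 2

# Step 3: inverse FFT of the filtered spectrum, then remove the carrier fringe.
g = np.fft.ifft2(np.fft.ifftshift(F * mask)) * np.exp(-1j * 2 * np.pi * xo / Lam)

# Step 4: arctangent of imaginary over real parts, then phase unwrapping.
phi_wrapped = np.arctan2(np.imag(g), np.real(g))
phi_est = np.unwrap(np.unwrap(phi_wrapped, axis=0), axis=1)
```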

3 Results and discussion

We consider a beam entering the SRI wavefront sensor with a radius of ω = 2.5 mm and an arbitrary input beam phase profile, $\phi_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})$ in equation (1). We then solve for the electric field of the output beam, $\tilde{E}_{\mathrm{o}}(x_{\mathrm{o}}, y_{\mathrm{o}})$ in equation (3), and apply image processing to its intensity distribution to extract the estimated beam phase profile $\phi_{\mathrm{i(est)}}(x_{\mathrm{i}}, y_{\mathrm{i}})$. The analyses of $\phi_{\mathrm{i(est)}}(x_{\mathrm{i}}, y_{\mathrm{i}})$ are carried out with the input beam phase profile $\phi_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})$ cast as a superposition of (orthogonal) Zernike polynomials enumerated by the (Noll) mode order J = 1, 2, 3, … . The characteristics underlying these mode orders are given in the Appendix, with details on their wavefront aberrations and symmetries.

3.1 Optical design

The performance of the SRI wavefront sensor’s design is gauged by its ability to both pass the signal beam unperturbed through the system (aside from our negation and tilt on its phase) and image the reference beam in the output pupil plane with a flat phase. The diameter of the pinhole aperture is the key parameter in such efforts and is focused upon here. We consider four representative phase profiles on the input beam, corresponding to turbulence-induced tilt along xi (J = 2), defocus (J = 4), primary coma along xi (J = 8), and secondary coma along xi (J = 16). The four phase profiles on the input beam (top row) and estimated beam (bottom row) are illustrated in Figures 3a and 3e, 3b and 3f, 3c and 3g, and 3d and 3h, respectively. The resulting phase profiles on the signal beam (top row) and reference beam (bottom row) are shown for the focal plane in Figures 4a and 4e, 4b and 4f, 4c and 4g, and 4d and 4h, respectively, and the output pupil plane in Figures 5a and 5e, 5b and 5f, 5c and 5g, and 5d and 5h, respectively. All of the results are illustrated as two-dimensional colourmaps of phase spanning from low (blue) to high (red). The pinhole aperture is shown on the reference beam in Figure 4 for a narrow aperture diameter, d = 15 μm (black circle), and a wide aperture diameter, d = 75 μm (black circle).

Fig. 3

Phase profiles in the input plane for the input beam (top row) and estimated beam (bottom row). The profiles are shown for an input beam experiencing turbulence-induced distortion as tilt along xi (J = 2) in (a) and (e), defocus (J = 4) in (b) and (f), primary coma along xi (J = 8) in (c) and (g), and secondary coma along xi (J = 16) in (d) and (h). The phase is displayed as colours mapped from low (blue) to high (red), given a pinhole aperture with a diameter of d = 15 μm and a fringe spacing of Λ = 87 μm.

Fig. 4

Phase profiles in the focal plane for the signal beam (top row) and reference beam (bottom row). The profiles are shown for an input beam experiencing turbulence-induced distortion as tilt along xi (J = 2) in (a) and (e), defocus (J = 4) in (b) and (f), primary coma along xi (J = 8) in (c) and (g), and secondary coma along xi (J = 16) in (d) and (h). The phase is displayed as colours mapped from low (blue) to high (red), given pinhole apertures with diameters of d = 15 and 75 μm (seen in the bottom row as small and large black circles, respectively), and a fringe spacing of Λ = 87 μm.

Fig. 5

Phase profiles in the output plane for the signal beam (top row) and reference beam (bottom row). The profiles are shown for an input beam experiencing turbulence-induced distortion as tilt along xi (J = 2) in (a) and (e), defocus (J = 4) in (b) and (f), primary coma along xi (J = 8) in (c) and (g), and secondary coma along xi (J = 16) in (d) and (h). The phase is displayed as colours mapped from low (blue) to high (red), given a pinhole aperture with a diameter of d = 15 μm and a fringe spacing of Λ = 87 μm.

There are two key characteristics to note in the optical design. First, the presence of azimuthal asymmetry on the input beam phase profiles in Figure 3 deflects the signal and reference beams off their optical axes within their respective focal planes. Such deflections are of little consequence to the signal beam, which has fixed tilt already applied to it (from the beamsplitter) and unobstructed transmission through its focal plane (given its lack of an aperture). However, the deflections are of great concern for the reference beam, which deflects along the +xf direction with extents that are large in Figure 4e (J = 2), negligible in Figure 4f (J = 4), moderate in Figure 4g (J = 8), and small in Figure 4h (J = 16). These deflections reduce the transmitted power of the reference beam through the pinhole aperture to a great degree for the narrow aperture diameter, d = 15 μm, and a lesser degree for the wide aperture diameter, d = 75 μm. Only the input beam phase profile of Figure 4f (J = 4) escapes this deflection-induced reduction in power, as a result of its pure azimuthal symmetry. Second, we note that the reference beam phase profile in the output pupil plane should be sufficiently flat/uniform, as this will allow the signal beam phase profile to be accurately mapped onto the (superimposed) output beam phase profile. The results displayed in Figures 5e, 5f, 5g, and 5h show that the reference beam can exhibit this flat/uniform phase profile – but only for an aperture diameter of d = 15 μm. The corresponding profile for the aperture diameter of d = 75 μm (not shown) is far from flat/uniform. Such trends can be understood by the inverse Fourier transform relationship between the focal and output pupil planes, whereby a point aperture at the focus outputs a flat phase profile on the reference beam and a wide aperture at the focus outputs similar phase profiles on the reference and signal beams.

3.2 Image processing

The performance of the SRI wavefront sensor’s image processing can be assessed by its ability to estimate the input beam phase profile from the intensity distribution on the image sensor. As such, we consider the aforementioned phase profiles on the input beam, corresponding to turbulence-induced tilt along xi (J = 2), defocus (J = 4), primary coma along xi (J = 8), and secondary coma along xi (J = 16). We then analyse the resulting phase profiles on the estimated beam, which are shown in Figures 3a and 3e, 3b and 3f, 3c and 3g, and 3d and 3h, respectively. Here, we have used Fourier fringe analysis with the pinhole aperture having a diameter of d = 15 μm and a fringe spacing of Λ = 87 μm. This fringe spacing separates the positive and negative peaks off the central peak in reciprocal space by 1/Λ = 11.5 mm−1. We then apply a bandpass filter around the positive peak with a diameter equal to this separation of 1/Λ. Such scaling of the filter width and peak separation minimizes the encroachment of error from the central peak into the positive peak’s passband. This error can also be reduced by making the fringe spacing as small as possible, and thus the separation as large as possible, but this must be done while considering the pixel size on the camera’s image sensor. According to the Nyquist sampling theorem [25], the minimum fringe spacing resolvable by the sensor is two pixels wide, although larger fringe spacings are ideally used to fully resolve the fringes. Thus, we have used a fringe spacing of Λ = 87 μm in this analysis. This corresponds to the experimental fringe pattern displayed in Figure 2d and is roughly four pixels wide. Given these two parameters with an input beam subject to turbulence-induced tilt along xi (J = 2), defocus (J = 4), primary coma along xi (J = 8), and secondary coma along xi (J = 16), we see strong agreement between the input beam phase profiles, in Figures 3a–3d, respectively, and our estimated beam phase profiles, in Figures 3e–3h, respectively.
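The numbers quoted in this paragraph follow from the pixel size and fringe spacing alone; a minimal check:

```python
# Quick check of the reciprocal-space and sampling numbers quoted above.
pixel = 20e-6                 # camera pixel size (m)
Lam = 87e-6                   # fringe spacing used in the analysis (m)

print(f"peak separation 1/Lam       = {1/Lam/1e3:.1f} mm^-1")   # ~11.5 mm^-1
print(f"filter diameter (= 1/Lam)   = {1/Lam/1e3:.1f} mm^-1")
print(f"Nyquist-limited fringe min  = {2*pixel*1e6:.0f} um (2 pixels)")
print(f"Lam in pixels               = {Lam/pixel:.1f}")          # ~4.3 pixels
```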

The overall functionality of the SRI wavefront sensor is encapsulated by Figure 6. The figure shows the residual wavefront error [18], as the root-mean-squared difference between the input beam phase profile and our estimated beam phase profile, versus the mode order J for weak (blue) and strong (red) turbulence conditions. Here, the conditions are defined by the wavefront error [18], as the root-mean-squared difference between the input beam phase profile and its averaged phase across the profile, while the pinhole apertures have diameters of d = 15 μm (circles) and 75 μm (squares). Following the foundational work of Noll [26], we define weak, moderate, and strong turbulence conditions as those with wavefront errors less than or equal to 1 rad, between 1 and 2 rad, and greater than or equal to 2 rad, respectively. The results in Figure 6 are shown for weak and strong turbulence conditions with a wavefront error of 1 and 2 rad, respectively. We can conclude from these results that the least residual wavefront error is achieved with the pinhole aperture with a diameter of d = 15 μm, as its errors are less than 0.11 rad for all mode orders in weak and strong turbulence conditions. Nonetheless, it may still be possible to use the pinhole aperture with a diameter of d = 75 μm, but the residual wavefront error here can only be kept below 0.95 rad in the weak turbulence conditions.
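For reference, the residual wavefront error amounts to a root-mean-squared difference over the beam aperture; a minimal sketch follows, in which the aperture mask and the piston removal are assumptions mirroring the wavefront-error definition above.

```python
import numpy as np

# Residual wavefront error as used in Figure 6: RMS difference between the
# input and estimated phase profiles over the beam aperture. The boolean
# aperture mask and the piston removal are assumptions of this sketch.
def rms_wavefront_error(phi_in, phi_est, mask):
    diff = (phi_in - phi_est)[mask]
    diff = diff - diff.mean()          # remove piston, which the sensor cannot see
    return np.sqrt(np.mean(diff**2))
```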

Fig. 6

Residual wavefront error versus mode order J for weak (1 rad of wavefront error, blue) and strong (2 rad of wavefront error, red) turbulence conditions with tilt along xi (J = 2), defocus (J = 4), primary coma along xi (J = 8), and secondary coma along xi (J = 16). The pinhole apertures have diameters of d = 15 μm (circles) and d = 75 μm (squares).

4 Limitations and recommendations

Our results from the prior section showed the SRI wavefront sensor’s effectiveness, but its use is subject to limitations. The foremost six limitations and our corresponding recommendations are discussed here.

The first potential limitation of the SRI wavefront sensor relates to scalability. Our prior work [27] has shown that there is a fundamental relationship between the effects of atmospheric turbulence and the diameter of the telescope aperture, for given atmospheric turbulence conditions. Specifically, only simple low-order (tip-tilt) correction is typically required for diameters up to 5 cm, but when the system is scaled up and the diameter increases, the effects of atmospheric turbulence grow. The wavefront sensor must then be designed to characterize higher-order modes within its images.

The second potential limitation of the SRI wavefront sensor relates to the detection limits of its hardware. The camera is the greatest concern here, as its pixel sensitivity sets the minimum requirements for the beam powers (and signal-to-noise ratios) while its pixel size dictates the minimum resolvable spatial features (and thus the maximum measurable mode order). Ideally, the SRI wavefront sensor would be implemented with combined thought to its beam powers, which may demand optical amplification, and its upper limit for mode orders, which may necessitate the use of a high-resolution camera [28].

The third potential limitation of the SRI wavefront sensor relates to noise in its image processing. Such noise can manifest from sensor, manufacturing, and assembly errors [29, 30]. Fortunately, these errors can be mitigated through careful calibration [29]. It is also possible for quantization noise to arise from the fast Fourier transform in our image processing, due to rounding, floating-point representation, and truncation errors [31]. Such errors can also be mitigated [32, 33], but doing so comes at the cost of speed. Thus, the overall speed of the AO system, and specifically its control loop, should be considered while planning noise mitigation.

The fourth potential limitation of the SRI wavefront sensor relates to inefficiencies in its image processing. In particular, its phase unwrapping can become computationally intensive due to the emergence of branch points/cuts. Fortunately, challenges such as these are being met by recent advancements in machine and deep learning. Machine learning has led to improvements for wavefront sensing and turbulence characterizations via reward functions [1], wavefront estimations [34], and wavefront control [35]. Likewise, deep learning has advanced wavefront sensing via residual wavefront error rejection [20], convolutional neural networks [36], and sophisticated control models [37]. The image processing in our work could benefit from any number of these emerging technologies.

The fifth potential limitation of the SRI wavefront sensor relates to its speed. Here, we must recognize that wavefront errors exhibit both spatial variations, as defined by the mode orders, and temporal variations, as defined by the Greenwood frequency [38]. The speed of the SRI wavefront sensor, and the overall AO system’s control loop, should then be made greater than the Greenwood frequency to mitigate any concern on temporal variations. Our SRI wavefront sensor was designed with spatial variations as the sole concern, as our overall AO system’s control loop can function at speeds above the highest (real-world/realistic) Greenwood frequency. Specifically, given a wavelength of λ0 = 1550 nm, propagation length through the atmosphere of L = 10 km, and highest (real-world/realistic) wind velocity of vw = 30 m/s, the Greenwood frequency is only 0.4vw/(λ0L)1/2 = 100 Hz [38] while our system operates at a factor of 20 above this frequency, i.e., 2 kHz. This real-time speed is achieved by first training the system, whereby the tip-tilt/deformable mirrors are perturbed and wavefront errors are measured. This builds the loop’s interaction matrix. We then apply the inverse of this interaction matrix between the inputs (from the wavefront sensor) and outputs (to the tip-tilt/deformable mirrors). Ultimately, the speed of any AO system’s control loop should be designed with the Greenwood frequency in mind, to ensure that its wavefront errors can be sensed and mitigated solely in terms of their spatial variations, as done in this work.
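The Greenwood-frequency figure quoted above follows directly from the stated values; a one-line check:

```python
import numpy as np

# Greenwood-frequency estimate quoted in the text: f_G ~ 0.4 v_w / sqrt(lam0 * L).
lam0, L, v_w = 1550e-9, 10e3, 30.0          # wavelength (m), path (m), wind speed (m/s)
f_G = 0.4 * v_w / np.sqrt(lam0 * L)
print(f"Greenwood frequency ~ {f_G:.0f} Hz")   # ~100 Hz, vs. the 2-kHz control loop
```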

The sixth potential limitation of the SRI wavefront sensor relates to trade-offs from its aperture diameter. Here, we recognize that smaller pinhole aperture diameters yield better uniformity / flattening across the reference beam’s wavefronts, and thus improved estimates for the beam phase profiles, but they also give reduced power transmission when (azimuthally) asymmetric wavefront error exists across the beam. The reduction occurs because such asymmetric wavefront error deflects the beam’s focus off the centre of the pinhole aperture, i.e., optical axis, which then reduces its transmission. Such deflection / reduction will be greatest for wavefront error manifesting in the low-order (tip-tilt) modes, with reducing effects from increasing orders. Thus, the correction imparted by the tip-tilt mirror in the overall AO system should be made as accurate as possible, to lessen the low-order (tip-tilt) wavefront error on the beam, and then the pinhole aperture diameter d should be selected for the net asymmetric wavefront error, including any residual low-order (tip-tilt) error and high-order (asymmetric) error. For example, given our primary lens with a focal length of f1 = 100 mm and a representative net asymmetric wavefront error of δθ = 10 μrad, we would expect the reference beam’s focus to deflect off the optical axis by f1δθ = 1 μm. For the pinhole aperture diameters in our work, d = 15 and 75 μm, this deflection would have little consequence, but the deflection could be a concern if a longer f1 was used and / or a smaller diameter d was used. In such cases, it may be necessary to improve the correction from the tip-tilt mirror, reduce the focal length f1, and/or increase the pinhole aperture diameter d.
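The deflection estimate in this paragraph is a simple small-angle product; for completeness:

```python
# Deflection of the reference-arm focus for a given net asymmetric wavefront
# error, per the example in the text.
f1 = 100e-3          # primary lens focal length (m)
dtheta = 10e-6       # representative net asymmetric wavefront tilt error (rad)
print(f"focal-spot deflection f1*dtheta = {f1*dtheta*1e6:.1f} um")   # 1 um
```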

5 Conclusion

This work presented the design and development of an SRI wavefront sensor for implementation in an AO system that corrects for the effects of atmospheric turbulence in FSOC links. This was done with thought to the demands for wavefront sensing in such links under weak through strong turbulence conditions. For the sensor’s optical design, we observed a trade-off for the pinhole aperture’s diameter, whereby smaller diameters yield better uniformity/flattening across the reference beam’s wavefronts and larger diameters better transmit the reference beam’s power in the presence of asymmetric wavefront error. This is because such error deflects the focus off the centre of the pinhole aperture. In light of this trade-off, the tip-tilt mirror in the overall AO system should lessen the low-order (tip-tilt) wavefront error as much as possible, and then the pinhole aperture diameter d should be selected for the remaining net asymmetric wavefront error, which can include residual low-order (tip-tilt) error and high-order (asymmetric) error. For the sensor’s image processing, we concluded that the fringe spacing Λ should be set at or above twice the pixel size on the image sensor and the reciprocal-space filter diameter should then be set at the separation between the central and positive peaks, 1/Λ. Such conditions reduce the overall error and allow the system to function roughly independent of the fringe spacing. Overall, our analysed SRI wavefront sensor, with an aperture diameter of d = 15 μm and a fringe spacing of Λ = 87 μm, gave an accurate representation of the input beam’s phase profile. It is hoped that these analyses and insights can enable wavefront sensing with improved functionality in future FSOC links.

Funding

Portions of this work were supported by the Natural Sciences and Engineering Research Council of Canada, grant RGPIN-2017-04073. The core project on which this report is based was funded by the German Federal Ministry of Education and Research under funding code 16KIS1265 (QuNET). The authors are responsible for the content of this publication.

Conflicts of interest

The authors declare no conflicts of interest.

Data availability statement

The data presented in this paper may be obtained from the authors upon reasonable request.

Author contribution statement

Authors A.C.M., I.R.H., M.F.J., and J.F.H. contributed to the data analysis/processing and the interpretation of results. I.R.H., A.P.R., R.M.C., and J.F.H. designed and implemented the experimental setup. A.C.M. and J.F.H. co-wrote the paper.

References

  1. Nousiainen J, Rajani C, Kasper M, et al., Toward on-sky adaptive optics control using reinforcement learning, Astron. Astrophys. 664(A71), 1 (2022). https://doi.org/10.1051/0004-6361/202243311. [Google Scholar]
  2. Davies R, Kasper M, Adaptive optics for astronomy, Annu. Rev. Astron. Astrophys. 50, 305 (2012). https://doi.org/10.1146/annurev-astro-081811-125447. [Google Scholar]
  3. Carrizo CE, Calvo RM, Belmonte A, Proof of concept for adaptive sequential optimization of free-space communication receivers, Appl. Opt. 58, 5397 (2019). https://doi.org/10.1364/AO.58.005397. [NASA ADS] [CrossRef] [Google Scholar]
  4. Carrizo CE, Calvo RM, Belmonte A, Intensity-based adaptive optics with sequential optimization for laser communications, Opt. Express 26, 16044 (2018). https://doi.org/10.1364/OE.26.016044. [NASA ADS] [CrossRef] [Google Scholar]
  5. Land JE, Aerosol absorption measurement by a Shack-Hartmann wavefront sensor, Appl. Opt. 62, 4836 (2023). https://doi.org/10.1364/AO.492066. [NASA ADS] [CrossRef] [Google Scholar]
  6. Kalensky M, Kemnetz MR, Spencer MF, Effects of shock waves on Shack-Hartmann wavefront sensor data, AIAA J 61, 2356 (2023). https://doi.org/10.2514/1.J062783. [NASA ADS] [CrossRef] [Google Scholar]
  7. Hutterer V, Neubauer A, Shatokhina J, A mathematical framework for nonlinear wavefront reconstruction in adaptive optics systems with Fourier-type wavefront sensing, Inverse Probl. 39 (35007), 1 (2023). https://doi.org/10.1088/1361-6420/acb568. [Google Scholar]
  8. Knapek M, Adaptive optics for the mitigation of atmospheric effects in laser satellite-to-ground communications, Technische Universität München (2010). [Google Scholar]
  9. Roddier F, Curvature sensing and compensation: a new concept in adaptive optics, Appl. Opt. 27, 1223 (1988). https://doi.org/10.1364/AO.27.001223. [NASA ADS] [CrossRef] [Google Scholar]
  10. Notaras J, Paterson C, Demonstration of closed-loop adaptive optics with a point-diffraction interferometer in strong scintillation with optical vortices, Opt. Express 15, 13745 (2007). https://doi.org/10.1364/OE.15.013745. [NASA ADS] [CrossRef] [Google Scholar]
  11. Crepp JR, Letchev SO, Potier SJ, et al., Measuring phase errors in the presence of scintillation, Opt. Express 28, 37721 (2020). https://doi.org/10.1364/OE.408825. [NASA ADS] [CrossRef] [Google Scholar]
  12. Thornton DE, Spencer MF, Perram GP, Deep-turbulence wavefront sensing using digital holography in the on-axis phase shifting recording geometry with comparisons to the self-referencing interferometer, Appl. Opt. 58, A179 (2019). [NASA ADS] [CrossRef] [Google Scholar]
  13. Zepp A, Gladysz S, Stein K, et al., Optimization of the holographic wavefront sensor for open-loop adaptive optics under realistic turbulence. Part I: simulations, Appl. Opt. 60, F88 (2021). https://doi.org/10.1364/AO.425397. [NASA ADS] [CrossRef] [Google Scholar]
  14. Zepp A, Gladysz S, Stein K, et al., Simulation-based design optimization of the holographic wavefront sensor in closed-loop adaptive optics, Light Adv. Manuf. 3, 1 (2022). https://doi.org/10.37188/lam.2022.027. [Google Scholar]
  15. Branigan E, Zepp A, Martin S, et al., Comparing thin and volume regimes of analog holograms for wavefront sensing, Opt. Express 32, 27239 (2024). https://doi.org/10.1364/OE.527893. [NASA ADS] [CrossRef] [Google Scholar]
  16. Aubailly M, Vorontsov MA, Scintillation resistant wavefront sensing based on multi-aperture phase reconstruction technique, J. Opt. Soc. Am. A 29, 1707 (2012). https://doi.org/10.1364/JOSAA.29.001707. [Google Scholar]
  17. Shortt K, Giggenbach D, Calvo RM, et al., Channel characterization for air-to-ground free-space optical communication links, Proc. SPIE 8971(897108), 1 (2014). https://doi.org/10.1117/12.2039834. [Google Scholar]
  18. Hardy JW, in Adaptive Optics for Astronomical Telescopes, edited by A. Hasegawa (Oxford Univ. Press, New York, 1998). [CrossRef] [Google Scholar]
  19. Roddier F, Adaptive optics in astronomy, (Cambridge University Press, 2009). https://doi.org/10.1017/CBO9780511525179. [Google Scholar]
  20. Hampson KM, Žurauskas M, Barbotin A, et al., Practical implementation of adaptive optical microscopes, Zenodo (2020). https://doi.org/10.5281/zenodo.4080674. [Google Scholar]
  21. Takeda M, Ina H, Topometry and interferometry by use of a FFT algorithm for fringe pattern analysis, Japanese J. Opt. 10, 476 (1981). https://doi.org/10.11438/kogaku1972.10.476. [Google Scholar]
  22. Kim J, Fernandez B, Agrawal B, Iterative wavefront reconstruction for strong turbulence using Shack–Hartmann wavefront sensor measurements, J. Opt. Soc. Am. A 38, 456 (2021). https://doi.org/10.1364/JOSAA.413934. [Google Scholar]
  23. Lamb MP, Correia C, Sauvage JF, et al., Quantifying telescope phase discontinuities external to adaptive optics systems by use of phase diversity and focal plane sharpening, J. Astron. Telesc. Instrum. Syst. 3(39001), 1 (2017). https://doi.org/10.1117/1.JATIS.3.3.039001. [CrossRef] [Google Scholar]
  24. Sawaf F, Groves RM, Phase discontinuity predictions using a machine-learning trained kernel, Appl. Opt. 53, 5439 (2014). https://doi.org/10.1364/AO.53.005439. [NASA ADS] [CrossRef] [Google Scholar]
  25. Nyquist H, Certain topics in telegraph transmission theory, Trans. AIEE. 47, 617 (1928). https://doi.org/10.1109/5.989875. [NASA ADS] [Google Scholar]
  26. Noll RJ, Zernike polynomials and atmospheric turbulence, J. Opt. Soc. Am. 66, 207 (1976). https://doi.org/10.1364/JOSA.66.000207. [NASA ADS] [CrossRef] [Google Scholar]
  27. Osborn J, Townson MJ, Farley OJD, et al., Adaptive Optics pre-compensated laser uplink to LEO and GEO, Opt. Express 29, 6113 (2021). https://doi.org/10.1364/OE.413013. [NASA ADS] [CrossRef] [Google Scholar]
  28. Rhoadarmer TA, Development of a self-referencing interferometer wavefront sensor, Proc. SPIE 5553 (2004). Advanced Wavefront Control: Methods, Devices, and Applications II. https://doi.org/10.1117/12.559916. [Google Scholar]
  29. He Y, Bao M, Chen Y, et al., Accuracy characterization of Shack–Hartmann sensor with residual error removal in spherical wavefront calibration, Light: Adv. Manuf. 31, 1 (2023). https://doi.org/10.37188/lam.2023.036. [Google Scholar]
  30. Tyson RK, Frazier BW, Field guide to adaptive optics, Second edition, (SPIE Press, 2012). https://doi.org/10.1117/3.923078. [Google Scholar]
  31. James D, Quantization errors in the fast Fourier transform, IEEE Trans. Acoust. 3, 277 (1975). https://doi.org/10.1109/TASSP.1975.1162687. [CrossRef] [Google Scholar]
  32. Chang WH, Nguyen TQ, On the fixed-point accuracy analysis of FFT algorithms, IEEE Trans. Signal Process. 56, 4673 (2008). https://doi.org/10.1109/TSP.2008.924637. [Google Scholar]
  33. Ma Y, An accurate error analysis model for fast Fourier transform, IEEE Trans. Signal Process. 45, 1641 (1997). https://doi.org/10.1109/78.600005. [Google Scholar]
  34. Paine SW, Fienup JR, Machine learning for improved image-based wavefront sensing, Opt. Lett. 43, 1235 (2018). https://doi.org/10.1364/OL.43.001235. [NASA ADS] [CrossRef] [Google Scholar]
  35. Guo YM, Zhong LB, Min L, et al., Adaptive optics based on machine learning: a review, Opto-Electron Adv. 5, 200082 (2022). https://doi.org/10.29026/oea.2022.200082. [CrossRef] [Google Scholar]
  36. Fu H, Wan Z, Li Y, et al., Experimental demonstration of deep-learning-enabled adaptive optics, Phys. Rev. Appl. 22, 034047 (2024). https://doi.org/10.1103/PhysRevApplied.22.034047. [NASA ADS] [CrossRef] [Google Scholar]
  37. Xu Z, Yang P, Hu K, et al., Deep learning control model for adaptive optics systems, Appl. Opt. 58, 1998 (2019). https://doi.org/10.1364/AO.58.001998. [NASA ADS] [CrossRef] [Google Scholar]
  38. Tyson RK, Principles of adaptive optics (Academic Press, 1991). https://doi.org/10.1016/B978-0-12-705900-6.X5001-0. [Google Scholar]
  39. Niu K, Tian C, Zernike Polynomials and their applications, J. Opt. 24(123001), 1 (2022). https://doi.org/10.1088/2040-8986/ac9e08. [Google Scholar]
  40. Lakshminarayanan V, Fleck A, Zernike polynomials: A guide, J. Mod. Opt. 58, 545 (2011). https://doi.org/10.1080/09500340.2011.554896. [Google Scholar]

Appendix

In this work, we characterize the input beam phase profile $\phi_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}})$ within the input pupil plane of the SRI wavefront sensor, where $x_{\mathrm{i}}$ and $y_{\mathrm{i}}$ are coordinates for the horizontal and vertical dimensions, respectively. The position of the ordered pair $(x_{\mathrm{i}}, y_{\mathrm{i}})$ is defined by its radial distance from the origin $\rho_{\mathrm{i}} = (x_{\mathrm{i}}^2 + y_{\mathrm{i}}^2)^{1/2}$ and azimuthal angle $\theta_{\mathrm{i}} = \arctan(y_{\mathrm{i}}/x_{\mathrm{i}})$, counterclockwise off the $+x_{\mathrm{i}}$-axis. The radial distance spans outward to three times the input beam’s radius $\omega$, giving $0 \le \rho_{\mathrm{i}} \le 3\omega$, and the azimuthal angle spans $0 \le \theta_{\mathrm{i}} < 2\pi$. The input beam phase profile can then be expanded in terms of orthogonal Zernike polynomials, $Z_n^m(\rho_{\mathrm{i}}/(3\omega), \theta_{\mathrm{i}})$, as [39]

$$\phi_{\mathrm{i}}(x_{\mathrm{i}}, y_{\mathrm{i}}) = \phi_{\mathrm{i}}(\rho_{\mathrm{i}}/(3\omega), \theta_{\mathrm{i}}) = Z_n^{|m|}(\rho_{\mathrm{i}}/(3\omega), \theta_{\mathrm{i}}) = \begin{cases} \Phi_n^m\, R_n^{|m|}(\rho_{\mathrm{i}}/(3\omega))\cos(m\theta_{\mathrm{i}}), & m \ge 0 \\ \Phi_n^m\, R_n^{|m|}(\rho_{\mathrm{i}}/(3\omega))\sin(|m|\theta_{\mathrm{i}}), & m < 0, \end{cases} \tag{A.1}$$

where $\Phi_n^m$ is a normalization factor, the non-negative integer index $n$ is the radial degree, the integer index $m$ is the azimuthal frequency, and the difference between $n$ and $|m|$ is even and greater than or equal to zero. These two integers define the Zernike radial polynomials according to [39]

$$R_n^{|m|}(\rho_{\mathrm{i}}/(3\omega)) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s\,(n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\left(\rho_{\mathrm{i}}/(3\omega)\right)^{n-2s}. \tag{A.2}$$
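For reference, a minimal implementation of the radial polynomial in equation (A.2); the Noll ordering and the normalization factors $\Phi_n^m$ are omitted, and the argument r stands for the normalized radius $\rho_{\mathrm{i}}/(3\omega)$.

```python
from math import factorial

# Zernike radial polynomial R_n^{|m|}(r) of equation (A.2), with r the radial
# coordinate normalized to the unit disc (rho_i/(3*omega) in the text).
# r may be a float or a NumPy array.
def zernike_radial(n, m, r):
    m = abs(m)
    if (n - m) % 2:                      # R_n^{|m|} vanishes when n - |m| is odd
        return 0.0 * r
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * r ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )
```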

Table A.1 lists these two integer indices with their associated Noll mode order J, as used in this work and elsewhere [18, 26], and OSA/ANSI mode order, as used elsewhere [40]. The table then lists the normalized Zernike polynomials with descriptors for the associated wavefront aberration and even/odd symmetry.

Table A.1

Zernike integer indices, mode orders, and polynomials, with wavefront aberration and symmetry.
