J. Eur. Opt. Society-Rapid Publ., Volume 19, Number 1, 2023 (topical issue EOSAM 2022)
Article Number: 20
Number of page(s): 5
DOI: https://doi.org/10.1051/jeos/2023017
Published online: 26 April 2023
Short Communication
Field evaluation of a novel holographic single-image depth reconstruction sensor
Universität Stuttgart, Institut für Technische Optik, Pfaffenwaldring 9, 70569 Stuttgart, Germany
^{*} Corresponding author: hartlieb@ito.uni-stuttgart.de
Received: 30 January 2023
Accepted: 11 April 2023
A camera-based single-image sensor is presented that is able to measure the distance of one or multiple object points (light emitters). The sensor consists of a camera whose lens is upgraded with a diffractive optical element (DOE). The DOE fulfils two tasks: adding a vortex point spread function (PSF) and replicating the vortex PSF to a predefined pattern of K spots. Both the shape and the rotation of the vortex PSF are sensitive to defocus. The sensor concept is presented and its capabilities are evaluated both on-axis and off-axis. The achieved standard deviation of the error ranges between 8.5 μm (on-axis) and 3.5 μm (off-axis) within a measurement range of 20 mm. However, as soon as calibration and measurement position no longer match, the accuracy is limited. An analysis of the effects responsible for this is also part of the publication.
Key words: Depth measurement / PSF modification / Diffractive optical element / Digital image correlation / Multipoint method / Single-shot 3D
As this paper was written for the topical issue of the EOSAM 2022 conference, part of it can also be found in the proceedings submitted for this conference (see [11] and https://jeos.edpsciences.org).
© The Author(s), published by EDP Sciences, 2023
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
When an object point is imaged onto a detector, its lateral position (x, y) can be detected rather easily by calculating the Center of Gravity (CoG) of the PSF, whereas the axial dimension (z), i.e. the distance of the object point, is lost. It can be reconstructed by evaluating the increasing diameter of the defocused PSF, but the achievable accuracy is poor. The accuracy of this reconstruction can be improved by increasing the object-space numerical aperture (NA) of the imaging system. However, a large NA limits the depth of field and, therefore, the measurement range. Another possibility to improve the depth reconstruction of object points is to modify the PSF of the imaging system. A classic way of PSF modification is, for example, a superimposed astigmatism created by two orthogonal cylindrical lenses [1, 2]. In the last two decades, mainly for applications in optical microscopy, other ways of PSF manipulation based on diffractive optical elements (DOE) were developed. The purpose is to modulate the phase of the light such that the shape of the PSF encodes the changing distance z. Popular examples are the corkscrew PSF (CS-PSF) [3], self-bending PSF (SB-PSF) [4], tetrapod PSF (TP-PSF) [5] and double-helix PSF (DH-PSF) [6]. The ratio between measurement range and accuracy reached by those techniques ranges between 280 and 560.
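The lateral CoG localization mentioned above can be sketched in a few lines of Python with NumPy (a minimal illustration on a synthetic spot, not the authors' implementation):

```python
import numpy as np

def center_of_gravity(img):
    """Intensity-weighted centroid (x, y) of a 2-D spot image, in pixels."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic symmetric spot centred at pixel (2, 2) of a 5x5 frame:
spot = np.zeros((5, 5))
spot[2, 2] = 4.0
spot[1, 2] = spot[3, 2] = spot[2, 1] = spot[2, 3] = 1.0
print(center_of_gravity(spot))  # (2.0, 2.0)
```

Because the centroid is an intensity-weighted average over many pixels, it reaches subpixel resolution; the axial coordinate, in contrast, needs the PSF engineering described next.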
In this contribution we address the question to what extent the formerly published single-image depth reconstruction method [10] can be used as a single-image 3D position sensor. To this end, field measurements are carried out to investigate its off-axis performance.
The objective of the single-image 3D position sensor is to achieve both good accuracy and a large measurement range. Two measures are therefore combined: a PSF modification method known from microscopy and a holographic replication technique called the multipoint method (MPM) [7], applied to a low-NA objective lens. The low NA ensures a large measurement range, and the combination of the multipoint method and the PSF modification is used to increase the accuracy of depth detection [11].
2 Principle and results
The MPM is a technique to improve the detection accuracy of a point light source imaged onto a sensor. The wave nature of light fundamentally limits this accuracy, since the position where a single photon impinges on the camera sensor can only be described statistically (photon noise). Hence, the more photons are collected, the more precisely the spot position can be localized. The number of photons that can be collected by each pixel is limited by the quantum-well capacity. It can be increased by temporal averaging; however, error contributors like discretization and fixed-pattern noise are not affected, and the temporal resolution is reduced. The idea of the MPM is to use spatial averaging for single points by replicating the spot to a pattern of copies using a DOE. If the object moves, all copies move by the same amount. By making the object point brighter (light emitter) and using the DOE to replicate the spot to N copies, the number of pixels carrying useful position information of the object is increased by a factor of N. By averaging the centers of all spot copies, the accuracy of subpixel localization can in theory be improved by a factor of $\sqrt{N}$. The MPM has already been successfully applied to improve the lateral position measurement accuracy in various applications, such as stereo 3D position measurement [8] and vibration measurement [9].
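The $\sqrt{N}$ gain of this spatial averaging can be checked with a small Monte-Carlo sketch (the noise level and trial count are arbitrary illustrative choices; N = 25 copies as in the experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1      # localization noise of a single spot (pixels), illustrative
N = 25           # number of holographic spot copies
trials = 20000

# Single-spot localization vs. averaging N independently noisy copies,
# mimicking the spatial averaging of the MPM:
single = rng.normal(0.0, sigma, size=trials)
averaged = rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1)

print(single.std() / averaged.std())  # ≈ sqrt(25) = 5
```

The sketch assumes independent, identically distributed noise on each copy; correlated error sources (e.g. fixed-pattern noise) would reduce the gain below $\sqrt{N}$.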
For the PSF modification we use the DH-PSF introduced by Baránek and Bouchal in [6]. The discrete spiral phase modulation (SPM) modifies the incoming light such that the transversal component of the resulting intensity profile consists of two helixes rotating around each other. In the image plane, this forms two spots that rotate around a common axis. The angle of rotation depends on the defocus of the object point. The MPM is used to replicate this DH-PSF to a predefined pattern, so that each copy consists of two rotating spots. If the object point is moved in z, all DH-PSF replications show the same angle of rotation. By averaging all measured angles, errors caused by photon noise and discretization are reduced; the accuracy of the measured rotation angle can therefore be improved theoretically by a factor equal to the square root of the number of replications. In Figure 1 the principle of the multipoint double-helix PSF is shown for four replications. The experimental setup consists of a linear stage (Walter Uhl GT6BO01) that is used to move a point light source (fibre-coupled laser, λ = 633 nm) in x, y and z. The camera system has a low-NA objective lens (f′ = 50 mm, NA = 0.0595) with a DOE mounted in front to perform the replication to N = 25 copies and to induce the vortex phase modulation. The distance between light source and objective lens is 234 mm and the depth measurement range is 20 mm.
Fig. 1 Combination of the MPM and the DH-PSF. The two rotating spots created by the SPM are replicated to four copies [10].
Simulations and experiments show that the two rotating spots created by the double-helix form a tail whose length grows with the angle of rotation. More information regarding the simulations can be found in [10]. This tail makes it difficult to evaluate the rotation angle using the CoGs of both spots. Therefore, we use cross correlation with a reference image stack. The reference images are acquired at K equidistant points in the measurement range of 20 mm. This measurement range is chosen for practical reasons, to avoid overlap of the increasing diameters of the PSF copies. It could be increased by a few more millimetres, until the decreasing signal-to-noise ratio becomes the limiting factor. A measurement is performed by positioning the light source inside the measurement range, acquiring an image (in the following called the measurement image) and cross-correlating it with the reference image stack. The peak of the resulting correlation energy curve marks the measurement result. To obtain the peak position, we fit a parabola to the ±15 points around the maximum value. In the following, the position where the reference image stack is acquired is called the calibration position.
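The sub-sample peak localization by a parabola fit around the correlation maximum can be sketched as follows (a hypothetical helper, not the authors' code; the ±15-point window follows the text, the synthetic energy curve is for illustration only):

```python
import numpy as np

def correlation_peak(energy, half_width=15):
    """Fit a parabola to the points around the maximum of a correlation
    energy curve and return its vertex as a fractional stack index."""
    k = int(np.argmax(energy))
    lo, hi = max(k - half_width, 0), min(k + half_width + 1, len(energy))
    x = np.arange(lo, hi)
    a, b, _ = np.polyfit(x, energy[lo:hi], 2)  # coefficients, highest degree first
    return -b / (2.0 * a)                      # vertex of a*x^2 + b*x + c

# Synthetic correlation energy curve with its true peak at index 1003.4:
idx = np.arange(2000)
energy = np.exp(-((idx - 1003.4) / 40.0) ** 2)
print(correlation_peak(energy))  # close to 1003.4
```

The vertex of the fitted parabola interpolates between the discrete calibration positions, so the depth resolution is not limited to the reference-stack spacing.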
In a former publication we have shown that the on-axis performance of the measurement principle is very good [10]. The mean standard deviation of three measurements was $\overline{\sigma}$ = 8.51 μm within a measurement range of 20 mm. This leads to a measurement range to accuracy ratio of 20 mm/8.51 μm = 2350, which is very high compared to other single-image measurement systems. However, that measurement result was acquired on the optical axis. In order to use the proposed method as a 3D position measurement sensor, its off-axis performance has to be evaluated.
This is done in two steps:

1. Field evaluation with no offset to the calibration position x_{i}.

2. Field evaluation with offset ∆x to the calibration position.
The scheme of the field measurements is illustrated in Figure 2. In the first step, the actual measurement is executed at the same position where the reference image stack is acquired; the difference is that this position is no longer on the optical axis. In the second step, the measurement position is moved away from the calibration position by ∆x. In object space, the calibration positions are selected at different distances from the optical axis: x_{1} = 5 mm, x_{2} = 10 mm, x_{3} = 15 mm, x_{4} = 20 mm and x_{5} = 25 mm.
Fig. 2 Field evaluation of the proposed sensor. The blue/white crosses at x_{0}, x_{1}, x_{2} mark the image positions where a reference image stack is acquired. The red cross marks the current measurement position. For ∆x′ = 0, calibration and measurement position are identical.
At each position, the linear stage is moved to K = 2000 equidistant points within the measurement range of 20 mm in order to acquire the corresponding reference image stack.
The results for all five field measurements are shown in Figure 3. The curves show the difference between the position measured by the MP-DH sensor and the linear stage position at 180 equidistant points within the measurement range. In order to show all measurements in one graph, they are separated by an offset equal to 0.1 mm times the index i of the field position x_{i}, so the measurement result of x_{0} is shown as the lowest graph and the one of x_{5} as the highest.
Fig. 3 Field measurements at calibration positions x_{0} = 0 mm, x_{1} = 5 mm, x_{2} = 10 mm, x_{3} = 15 mm, x_{4} = 20 mm and x_{5} = 25 mm with ∆x = 0 mm, meaning that the measurements are taken at the same positions as the calibration. To show all field measurements in one plot, the signals are shifted by an offset of 0.1 mm times the index i of x_{i}, so the measurements at field position x_{0} are shown as the lowest and at x_{5} as the highest plot.
The results show that the standard deviation becomes smaller the further away from the optical axis the measurement is performed. On the optical axis the standard deviation of the error is σ_{0} = 8.55 μm and at x_{5} = 25 mm it is σ_{5} = 3.54 μm. It is not yet clear why the standard deviation decreases with increasing field position. One reason could be the field-dependent change of the intensity distribution of the spots, making the pattern more unique for correlation. In Figure 4 one spot of the MP-DH-PSF cluster is shown at the field positions x_{0} to x_{5}. A field-dependent shift of the intensity distribution to one side is visible. This effect can be ascribed to vignetting of the objective lens and the NA of the fibre.
Fig. 4 Changing intensity distribution of one defocused DH-PSF spot depending on field position. Images (a) to (f) show one spot at calibration positions x_{0} to x_{5}. The spot is always at the same defocus position of z = 20 mm.
As previously stated, the second step is to examine the performance at different distances ∆x from a calibrated position. In this experiment we use the calibration position x_{3} = 15 mm. The offset distances are ∆x_{1} = 0.2 mm, ∆x_{2} = 0.4 mm, ∆x_{3} = 0.6 mm and ∆x_{4} = 1.0 mm. Each measurement consists of M = 180 equidistant points within the measurement range of 20 mm. The results are shown in Figure 5. As in the previous case (Fig. 3), for visualization the signals are separated by an offset equal to 0.5 mm times the index i of the distance ∆x_{i}, so the error signal of ∆x_{1} has an offset of 0.5 mm and that of ∆x_{4} an offset of 2.0 mm. Two things are conspicuous in these results. Firstly, with increasing distance ∆x_{i}, jumps appear in the error signal. The number of jumps seems to vary linearly with ∆x_{i} (∆x_{1} = 0.2 mm has one and ∆x_{4} = 1.0 mm has five jumps). Secondly, an almost linear trend is superimposed on the actual signal.
Fig. 5 Field measurements at calibration position x_{3} = 15 mm with different offsets ∆x_{1} = 0.2 mm, ∆x_{2} = 0.4 mm, ∆x_{3} = 0.6 mm and ∆x_{4} = 1.0 mm. Blue dotted lines show the curves with jumps removed and detrended.
These two effects clearly limit the accuracy of the method for 3D field measurements and are therefore analysed in more detail. In Figure 6 the 2D correlation energy distribution is plotted for ∆x_{4}. Each row of the image shows the correlation energy curve of one measurement image. It is generated by cross-correlating one measurement image (in total M = 180 images, plotted as y-axis) with the whole reference image stack (K = 2000 images, plotted as x-axis) and storing the maximum correlation energy of each correlation result (plotted as colormap). Ideally, the measurement and reference images are connected by a linear relationship (illustrated as a blue dotted line in Fig. 6), so that each correlation energy curve has only one peak marking the measurement result.
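The construction of such a 2D correlation energy map can be sketched as follows (a simplified sketch using normalized cross correlation on toy 1-D signals; the real evaluation uses M = 180 by K = 2000 camera frames, and the helper name is hypothetical):

```python
import numpy as np

def energy_map(measurements, references):
    """M x K map: entry (m, k) is the maximum normalized cross-correlation
    energy between measurement image m and reference image k (sketch)."""
    out = np.empty((len(measurements), len(references)))
    for m, meas in enumerate(measurements):
        a = (meas - meas.mean()) / (meas.std() * meas.size)
        for k, ref in enumerate(references):
            b = (ref - ref.mean()) / ref.std()
            out[m, k] = np.correlate(a.ravel(), b.ravel(), mode="full").max()
    return out

# Toy 1-D "reference stack": Gaussian spots of different widths, standing in
# for the defocus-dependent PSF shapes; the measurement matches reference 1.
x = np.arange(64, dtype=float)
refs = [np.exp(-((x - 32) / w) ** 2) for w in (3, 6, 12, 24)]
E = energy_map([refs[1]], refs)
print(E.round(3))  # row maximum at column 1, the matching reference
```

With the normalization above, a perfect match yields an energy of 1, and each row should have a single dominant peak; the multiple stripes in Figure 6 correspond to several competing maxima per row.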
Fig. 6 2D correlation energy distribution for ∆x_{4}. Each point (x, y) in the image represents the maximum correlation energy between measurement image #x and reference image #y. The blue dotted line illustrates a perfect linear relationship between measurement and reference. 
However, with increasing offset ∆x_{i}, multiple peaks appear, as can be seen in the magnified cross-section plot in Figure 6. When these peaks change their relative heights, a jump appears in the measurement result. The linear slope superimposed on the measurement results of Figure 5 can also be explained by the existence of those multiple peaks, since the stripes representing them in Figure 6 no longer have the same slope as the dotted blue line. If the jumps and the linear trend are removed manually, the resulting curve for each offset ∆x_{i} is shown as the blue dotted line in Figure 5.
3 Discussion
In this article, first field measurements of the proposed single-image 3D position sensor are presented. The results are both very promising and challenging. It is promising that field measurements at the calibrated positions achieve even better standard deviations than on the optical axis. On the other hand, as soon as the measurement leaves the calibration position, two effects that arise from the ambiguity of multiple correlation peaks currently limit the accuracy of the measurement results. However, the fact that not only the coarse distance of the light source (MP-DH-PSF reconstruction) but also its lateral position (CoG) is known should make it possible to handle those effects. Several measures can be taken to solve these problems, which will be analysed in detail in subsequent publications:

- Low-pass filtering of the correlation energy signal. This would remove the multiple peaks and, therefore, the jumps.
- Reduction of the calibration grid period.
- Use of a different PSF modification in combination with the MPM.
- Investigation of a simulation-based calibration of the sensor.
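The first of these measures, low-pass filtering of the correlation energy signal, could look like the following sketch (the moving-average width and the synthetic peaks are arbitrary illustrative choices, not tuned values):

```python
import numpy as np

def smooth_energy(energy, width=31):
    """Moving-average low-pass filter of a correlation energy curve."""
    return np.convolve(energy, np.ones(width) / width, mode="same")

# Broad true peak at index 1200 plus a slightly higher narrow spurious peak
# at index 600, mimicking the multiple-peak ambiguity:
k = np.arange(2000)
energy = (np.exp(-((k - 1200) / 60.0) ** 2)
          + 1.05 * np.exp(-((k - 600) / 3.0) ** 2))
print(int(np.argmax(energy)))                 # 600: the spurious peak wins
print(int(np.argmax(smooth_energy(energy))))  # 1200: spurious peak suppressed
```

Because the spurious peaks are much narrower than the true correlation peak, averaging strongly attenuates them while leaving the broad peak almost unchanged, removing the jump in the reconstructed depth.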
Furthermore, other reconstruction algorithms need to be analysed, such as neural network approaches. Another challenge of the proposed method is the large number of reference images and the computationally demanding task of cross correlation. However, better search algorithms can be used, so that instead of correlating against the whole reference stack, only a few correlations have to be performed in order to find the peak. Furthermore, the computation of the correlation can be accelerated considerably by processing it on the graphics card.
The advantages of the proposed sensor system are the cost-effective single-camera setup and the ability to retrofit it to existing applications. The application scope of this kind of measurement system includes small machines whose position is to be measured, like 3D printers and milling or turning machines. It is also possible to extend the measurement volume by increasing the distance between lens and light source. One has to keep in mind that in this case the NA becomes smaller and, thus, the depth resolution is reduced.
At the calibration positions, the ratio of measurement range to accuracy of the presented method is clearly above 2000. Comparable single-image 3D methods are astigmatism (below 400), tetrapod PSF (around 600) and time of flight (up to 1000). Details on these ratio values can be found in [10].
4 Summary
The presented single-image depth measurement system is based on the combination of a DH-PSF and a spatial replication method, both created by a phase-modulating DOE placed in front of the imaging lens.
In this article the accuracy of the measurement system is analysed both on the optical axis and in the field. The depth measurement range analysed in this article is 20 mm. On the optical axis, the standard deviation is 8.5 μm. When only a single DH-PSF is used for depth reconstruction (without MPM), the result is up to a factor of 3 worse. For the field measurements two scenarios are investigated. In the first, where calibration and measurement position match, the standard deviation of the error ranges between 8.3 μm and 3.5 μm, depending on the field position. In the second scenario, calibration and measurement position are separated by an offset. Here two effects are observed: a superimposed linear slope and jumps in the error signal. The origin of both is analysed. Following publications will focus on the compensation of these effects to realise a 3D calibration.
Conflict of interest
The authors declare no conflict of interest.
Funding
We thank the Deutsche Forschungsgemeinschaft for funding under the grant 279064222.
References
1. Li L., Kuang C., Luo D., Liu X. (2012) Axial nanodisplacement measurement based on astigmatism effect of crossed cylindrical lenses, Appl. Opt. 51, 13, 2379–2387.
2. Hsu W.Y., Yu Z.R., Chen P.J., Kuo C.H., Hwang C.H. (2011) Development of the micro displacement measurement system based on astigmatic method, in: IEEE International Instrumentation and Measurement Technology Conference, 10–12 May 2011, Hangzhou, China, pp. 1–4.
3. Lew M.D., Lee S.F., Badieirostami M., Moerner W.E. (2011) Corkscrew point spread function for far-field three-dimensional nanoscale localization of pointlike objects, Opt. Lett. 36, 2, 202–204.
4. Jia S., Vaughan J., Zhuang X. (2013) Isotropic 3D super resolution imaging with self-bending point spread function, Biophys. J. 104, 2, 668a.
5. Shechtman Y., Sahl S.J., Backer A.S., Moerner W.E. (2014) Optimal point spread function design for 3D imaging, Phys. Rev. Lett. 113, 13, 133902.
6. Baránek M., Bouchal Z. (2014) Optimizing the rotating point spread function by SLM aided spiral phase modulation, Proc. SPIE 9441, 161.
7. Haist T., Dong S., Arnold T., Gronle M., Osten W. (2014) Multi-image position detection, Opt. Exp. 22, 12, 14450–14463.
8. Hartlieb S., Tscherpel M., Guerra F., Haist T., Osten W., Ringkowski M., Sawodny O. (2021) Highly accurate imaging based position measurement using holographic point replication, Measurement 172, 108852.
9. Hartlieb S., Ringkowski M., Haist T., Sawodny O., Osten W. (2021) Multi-positional image-based vibration measurement by holographic image replication, Light. Adv. Manuf. 2, 1.
10. Hartlieb S., Schober C., Haist T., Reichelt S. (2022) Accurate single image depth detection using multiple rotating point spread functions, Opt. Exp. 30, 23035–23049.
11. Hartlieb S., Schober C., Haist T., Reichelt S. (2022) Holographic single-image depth reconstruction, EPJ Web Conf. 266, 10005.