Open Access Review
J. Eur. Opt. Society-Rapid Publ., Volume 20, Number 1, 2024
Article Number 18, 18 pages
DOI: https://doi.org/10.1051/jeos/2024018
Published online: 03 May 2024
1. Guo J.W., He Y.S., Qi X.Z., Wu G., Hu Y., Li B., Zhang J.W. (2019) Real-time measurement and estimation of the 3D geometry and motion parameters for spatially unknown moving targets, Aerosp. Sci. Technol. 97, 105619.
2. Xu D.F., Zhu Y.K., Choy C.B., Li F.F. (2017) Scene graph generation by iterative message passing, in: IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 21–26 July.
3. Liu A., Makadia A., Tucker R., Snavely N., Jampani V., Kanazawa A. (2021) Infinite nature: Perpetual view generation of natural scenes from a single image, in: International Conference on Computer Vision, Montreal, Canada, 10–17 October.
4. Fuller A., Fan Z., Day C., Barlow C. (2020) Digital twin: Enabling technologies, challenges and open research, IEEE Access 8, 108952–108971.
5. Tao F., Zhang H., Liu H., Nee A.Y.C. (2019) Digital twin in industry: State-of-the-art, IEEE Trans. Ind. Inform. 15, 2405–2415.
6. Vuković M., Mazzei D., Chessa S., Fantoni G. (2021) Digital twins in industrial IoT: A survey of the state of the art and of relevant standards, in: IEEE International Conference on Communications Workshops, Montreal, Canada, 14–23 June.
7. Weidlich D., Zickner H., Riedel T., Böhm A. (2009) Real 3D geometry and motion data as a basis for virtual design and testing, in: CIRP Design Conference, Cranfield University, 30–31 March.
8. Richter S.R., Alhaija H.A., Koltun V. (2023) Enhancing photorealism enhancement, IEEE Trans. Pattern Anal. Mach. Intell. 45, 1700–1715.
9. Xue Y., Li Y., Singh K.K., Lee Y.J. (2022) GIRAFFE HD: A high-resolution 3D-aware generative model, in: IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 18–24 June.
10. Tan S., Wong K., Wang S., Manivasagam S., Ren M., Urtasun R. (2021) SceneGen: Learning to generate realistic traffic scenes, in: IEEE Conference on Computer Vision and Pattern Recognition, Nashville, USA, 20–25 June.
11. Fan Y., Lin Z., Saito J., Wang W., Komura T. (2022) FaceFormer: Speech-driven 3D facial animation with transformers, in: IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 18–24 June.
12. Wang J.K., Pun A., Tu J., Manivasagam S., Sadat A., Casas S., Ren M. (2021) AdvSim: Generating safety-critical scenarios for self-driving vehicles, in: IEEE Conference on Computer Vision and Pattern Recognition, Nashville, USA, 20–25 June.
13. Mi L., Zhao H., Nash C., Jin X.H., Gao J.Y., Sun C., Schmid C. (2021) HDMapGen: A hierarchical graph generative model of high definition maps, in: IEEE Conference on Computer Vision and Pattern Recognition, Nashville, USA, 20–25 June.
14. Luo C.Y., Yang X.D., Yuille A. (2021) Self-supervised pillar motion learning for autonomous driving, in: IEEE Conference on Computer Vision and Pattern Recognition, Nashville, USA, 20–25 June.
15. Iwashita S., Murase Y., Yasukawa Y., Kanda S., Sawasaki N., Asada T. (2005) Developing a service robot, in: IEEE International Conference on Mechatronics and Automation, Niagara Falls, Canada, 29 July–01 August.
16. Luo Z., Xue W., Chae J., Fu G. (2022) SKP: Semantic 3D keypoint detection for category-level robotic manipulation, IEEE Robot. Autom. Lett. 7, 5437–5444.
17. Zhou Z., Li L., Fürsterling A., Durocher H.J., Mouridsen J., Zhang X. (2022) Learning-based object detection and localization for a mobile robot manipulator in SME production, Robot. Comput.-Integr. Manuf. 73, 102229.
18. Jiang S., Yao W., Wong M.S., Hang M., Hong Z., Kim E.J., Joo S.H., Kuc T.Y. (2019) Automatic elevator button localization using a combined detecting and tracking framework for multi-story navigation, IEEE Access 8, 1118–1134.
19. Xiang L., Gai J., Bao Y., Yu J., Schnable P.S., Tang L. (2023) Field-based robotic leaf angle detection and characterization of maize plants using stereo vision and deep convolutional neural networks, J. Field Robot. 40, 1034–1053.
20. Montoya Angulo A., Pari Pinto L., Sulla Espinoza E., Silva Vidal Y., Supo Colquehuanca E. (2022) Assisted operation of a robotic arm based on stereo vision for positioning near an explosive device, Robotics 11, 100.
21. Vizzo I., Mersch B., Marcuzzi R., Wiesmann L., Behley J., Stachniss C. (2022) Make it dense: Self-supervised geometric scan completion of sparse 3D lidar scans in large outdoor environments, IEEE Robot. Autom. Lett. 7, 8534–8541.
22. Jiang S., Hong Z. (2023) Unexpected dynamic obstacle monocular detection in the driver view, IEEE Intell. Transp. Syst. Mag. 15, 68–81.
23. Weerakoon K., Sathyamoorthy A.J., Patel U., Manocha D. (2022) TERP: Reliable planning in uneven outdoor environments using deep reinforcement learning, in: International Conference on Robotics and Automation (ICRA), Philadelphia, USA, 23–27 May.
24. Duan R., Paudel D.P., Fu C., Lu P. (2022) Stereo orientation prior for UAV robust and accurate visual odometry, IEEE/ASME Trans. Mechatron. 27, 3440–3450.
25. Ding C., Dai Y., Feng X., Zhou Y., Li Q. (2023) Stereo vision SLAM-based 3D reconstruction on UAV development platforms, J. Electron. Imaging 32, 013041.
26. Sumetheeprasit B., Rosales Martinez R., Paul H., Ladig R., Shimonomura K. (2023) Variable baseline and flexible configuration stereo vision using two aerial robots, Sensors 23, 1134.
27. Petrakis G., Antonopoulos A., Tripolitsiotis A., Trigkakis D., Partsinevelos P. (2023) Precision mapping through the stereo vision and geometric transformations in unknown environments, Earth Sci. Inform. 16, 1849–1865.
28. Xie J.Y., You X.Q., Huang Y.Q., Ni Z.R., Wang X.C., Li X.R., Yang C.Y. (2020) 3D-printed integrative probeheads for magnetic resonance, Nat. Commun. 11, 5793.
29. Pang S., Morris D., Radha H. (2022) Fast-CLOCs: Fast camera-LiDAR object candidates fusion for 3D object detection, in: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, USA, 03–08 January.
30. Downs L., Francis A., Koenig N., Kinman B., Hickman R., Reymann K., McHugh T.B., Vanhoucke V. (2022) Google scanned objects: A high-quality dataset of 3D scanned household items, in: International Conference on Robotics and Automation (ICRA), Philadelphia, USA, 23–27 May.
31. Pirone D., Sirico D., Miccio L., Bianco V., Mugnano M., Ferraro P., Memmolo P. (2022) Speeding up reconstruction of 3D tomograms in holographic flow cytometry via deep learning, Lab Chip 22, 793–804.
32. Jiang S., Tarabalka Y., Yao W., Hong Z., Feng G. (2023) Space-to-speed architecture supporting acceleration on VHR image processing, ISPRS J. Photogramm. Remote Sens. 198, 30–44.
33. Mur-Artal R., Montiel J., Tardós J. (2015) ORB-SLAM: A versatile and accurate monocular SLAM system, IEEE Trans. Robot. 31, 1147–1163.
34. Rosinol A., Leonard J., Carlone L. (2023) NeRF-SLAM: Real-time dense monocular SLAM with neural radiance fields, in: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, USA, 01–05 October.
35. Luo K., Yang G., Xian W., Haraldsson H., Hariharan B., Belongie S. (2021) Stay positive: Non-negative image synthesis for augmented reality, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, USA, 20–25 June.
36. Charles R.Q., Su H., Kaichun M., Guibas L.J. (2017) PointNet: Deep learning on point sets for 3D classification and segmentation, in: IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 21–26 July.
37. Charles R.Q., Yi L., Su H., Guibas L.J. (2017) PointNet++: Deep hierarchical feature learning on point sets in a metric space, in: International Conference on Neural Information Processing Systems, Long Beach, USA, 4–9 December.
38. Fan H., Su H., Guibas L. (2017) A point set generation network for 3D object reconstruction from a single image, in: IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 21–26 July.
39. Nie Y., Hou J., Han X.G., Nießner M. (2021) RfD-Net: Point scene understanding by semantic instance reconstruction, in: IEEE Conference on Computer Vision and Pattern Recognition, Nashville, USA, 20–25 June.
40. Lu Q., Xiao M., Lu Y., Yuan X.H., Yu Y. (2019) Attention-based dense point cloud reconstruction from a single image, IEEE Access 7, 137420–137431.
41. Luo S., Hu W. (2021) Diffusion probabilistic models for 3D point cloud generation, in: IEEE Conference on Computer Vision and Pattern Recognition, Nashville, USA, 20–25 June.
42. Wu Z.R., Song S.R., Khosla A., Yu F., Zhang L.G., Tang X.O., Xiao J.X. (2015) 3D ShapeNets: A deep representation for volumetric shapes, in: IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 07–12 June.
43. Choy C.B., Xu D.F., Gwak J.Y., Chen K., Savarese S. (2016) 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction, in: European Conference on Computer Vision, Amsterdam, Netherlands, 11–14 October.
44. Wu J.J., Zhang C.K., Zhang X.M., Zhang Z.T., Freeman W.T., Tenenbaum J.B. (2018) Learning shape priors for single-view 3D completion and reconstruction, in: European Conference on Computer Vision, Munich, Germany, 8–14 September.
45. Kanazawa A., Tulsiani S., Efros A.A., Malik J. (2018) Learning category-specific mesh reconstruction from image collections, in: European Conference on Computer Vision, Munich, Germany, 8–14 September.
46. Wang N.Y., Zhang Y.D., Li Z.W., Fu Y.W., Liu W., Jiang Y.G. (2018) Pixel2Mesh: Generating 3D mesh models from single RGB images, in: European Conference on Computer Vision, Munich, Germany, 8–14 September.
47. Wen C., Zhang Y.D., Li Z.W., Fu Y.W. (2019) Pixel2Mesh++: Multi-view 3D mesh generation via deformation, in: IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–02 November.
48. Mescheder L., Oechsle M., Niemeyer M., Nowozin S., Geiger A. (2019) Occupancy networks: Learning 3D reconstruction in function space, in: IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 16–20 June.
49. Park J.J., Florence P., Straub J., Newcombe R., Lovegrove S. (2019) DeepSDF: Learning continuous signed distance functions for shape representation, in: IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 16–20 June.
50. Mildenhall B., Srinivasan P.P., Tancik M., Barron J.T., Ramamoorthi R., Ng R. (2020) NeRF: Representing scenes as neural radiance fields for view synthesis, in: European Conference on Computer Vision, Glasgow, UK, 23–28 August.
51. Moravec H.P. (1981) Rover visual obstacle avoidance, in: International Joint Conference on Artificial Intelligence, Vancouver, Canada, 24–28 August.
52. Harris C., Stephens M. (1988) A combined corner and edge detector, in: Alvey Vision Conference, Manchester, UK, 31 August–2 September.
53. Harris C. (1993) Geometry from visual motion, in: Active Vision, pp. 263–284.
54. Lowe D.G. (1999) Object recognition from local scale-invariant features, in: IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September.
55. Mikolajczyk K., Schmid C. (2001) Indexing based on scale invariant interest points, in: IEEE International Conference on Computer Vision, Vancouver, Canada, 7–14 July.
56. Brown M., Lowe D. (2002) Invariant features from interest point groups, in: British Machine Vision Conference, Cardiff, UK, 2–5 September.
57. Lowe D.G. (2004) Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60, 91–110.
58. Bay H., Ess A., Tuytelaars T., Van Gool L. (2006) SURF: Speeded up robust features, in: European Conference on Computer Vision, Graz, Austria, 7–13 May.
59. Rosten E., Drummond T. (2006) Machine learning for high-speed corner detection, in: European Conference on Computer Vision, Graz, Austria, 7–13 May.
60. Rublee E., Rabaud V., Konolige K., Bradski G. (2011) ORB: An efficient alternative to SIFT or SURF, in: International Conference on Computer Vision, Barcelona, Spain, 06–13 November.
61. Cruz-Mota J., Bogdanova I., Paquier B., Bierlaire M., Thiran J. (2012) Scale invariant feature transform on the sphere: Theory and applications, Int. J. Comput. Vis. 98, 217–241.
62. Lakshmi K.D., Vaithiyanathan V. (2016) Image registration techniques based on the scale invariant feature transform, IETE Tech. Rev. 34, 22–29.
63. Al-khafaji S.L., Zhou J., Zia A., Liew A.W. (2018) Spectral-spatial scale invariant feature transform for hyperspectral images, IEEE Trans. Image Process. 27, 837–850.
64. Li M.J., Yuan X.C. (2021) FD-TR: Feature detector based on scale invariant feature transform and bidirectional feature regionalization for digital image watermarking, Multimed. Tools Appl. 80, 32197–32217.
65. Andrade N., Faria F., Cappabianco F. (2018) A practical review on medical image registration: From rigid to deep learning based approaches, in: SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Parana, Brazil, 29 October–01 November.
66. Sedghi A., O’Donnell L.J., Kapur T., Learned-Miller E., Mousavi P., Wells W.M. III (2021) Image registration: Maximum likelihood, minimum entropy and deep learning, Med. Image Anal. 69, 101939.
67. Yu K., Ma J., Hu F.Y., Ma T., Quan S.W., Fang B. (2019) A grayscale weight with window algorithm for infrared and visible image registration, Infrared Phys. Technol. 99, 178–186.
68. Ruppert G.S.R., Favretto F., Falcão A.X., Yasuda C. (2010) Fast and accurate image registration using the multiscale parametric space and grayscale watershed transform, in: International Conference on Systems, Signals and Image Processing, Rio de Janeiro, Brazil, 17–19 June.
69. Mei X., Sun X., Zhou M., Jiao S., Wang H., Zhang X.P. (2011) On building an accurate stereo matching system on graphics hardware, in: IEEE International Conference on Computer Vision Workshops, Barcelona, Spain, 6–13 November.
70. Bleyer M., Rhemann C., Rother C. (2011) PatchMatch stereo: Stereo matching with slanted support windows, in: British Machine Vision Conference, Dundee, UK, 29 August–2 September.
71. Han X.F., Leung T., Jia Y.Q., Sukthankar R., Berg A.C. (2015) MatchNet: Unifying feature and metric learning for patch-based matching, in: IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 07–12 June.
72. Barron J.T., Adams A., Shih Y., Hernández C. (2015) Fast bilateral-space stereo for synthetic defocus, in: IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 07–12 June.
73. Barron J.T., Poole B. (2016) The fast bilateral solver, in: European Conference on Computer Vision, Amsterdam, Netherlands, 11–14 October.
74. Žbontar J., LeCun Y. (2015) Computing the stereo matching cost with a convolutional neural network, in: IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 07–12 June.
75. Chen Z.Y., Sun X., Wang Y., Yu Y.N., Huang C. (2015) A deep visual correspondence embedding model for stereo matching costs, in: IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December.
76. Žbontar J., LeCun Y. (2016) Stereo matching by training a convolutional neural network to compare image patches, J. Mach. Learn. Res. 17, 2287–2318.
77. Ye X.Q., Li J.M., Wang H., Huang H.X., Zhang X.L. (2017) Efficient stereo matching leveraging deep local and context information, IEEE Access 5, 18745–18755.
78. Zhang F.H., Prisacariu V., Yang R.G., Torr P.H.S. (2019) GA-Net: Guided aggregation net for end-to-end stereo matching, in: IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 16–20 June.
79. Zhang J.W., Wang X., Bai X., Wang C., Huang L., Chen Y.M., Gu L. (2022) Revisiting domain generalized stereo matching networks from a feature consistency perspective, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 18–24 June.
80. Moulon P., Monasse P., Marlet R. (2013) Global fusion of relative motions for robust, accurate and scalable structure from motion, in: IEEE International Conference on Computer Vision, Sydney, Australia, 01–08 December.
81. Heller J., Havlena M., Jancosek M., Torii A., Pajdla T. (2015) 3D reconstruction from photographs by CMP SfM web service, in: IAPR International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 18–22 May.
82. Schönberger J.L., Frahm J.M. (2016) Structure-from-motion revisited, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 27–30 June.
83. Cui H., Gao X., Shen S., Hu Z. (2017) HSfM: Hybrid structure-from-motion, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 21–26 July.
84. Yin H.Y., Yu H.Y. (2020) Incremental SFM 3D reconstruction based on monocular, in: International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 12–13 December.
85. Wang Y.X., Lu Y.W., Xie Z.H., Lu G.Y. (2021) Deep unsupervised 3D SfM face reconstruction based on massive landmark bundle adjustment, in: ACM International Conference on Multimedia, New York, USA, 20–24 October.
86. Seitz S.M., Curless B., Diebel J., Scharstein D., Szeliski R. (2006) A comparison and evaluation of multi-view stereo reconstruction algorithms, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, USA, 17–22 June.
87. Sinha S., Mordohai P., Pollefeys M. (2007) Multi-view stereo via graph cuts on the dual of an adaptive tetrahedral mesh, in: IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October.
88. Lin X.B., Wang J.X., Lin C. (2020) Research on 3D reconstruction in binocular stereo vision based on feature point matching method, in: International Conference on Information Systems and Computer Aided Education (ICISCAE), Dalian, China, 27–29 September.
89. Lindenberger P., Sarlin P.E., Larsson V., Pollefeys M. (2021) Pixel-perfect structure-from-motion with featuremetric refinement, in: IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada, 10–17 October.
90. Zhou L., Zhang Z., Jiang H., Sun H., Bao H., Zhang G. (2021) DP-MVS: Detail preserving multi-view surface reconstruction of large-scale scenes, Remote Sens. 13, 4569.
91. Eigen D., Puhrsch C., Fergus R. (2014) Depth map prediction from a single image using a multi-scale deep network, in: International Conference on Neural Information Processing Systems, Montreal, Canada, 8–13 December.
92. Eigen D., Fergus R. (2015) Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture, in: IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December.
93. Crispell D., Bazik M. (2017) Pix2face: Direct 3D face model estimation, in: IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October.
94. Yao Y., Luo Z., Li S., Fang T., Quan L. (2018) MVSNet: Depth inference for unstructured multi-view stereo, in: European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September.
95. Yao Y., Luo Z., Li S., Shen T., Fang T., Quan L. (2019) Recurrent MVSNet for high-resolution multi-view stereo depth inference, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 16–20 June.
96. Chen R., Han S., Xu J., Su H. (2019) Point-based multi-view stereo network, in: IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–02 November.
97. Zhang J., Yao Y., Li S., Luo Z., Fang T. (2020) Visibility-aware multi-view stereo network, in: British Machine Vision Conference (BMVC), virtual, 7–10 September.
98. Wei Z., Zhu Q., Min M., Chen Y., Wang G. (2021) AA-RMVSNet: Adaptive aggregation recurrent multi-view stereo network, in: IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada, 10–17 October.
99. Peng P., Wang R., Wang Z., Lai Y., Wang R. (2022) Rethinking depth estimation for multi-view stereo: A unified representation, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, USA, 18–24 June.
100. Yen-Chen L., Florence P., Barron J., Rodriguez A., Isola P., Lin T. (2021) iNeRF: Inverting neural radiance fields for pose estimation, in: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–01 October.
101. Ma L., Li X., Liao J., Zhang Q., Wang X., Wang J., Sander P. (2022) Deblur-NeRF: Neural radiance fields from blurry images, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, USA, 18–24 June.
102. Xu Q., Xu Z., Philip J., Bi S., Shu Z., Sunkavalli K., Neumann U. (2022) Point-NeRF: Point-based neural radiance fields, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, USA, 18–24 June.
103. Jiang Y., Hedman P., Mildenhall B., Xu D., Barron J., Wang Z., Xue T. (2023) AligNeRF: High-fidelity neural radiance fields via alignment-aware training, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, Canada, 18–22 June.
104. Xu L., Xiangli Y., Peng S., Pan X., Zhao N., Theobalt C., Dai B., et al. (2023) Grid-guided neural radiance fields for large urban scenes, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, Canada, 18–22 June.
105. Stucker C., Schindler K. (2020) ResDepth: Learned residual stereo reconstruction, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, USA, 14–19 June.
106. He K., Zhang X., Ren S., Sun J. (2016) Deep residual learning for image recognition, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 27–30 June.
107. Peng S.D., Zhang Y.Q., Xu Y.H., Wang Q.Q., Shuai Q., Bao H.J., Zhou X.W. (2021) Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans, in: IEEE Conference on Computer Vision and Pattern Recognition Workshops, Nashville, USA, 19–25 June.
108. Choe J., Im S., Rameau F., Kang M., Kweon I.S. (2021) VolumeFusion: Deep depth fusion for 3D scene reconstruction, in: IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada, 10–17 October.
109. Wang D., Cui X.R., Chen X., Zou Z.X., Shi T.Y., Salcudean S., Wang Z.J. (2021) Multi-view 3D reconstruction with transformers, in: IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada, 10–17 October.
110. Huang Y.H., He Y., Yuan Y.J., Lai Y.K., Gao L. (2022) StylizedNeRF: Consistent 3D scene stylization as stylized NeRF via 2D–3D mutual learning, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 18–24 June.
111. Geiger A., Lenz P., Urtasun R. (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite, in: IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 16–21 June.
112. Geiger A., Lenz P., Stiller C., Urtasun R. (2013) Vision meets robotics: The KITTI dataset, Int. J. Robot. Res. 32, 1231–1237.
113. Menze M., Geiger A. (2015) Object scene flow for autonomous vehicles, in: IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 07–12 June.
114. Jensen R.R., Dahl A., Vogiatzis G., Tola E., Aanæs H. (2014) Large scale multi-view stereopsis evaluation, in: IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 23–28 June.
115. Aanæs H., Jensen R.R., Vogiatzis G., Tola E., Dahl A.B. (2016) Large-scale data for multiple-view stereopsis, Int. J. Comput. Vis. 120, 153–168.
116. Chang A.X., Funkhouser T., Guibas L., Hanrahan P., Huang Q.X., Li Z.M., Savarese S. (2015) ShapeNet: An information-rich 3D model repository, arXiv preprint, https://doi.org/10.48550/arXiv.1512.03012.
117. Yi L., Kim V.G., Ceylan D., Shen I., Yan M.Y., Su H., Lu C. (2016) A scalable active framework for region annotation in 3D shape collections, ACM Trans. Graph. 35, 1–12.
118. Dai A., Chang A.X., Savva M., Halber M., Funkhouser T., Nießner M. (2017) ScanNet: Richly-annotated 3D reconstructions of indoor scenes, in: IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 21–26 July.
119. Knapitsch A., Park J., Zhou Q.Y., Koltun V. (2017) Tanks and temples: Benchmarking large-scale scene reconstruction, ACM Trans. Graph. 36, 1–13.
120. Schöps T., Schönberger J.L., Galliani S., Sattler T., Schindler K., Pollefeys M., Geiger A. (2017) A multi-view stereo benchmark with high-resolution images and multi-camera videos, in: IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 21–26 July.
121. Huang X.Y., Cheng X.J., Geng Q.C., Cao B.B., Zhou D.F., Wang P., Lin Y.Q. (2018) The ApolloScape dataset for autonomous driving, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, USA, 18–22 June.
122. Huang X.Y., Wang P., Cheng X.J., Zhou D.F., Geng Q.C., Yang R.G. (2020) The ApolloScape open dataset for autonomous driving and its application, IEEE Trans. Pattern Anal. Mach. Intell. 42, 2702–2719.
123. Behley J., Garbade M., Milioto A., Quenzel J., Behnke S., Stachniss C., Gall J. (2019) SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences, in: IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–02 November.
124. Behley J., Garbade M., Milioto A., Quenzel J., Behnke S., Gall J., Stachniss C. (2021) Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI dataset, Int. J. Robot. Res. 40, 959–967.
125. Yao Y., Luo Z.X., Li S.W., Zhang J.Y., Ren Y.F., Zhou L., Fang T. (2020) BlendedMVS: A large-scale dataset for generalized multi-view stereo networks, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 13–19 June.
126. Yu F., Chen H.F., Wang X., Xian W.Q., Chen Y.Y., Liu F.C., Madhavan V. (2020) BDD100K: A diverse driving dataset for heterogeneous multitask learning, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 13–19 June.
127. Caesar H., Bankiti V., Lang A.H., Vora S., Liong V.E., Xu Q., Krishnan A., Pan Y., Baldan G., Beijbom O. (2020) nuScenes: A multimodal dataset for autonomous driving, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 13–19 June.
128. Zhang R., Isola P., Efros A.A., Shechtman E., Wang O. (2018) The unreasonable effectiveness of deep features as a perceptual metric, in: IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 18–23 June.
129. Rubner Y., Tomasi C., Guibas L.J. (2000) The earth mover’s distance as a metric for image retrieval, Int. J. Comput. Vis. 40, 99–121.
130. Zhang C., Cai Y.J., Lin G.S., Shen C.H. (2020) DeepEMD: Few-shot image classification with differentiable earth mover’s distance and structured classifiers, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, 13–19 June.
131. Achlioptas P., Diamanti O., Mitliagkas I., Guibas L. (2018) Learning representations and generative models for 3D point clouds, in: International Conference on Machine Learning, Stockholm, Sweden, 10–15 July.
132. Wen C., Yu B.S., Tao D.C. (2021) Learning progressive point embeddings for 3D point cloud generation, in: IEEE Conference on Computer Vision and Pattern Recognition Workshops, Nashville, USA, 19–25 June.
133. Zhang C., Cai Y.J., Lin G.S., Shen C.H. (2023) DeepEMD: Differentiable earth mover’s distance for few-shot learning, IEEE Trans. Pattern Anal. Mach. Intell. 45, 5632–5648.
