Jun 05, 2024
Company / Press Release
Osaka, Japan - Panasonic Holdings Corporation (Panasonic Holdings) has developed a novel camera calibration method that estimates accurate and robust camera angles from a fisheye image. These angles are essential for positioning and navigation in physical spaces.
Accurate estimation of travel direction, which aids positioning and navigation in physical spaces, enables self-driving for cars, drones, and robots. Dedicated measurement devices, such as gyroscopes, are generally attached to cameras for this purpose. However, reducing size, weight, and cost calls for a technology that accurately estimates travel direction from captured images alone. To address the difficulty of angle estimation caused by the lens distortion of fisheye cameras, Panasonic Holdings developed a method designed for applications such as wide-area surveillance and obstacle detection. Specifically, this accurate and robust method based on pose estimation can handle drastically distorted images under the so-called “Manhattan world assumption”: the assumption that buildings, roads, and other man-made objects are typically at right angles to each other. Because the method can be calibrated from a single ordinary image of a city scene, it extends to applications on moving bodies, such as cars, drones, and robots.
The technology will be presented at the main conference of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024, one of the top international conferences on AI and computer vision, as a research outcome of the REAL-AI*1 program for developing top human resources in the Panasonic Group. The conference will be held in Seattle, Washington, United States, from June 17 to 21, 2024.
Panasonic Holdings aims to contribute to customers' lives and work through research and development of AI technology, accelerating its social implementation and the training of top AI researchers.
Image-based camera angle estimation is desirable for accurate positioning and navigation using low-cost, miniaturized devices on cars, drones, and robots. However, city scene images containing many objects prevent calibration methods from keying in on the ground and vertical directions needed for camera angle estimation, so estimating camera angles from a single image is difficult even under the Manhattan world assumption. Conventional calibration methods estimate camera angles from many vanishing points (VPs), which are the specific appearances of points at infinity. There are six VP directions, at both ends of the X-, Y-, and Z-axes of the three-dimensional orthogonal frame. In addition to these VPs, the proposed method introduces eight more points, called auxiliary diagonal points (ADPs), directed at either 45° or −45° from the X-, Y-, and Z-axes. These ADPs improve the accuracy and robustness of camera angle estimation because they can be treated in the same way as VPs, increasing the information available for AI training.
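The six VP directions and eight ADP directions described above can be sketched as unit vectors on the sphere of directions. In this sketch the eight ADPs are read as the cube-diagonal directions of the Manhattan frame; this is an illustrative interpretation of the press release's description, not the paper's exact parameterization.

```python
import itertools
import numpy as np

# Six vanishing-point directions: both ends of the X-, Y-, and Z-axes.
vps = np.array([s * np.eye(3)[i] for i in range(3) for s in (1.0, -1.0)])

# Eight auxiliary diagonal points, read here as the cube-diagonal
# directions (+/-1, +/-1, +/-1) normalized onto the unit sphere.
# (Illustrative choice; the paper defines the exact construction.)
adps = np.array(list(itertools.product((1.0, -1.0), repeat=3)))
adps /= np.linalg.norm(adps, axis=1, keepdims=True)

print(len(vps), len(adps))  # 6 8
```

Together these give 14 well-separated keypoints for the network to localize, instead of only 6.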
Additionally, the proposed method uses heatmaps to robustly estimate VPs in scenes that contain few artificial objects, whereas conventional methods rely on arc detection. Heatmaps are widely used for accurate and robust estimation in computer vision tasks, especially pose estimation and skeletal detection. From these heatmaps, the proposed neural networks estimate the probability that each pixel is a VP and determine the VP image coordinates from the areas with high probability; by contrast, conventional methods regress the VP coordinates directly. As Figure 1 shows, the proposed heatmap-based method detects VPs and ADPs accurately. The final camera angles are then derived from combinations of the VPs, the ADPs, and the lens distortion, which is estimated by technology presented in 2022*2.
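The step from a heatmap to image coordinates can be illustrated with a minimal decoder that keeps only the high-probability area around the peak and takes a probability-weighted centroid. This is a generic heatmap-regression sketch under assumed names and thresholds, not Panasonic's implementation.

```python
import numpy as np

def heatmap_to_coords(heatmap, rel_thresh=0.5):
    """Recover sub-pixel (x, y) coordinates of a single keypoint.

    Keeps only the area whose probability exceeds a fraction of the
    peak, then takes a probability-weighted centroid over that area.
    """
    weights = np.where(heatmap >= rel_thresh * heatmap.max(), heatmap, 0.0)
    weights = weights / weights.sum()
    ys, xs = np.mgrid[0:heatmap.shape[0], 0:heatmap.shape[1]]
    return float((weights * xs).sum()), float((weights * ys).sum())

# Synthetic example: a Gaussian bump centered at (x=12, y=5).
ys, xs = np.mgrid[0:32, 0:32]
hm = np.exp(-((xs - 12) ** 2 + (ys - 5) ** 2) / 4.0)
print(heatmap_to_coords(hm))  # ≈ (12.0, 5.0)
```

Decoding from an area rather than a single argmax pixel is what gives sub-pixel accuracy and some robustness to noisy activations.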
Figure 2 shows qualitative results on synthetic fisheye images generated from panoramic image datasets. Vertical and horizontal reference lines are drawn to visualize rotation and distortion. The conventional method's reference lines are inclined against the ground-truth cross in Figure 2(a), revealing substantial angle and lens distortion errors. By contrast, the new method developed by Panasonic Holdings achieves accurate angle estimation, as shown by the alignment of the cyan line with the magenta and yellow lines in Figure 2(b).
The method can address problematic conditions in which city images contain few artificial objects; in particular, it accurately estimates angles even from city images dominated by street trees. Experimental results on large-scale datasets and with off-the-shelf cameras demonstrated the effectiveness of the proposed method at the world's highest accuracy*3.
By applying pose estimation networks, this new method can estimate angles accurately and robustly from a single image, even when the image is heavily distorted by a fisheye lens. Notably, the method will contribute to positioning and navigation applications for moving bodies, such as cars, drones, and robots, in which accurate travel directions must be estimated at low cost.
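To illustrate how camera angles could be recovered once VP and ADP directions are detected in the camera frame and matched to their known Manhattan-frame directions, a standard orthogonal Procrustes (Kabsch/SVD) fit solves for the rotation. This is a textbook sketch of that final step, not the paper's actual estimator.

```python
import numpy as np

def rotation_from_directions(world_dirs, cam_dirs):
    """Best-fit rotation R with cam_i ≈ R @ world_i (Kabsch/SVD).

    world_dirs, cam_dirs: (N, 3) arrays of matched unit directions,
    e.g. the known Manhattan-frame VP/ADP directions and the same
    directions as observed in the camera frame.
    """
    H = np.asarray(cam_dirs).T @ np.asarray(world_dirs)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Check: recover a known rotation (30-degree pan about the Y-axis).
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), 0.0, np.sin(a)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(a), 0.0, np.cos(a)]])
world = np.vstack([np.eye(3), -np.eye(3)])  # six VP directions
cam = world @ R_true.T                      # observed in the camera frame
R_est = rotation_from_directions(world, cam)
print(np.allclose(R_est, R_true))  # True
```

Pan, tilt, and roll angles then follow from the recovered rotation matrix by the usual Euler-angle decomposition.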
*1: An in-company program that develops top human resources to lead the Panasonic Group's advanced AI R&D, aiming at value creation and rapid business development with advanced technologies. It is supervised by two professors: Prof. Tadahiro Taniguchi, a professor at Kyoto University and visiting research professor at Ritsumeikan University, and Prof. Takayoshi Yamashita, a professor at Chubu University. Young and senior researchers take on top conferences, and many of their research papers have been accepted.
*2: N. Wakai, S. Sato, Y. Ishii, and T. Yamashita. Rethinking Generic Camera Models for Deep Single Image Camera Calibration to Recover Rotation and Fisheye Distortion. In Proceedings of the European Conference on Computer Vision, volume 13678, pages 679-698, 2022.
https://doi.org/10.1007/978-3-031-19797-0_39
*3: As of June 5, 2024, by in-company investigation, as camera calibration accuracy in methods using a general scene image to estimate pan, tilt, and roll angles in a Manhattan world.
“Deep Single Image Camera Calibration by Heatmap Regression to Recover Fisheye Images Under Manhattan World Assumption” https://arxiv.org/abs/2303.17166
This research was conducted by Dr. Nobuhiko Wakai (Platform Division, Panasonic Holdings) and Satoshi Sato and Yasunori Ishii (Technology Division, Panasonic Holdings), in association with Prof. Takayoshi Yamashita of Chubu University.
About the Panasonic Group
Founded in 1918, the Panasonic Group is today a global leader in developing innovative technologies and solutions for wide-ranging applications in the consumer electronics, housing, automotive, industry, communications, and energy sectors worldwide. The Group switched to an operating company system on April 1, 2022, with Panasonic Holdings Corporation serving as a holding company and eight companies positioned under its umbrella. The Group reported consolidated net sales of 8,496.4 billion yen for the year ended March 31, 2024. To learn more about the Panasonic Group, please visit: https://holdings.panasonic/global/