3D Information Processing
- Generation of Accurate 3D Texture Model using LiDAR and 360 Camera
- Indoor and Outdoor Modeling using 2D Images
- 3D Model with Thermal Information using FIR camera
- Conversion to Visible Images from Multispectral Satellite Images using Deep Learning
- Floormap and 360 Video Alignment using Deep Learning
- Website and Real POI Matching using Images of Signboards
The 3D Information Processing team researches "artificial intelligence that can understand the environment." We use RGB images, thermal images, and other sensor data to automatically reconstruct spatial maps and to recognize scene context. Our work covers 3D modeling, change detection, scene recognition, and more, using core methods such as image processing and deep learning.
This study uses LiDAR (Light Detection and Ranging) and a 360 camera to generate accurate texture models of indoor spaces such as underground malls. SLAM, calibration, and segmentation are detailed topics within this research.
(Top: LiDAR point cloud; Middle: 360 camera image; Bottom: point cloud with color based on 360 camera image)
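The coloring step shown above can be sketched as follows. This is a minimal illustration assuming the point cloud is already expressed in the 360 camera's coordinate frame and the panorama is an equirectangular image; the calibration and SLAM stages of the actual pipeline are omitted:

```python
import numpy as np

def colorize_points(points, pano):
    """Assign a color to each 3D point by projecting it into an
    equirectangular 360 image (points given in the camera frame)."""
    h, w, _ = pano.shape
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    theta = np.arctan2(y, x)                     # azimuth in [-pi, pi]
    phi = np.arcsin(np.clip(z / r, -1.0, 1.0))   # elevation in [-pi/2, pi/2]
    # Map spherical angles to equirectangular pixel coordinates.
    u = ((theta / (2 * np.pi) + 0.5) * w).astype(int) % w
    v = ((0.5 - phi / np.pi) * h).astype(int).clip(0, h - 1)
    return pano[v, u]                            # (N, 3) colors

# Toy example: a uniformly red synthetic panorama and three points.
pano = np.zeros((4, 8, 3), dtype=np.uint8)
pano[:, :, 0] = 255
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
colors = colorize_points(pts, pano)
print(colors.shape)   # (3, 3)
```

Per-point colors obtained this way can then be attached to the LiDAR point cloud, as in the bottom image above.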
(Re)construction of indoor/outdoor 3D models using a mobile 360 camera is the main topic of this study. Standardization of the map format is also ongoing.
You can view the 3D model of the IB building of Nagoya University (70 MB).
- drag the mouse to pan the view
- click a camera pose to view the 360 camera image from that point (fly button to close)
- use the menu at the upper left to change the point size, etc.
In this study we focus on creating a 3D thermal map using an RGB camera and a thermal camera.
Thermal information is projected onto a 3D mesh model of the scene reconstructed with Structure from Motion (SfM) and Multi-View Stereo (MVS) to produce the resulting model.
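The projection step can be sketched as below, assuming a known thermal-camera intrinsic matrix and pose and ignoring occlusion handling; this is a simplified illustration, not the full pipeline:

```python
import numpy as np

def project_thermal(vertices, K, R, t, thermal):
    """Sample a thermal image at the projection of each mesh vertex.
    K: 3x3 intrinsics of the thermal camera, (R, t): its pose.
    Returns one temperature value per vertex (NaN if not visible)."""
    h, w = thermal.shape
    cam = R @ vertices.T + t[:, None]            # camera coordinates (3, N)
    pix = K @ cam
    with np.errstate(divide="ignore", invalid="ignore"):
        u = pix[0] / pix[2]                      # perspective division
        v = pix[1] / pix[2]
    temps = np.full(len(vertices), np.nan)
    ok = (cam[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    temps[ok] = thermal[v[ok].astype(int), u[ok].astype(int)]
    return temps

# Toy example: identity pose, one vertex in front of and one behind the camera.
K = np.array([[10.0, 0, 5], [0, 10.0, 5], [0, 0, 1]])
thermal = np.full((10, 10), 36.5)                # degrees Celsius
verts = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, -2.0]])
temps = project_thermal(verts, K, np.eye(3), np.zeros(3), thermal)
print(temps)   # [36.5  nan]
```

In practice, one value per vertex (or per texel) is aggregated over all thermal frames that observe it, with visibility checked against the mesh.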
We aim to create a satellite-based ground observation system that can be used regardless of cloud cover.
In general, ground observation relies on RGB images. However, visible light cannot pass through clouds, which degrades ground visibility.
We develop methods to estimate RGB images from images captured at longer wavelengths (near IR, far IR, SAR), for better human readability.
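Our method is based on deep learning, but the band-to-RGB estimation idea can be illustrated with a much simpler per-pixel baseline: fit a linear map from longer-wavelength bands to RGB on paired training pixels. The data here is synthetic and the linear model is only an illustrative stand-in for the actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired training data: 1000 pixels with 4 input bands
# (e.g. NIR/SWIR channels) and their corresponding RGB values.
bands = rng.random((1000, 4))
true_w = rng.random((4, 3))
rgb = bands @ true_w                     # synthetic ground-truth RGB

# Least-squares fit of a per-pixel linear band-to-RGB map.
w, *_ = np.linalg.lstsq(bands, rgb, rcond=None)

# Estimate RGB for unseen multispectral pixels.
test_bands = rng.random((5, 4))
pred = test_bands @ w
err = np.abs(pred - test_bands @ true_w).max()
print(err < 1e-6)   # True
```

A learned model replaces the linear map to capture the nonlinear, spatially varying relationship between the bands.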
In this study, we develop a neural-network-based method that estimates the relationship between a 360 video and the floormap of a building. By connecting multiple resources, models of large buildings (stations, universities, ...) can be generated.
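The correspondence itself is learned with a neural network, but the final geometric alignment step can be illustrated with a classical 2D similarity fit: given hypothetical correspondences between camera positions along the 360 video and points on the floormap, estimate scale, rotation, and translation (Umeyama's method). All data here is illustrative:

```python
import numpy as np

def align_2d(src, dst):
    """Estimate a 2D similarity transform (scale s, rotation R,
    translation t) mapping src points onto dst (Umeyama's method)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U @ Vt) < 0:        # guard against reflections
        S[1, 1] = -1
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy example: a trajectory rotated 90 degrees, scaled by 2, shifted.
src = np.array([[0.0, 0], [1, 0], [1, 1], [0, 1]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 2.0 * src @ R_true.T + np.array([3.0, 4.0])
s, R, t = align_2d(src, dst)
print(round(s, 6))   # 2.0
```

Once such a transform is known for each video segment, trajectories from multiple recordings can be placed on a common floormap and stitched together.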
We develop methods for matching images on the web to real stores, toward an automatic landmark recognition system.
To improve the detection accuracy of signboards, we research recursive annotation.