3D Information Processing


The 3D Information Processing team researches artificial intelligence that can understand its environment. Using RGB images, thermal images, and other sensor data, we work toward automatic reconstruction of spatial maps, recognition of scene context, and related tasks. Our projects include 3D modeling, change detection, and scene recognition, built on core methods such as image processing and deep learning.

Generation of Accurate 3D Texture Model using LiDAR and 360 Camera

Texture model

This study uses LiDAR (Light Detection and Ranging) and a 360 camera to generate accurate texture models of indoor spaces such as underground malls. SLAM, calibration, and segmentation are detailed topics within this research.

(Top: LiDAR point cloud; Middle: 360 camera image; Bottom: point cloud with color based on 360 camera image)


360 camera

Colored point cloud
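The coloring step shown above can be sketched as follows: each LiDAR point is converted to spherical angles (azimuth and elevation) and looked up in the equirectangular 360 image. This is a minimal illustration only; the function name, coordinate convention, and the assumption that the camera sits at the LiDAR origin are ours, not the team's actual pipeline.

```python
import numpy as np

def colorize_points(points, image):
    """Assign RGB colors to LiDAR points by projecting them onto an
    equirectangular 360 image (camera assumed at the origin, z up,
    x forward -- an illustrative convention).
    points: (N, 3) array in the camera frame.
    image:  (H, W, 3) equirectangular panorama.
    Returns an (N, 6) array of xyz + rgb."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    # Azimuth and elevation of each point.
    lon = np.arctan2(y, x)                   # [-pi, pi]
    lat = np.arcsin(np.clip(z / r, -1, 1))   # [-pi/2, pi/2]
    h, w = image.shape[:2]
    # Map the angles to pixel coordinates of the equirectangular image.
    u = ((lon + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = np.clip(((np.pi / 2 - lat) / np.pi * h).astype(int), 0, h - 1)
    colors = image[v, u]
    return np.hstack([points, colors])
```

In a real system the LiDAR and the 360 camera have different centers, so an extrinsic calibration (the transform between the two sensors) must be applied to each point before this lookup.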

Indoor and Outdoor Modeling using 2D Images

3D reconstruction of Toyoda Kodo, Nagoya University

The main topic of this study is (re)construction of indoor and outdoor 3D models using a mobile 360 camera. Standardization of the map format is also ongoing.

You can view the 3D model of the IB Building at Nagoya University (70 MB).

3D model, IB building


  • Drag the mouse to pan the view.
  • Click a camera pose to see the 360 camera image taken from that point (press the fly button to close it).
  • Use the menu at the upper left to change the point size and other settings.

3D Model with Thermal Information using FIR camera

Adding thermal information

In this study we focus on creating a 3D thermal map using an RGB camera and a thermal camera.

The resulting model is produced by projecting thermal information onto a 3D mesh model of the scene, reconstructed with Structure from Motion (SfM) and Multi-View Stereo (MVS).
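The projection step can be sketched with a standard pinhole camera model: each mesh vertex is transformed into the thermal camera frame and the corresponding thermal pixel is sampled. The function name and interface are illustrative assumptions; in practice the intrinsics K and the pose (R, t) would come from an RGB/thermal calibration step.

```python
import numpy as np

def project_thermal(vertices, K, R, t, thermal):
    """Look up a temperature value for each mesh vertex by projecting
    it into a calibrated thermal camera (pinhole model; illustrative
    sketch only).
    vertices: (N, 3) world coordinates.
    K: (3, 3) intrinsics; R, t: world-to-camera rotation/translation.
    thermal: (H, W) thermal image.
    Returns (N,) values; vertices behind the camera or outside the
    image get NaN."""
    cam = (R @ vertices.T + t.reshape(3, 1)).T        # world -> camera
    temps = np.full(len(vertices), np.nan)
    in_front = cam[:, 2] > 0                          # drop points behind camera
    uvw = (K @ cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)       # perspective divide
    h, w = thermal.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[ok]
    temps[idx] = thermal[uv[ok, 1], uv[ok, 0]]
    return temps
```

A full pipeline would also need visibility testing (a vertex occluded by other geometry should not receive a temperature from the pixel in front of it), which this sketch omits.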

Adding thermal information (animation)

Conversion to Visible Images from Multispectral Satellite Images using Deep Learning

Conversion to visible images

We aim to create a satellite-based ground observation system that can be used regardless of cloud cover.

In general, ground observation requires RGB images. However, visible light cannot pass through clouds, which degrades ground visibility.

We are developing a method that estimates RGB images from images captured at longer wavelengths (near infrared, far infrared, SAR) for better human readability.
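To make the input/output relationship of this task concrete, the toy baseline below fits a per-pixel linear map from multispectral band values to RGB by least squares. This is only an illustration of the data flow; the actual study uses deep learning, which can capture the spatial context and nonlinear relationships a per-pixel linear map cannot. All names here are ours.

```python
import numpy as np

def fit_band_mapping(ms, rgb):
    """Toy baseline for band-to-RGB translation: fit a per-pixel
    linear map (with bias) from multispectral bands to RGB colors
    by least squares.
    ms:  (N, B) band values per pixel (e.g. NIR/FIR channels).
    rgb: (N, 3) target colors. Returns a (B + 1, 3) weight matrix."""
    A = np.hstack([ms, np.ones((len(ms), 1))])   # append bias column
    W, *_ = np.linalg.lstsq(A, rgb, rcond=None)
    return W

def apply_band_mapping(ms, W):
    """Predict RGB values for new pixels with the fitted map."""
    A = np.hstack([ms, np.ones((len(ms), 1))])
    return np.clip(A @ W, 0, 255)
```

Since clouds are largely transparent at the longer wavelengths used as input, such a mapping can recover a visible-like image even where the optical RGB view is blocked.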

Paper: https://arxiv.org/abs/1710.04835

Floormap and 360 Video Alignment using Deep Learning


In this study, we develop a neural-network-based method to estimate the relationship between a 360 video and the floor map of a building. By connecting multiple such resources, models of large buildings (stations, universities, and so on) can be generated.

Website and Real POI Matching using Images of Signboards

Detection of signboard

We are developing a method for matching images on the web with real stores, toward an automatic landmark recognition system.

To improve the detection accuracy of signboards, we are researching recursive annotation.