Monthly Archives: May 2014

Presentation of an Augmented Reality demonstrator in iOS

Last Friday the first version of a demonstrator was presented that projects a virtual image onto a real environment consistently over a period of time. The environment is first detected in order to select the projection surface, and the projection is then kept stable on that surface throughout the observation.

The selected scenario combines several adverse factors that make the detection and tracking tasks harder, such as changes in illumination and scale, camera rotations and movements, and occlusions.



PFCMiguel2
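The post does not describe the underlying algorithm. A minimal sketch of one common way to achieve this kind of anchored overlay, using OpenCV to match ORB features between a reference view of the chosen surface and the current camera frame, estimate a homography with RANSAC (which tolerates partial occlusion), and warp the virtual image accordingly, could look like this; the function below is purely illustrative and is not the project's code.

```python
# Illustrative sketch (not the project's code): anchor a virtual overlay to a
# planar reference surface by matching ORB features against the current frame
# and warping the overlay with the estimated homography.
import cv2
import numpy as np

def overlay_on_surface(reference, frame, overlay):
    """reference: image of the chosen projection surface,
    frame: current camera frame, overlay: virtual image to project."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY), None)
    kp_frm, des_frm = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
    if des_ref is None or des_frm is None:
        return frame  # nothing detected (e.g. heavy occlusion)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)[:100]
    if len(matches) < 10:
        return frame

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC rejects bad matches
    if H is None:
        return frame

    h, w = frame.shape[:2]
    ref_h, ref_w = reference.shape[:2]
    # Resize the overlay to reference coordinates, then warp it into the frame.
    warped = cv2.warpPerspective(cv2.resize(overlay, (ref_w, ref_h)), H, (w, h))
    mask = cv2.warpPerspective(np.full((ref_h, ref_w), 255, np.uint8), H, (w, h))
    result = frame.copy()
    result[mask > 0] = warped[mask > 0]
    return result
```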

First batch of photographs

We have started taking photographs of the city of Zaragoza in order to train the automatic recognition system. The photographs are automatically tagged with their GPS position at the moment they are taken, and they are stored on a project website:

This is the general map:
mapa_general

A detail of Plaza del Pilar:
mapa_la_seo

The area around La Seo:
mapa_la_seo_detalla_imagen

Some of the photographs in the gallery:
1_galeria

galeria_lonja
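The post does not show how the GPS tags are read back from the images to place them on the maps above. A minimal sketch of extracting the coordinates from a photo's EXIF metadata with Pillow could look like this; the function names are illustrative and this is not the project's tooling.

```python
# Illustrative sketch (not the project's tooling): read the GPS coordinates
# that the camera embeds in each JPEG photo's EXIF metadata using Pillow.
from PIL import Image

GPS_IFD = 34853                            # EXIF tag id of the GPSInfo sub-IFD
LAT_REF, LAT, LON_REF, LON = 1, 2, 3, 4    # tag ids inside the GPS sub-IFD

def _to_float(value):
    # Pillow may return IFDRational (float-convertible) or a (num, den) tuple.
    try:
        return float(value)
    except TypeError:
        num, den = value
        return num / den

def _dms_to_degrees(dms):
    d, m, s = (_to_float(v) for v in dms)
    return d + m / 60.0 + s / 3600.0

def gps_position(path):
    """Return (latitude, longitude) in decimal degrees, or None if untagged."""
    exif = Image.open(path)._getexif() or {}   # Pillow's JPEG EXIF accessor
    gps = exif.get(GPS_IFD)
    if not gps or LAT not in gps or LON not in gps:
        return None
    lat = _dms_to_degrees(gps[LAT])
    lon = _dms_to_degrees(gps[LON])
    if gps.get(LAT_REF) in ("S", b"S"):
        lat = -lat
    if gps.get(LON_REF) in ("W", b"W"):
        lon = -lon
    return lat, lon

# Example (hypothetical file name):
# print(gps_position("IMG_0001.jpg"))
```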


Article accepted at CEIG 2014

The article “Improving depth estimation using superpixels” has been accepted for presentation at the Congreso Español de Informática Gráfica, CEIG 2014. The authors are Ana B. Cambra, Adolfo Muñoz, Ana C. Murillo, José J. Guerrero and Diego Gutiérrez.

This is the abstract:

This work is focused on assigning a depth label to each pixel in the image. We consider off-the-shelf algorithms that provide depth information from multiple views, or depth information obtained directly from RGB-D sensors. Both are common scenarios of a well-studied problem in which the depth information we obtain is often incomplete. User interaction then becomes necessary to complete, improve or correct the solution for applications that need accurate and dense depth information for every pixel in the image. This work presents our approach to improving the depth assigned to each pixel in an automated manner. Our proposed pipeline combines state-of-the-art methods for image superpixel segmentation and energy minimization. Superpixel segmentation reduces complexity and makes the labeling decisions more robust. We study how to propagate the depth information to incomplete or inconsistent regions of the image using a Markov Random Field (MRF) energy minimization framework. We propose an energy function and validate it together with the designed pipeline. We present a quantitative evaluation of our approach with different variations to show the improvements we can obtain, using a publicly available stereo dataset that provides ground truth information. We also show qualitative results on additional test cases and scenarios with different input depth information, where we again obtain significant improvements in the depth estimation compared to the initial one.

CEIG2014
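The post only gives the abstract, so the paper's actual energy function is not shown here. As a purely illustrative sketch of the general idea (SLIC superpixels as labeling units and a simple quadratic MRF energy minimized by coordinate descent), one could write something like the following; the energy terms, parameters and function names are assumptions, not the paper's formulation.

```python
# Illustrative sketch (not the paper's code or energy): propagate sparse or
# incomplete depth by assigning one depth value per SLIC superpixel and
# minimizing a simple MRF-style energy with coordinate descent.
import numpy as np
from skimage.segmentation import slic

def propagate_depth(image, depth, n_segments=400, lam=0.5, iters=20):
    """image: HxWx3 float array; depth: HxW array with np.nan where unknown."""
    labels = slic(image, n_segments=n_segments, compactness=10)
    ids = np.unique(labels)
    idx = {lab: k for k, lab in enumerate(ids)}

    # Data term per superpixel: mean of the observed depths it contains (nan if none).
    obs = np.full(len(ids), np.nan)
    for lab in ids:
        d = depth[labels == lab]
        d = d[~np.isnan(d)]
        if d.size:
            obs[idx[lab]] = d.mean()

    # Adjacency between superpixels that touch horizontally or vertically.
    nbrs = [set() for _ in ids]
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        m = a != b
        for u, v in set(zip(a[m].tolist(), b[m].tolist())):
            nbrs[idx[u]].add(idx[v])
            nbrs[idx[v]].add(idx[u])

    # Assumed energy:  E(d) = sum_i has_obs_i * (d_i - obs_i)^2
    #                         + lam * sum_{i~j} (d_i - d_j)^2
    # Coordinate descent: each d_i becomes the weighted mean of its own
    # observation and its neighbours' current estimates.
    est = np.where(np.isnan(obs), np.nanmean(obs), obs)
    for _ in range(iters):
        for i in range(len(ids)):
            neigh = [est[j] for j in nbrs[i]]
            num = (0.0 if np.isnan(obs[i]) else obs[i]) + lam * sum(neigh)
            den = (0.0 if np.isnan(obs[i]) else 1.0) + lam * len(neigh)
            if den > 0:
                est[i] = num / den

    # Paint the per-superpixel depths back into a dense map.
    dense = np.zeros_like(depth, dtype=float)
    for lab in ids:
        dense[labels == lab] = est[idx[lab]]
    return dense
```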
