I defended my PhD in Robotics in 2007 within the Robotics, Action, and Perception (RAP) group of the Laboratory for Analysis and Architecture of Systems (LAAS-CNRS), under the supervision of Viviane Cadenat, Associate Professor at Paul Sabatier University of Toulouse, France. My PhD thesis was entitled “Multi-sensor-based control strategies and visual signal loss management for mobile robots navigation”. Its subject was to design multi-sensor-based control strategies allowing a mobile robot to perform vision-based tasks amidst possibly occluding obstacles. Advances in sensor technology gave rise to sensor-based control, which allows the robotic task to be defined in the sensor space rather than in the configuration space. As cameras provide high-rate, meaningful data, visual servoing has been particularly investigated and can be used to perform various accurate navigation tasks. The objective is then to perform reliable navigation tasks despite the presence of obstacles. It is therefore necessary to preserve not only the robot safety (i.e., non-collision) but also the visibility of the visual features, so that the vision-based task remains feasible. To achieve these aims, we first proposed techniques able to fulfill both objectives simultaneously. However, avoiding both collisions and occlusions often over-constrained the navigation task, reducing the range of realizable missions. This is why we developed a second approach, which allows the visual features to be lost when necessary for the success of the task. Using the link between vision and motion, we proposed different methods (analytical and numerical) to compute the visual signal as soon as it becomes unavailable. We then applied them to perform vision-based tasks in cluttered environments, before highlighting their interest for dealing with a camera failure during the mission.
In addition, during my doctorate, I also had the opportunity to teach, first as a temporary lecturer (3 years) and then as a teaching assistant (in French, “Attaché Temporaire d’Enseignement et de Recherche”, ATER; 1 year), both at Paul Sabatier University of Toulouse. These teaching experiences amount to a total volume of 308 hETD.
I obtained a PhD in Robot Control from Paul Sabatier University of Toulouse (France), with a thesis entitled “Multi-sensor-based control strategies and visual signal loss management for mobile robots navigation”.
- PhD thesis of D. Folio
- Defended on 11 July 2007
- Michel Devy (DR LAAS)
- François Chaumette (DR IRISA/INRIA)
- Seth Hutchinson (Prof. University of Illinois, USA)
- Bernard Bayle (MdC Université de Strasbourg)
- Michel Courdesses (Prof. Université de Toulouse)
- Viviane Cadenat (MdC Université de Toulouse)
- Philippe Souères (Prof. Université de Toulouse)
The literature provides many techniques for designing efficient control laws to realize robotic navigation tasks. In recent years, advances in sensors gave rise to sensor-based control, which allows the robotic task to be defined in the sensor space rather than in the configuration space. In this context, as cameras provide high-rate, meaningful data, visual servoing has been particularly investigated and can be used to perform various accurate navigation tasks. This method, which relies on the interaction between the camera motion and the motion of the visual features in the image, consists in regulating an error in the image plane. Nonetheless, vision-based navigation tasks in cluttered environments cannot be expressed as a mere regulation of visual data. Indeed, in this case it is necessary to preserve not only the robot safety (i.e., non-collision) but also the visibility of the visual features. This thesis addresses this issue and aims at developing sensor-based control laws allowing a mobile robot to perform vision-based tasks amidst possibly occluding obstacles. We first proposed techniques able to fulfill the two previously mentioned objectives simultaneously. However, avoiding both collisions and occlusions often over-constrained the navigation task, reducing the range of realizable missions. This is why we developed a second approach, which allows the visual features to be lost when necessary for the realization of the task. Using the link between vision and motion, we proposed different methods (analytical and numerical) to compute the visual signal as soon as it becomes unavailable. We then applied them to perform vision-based tasks in cluttered environments, before highlighting their interest for dealing with a camera failure during the mission.
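The image-error regulation underlying classical image-based visual servoing computes the camera velocity from the image error through the pseudo-inverse of the interaction matrix. The sketch below is a minimal textbook illustration of that principle (using the standard interaction matrix of a 2-D point feature with known depth Z), not the controllers developed in the thesis:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y)
    observed at depth Z (classical perspective-camera model)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Camera velocity screw v = -lambda * L^+ (s - s*) for point
    features s = [x1, y1, x2, y2, ...] at depths Z = [Z1, Z2, ...]."""
    L = np.vstack([point_interaction_matrix(s[2 * i], s[2 * i + 1], Z[i])
                   for i in range(len(Z))])
    e = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
    return -lam * np.linalg.pinv(L) @ e
```

When the features already coincide with their reference value, the commanded velocity is zero; otherwise this law drives the image error exponentially towards zero.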
- S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control”, IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, October 1996.
- F. Chaumette and S. Hutchinson, “Visual servo control. Part I: Basic approaches”, IEEE Robotics & Automation Magazine, vol. 13, no. 4, pp. 82–90, November 2006.
- F. Chaumette and S. Hutchinson, “Visual servo control. Part II: Advanced approaches”, IEEE Robotics & Automation Magazine, vol. 14, no. 1, pp. 109–118, March 2007.
- “Using redundancy to avoid simultaneously occlusions and collisions while performing a vision-based task amidst obstacles”, in European Conference on Mobile Robots (ECMR’05), Ancona, Italy, 2005.
This article presents a redundancy-based control strategy allowing a mobile robot to simultaneously avoid occlusions and obstacles while performing a vision-based task. The proposed method relies on a continuous switch between two controllers realizing, respectively, the nominal vision-based task and the occlusion/obstacle avoidance. Simulation results validating our approach are presented at the end of the paper.
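The continuous-switch idea can be sketched as a convex blending of the two controller outputs, weighted by a scalar risk measure. This is a generic illustration with hypothetical names, not the paper's exact control laws:

```python
import numpy as np

def blended_control(u_task, u_avoid, mu):
    """Convex combination of the nominal vision-based controller
    u_task and the avoidance controller u_avoid. The weight mu in
    [0, 1] is meant to grow smoothly with the risk of occlusion or
    collision, yielding a continuous switch between the two behaviors
    instead of an abrupt one."""
    mu = float(np.clip(mu, 0.0, 1.0))
    return (1.0 - mu) * np.asarray(u_task, float) + mu * np.asarray(u_avoid, float)
```

With mu = 0 the robot executes the pure vision-based task, with mu = 1 the pure avoidance behavior, and intermediate values interpolate continuously between the two.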
- “A controller to avoid both occlusions and obstacles during a vision-based navigation task in a cluttered environment”, in European Control Conference (ECC’05), Seville, Spain, 2005, pp. 3898–3903.
This paper presents a sensor-based controller that visually drives a mobile robot towards a target while avoiding both occlusions of the visual features and collisions with obstacles. We consider the model of a cart-like robot equipped with proximetric sensors and a camera mounted on a pan-platform. The proposed method relies on a continuous switch between three controllers realizing, respectively, the nominal vision-based task, the obstacle bypassing, and the occlusion avoidance. Simulation results are given at the end of the paper.
- “A new controller to perform safe vision-based navigation tasks amidst possibly occluding obstacles”, in European Control Conference (ECC’07), 2007.
This paper provides a new method for safely performing a vision-based task amidst possibly occluding obstacles. Our objective is to fulfill the following two requirements: first, the robot safety, by guaranteeing non-collision; second, the ability to keep realizing the vision-based task despite possible target occlusions or loss. To this aim, several controllers have been designed and then merged depending on the risks of collision and occlusion. The presented simulation results validate the proposed approach.
- “Using simple numerical schemes to compute visual features whenever unavailable”, in IFAC International Conference on Informatics in Control, Automation and Robotics (ICINCO’07), 2007.
In this paper, we address the problem of estimating image features whenever they become unavailable during a vision-based task. The method consists in using numerical algorithms to compute the missing data and can handle both partial and total loss of the visual features. Simulation and experimental results validate our work on two different visual-servoing navigation tasks. A comparative analysis allows the most efficient algorithm to be selected.
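The principle behind such numerical schemes can be sketched as follows: since the feature motion is linked to the camera motion by s_dot = L(s, Z) v, the features can be propagated by numerical integration while the image is unavailable. Below is a minimal explicit-Euler illustration for a point feature with known, roughly constant depth; it conveys the idea only, not the exact schemes compared in the paper:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def predict_point(s, Z, v_cam, dt, steps):
    """Explicit Euler integration of s_dot = L(s, Z) v_cam to predict a
    point feature s = (x, y) while the visual signal is unavailable.
    The depth Z is assumed known and approximately constant over the
    prediction horizon."""
    s = np.asarray(s, dtype=float)
    for _ in range(steps):
        L = point_interaction_matrix(s[0], s[1], Z)
        s = s + dt * (L @ np.asarray(v_cam, dtype=float))
    return s
```

For instance, with zero camera velocity the predicted feature stays put, while a pure forward-lateral translation makes the point drift across the image in the opposite direction, as the interaction matrix dictates.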
- “A sensor-based controller able to treat total image loss and to guarantee non-collision during a vision-based navigation task”, in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’2008), Nice, France, 2008, pp. 3052–3057.
This paper deals with the problem of executing a vision-based task in an unknown environment. During such a task, two unexpected events may occur: the loss of image data due to a camera occlusion, and a collision between the robot and an obstacle. We first propose a method for computing the visual data when they are totally lost, before addressing the obstacle-avoidance problem. We then design a sensor-based control strategy to safely perform vision-based tasks despite a complete loss of the image. Simulation and experimental results validate our work.
- “Stratégies de commande référencées multi-capteurs et gestion de la perte du signal visuel pour la navigation d’un robot mobile”, PhD thesis, Paul Sabatier University of Toulouse, LAAS-CNRS, Toulouse, France, 2007.
“Equivalent TD hours” (in French, “heures équivalentes TD”, hETD) are the reference unit used to compute teaching duties. For a tenured teacher the rules are as follows: 1 h of lecture = 1.5 hETD, while the other formats, e.g. 1 h of tutorial (TD) or 1 h of practical work (TP), count as 1 hETD. ↩
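As a quick illustration of this conversion rule (a hypothetical helper implementing only the rates stated above):

```python
def hetd(lecture_h=0.0, td_h=0.0, tp_h=0.0):
    """Convert raw teaching hours into 'equivalent TD hours' (hETD):
    1 h of lecture counts as 1.5 hETD; TD and TP hours count 1:1."""
    return 1.5 * lecture_h + td_h + tp_h
```

For example, 10 h of lectures and 20 h of tutorials amount to 1.5 × 10 + 20 = 35 hETD.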