Doctorate degree

I defended my PhD in Robotics in 2007 within the Robotics, Action, and Perception (RAP) group of the Laboratory for Analysis and Architecture of Systems (LAAS)1, CNRS2, under the supervision of Viviane Cadenat, Associate Professor at Paul Sabatier University of Toulouse, France. My PhD thesis was entitled Multi-sensor-based control strategies and visual signal loss management for mobile robots navigation. The goal was to design multi-sensor-based control strategies allowing a mobile robot to perform vision-based tasks amidst possibly occluding obstacles. Improvements in sensors have given rise to sensor-based control, which allows the robotic task to be defined in the sensor space rather than in the configuration space. As cameras provide high-rate, meaningful data, visual servoing has been particularly investigated and can be used to perform various accurate navigation tasks[1][2][3]. The objective is then to perform reliable navigation tasks despite the presence of obstacles. To do so, it is necessary to preserve not only the robot's safety (i.e. non-collision) but also the visibility of the visual features, which conditions the feasibility of the vision-based task. To achieve these aims, we first proposed techniques able to fulfill both objectives simultaneously[4][5]. However, avoiding both collisions and occlusions often over-constrains the navigation task, reducing the range of realizable missions. This is why we developed a second approach, which tolerates the loss of the visual features when it is necessary for the success of the task. Using the link between vision and motion, we proposed different methods (analytical and numerical) to compute the visual signal as soon as it becomes unavailable[6]. We then applied them to perform vision-based tasks in cluttered environments, before highlighting their interest for dealing with a camera failure during the mission[7][8].
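
For reference, the basic visual servoing relations from the tutorials cited above[1][2] can be summarized as follows (standard notation, not specific to the thesis):

    \[
      e = s - s^{*}, \qquad \dot{s} = L_s\, v_c, \qquad v_c = -\lambda\, \widehat{L_s}^{+}\, e
    \]

where s is the vector of visual features, s* its desired value, L_s the interaction matrix linking the feature motion to the camera kinematic screw v_c, \widehat{L_s}^{+} the pseudo-inverse of an estimate of L_s, and \lambda a positive gain. It is this link between the camera motion and \dot{s} that can also be exploited to reconstruct s when the measurement becomes unavailable.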

In addition, during my doctorate degree, I also had the opportunity to carry out teaching activities, first as a temporary lecturer (3 years), and then as a teaching assistant, in French “Attaché Temporaire d’Enseignement et de Recherche” (ATER, 1 year), both at Paul Sabatier University of Toulouse. These teaching experiences amount to a total volume of 308 hETD3.

Thesis Details

I obtained a PhD in Robot Control from Paul Sabatier University of Toulouse (France), with a thesis entitled “Multi-sensor-based control strategies and visual signal loss management for mobile robots navigation”[9].

Download: D. Folio PhD thesis
Defended: 11 July 2007
Committee:
President: Michel Devy (DR LAAS)
Reviewers: François Chaumette (DR IRISA/INRIA), Seth Hutchinson (Prof. University of Illinois, USA)
Examiners: Bernard Bayle (MdC Université de Strasbourg), Michel Courdesses (Prof. Université de Toulouse)
Director: Viviane Cadenat (MdC Université de Toulouse)
Guest: Philippe Souères (Prof. Université de Toulouse)

Abstract

The literature provides many techniques to design efficient control laws for realizing robotic navigation tasks. In recent years, improvements in sensors have given rise to sensor-based control, which allows the robotic task to be defined in the sensor space rather than in the configuration space. In this context, as cameras provide high-rate, meaningful data, visual servoing has been particularly investigated, and can be used to perform various and accurate navigation tasks. This method, which relies on the interaction between the camera motion and the motion of the visual features, consists in regulating an error in the image plane. Nonetheless, vision-based navigation tasks in cluttered environments cannot be expressed as a sole regulation of visual data. Indeed, in this case, it is necessary to preserve not only the robot's safety (i.e. non-collision) but also the visibility of the visual features. This thesis addresses this issue and aims at developing sensor-based control laws allowing a mobile robot to perform vision-based tasks amidst possibly occluding obstacles. We first proposed techniques able to fulfill the two previously mentioned objectives simultaneously. However, avoiding both collisions and occlusions often over-constrains the navigation task, reducing the range of realizable missions. This is why we developed a second approach, which allows the visual features to be lost if it is necessary for the task realization. Using the link between vision and motion, we proposed different methods (analytical and numerical) to compute the visual signal as soon as it becomes unavailable. We then applied them to perform vision-based tasks in cluttered environments, before highlighting their interest for dealing with a camera failure during the mission.
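
To give a concrete (and purely illustrative) picture of how the link between vision and motion can be used to compute the visual signal when it is unavailable, the short Python sketch below dead-reckons a single point feature from the camera velocity measured by odometry. The function names, the simple Euler integration, and the numerical values are assumptions made for this example; they are not the analytical and numerical estimators developed in the thesis.

    import numpy as np

    def predict_point(P, v, w, dt):
        """One Euler step for a static 3D point P expressed in the camera frame,
        when the camera moves with linear velocity v and angular velocity w
        (both expressed in the camera frame): Pdot = -v - w x P."""
        return P + dt * (-v - np.cross(w, P))

    def project(P):
        """Perspective projection onto the normalized image plane: s = (X/Z, Y/Z)."""
        return np.array([P[0] / P[2], P[1] / P[2]])

    # Last known position of the feature point (camera frame, metres) before the
    # occlusion, camera twist from odometry, and a 40 ms sampling period
    # (all values are illustrative).
    P = np.array([0.2, -0.1, 2.0])
    v = np.array([0.10, 0.0, 0.30])   # m/s
    w = np.array([0.0, 0.05, 0.0])    # rad/s
    for _ in range(10):               # predict while the visual signal is lost
        P = predict_point(P, v, w, 0.04)
    print(project(P))                 # estimated image feature after 0.4 s

Such a prediction can feed the visual servoing loop until the features become available again.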

References

  1. Hutchinson S., Hager G. D., and Corke P. I. “A tutorial on visual servo control”, IEEE transactions on robotics and automation, vol. 12, no. 5, pp. 651–670, October 1996.
    @article{hutchinson1996tutorial,
      title = {A tutorial on visual servo control},
      author = {Hutchinson, Seth and Hager, Gregory D and Corke, Peter I},
      year = {1996},
      doi = {10.1109/70.538972},
      issn = {1042-296X},
      month = oct,
      number = {5},
      pages = {651--670},
      volume = {12},
      journal = {IEEE transactions on robotics and automation},
      publisher = {IEEE}
    }
  2. Chaumette F. and Hutchinson S. “Visual servo control, Part I: Basic approaches”, IEEE Robotics & Automation Magazine, vol. 13, no. 4, pp. 82–90, November 2006.
    @article{chaumette2006visual,
      title = {Visual servo control.},
      author = {Chaumette, Fran{\c{c}}ois and Hutchinson, Seth},
      year = {2006},
      doi = {10.1109/MRA.2006.250573},
      issn = {1070-9932},
      month = nov,
      number = {4},
      pages = {82--90},
      subtitle = {Part {I}. Basic approaches},
      volume = {13},
      journal = {IEEE Robotics \& Automation Magazine},
      publisher = {IEEE}
    }
  3. Chaumette F. and Hutchinson S. “Visual servo control, Part II: Advanced approaches”, IEEE Robotics & Automation Magazine, vol. 14, no. 1, pp. 109–118, March 2007.
    @article{chaumette2007visual,
      title = {Visual servo control.},
      author = {Chaumette, Fran{\c{c}}ois and Hutchinson, Seth},
      year = {2007},
      doi = {10.1109/MRA.2007.339609},
      issn = {1070-9932},
      month = mar,
      number = {1},
      pages = {109--118},
      subtitle = {Part {II}. Advanced approaches},
      volume = {14},
      journal = {IEEE Robotics \& Automation Magazine},
      publisher = {IEEE}
    }
  4. Folio D. and Cadenat V. “Using redundancy to avoid simultaneously occlusions and collisions while performing a vision-based task amidst obstacles”, in European Conference on Mobile Robots (ECMR’05), Ancona, Italy, 2005.

    This article presents a redundancy-based control strategy allowing a mobile robot to avoid simultaneously occlusions and obstacles while performing a vision-based task. The proposed method relies on the continuous switch between two controllers realizing respectively the nominal vision-based task and the occlusion and obstacle avoidance. Simulation results validating our approach are presented at the end of the paper.

    @inproceedings{2005_ecmr_folio,
      title = {Using redundancy to avoid simultaneously occlusions and collisions while performing a vision-based task amidst obstacles},
      author = {Folio, David and Cadenat, Viviane},
      booktitle = {European Conference on Mobile Robots (ECMR'05)},
      year = {2005},
      month = sep,
      address = {Ancône, Italie},
      keywords = {Visual servoing, Redundant task, Obstacle avoidance, Occlusion avoidance}
    }
  5. Folio D. and Cadenat V. “A controller to avoid both occlusions and obstacles during a vision-based navigation task in a cluttered environment”, in European Control Conference (ECC’05), Seville, Spain, 2005, pp. 3898–3903.

    This paper presents a sensor-based controller allowing to visually drive a mobile robot towards a target while avoiding visual features occlusions and obstacle collisions. We consider the model of a cart-like robot equipped with proximetric sensors and a camera mounted on a pan-platform. The proposed method relies on the continuous switch between three controllers realizing respectively the nominal vision-based task, the obstacle bypassing and the occlusion avoidance. Simulation results are given at the end of the paper.

    @inproceedings{2005_ecc_folio,
      title = {A controller to avoid both occlusions and obstacles during a vision-based navigation task in a cluttered environment},
      author = {Folio, David and Cadenat, Viviane},
      booktitle = {European Control Conference (ECC'05)},
      year = {2005},
      doi = {10.1109/CDC.2005.1582770},
      month = dec,
      pages = {3898--3903},
      address = {Seville, Espagne},
      ieeexplore = {1582770},
      keywords = {Visual servoing, Redundant task, Obstacle avoidance, Occlusion avoidance}
    }
  6. Folio D. and Cadenat V. “A new controller to perform safe vision-based navigation tasks amidst possibly occluding obstacles”, in European Control Conference (ECC’07), 2007.

    This paper provides a new method allowing to safely perform a vision-based task amidst possibly occluding obstacles. Our objective is to fulfill the following two requirements: first, the robot safety by guaranteeing noncollision; second, the ability of keeping on realizing the vision-based task despite possible target occlusions or loss. To this aim, several controllers have been designed and then merged depending on the risks of collisions and occlusions. The presented simulation results validate the proposed approach.

    @inproceedings{2007_ecc_folio,
      title = {A new controller to perform safe vision-based navigation tasks amidst possibly occluding obstacles},
      author = {Folio, David and Cadenat, Viviane},
      booktitle = {European Control Conference (ECC'07)},
      year = {2007},
      month = jul,
      keywords = {visual features estimation, visual servoing, collision avoidance.}
    }
  7. Folio D. and Cadenat V. “Using simple numerical schemes to compute visual features whenever unavailable”, in IFAC International Conference on Informatics in Control, Automation and Robotics (ICINCO’07), 2007.

    In this paper, we address the problem of estimating image features whenever they become unavailable during a vision-based task. The method consists in using a numerical algorithm to compute the lacking data, and allows both partial and total visual feature loss to be treated. Simulation and experimental results validate our work for two different visual-servoing navigation tasks. A comparative analysis allows the most efficient algorithm to be selected.

    @inproceedings{2007_icinco_folio,
      title = {Using simple numerical schemes to compute visual features whenever unavailable},
      author = {Folio, David and Cadenat, Viviane},
      booktitle = {IFAC International Conference on Informatics in Control, Automation and Robotics (ICINCO'07)},
      year = {2007},
      month = may,
      keywords = {visual features estimation, visual servoing, collision avoidance.}
    }
  8. Folio D. and Cadenat V. “A sensor-based controller able to treat total image loss and to guarantee non-collision during a vision-based navigation task”, in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’2008), Nice, France, 2008, pp. 3052–3057.

    This paper deals with the problem of executing a vision-based task in an unknown environment. During such a task, two unexpected events may occur: the loss of image data due to a camera occlusion, and the collision of the robot with obstacles. We first propose a method allowing the visual data to be computed when they are totally lost, before addressing the obstacle avoidance problem. Then, we design a sensor-based control strategy to safely perform vision-based tasks despite a complete loss of the image. Simulation and experimental results validate our work.

    @inproceedings{2008_iros_folio,
      title = {A sensor-based controller able to treat total image loss and to guarantee non-collision during a vision-based navigation task},
      author = {Folio, David and Cadenat, Viviane},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'2008)},
      year = {2008},
      doi = {10.1109/IROS.2008.4650743},
      month = sep,
      pages = {3052--3057},
      address = {Nice, France},
      keywords = {camera occlusion,obstacle avoidance, robot collision, sensor-based controller, total image loss, vision-based navigation task, collision avoidance, image sensors, robot vision}
    }
  9. Folio D. “Stratégies de commande référencées multi-capteurs et gestion de la perte du signal visuel pour la navigation d’un robot mobile”, PhD thesis, Paul Sabatier University of Toulouse, LAAS-CNRS, Toulouse, France, 2007.
    @phdthesis{2007_thesis_folio,
      title = {Stratégies de commande référencées multi-capteurs et gestion de la perte du signal visuel pour la navigation d'un robot mobile},
      author = {Folio, David},
      year = {2007},
      month = jul,
      address = {Toulouse, France},
      school = {Paul Sabatier University of Toulouse, LAAS-CNRS}
    }
  1. In French: Laboratoire d’Analyse et d’Architecture des Systèmes. LAAS is a laboratory of the CNRS. http://www.laas.fr

  2. The French National Center for Scientific Research, in French Centre National de la Recherche Scientifique (CNRS), is the largest governmental research organization in France. http://www.cnrs.fr

  3. “Equivalent TD hours”, in French “heures équivalentes TD” (hETD), are the reference hours used to compute teaching duties. The rule for a tenured teacher is as follows: 1h of lecture = 1.5 hETD, while the other teaching formats, e.g. 1h of tutorial (TD) or 1h of practical work (TP), count as 1 hETD.
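     For example, under this rule a service of 20h of lectures, 40h of tutorials and 30h of practical work (illustrative figures) would amount to 20 × 1.5 + 40 + 30 = 100 hETD.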