All Monthly Focus Articles

The Franco-Australian IRP-ARS (Advanced Autonomy for Robotic Systems)

FOCUS OF THE MONTH: December 2020

The Franco-Australian IRP-ARS (Advanced Autonomy for Robotic Systems), started in 2020, is managed by Professor Tarek Hamel (I3S UMR-CNRS 7271, Côte d’Azur University) in collaboration with Professor Robert Mahony (Australian Centre for Robotic Vision, Australian National University).

• Professor Tarek Hamel, I3S UMR-CNRS 7271, Côte d’Azur University (leader of the French part), with CRAN UMR-CNRS 7039, Université de Lorraine, and IRISA UMR-CNRS 6074, Rennes.

• Professor Robert Mahony, Australian Centre for Robotic Vision, Australian National University (leader of the Australian part), with the University of Melbourne and Monash University.

control and perception algorithms; Unmanned Robotic Systems; high-quality state estimation


Over the last two decades, important progress has been made in the development of advanced methods for motion control of unmanned autonomous robotic systems. The commercial landscape of these vehicles is characterized by a plethora of small start-up companies proposing an increasing variety of devices for various applications. This rapidly growing market has boosted research in the field of autonomous robotic systems, particularly within the Systems and Control community.

The objective of the IRP-ARS is to pool the top-level resources of research institutions from both France and Australia and build a centre of excellence that goes beyond the state of the art in systems and control theory applied to Unmanned Robotic Systems. It builds on existing collaborations and will establish new ones between top-level research entities.

Theoretical open problems with high impact on practical applications will be addressed to enable Unmanned Robotic Systems to operate more robustly and reliably in the complex and cluttered dynamic environments encountered in real-world applications. These aims will be achieved by meeting the following objectives:

  • Develop a unified control approach for a large class of Unmanned Robotic Systems (airplanes, helicopters and other Vertical Take-Off and Landing (VTOL) vehicles, blimps, rockets, hydroplanes, ships and submarines, which are generally underactuated).
  • Develop a unifying scientific framework for the analysis and design of observers for general systems with symmetry, providing high-quality state estimation (position, velocity, orientation, etc.).

  • Exploit the latest advances in deep learning to develop ‘front-end’ sensor processing that directly integrates into globally well-defined control and perception algorithms for systems with symmetry.
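As a toy illustration of observer design for systems with symmetry, the sketch below implements a passive complementary filter on the circle S^1, a one-dimensional stand-in for attitude estimation on SO(3). The gains, noise levels and function names are illustrative assumptions, not taken from the IRP-ARS work.

```python
import math

def complementary_filter_s1(omega, measurements, k=2.0, dt=0.01, theta0=0.0):
    """Passive complementary filter on the circle S^1 (toy stand-in for SO(3)).

    Fuses an angular-rate signal (rad/s) with noisy heading measurements.
    The innovation term sin(y - theta_hat) is globally well defined because
    it respects the rotational symmetry of the state space.
    """
    theta_hat = theta0
    for w, y in zip(omega, measurements):
        # prediction from the rate sensor + correction toward the measurement
        theta_hat += dt * (w + k * math.sin(y - theta_hat))
    return theta_hat
```

Fed a constant turn rate and headings corrupted by Gaussian noise, the estimate converges to the true heading from a poor initial guess, with a residual error set by the noise level and the gain k.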

Data Augmentation for Automatic Identification of Spatiotemporal Dispersion Electrograms in Persistent Atrial Fibrillation Ablation Using Machine Learning

FOCUS OF THE MONTH: July 2020

Amina Ghrissi, a third-year PhD student at the I3S laboratory (UMR 7271, Université Côte d'Azur, CNRS), was a finalist for the Best Student Paper Award at the 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), held in Montréal, Canada, from July 20 to 24, 2020, the world's leading conference in biomedical engineering. Amina was the only finalist selected from the entire European continent, for her paper entitled “Data Augmentation for Automatic Identification of Spatiotemporal Dispersion Electrograms in Persistent Atrial Fibrillation Ablation Using Machine Learning”.

Atrial Fibrillation, Machine Learning, PentaRay Catheter, Electrogram, Data Augmentation, Convolutional Neural Networks, Catheter Ablation

This work addresses atrial fibrillation (AF), the most common cardiac arrhythmia, responsible for up to a quarter of strokes. Amina applies machine learning tools to detect cardiac electrical signals associated with so-called spatiotemporal dispersion areas. Recent studies have shown that catheter ablation of these areas can stop AF in a very high percentage of cases, but until now these areas were detected visually. The machine learning techniques developed by Amina could help cardiologists perform catheter ablation procedures better tailored to each patient, thereby improving the success rate while reducing the cost, duration and risk of complications of this therapy for the treatment of persistent AF.
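The paper's exact augmentation pipeline is not reproduced here; as a hedged sketch, the following shows the kind of label-preserving transforms (additive noise, small amplitude scaling, circular time shift) commonly used to augment 1-D physiological signals before training a convolutional network. All function names and parameter values are illustrative assumptions.

```python
import random

def augment_signal(x, noise_std=0.05, scale_range=(0.9, 1.1), max_shift=10, rng=None):
    """Generate one augmented copy of a 1-D signal.

    Illustrative label-preserving transforms for physiological signals:
    additive Gaussian noise, a small amplitude scaling, and a circular
    time shift (not the paper's exact pipeline).
    """
    rng = rng or random.Random()
    scale = rng.uniform(*scale_range)
    shift = rng.randrange(-max_shift, max_shift + 1)
    n = len(x)
    return [scale * x[(i - shift) % n] + rng.gauss(0.0, noise_std) for i in range(n)]

def augment_dataset(signals, labels, copies=3, seed=0):
    """Grow a dataset by adding `copies` augmented versions of each signal."""
    rng = random.Random(seed)
    out_x, out_y = list(signals), list(labels)
    for x, y in zip(signals, labels):
        for _ in range(copies):
            out_x.append(augment_signal(x, rng=rng))
            out_y.append(y)  # the transforms preserve the class label
    return out_x, out_y
```

With 3 copies per example, a dataset of N signals grows to 4N while keeping the class balance unchanged, which is the point of augmentation when annotated electrograms are scarce.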

The thesis is part of the project « Intégration et Analyse de Données Biomédicales » (IADB, Integration and Analysis of Biomedical Data), supported by the IDEX UCA-JEDI excellence initiative. It started in the framework of Vicente Zarzoso's appointment to the Institut universitaire de France (IUF) and is co-supervised with Johan Montagnat at the I3S laboratory.

This success is the fruit of a close collaboration with the Cardiology Department of the Nice Pasteur University Hospital, represented by Dr. Fabien Squara, and the IFES Institute in Vitória, Brazil.

More information:
- Nominated paper:
- Conference website:

DISPUTool -- A tool for the Argumentative Analysis of Political Debates

FOCUS OF THE MONTH: January 2020

Runner-up (second place) for the Application Impact Award at IJCAI 2019.

Shohreh Haddadan, Elena Cabrio, Serena Villata. DISPUTool -- A tool for the Argumentative Analysis of Political Debates. IJCAI 19 - 28th International Joint Conference on Artificial Intelligence, Aug 2019, Macao, Macau SAR China. pp.6524-6526, ⟨10.24963/ijcai.2019/944⟩. ⟨hal-02381055⟩
Natural Language Processing, Argument Mining, Political Debates

Political debates are the means used by political candidates to put forward and justify their positions in front of the electors with respect to the issues at stake. Argument mining is a novel research area in Artificial Intelligence that aims at analyzing discourse at the pragmatics level and applying a certain argumentation theory to model and automatically analyze textual data. In this paper, the authors present DISPUTool, a tool designed to ease the work of historians and social science scholars in analyzing the argumentative content of political speeches. More precisely, DISPUTool allows users to explore and automatically identify argumentative components in the 39 political debates from the last 50 years of US presidential campaigns (1960-2016).

Towards the design of task-adapted models for inverse imaging problems

FOCUS OF THE MONTH: November 2019
L. Calatroni, C. Cao, J. C. De Los Reyes, C.-B. Schönlieb, T. Valkonen, Bilevel approaches for learning of variational imaging models, RADON book series, vol. 18 on Variational Methods, (2016).
Large-scale learning, variational image reconstruction, non-smooth and non-convex optimisation, bilevel learning.
Data-driven learning approaches are nowadays becoming the reference paradigm for the design and solution of a large class of reconstruction problems arising in several imaging applications, such as biological imaging. In their general form, such methods compute the optimal ingredients of the model by exploring information in the training data with respect to some fixed quality measure, so as to estimate the degradation process (noise, blur, deformation…) in a robust and efficient way by means of large-scale optimisation techniques. However, the notion of “optimality” used to tailor the best model is often intrinsically ambiguous, since it strongly depends on the objectives of the task performed, which could range from image enhancement to object detection, e.g., in microscopy imaging. The general task then becomes to optimise the optimisers, which naturally leads to a two-level non-smooth and non-convex optimisation problem able to accommodate a wide variety of advanced image reconstruction models. These characteristics pose severe limitations from the point of view of numerical efficiency and problem complexity.
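A minimal sketch of this "optimise the optimisers" structure, under strong simplifying assumptions: the lower level is a quadratic 1-D denoising model solved by Gauss-Seidel sweeps, and the upper level selects the regularisation weight by grid search against clean training data. Real bilevel learning uses far more sophisticated non-smooth solvers; the names and values here are illustrative.

```python
def denoise(y, lam, iters=200):
    """Lower-level problem: min_u sum (u_i - y_i)^2 + lam * sum (u_{i+1} - u_i)^2.

    The optimality system is linear, so plain Gauss-Seidel sweeps converge."""
    u = list(y)
    n = len(u)
    for _ in range(iters):
        for i in range(n):
            nb = []
            if i > 0:
                nb.append(u[i - 1])
            if i < n - 1:
                nb.append(u[i + 1])
            u[i] = (y[i] + lam * sum(nb)) / (1.0 + lam * len(nb))
    return u

def learn_lambda(train_pairs, grid):
    """Upper level: pick the lam whose lower-level reconstruction has the
    smallest squared error against the clean training signals."""
    def loss(lam):
        total = 0.0
        for clean, noisy in train_pairs:
            u = denoise(noisy, lam)
            total += sum((a - b) ** 2 for a, b in zip(u, clean))
        return total
    return min(grid, key=loss)
```

Since the grid contains lam = 0 (no regularisation, reconstruction equals the noisy data), any learned lam > 0 certifies that the lower-level model, as tuned by the upper level, genuinely improves the reconstruction on the training set.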
Luca Calatroni, a newly recruited CNRS researcher, is pursuing a research project within the Morpheme project (SIS research group) to unify data- and application-driven optimisation in one single recipe and to extend advanced optimisation strategies to solve this two-level problem, with a specific focus on fluorescence microscopy applications.

Integration of Structural Constraints into TSP Models

FOCUS OF THE MONTH: October 2019

N. Isoart and J.-C. Régin: Integration of Structural Constraints into TSP Models. 25th International Conference on Principles and Practice of Constraint Programming (CP 2019), LNCS, vol. 11802, pp. 284-299, Connecticut, USA. Springer (2019).

Constraint Programming, Global constraint, TSP

Several models based on constraint programming have been proposed to solve the traveling salesman problem (TSP). The most efficient ones, such as the weighted circuit constraint (WCC), mainly rely on the Lagrangian relaxation of the TSP, based on the search for a spanning tree or, more precisely, a "1-tree". The weakness of these approaches is that they do not include enough structural constraints and rely almost exclusively on edge costs. The purpose of this paper is to correct this drawback by introducing the Hamiltonian cycle constraint, associated with propagators. The authors propose properties preventing the existence of a Hamiltonian cycle in a graph or, conversely, properties requiring that certain edges be in the TSP solution set. Notably, they design a propagator based on the search for k-cutsets. Combining this constraint with the WCC constraint yields, for the resolution of the TSP, gains of an order of magnitude in the number of backtracks as well as a strong reduction of the computation time.
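To illustrate the kind of structural reasoning involved (not the paper's actual propagator), the sketch below uses the fact that a Hamiltonian cycle crosses every edge cutset an even number of times, and at least twice: if removing two edges disconnects the graph, both edges are mandatory in every Hamiltonian cycle. Brute-force enumeration of edge pairs keeps the sketch short; the real propagator is far more efficient.

```python
from itertools import combinations

def connected(nodes, edges):
    """DFS connectivity check on an undirected graph."""
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return len(seen) == len(nodes)

def mandatory_edges_from_2cutsets(nodes, edges):
    """A Hamiltonian cycle crosses every edge cutset an even number of
    times (>= 2). Hence if {e, f} disconnects the graph, every
    Hamiltonian cycle must use BOTH e and f: mark them mandatory."""
    mandatory = set()
    for e, f in combinations(edges, 2):
        rest = [x for x in edges if x not in (e, f)]
        if not connected(nodes, rest):
            mandatory.add(e)
            mandatory.add(f)
    return mandatory
```

On two triangles joined by the edges (2,3) and (0,5), those two edges form a 2-cutset, so both are detected as mandatory, while the inner triangle edges (0,2) and (3,5) are not.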

On the Cost of Acking in Data Stream Processing Systems

FOCUS OF THE MONTH: September 2019

Alessio Pagliari, Fabrice Huet, Guillaume Urvoy-Keller. On the Cost of Acking in Data Stream Processing Systems. 2019 19th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing (CCGRID)

Data Stream Processing, Message Reliability, Apache Storm, Acking Framework, Scheduling

The widespread use of social networks and applications such as IoT networks generates a continuous stream of data that companies and researchers want to process, ideally in real time. Data stream processing (DSP) systems enable such continuous data analysis by implementing the set of operations to be performed on the stream as a directed acyclic graph (DAG) of tasks. While these DSP systems embed mechanisms to ensure fault tolerance and message reliability, only a few studies focus on the impact of these mechanisms on the performance of applications at runtime. In this paper, the authors demonstrate the impact of the message reliability mechanism on application performance. They take an experimental approach, using the Storm middleware, to study an acknowledgment-based framework. They compare the two standard schedulers available in Storm with applications of various degrees of parallelism, over single- and multi-cluster scenarios. The authors show that the acking layer may create an unforeseen bottleneck due to the placement of the acking tasks; a problem which, to the best of their knowledge, has been overlooked in the scientific and technical literature. They propose two strategies for improving the acking task placement and demonstrate their benefits in terms of throughput and latency.

Respiratory Waveform Estimation From Multiple Accelerometers: An Optimal Sensor Number and Placement Analysis

FOCUS OF THE MONTH: August 2019

A. Siqueira, A. F. Spirandeli, R. Moraes and V. Zarzoso, "Respiratory Waveform Estimation From Multiple Accelerometers: An Optimal Sensor Number and Placement Analysis," in IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 4, pp. 1507-1515, July 2019.

Accelerometers; Estimation; Thorax; Informatics; Abdomen; Transducers; Data acquisition; Respiratory measurements; accelerometers; linear estimation; minimum mean square error; blind source extraction; independent component analysis

Respiratory patterns are commonly measured to monitor and diagnose cardiovascular, metabolic, and sleep disorders. Electronic devices such as masks used to record respiratory waveforms usually require medical staff support and obstruct the patients' breathing, causing discomfort. New techniques are being investigated to overcome such limitations. An emerging approach involves accelerometers to estimate the respiratory waveform based on chest motion. However, most of the existing techniques employ a single accelerometer placed on an arbitrary thorax position. The present work investigates the use and optimal placement of multiple accelerometers located on the thorax and the abdomen. The study population is composed of 30 healthy volunteers in three different postures. By means of a custom-made microcontrolled system, data are acquired from an array of ten accelerometers located on predefined positions and a pneumotachograph used as reference. The best sensor locations are identified by optimal linear reconstruction of the reference waveform from the accelerometer data in the minimum mean square error sense. The analysis shows that right-hand side locations contribute more often to optimal respiratory waveform estimates, a sound finding given that the right lung has a larger volume than the left lung. In addition, the authors show that the respiratory waveform can be blindly extracted from the recorded accelerometer data by means of independent component analysis. In conclusion, linear processing of multiple accelerometers in optimal positions can successfully recover respiratory information in clinical settings, where the use of masks may be contraindicated.
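A hedged sketch of the selection principle, not the authors' exact procedure: reconstruct the reference waveform as a linear combination of accelerometer channels in the least-squares sense, and greedily add the channel that most reduces the mean squared error. Pure-Python normal equations keep it self-contained; all names are illustrative.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c]:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def mse_of_subset(X, y, subset):
    """MSE of the best linear combination of the chosen channels
    (ordinary least squares via the normal equations)."""
    T = len(y)
    cols = [[X[t][j] for t in range(T)] for j in subset]
    G = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    rhs = [sum(c[t] * y[t] for t in range(T)) for c in cols]
    w = solve(G, rhs)
    err = 0.0
    for t in range(T):
        est = sum(w[i] * cols[i][t] for i in range(len(subset)))
        err += (y[t] - est) ** 2
    return err / T

def greedy_sensor_selection(X, y, k):
    """Forward selection: repeatedly add the accelerometer channel that
    most reduces the reconstruction MSE of the reference waveform."""
    chosen, remaining = [], list(range(len(X[0])))
    for _ in range(k):
        best = min(remaining, key=lambda j: mse_of_subset(X, y, chosen + [j]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Because the models are nested, adding a channel can never increase the least-squares error, which is what makes the greedy ranking a sensible (if suboptimal) proxy for the exhaustive placement analysis of the paper.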

Solving Equations on Discrete Dynamical Systems

FOCUS OF THE MONTH: May 2019

Alberto Dennunzio, Enrico Formenti, Luciano Margara, Valentin Montmirail, Sara Riva

Boolean Automata Networks; Discrete Dynamical Systems; Decidability

Boolean automata networks, genetic regulation networks, and metabolic networks are just a few examples of biological modelling by discrete dynamical systems (DDS). A major issue in modelling is the verification of the model against the experimental data, or inducing the model under uncertainties in the data. Equipping finite discrete dynamical systems with the algebraic structure of a commutative semiring provides a suitable context for hypothesis verification on the dynamics of DDS. Indeed, hypotheses on the systems can be translated into polynomial equations over DDS. Solutions to these equations provide the validation of the initial hypothesis. Unfortunately, finding solutions to general equations over DDS is undecidable. In this article, the authors push the envelope further by proposing a practical approach for some decidable cases in a suitable configuration that they call Hypothesis Checking. They demonstrate that many decidable equations boil down to a "simpler" equation. However, the problem is not to decide whether the simple equation has a solution, but to enumerate all its solutions in order to verify the hypothesis on the real and undecidable systems. The authors evaluate their approach experimentally and show that it has good scalability properties.
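As an illustrative sketch of the algebraic setting (a simplification, not the paper's construction): a finite DDS is a map on a finite state set, the sum of two systems is their disjoint union, and the product is the direct product with synchronous componentwise dynamics. These operations, up to isomorphism, are what make polynomial equations over DDS meaningful.

```python
class DDS:
    """A finite discrete dynamical system: states 0..n-1 and a next-state map."""
    def __init__(self, next_map):
        self.f = list(next_map)

    def __add__(self, other):
        # Sum = disjoint union of the two systems (relabel other's states).
        n = len(self.f)
        return DDS(self.f + [x + n for x in other.f])

    def __mul__(self, other):
        # Product = direct product: both components evolve synchronously.
        # State (a, b) is encoded as a * |other| + b.
        n2 = len(other.f)
        g = []
        for a in range(len(self.f)):
            for b in range(n2):
                g.append(self.f[a] * n2 + other.f[b])
        return DDS(g)

    def orbit(self, s, steps):
        """Trajectory of state s under the dynamics."""
        out = [s]
        for _ in range(steps):
            s = self.f[s]
            out.append(s)
        return out
```

The sizes compose as expected (|A+B| = |A| + |B|, |A*B| = |A| * |B|), and a trajectory in a product system projects onto trajectories of its factors, which is the basic fact behind solving equations componentwise.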

A New Adaptation Lever in 360° Video Streaming

FOCUS OF THE MONTH: June 2019

Lucile Sassatelli, Marco Winckler, Thomas Fisichella, Ramon Aparicio-Pardo, Anne-Marie Dery-Pinna
29th Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV'19)

360 video, streaming, user attention, limited bandwidth

Despite exciting prospects, the development of 360° videos is persistently hindered by the difficulty of streaming them. To reduce the data rate, existing streaming strategies adapt the video rate to the user's Field of View (FoV), but the difficulty of predicting the FoV and the persistent lack of bandwidth are important obstacles to achieving the best experience. In this article, the authors exploit recent findings on human attention in VR to introduce a new degree of freedom for the streaming algorithm to leverage: Virtual Walls (VWs) translate the bandwidth limitation into a new type of impairment, preserving visual quality by subtly limiting the user's freedom in well-chosen periods. The authors carry out experiments with 18 users and confirm that, if the VW is positioned after the exploration phase in scenes with concentrated saliency, a substantial fraction of users seldom perceive it. With a double-stimulus approach, the authors show that, compared with a reference with no VW consuming the same amount of data, VWs can improve the quality of experience. Simulations of different FoV-based streaming adaptations with and without VWs show that VWs reduce stalls and increase quality in the FoV.

Multifaceted Automated Analyses for Variability-Intensive Embedded Systems

FOCUS OF THE MONTH: March 2019

Sami Lazreg, Maxime Cordy, Philippe Collet, Patrick Heymans, Sébastien Mosser. Multifaceted Automated Analyses for Variability-Intensive Embedded Systems. The International Conference on Software Engineering (ICSE) (ACM/IEEE). Montréal, QC, Canada, 25-31 May 2019.

embedded systems, formal model-driven framework, variability and configurable hardware platforms, functional and non-functional behaviour, verification

Embedded systems, such as those found in the automotive domain, must comply with stringent functional and non-functional requirements. To fulfil these requirements, engineers are confronted with a plethora of design alternatives at both the software and hardware level, out of which they must select the optimal solution with respect to possibly antagonistic quality attributes (e.g. manufacturing cost vs. execution speed). The authors propose a formal model-driven framework to assist engineers in this choice. It captures high-level specifications of the system alternatives in the form of dataflows with variability and configurable hardware platforms. A mapping algorithm then derives the design space, i.e. the set of compatible pairs of application and platform variants, and a variability-aware executable model which encodes the functional and non-functional behaviour of all viable system variants. Novel verification algorithms then pinpoint the optimal system variants efficiently. The benefits of the approach are evaluated through an industrial case study.
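A toy sketch of the mapping step, with entirely hypothetical application and platform data: enumerate (application variant, platform configuration) pairs, keep the compatible ones, annotate them with non-functional attributes, and pick an optimum under a latency requirement. The real framework works on dataflow models and uses proper verification algorithms rather than exhaustive enumeration.

```python
from itertools import product

# Hypothetical application variants: required memory (KB) and workload (cycles).
app_variants = [
    {"name": "app_lowres",  "mem": 64,  "cycles": 2_000_000},
    {"name": "app_highres", "mem": 256, "cycles": 8_000_000},
]

# Hypothetical platform configurations: available memory (KB), clock (Hz), unit cost.
platforms = [
    {"name": "mcu_small", "mem": 128, "clock": 48_000_000,  "cost": 2.0},
    {"name": "mcu_big",   "mem": 512, "clock": 120_000_000, "cost": 5.0},
]

def design_space(apps, plats):
    """Mapping step: keep only compatible (application, platform) pairs
    and annotate each with its non-functional attributes."""
    space = []
    for a, p in product(apps, plats):
        if a["mem"] <= p["mem"]:                # feasibility constraint
            latency = a["cycles"] / p["clock"]  # seconds per processing step
            space.append((a["name"], p["name"], p["cost"], latency))
    return space

def optimal(space, max_latency):
    """Cheapest system variant meeting the latency requirement."""
    feasible = [v for v in space if v[3] <= max_latency]
    return min(feasible, key=lambda v: v[2]) if feasible else None
```

On this data, the high-resolution variant does not fit on the small MCU, so the design space has three variants; under a 50 ms latency budget the cheapest admissible one is the low-resolution application on the small MCU.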

Modelling the evolution of sports performance; detection of situations

FOCUS OF THE MONTH: February 2019

Maria João Rendas, Asya Metelkina and Luc Pronzato

parametric modelling; non-parametric modelling; biological signals; AI; doping

The use of doping substances in sport is a major problem, which today extends well beyond high-level sports competitions, and whose effective detection must rely on automated screening tools allowing fast, low-cost identification of possibly doped athletes.
The world governing body IAAF (International Association of Athletics Federations, Monaco) and I3S have been collaborating since April 2017 on the mathematical modelling of the performances of elite athletes. This is the first time the IAAF has collaborated with a French research institution on the processing of its data in this context. The goal of this collaboration (I3S, SIS team: Dr Metelkina, Pr Rendas, Pr Pronzato; IAAF: Dr Bermon and Dr Garnier) is the probabilistic characterisation of the evolution of athletes' performance throughout their careers.
The ABP (Athlete's Biological Passport), based on haematological indicators obtained from samples collected under strict protocols, today enables reliable triage of the set of monitored athletes. However, the costs associated with the ABP limit the number of athletes monitored as well as the sampling frequency for each athlete. By using the probabilistic model to flag departures of an athlete's performances from evolutions considered normal, this collaboration should lead, on the one hand, to an early doping-detection system capable of continuously monitoring a large number of athletes and, on the other hand, to an optimisation of anti-doping test plans.
A comparison of two complementary modelling approaches (parametric versus non-parametric) for representing the performances of a set of male athletes in track events (400, 800, 1500, 5000 and 10000 metres) is being published.
The figures show two careers flagged as abnormal by the non-parametric method. The thick blue line is the athlete's trajectory, and the other curves are a set of "typical" trajectories. The figure below corresponds to a 400-metre record holder, with times well below those of the athlete population analysed, while the first figure (above) corresponds to an athlete whose 400-metre performances improve (atypically and questionably) at the end of his career.
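A hedged sketch of the non-parametric flagging idea (illustrative only, not the actual IAAF/I3S method): build a pointwise empirical band from a set of typical career trajectories, all aligned on a common age grid, and flag an athlete whose trajectory leaves that band.

```python
def typical_band(trajectories, alpha=0.05):
    """Pointwise empirical envelope of a set of 'typical' career
    trajectories (performance vs. age, aligned on a common grid)."""
    n = len(trajectories[0])
    lo, hi = [], []
    for t in range(n):
        vals = sorted(traj[t] for traj in trajectories)
        k = max(0, int(alpha * len(vals)))  # trim the alpha tails
        lo.append(vals[k])
        hi.append(vals[len(vals) - 1 - k])
    return lo, hi

def is_atypical(traj, lo, hi, max_excursions=0):
    """Flag a career whose trajectory leaves the typical band at more
    time points than tolerated."""
    excursions = sum(1 for x, a, b in zip(traj, lo, hi) if x < a or x > b)
    return excursions > max_excursions
```

A career staying near the population mean is not flagged, while one whose times suddenly drop well below the band late in the career (the suspicious pattern described above) is.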

A Logical Framework for Modelling Breast Cancer Progression

FOCUS OF THE MONTH: January 2019

Joëlle Despeyroux, Amy Felty, Pietro Lio and Carlos Olarte.
MLCSB 2018

Bioinformatics, Linear Logic, Model, Breast Cancer

Data streams for a personalised breast cancer programme could include collections of image data, tumour genome sequencing, likely at the single-cell level, and liquid biopsies (DNA and Circulating Tumour Cells (CTCs)). Although they are rich in information, the full power of these datasets will not be realised until we develop methods to model the cancer systems and conduct analyses that transect these streams. In addition to machine learning approaches, the authors believe that logical reasoning has the potential to help in clinical decision support systems for doctors. The authors develop a logical approach to modelling cancer progression, focusing on mutation analysis and CTCs, which includes the appearance of driver mutations, the transformation of normal cells to cancer cells in the breast, their circulation in the blood, and their path to the bone.
The long-term goal is to improve the prediction of survival of metastatic breast cancer patients. The authors model the behaviour of the CTCs as a transition system, and use Linear Logic (LL) to reason about the model. They consider several important properties of CTCs and prove them in LL. In addition, they formalise their results in the Coq proof assistant, thus providing formal proofs of their model. The authors believe that these results provide a promising proof of principle and can be generalised to other cancer types and groups of driver mutations.
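The transition-system side of such a model can be sketched as follows; the states and transition labels below are a hypothetical simplification for illustration, not the authors' actual Linear Logic encoding.

```python
# Hypothetical simplification: cell states and labelled transitions between them.
TRANSITIONS = {
    "normal_breast":  [("mutated_breast", "driver_mutation")],
    "mutated_breast": [("ctc_blood", "intravasation")],
    "ctc_blood":      [("bone_metastasis", "extravasation"),
                       ("apoptosis", "immune_clearance")],
}

def reachable(start):
    """All states reachable from `start` in the transition system (DFS)."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        for nxt, _label in TRANSITIONS.get(s, []):
            stack.append(nxt)
    return seen

def path_exists(start, goal):
    """Reachability query, the operational analogue of provability of a
    'state A can evolve to state B' property."""
    return goal in reachable(start)
```

On this toy model, a normal breast cell can reach the bone-metastasis state via mutation, intravasation and extravasation, while the reverse evolution is impossible, mirroring the kind of property one would state and prove in LL.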

Optimization of Network Infrastructures

FOCUS OF THE MONTH: December 2018

A little bit of green in networks, and other problems of resource placement and management.

Frédéric Giroire

algorithmics; graphs; optimization and probabilities; networks; SDN; NFV; SFC; energy efficiency

In this thesis (HDR, habilitation degree), Frédéric Giroire presents a set of solutions to optimize network infrastructures. Pushed by the new sensitivity of society, politics and companies to energy costs and global warming, he investigated the question of how to build green networks. He first studied practical scenarios to answer the question: how much energy could Internet Service Providers save by putting energy-efficient protocols into practice? This led him to study fundamental problems of graph theory.
At the core of these energy-efficient methods is a dynamic adaptation to changes in demand, which is impossible in legacy networks that are mostly manually operated. The emergence of two new paradigms, software-defined networking (SDN) and network function virtualization (NFV), allows a finer control of networks and thus bears the promise of putting energy-efficient solutions into practice. He therefore studied how to use SDN to implement dynamic routing.
His approach has been to use theoretical tools to solve problems raised by the introduction of new technologies or new applications. His tools come mainly from combinatorics, in particular graph theory, algorithmics, optimization and probability. When he was able to propose new solution methods, he then tried to evaluate their practical impact by numerical evaluation, simulation or experimentation with realistic scenarios.
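A toy version of the green-networking idea: route each demand along a path that prefers links that are already powered on, so that traffic concentrates on few links and the remaining ones can sleep. The graph, demands and greedy strategy below are illustrative, not a method from the thesis.

```python
import heapq

def route_min_active_links(nodes, edges, demands):
    """Greedy energy-aware routing: serve each demand on a path that
    prefers links already powered on (cost 0) over waking new ones
    (cost 1), then report the set of links left active."""
    adj = {v: [] for v in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    active = set()

    def cost(a, b):
        return 0 if frozenset((a, b)) in active else 1

    for src, dst in demands:
        # Dijkstra with 0/1 link weights
        dist, prev = {src: 0}, {}
        heap = [(0, src)]
        while heap:
            d, v = heapq.heappop(heap)
            if v == dst:
                break
            if d > dist.get(v, float("inf")):
                continue
            for w in adj[v]:
                nd = d + cost(v, w)
                if nd < dist.get(w, float("inf")):
                    dist[w] = nd
                    prev[w] = v
                    heapq.heappush(heap, (nd, w))
        v = dst
        while v != src:  # switch on the links of the chosen path
            active.add(frozenset((v, prev[v])))
            v = prev[v]
    return active
```

On a 4-node ring with demands (0, 2) and (1, 3), independent shortest-path routing could light up all four links, while the reuse-aware strategy serves both demands with only three, letting the fourth link sleep.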
More about this thesis: see the manuscript and the slides of Frédéric’s defense.

A Survey of Active Object Languages

FOCUS OF THE MONTH: November 2018

Frank De Boer, Vlad Serbanescu, Reiner Hähnle, Ludovic Henrio, Justine Rochas, et al.. A Survey of Active Object Languages. ACM Computing Surveys, Association for Computing Machinery, 2017, 50 (5), pp.1 - 39.

Concurrent programming structures; programming languages; active objects; actors; concurrency; distributed systems

To program parallel systems efficiently and easily, a wide range of programming models have been proposed, each with different choices concerning synchronization and communication between parallel entities. Among them, the actor model is based on loosely coupled parallel entities that communicate by means of asynchronous messages and mailboxes. Some actor languages provide a strong integration with object-oriented concepts; these are often called active object languages. This paper reviews four major actor and active object languages and compares them according to carefully chosen dimensions that cover central aspects of the programming paradigms and their implementation.
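As a minimal illustration of the active object idea surveyed in the paper: each object owns a mailbox and a single service thread, method requests are asynchronous and return futures, and the object's state is only ever touched by its own thread. This is a didactic Python sketch, not one of the surveyed languages.

```python
import queue
import threading
from concurrent.futures import Future

class ActiveObject:
    """Minimal active object: requests are posted to a private mailbox and
    served one at a time by the object's own thread, so the object's state
    is never accessed concurrently. Asynchronous calls return futures."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._serve, daemon=True)
        self._thread.start()

    def _serve(self):
        while True:
            fut, fn, args = self._mailbox.get()
            if fn is None:  # poison pill: stop the service thread
                break
            try:
                fut.set_result(fn(*args))
            except Exception as e:
                fut.set_exception(e)

    def send(self, fn, *args):
        fut = Future()
        self._mailbox.put((fut, fn, args))
        return fut  # caller may continue and claim the result later

    def stop(self):
        self._mailbox.put((None, None, None))

class Counter(ActiveObject):
    """Example active object: a counter whose state is thread-confined."""
    def __init__(self):
        super().__init__()
        self._n = 0

    def incr(self):
        return self.send(self._do_incr)  # asynchronous method call

    def _do_incr(self):
        self._n += 1
        return self._n
```

Posting a hundred `incr` requests returns immediately with a hundred futures; because the mailbox serialises execution, the results resolve to 1 through 100 in order, with no locks in user code.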