Clement Jonquet's Publications

  Bib file of all my publications. This page does not work fully in Chrome.

    Amine Abdaoui, Andon Tchechmedjiev, William Digan, Sandra Bringay & Clement Jonquet. French ConText: Détecter la négation, la temporalité et le sujet dans les textes cliniques Français, In 4ème Symposium sur l'Ingénierie de l'Information Médicale, SIIM'17. Toulouse, France, November 2017. pp. 10.

    french

    Abstract: Detecting the context of clinical conditions mentioned in a patient record is an important step in the automatic processing of medical texts. This paper describes the adaptation to French of the English ConText system, which detects whether a clinical condition identified in a text is affirmed or negated, recent or historical, and whether or not it concerns the patient. We evaluated our system on two types of medical texts: patient records and death certificates. The results obtained are comparable to those of the English and Swedish versions of ConText and exceed those previously obtained for French (negation only) on the same reference dataset. In addition, the French ConText system has been integrated into the SIFR Annotator (http://bioportal.lirmm.fr/annotator), a web service for the semantic annotation of French biomedical data.
    BibTeX:
    		@inproceedings{Abd17-SIIM,
    		  author = {Amine Abdaoui and Andon Tchechmedjiev and William Digan and Sandra Bringay and Clement Jonquet },
    		  title = {French ConText: Détecter la négation, la temporalité et le sujet dans les textes cliniques Français},
    		  booktitle = {4ème Symposium sur l'Ingénierie de l'Information Médicale, SIIM'17},
    		  year = {2017},
    		  pages = {10},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article_SIIM2017_FrenchContext.pdf}
    		}
    		
    Amina Annane, Zohra Bellahsene, Faiçal Azouaou & Clement Jonquet. Building an effective and efficient background knowledge resource to enhance ontology matching, Web Semantics. August 2018. Vol. 51 pp. 51-68. Elsevier.

    journal

    Abstract: Ontology matching is critical for data integration and interoperability. Original ontology matching approaches relied solely on the content of the ontologies to align. However, these approaches are less effective when equivalent concepts have dissimilar labels and are structured with different modeling views. To overcome this semantic heterogeneity, the community has turned to the use of external background knowledge resources. Several methods have been proposed to select ontologies, other than the ones to align, as background knowledge to enhance a given ontology-matching task. However, these methods return a set of complete ontologies, while, in most cases, only fragments of the returned ontologies are effective for discovering new mappings. In this article, we propose an approach to select and build a background knowledge resource with just the right concepts chosen from a set of ontologies, which improves efficiency without loss of effectiveness. The use of background knowledge in ontology matching is a double-edged sword: while it may increase recall (i.e., retrieve more correct mappings), it may lower precision (i.e., produce more incorrect mappings). Therefore, we propose two methods to select the most relevant mappings from the candidate ones: (1) a selection based on a set of rules and (2) a selection based on supervised machine learning. Our experiments, conducted on two Ontology Alignment Evaluation Initiative (OAEI) datasets, confirm the effectiveness and efficiency of our approach. Moreover, the F-measure values obtained with our approach are very competitive with those of the state-of-the-art matchers exploiting background knowledge resources.
    BibTeX:
    		@article{Ann17-JWS,
    		  author = {Amina Annane and Zohra Bellahsene and Faiçal Azouaou and Clement Jonquet},
    		  title = {Building an effective and efficient background knowledge resource to enhance ontology matching},
    		  journal = {Web Semantics},
    		  publisher = {Elsevier},
    		  year = {2018},
    		  volume = {51},
    		  pages = {51-68},
    		  url = {http://dx.doi.org/10.1016/j.websem.2018.04.001},
    		  doi = {https://doi.org/10.1016/j.websem.2018.04.001}
    		}
    		
    Amina Annane, Zohra Bellahsene, Faical Azouaou & Clement Jonquet. YAM-BIO -- Results for OAEI 2017, LIRMM, University of Montpellier, Montpellier, France, November 2017. System paper

    report

    Abstract: The YAM-BIO ontology alignment system is an extension of YAM++ but dedicated to aligning biomedical ontologies. YAM++ has successfully participated in several editions of the Ontology Alignment Evaluation Initiative (OAEI) between 2011 and 2013, but this is the first participation of YAM-BIO. The biomedical extension includes a new component that uses existing mappings between multiple biomedical ontologies as background knowledge. In this short system paper, we present YAM-BIO's workflow and the results obtained in the Anatomy and Large Biomedical Ontologies tracks of the OAEI 2017 campaign.
    BibTeX:
    		@techreport{Ann17-OAEI,
    		  author = {Amina Annane and Zohra Bellahsene and Faical Azouaou and Clement Jonquet},
    		  title = {YAM-BIO -- Results for OAEI 2017},
    		  school = {LIRMM, University of Montpellier},
    		  year = {2017},
    		  note = {Ontology Alignment Evaluation Initiative 2017 Campaign},
    		  url = {http://disi.unitn.it/~pavel/om2017/papers/oaei17_paper14.pdf}
    		}
    		
    Amina Annane, Zohra Bellahsene, Faical Azouaou & Clement Jonquet. Selection and Combination of Heterogeneous BK to Enhance Biomedical Ontology Matching, In 20th International Conference on Knowledge Engineering and Knowledge Management, EKAW'16. Bologna, Italy, November 2016. Lecture Notes in Artificial Intelligence, Vol. 10024 pp. 19-33. Springer.

    conference

    Abstract: This paper presents a novel background knowledge approach which selects and combines existing mappings from a given biomedical ontology repository to improve ontology alignment. Current background knowledge approaches usually select, either manually or automatically, a limited number of ontologies and use them as a whole as background knowledge. In our approach, by contrast, we pick up only the relevant concepts and the relevant existing mappings linking these concepts together, in a specific and customized background knowledge graph. Paths within this graph then help to discover new mappings. We have implemented and evaluated our approach using the content of the NCBO BioPortal repository and the Anatomy benchmark from the Ontology Alignment Evaluation Initiative. We used the mapping gain measure to assess how much our final background knowledge graph improves the results of state-of-the-art alignment systems. Furthermore, the evaluation shows that our approach produces a high-quality alignment and discovers mappings that have not been found by state-of-the-art systems.
    BibTeX:
    		@inproceedings{Ann16-EKAW,
    		  author = {Amina Annane and Zohra Bellahsene and Faical Azouaou and Clement Jonquet},
    		  title = {Selection and Combination of Heterogeneous BK to Enhance Biomedical Ontology Matching},
    		  booktitle = {20th International Conference on Knowledge Engineering and Knowledge Management, EKAW'16},
    		  publisher = {Springer},
    		  year = {2016},
    		  volume = {10024},
    		  pages = {19-33},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article_EKAW16_Mappings_Annane.pdf},
    		  doi = {https://doi.org/10.1007/978-3-319-49004-5_2}
    		}
    		
    Amina Annane, Vincent Emonet, Faical Azouaou & Clement Jonquet. Multilingual Mapping Reconciliation between English-French Biomedical Ontologies, In 6th International Conference on Web Intelligence, Mining and Semantics, WIMS'16. Nimes, France, June 2016. (13), pp. 12. ACM.

    conference

    Abstract: Even if multilingual ontologies are now more common, for historical reasons, in the biomedical domain, many ontologies or terminologies have been translated from one natural language to another, resulting in two potentially aligned ontologies, each with its own specificities (e.g., format, developers, and versions). Most often, there is no formal representation of the translation links between translated ontologies and original ones, and those mappings are not formally available as linked data. However, these mappings are very important for the interoperability and the integration of multilingual biomedical data. In this paper, we propose an approach to represent translation mappings between ontologies based on the NCBO BioPortal format. We have reconciled more than 228K mappings between ten English ontologies hosted on NCBO BioPortal and their French translations. Then, we have stored both the translated ontologies and the mappings on a French customized version of the platform, called the SIFR BioPortal, making the whole available in RDF. Reconciling the mappings turned out to be more complex than expected because the translations are rarely exactly the same as the original ontologies, as discussed in this paper.
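    To give a concrete flavour of what such reconciled translation mappings look like once exposed as RDF, here is a minimal, illustrative Python/rdflib sketch. It is not the actual SIFR BioPortal mapping model; the concept URIs are hypothetical placeholders, and only a simple SKOS exactMatch link between an English concept and its French counterpart is shown.

        # Minimal sketch, assuming rdflib is installed; URIs below are invented.
        from rdflib import Graph, URIRef
        from rdflib.namespace import SKOS

        g = Graph()

        # Hypothetical concept URIs: an English ontology class and its French translation.
        en_concept = URIRef("http://purl.bioontology.org/ontology/EXAMPLE/C0001")
        fr_concept = URIRef("http://purl.lirmm.fr/ontology/EXAMPLE-FR/C0001")

        # Declare the two concepts as interlingual "same meaning" matches.
        g.add((en_concept, SKOS.exactMatch, fr_concept))

        # Serialize the mapping as Turtle (linked data friendly).
        print(g.serialize(format="turtle"))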
    BibTeX:
    		@inproceedings{Ann16-WIMS,
    		  author = {Amina Annane and Vincent Emonet and Faical Azouaou and Clement Jonquet},
    		  title = {Multilingual Mapping Reconciliation between English-French Biomedical Ontologies},
    		  booktitle = {6th International Conference on Web Intelligence, Mining and Semantics, WIMS'16},
    		  publisher = {ACM},
    		  year = {2016},
    		  number = {13},
    		  pages = {12},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article_WIMS2016_Reconciliation_Annane.pdf},
    		  doi = {https://doi.org/10.1145/2912845.2912847}
    		}
    		
    Amina Annane, Vincent Emonet, Faical Azouaou & Clement Jonquet. Réconciliation d'alignements multilingues dans BioPortal, In 27èmes Journées Francophones d'Ingénierie des Connaissances, IC'16. Montpellier, France, June 2016. (18), pp. 12.

    french

    Abstract: Nowadays, ontologies are often developed in a multilingual fashion. However, for historical reasons, in the biomedical domain, many ontologies or terminologies have been translated from one language to another or are explicitly maintained in each language. This produces two potentially aligned ontologies, each with its own specificities (format, developers, versions, etc.). Often, there is no formal representation of the translation links between the translated ontologies and the original ones, and these links are not accessible as linked data. However, such links are very important for the interoperability and integration of multilingual biomedical data. In this article, we present the results of a study reconciling translation links between ontologies in the form of multilingual alignments. We reconciled, and represented with semantic web vocabularies, more than 228K mappings between ten English ontologies hosted on the NCBO BioPortal and their French translations. We then stored both the ontologies and the mappings on a French version of the platform, called the SIFR BioPortal, to make everything available in RDF (linked data). Reconciling the alignments turned out to be more complex than one might think, because the translations are rarely exact copies of the originals, as we discuss.
    BibTeX:
    		@inproceedings{Ann16-IC,
    		  author = {Amina Annane and Vincent Emonet and Faical Azouaou and Clement Jonquet},
    		  title = {Réconciliation d'alignements multilingues dans BioPortal},
    		  booktitle = {27èmes Journées Francophones d'Ingénierie des Connaissances, IC'16},
    		  year = {2016},
    		  number = {18},
    		  pages = {12},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article_IC2016_Reconciliation.pdf}
    		}
    		
    Emmanuel Castanier, Clement Jonquet, Soumia Melzi, Pierre Larmande, Manuel Ruiz & Patrick Valduriez. Semantic Annotation Workflow using Bio-Ontologies, In Workshop on Crop Ontology and Phenotyping Data Interoperability. Montpellier, France, April 2014.

    poster-demo

    BibTeX:
    		@inproceedings{Cas14-CO-PDI,
    		  author = {Emmanuel Castanier and Clement Jonquet and Soumia Melzi and Pierre Larmande and Manuel Ruiz and Patrick Valduriez},
    		  title = {Semantic Annotation Workflow using Bio-Ontologies},
    		  booktitle = {Workshop on Crop Ontology and Phenotyping Data Interoperability},
    		  year = {2014},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Poster_CO-PDI_IBC_final.pdf}
    		}
    		
    Stefano A. Cerri, Monica Crubézy, Pascal Dugénie, Clement Jonquet & Philippe Lemoisson. The Grid Shared Desktop for CSCL, In eChallenges 2006 Conference. Barcelona, Spain, October 2006. Information and Communication Technologies and the Knowledge Economy, Vol. 3 pp. 1493-1499. IOS Press.

    conference

    Abstract: The Grid Shared Desktop (GSD) is a collaborative environment that provides a multidimensional humans-to-machine-to-humans interface by means of multiple cleverly intricated desktops. The GSD is a platform-independent solution that benefits from the intrinsic advantages of Grid technology such as scalability and security. In order to verify that our GSD solution meets CSCL requirements, we have conducted experiments in the context of the ELeGI project. As part of the project use cases, the scenario described here aims at using the GSD for the construction of a shared ontology for organic chemistry. This article summarises results from the first series of experiments aiming to evaluate subjective usability aspects of the GSD in a context of scientific collaboration. We assess the GSD prototype in order to extend its functionalities for other business perspectives.
    BibTeX:
    		@inproceedings{Dug06-eChallenges06,
    		  author = {Stefano A. Cerri and Monica Crubézy and Pascal Dugénie and Clement Jonquet and Philippe Lemoisson},
    		  title = {The Grid Shared Desktop for CSCL},
    		  booktitle = {eChallenges 2006 Conference},
    		  publisher = {IOS Press},
    		  year = {2006},
    		  volume = {3},
    		  pages = {1493-1499},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article-eChallenges2006-GSD_for_CSCL.pdf}
    		}
    		
    Stefano A. Cerri, Marc Eisenstadt & Clement Jonquet. Dynamic Learning Agents and Enhanced Presence on the Grid, In 3rd International LeGE-WG Workshop: Grid Infrastructure to Support Future Technology Enhanced Learning. Berlin, Germany, December 2003. Electronic Workshops in Computing.

    workshop

    Abstract: Human Learning on the Grid will be based on the synergies between advanced software and Human agents. These synergies will be possible to the extent that conversational protocols among Agents, human and/or artificial ones, can be adapted to the ambitious goal of dynamically generating services for human learning. In the paper we highlight how conversations may procure learning both in human and in artificial Agents. The STROBE model for communicating Agents and its current evolutions show how an artificial Agent may "learn" dynamically (at run time) at the Data, Control and Interpreter level, in particular exemplifying the "learning by being told" modality. The enhanced telepresence research, exemplified by Buddyspace, in parallel, puts human Agents in a rich communicative context where learning effects may occur also as a "serendipitous" side effect of communication. The integration of the two streams of research will be the result of a workpackage within the ELeGI EU Integrated Project, currently under negotiation.
    BibTeX:
    		@inproceedings{Jon03-LeGE-WG03,
    		  author = {Stefano A. Cerri and Marc Eisenstadt and Clement Jonquet},
    		  title = {Dynamic Learning Agents and Enhanced Presence on the Grid},
    		  booktitle = {3rd International LeGE-WG Workshop: Grid Infrastructure to Support Future Technology Enhanced Learning},
    		  publisher = {Electronic Workshops in Computing},
    		  year = {2003},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article3rd-LeGE-WG-Cerri-Eisenstadt-Jonquet.pdf}
    		}
    		
    Madalina Croitoru, Stéphane Bazan, Stefano A. Cerri, Hugh Davis, Raffaella Folgieri, Clement Jonquet, François Scharffe, Steffen Staab, Michalis Vafopoulos, Thanassis Tiropanis & Su White. Negotiating the Web Science Curriculum through Shared Educational Artefacts, In 3rd International Conference on Web Science, ACM WebSci'11. Koblenz, Germany, June 2011. pp. 14-17.

    conference

    Abstract: The far-reaching impact of the Web on society is widely recognised. The interdisciplinary study of this impact has crystallised in the field of study known as Web Science. However, defining an agreed, shared understanding of what constitutes Web Science requires complex negotiation and translations of understandings across component disciplines, national cultures and educational traditions. Some individual institutions have already established particular curricula, and discussions in the Web Science Curriculum Workshop series have marked the territory to some extent. This paper reports on a process being adopted across a consortium of partners to systematically create a shared understanding of what constitutes Web Science. It records and critiques the processes instantiated to agree a common curriculum, and presents a framework for future discussion and development.
    BibTeX:
    		@inproceedings{Croi11-WebSci11,
    		  author = {Madalina Croitoru and Stéphane Bazan and Stefano A. Cerri and Hugh Davis and Raffaella Folgieri and Clement Jonquet and François Scharffe and Steffen Staab and Michalis Vafopoulos and Thanassis Tiropanis and Su White},
    		  title = {Negotiating the Web Science Curriculum through Shared Educational Artefacts},
    		  booktitle = {3rd International Conference on Web Science, ACM WebSci'11},
    		  year = {2011},
    		  pages = {14-17},
    		  note = {Honorable mention award},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article-WebSci11_CroitoruEtAl.pdf}
    		}
    		
    Pascal Dugénie, Clement Jonquet & Stefano A. Cerri. The Principle of Immanence in GRID-Multiagent Integrated Systems, In 4th International Workshop On Agents and Web Services Merging in Distributed Environments, AWeSOMe'08, OTM 2008 Workshops. Monterrey, Mexico, November 2008. Lecture Notes in Computer Science, Vol. 5333 pp. 98-107. Springer.

    workshop

    Abstract: Immanence reflects the principle of emergence of something new from inside a complex system (by opposition to transcendence). For example, immanence occurs when social organization emerges from the internal behaviour of a complex system. In this position paper, we defend the vision that the integration of the GRID and Multi-Agent System (MAS) models enables immanence to occur in the corresponding integrated systems and allows self-organization. On the one hand, GRID is known to be an extraordinary infrastructure for coordinating distributed computing resources and Virtual Organizations (VOs). On the other hand, MAS research focuses on the complex behaviour of systems of agents. Although several existing VO models specify how to manage resources, services, security policies and communities of users, none of them has considered tackling the internal self-organization aspect of the overall complex system. We briefly present AGORA, a virtual organization model integrated in an experimental collaborative environment platform. AGORA's architecture adopts a novel design approach, modelled as a dynamic system in which the results of agent interactions are fed back into the system structure.
    BibTeX:
    		@inproceedings{Dug08-AWeSOMe08,
    		  author = {Pascal Dugénie and Clement Jonquet and Stefano A. Cerri},
    		  title = {The Principle of Immanence in GRID-Multiagent Integrated Systems},
    		  booktitle = {4th International Workshop On Agents and Web Services Merging in Distributed Environments, AWeSOMe'08, OTM 2008 Workshops},
    		  publisher = {Springer},
    		  year = {2008},
    		  volume = {5333},
    		  pages = {98-107},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article-AWESOME08-Dugenie-Jonquet-Cerri.pdf}
    		}
    		
    Pascal Dugénie, Philippe Lemoisson, Clement Jonquet & Monica Crubézy. The Grid Shared Desktop: a bootstrapping environment for collaboration, Advanced Technology for Learning, Special issue on Collaborative Learning. 2006. Vol. 3 (4), pp. 241-249.

    journal

    Abstract: The paradigm shift from an information-sharing infrastructure (i.e., the Web) to a resource-sharing infrastructure (i.e., the Grid) has opened new perspectives for CSCL (Computer Supported Collaborative Learning). With Grid, it is now possible to envisage a scalable infrastructure that offers live collaborative environments in a secure manner. The Grid Shared Desktop (GSD) is one such collaborative environment that inherits from the desktop as a natural human-machine interface to become a multidimensional humans-to-humans interface via several dedicated desktops. The success of such environments depends upon several considerations that we develop here. We have not so far identified any equivalent solution that can fully suit CSCL requirements. In fact, all solutions are either ad-hoc system-oriented or they are not scalable as they cannot manage resources efficiently. In order to satisfy CSCL needs, we propose a platform-independent solution that benefits from the intrinsic advantages of the Grid technologies. This goal is greatly enhanced thanks to the ability of Grid to support stateful, dynamic services. In this paper, we also tackle the problem of bootstrapping and supporting a collaborative environment. As we target communities of non-computer-literate people, we investigate easy-to-use and flexible solutions. Finally, we present our latest experimental case study with the GSD in the context of collaborative construction of a shared ontology.
    BibTeX:
    		@article{Dug06-ATL06,
    		  author = {Pascal Dugénie and Philippe Lemoisson and Clement Jonquet and Monica Crubézy},
    		  title = {The Grid Shared Desktop: a bootstrapping environment for collaboration},
    		  journal = {Advanced Technology for Learning, Special issue on Collaborative Learning},
    		  year = {2006},
    		  volume = {3},
    		  number = {4},
    		  pages = {241-249},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/RR-LIRMM-06013-okprotege-march2006.pdf},
    		  doi = {https://doi.org/10.2316/Journal.208.2006.4.208-0895}
    		}
    		
    Biswanath Dutta, Anne Toulet, Vincent Emonet & Clement Jonquet. New Generation Metadata vocabulary for Ontology Description and Publication, In 11th Metadata and Semantics Research Conference, MTSR'17. Tallinn, Estonia, November 2017. Communications in Computer and Information Science, Vol. 755 pp. 173-185. Springer.

    conference

    Abstract: Scientific communities are using an increasing number of ontologies and vocabularies. Currently, the problem lies in the difficulty to find and select them for a specific knowledge engineering task. Thus, there is a real need to precisely describe these ontologies with adapted metadata, but none of the existing metadata vocabularies can completely meet this need if taken independently. In this paper, we present a new version of the Metadata vocabulary for Ontology Description and publication, referred to as MOD 1.2, which succeeds previous work published in 2015. It has been designed by reviewing in total 23 existing standard metadata vocabularies (e.g., Dublin Core, OMV, DCAT, VoID) and selecting relevant properties for describing ontologies. Then, we studied metadata usage analytics within ontologies and ontology repositories. MOD 1.2 proposes in total 88 properties to serve both as (i) a vocabulary to be used by ontology developers to annotate and describe their ontologies, and (ii) an explicit OWL vocabulary to be used by ontology libraries to offer semantic descriptions of ontologies as linked data. The experimental results show that MOD 1.2 supports a new set of queries for ontology libraries. Because MOD is still at an early stage, we also pitch the plan for a collaborative design and adoption of future versions within an international working group.
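    As an illustration of what "offering semantic descriptions of ontologies as linked data" can look like in practice, here is a minimal rdflib sketch using only standard Dublin Core and OWL properties; MOD-specific properties are deliberately not shown, and the ontology URI and values are invented for the example.

        # Minimal sketch, assuming rdflib is installed; the described ontology is fictional.
        from rdflib import Graph, URIRef, Literal
        from rdflib.namespace import DCTERMS, OWL, RDF

        g = Graph()
        onto = URIRef("http://example.org/onto/agro")  # hypothetical ontology URI

        # Type the resource as an ontology and attach basic descriptive metadata.
        g.add((onto, RDF.type, OWL.Ontology))
        g.add((onto, DCTERMS.title, Literal("Agronomy Example Ontology", lang="en")))
        g.add((onto, DCTERMS.creator, Literal("Example Author")))
        g.add((onto, OWL.versionInfo, Literal("1.2")))

        # Such descriptions can then be queried or published as linked data.
        print(g.serialize(format="turtle"))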
    BibTeX:
    		@inproceedings{Bis17-MTSR,
    		  author = {Biswanath Dutta and Anne Toulet and Vincent Emonet and Clement Jonquet},
    		  title = {New Generation Metadata vocabulary for Ontology Description and Publication},
    		  booktitle = {11th Metadata and Semantics Research Conference, MTSR'17},
    		  publisher = {Springer},
    		  year = {2017},
    		  volume = {755},
    		  pages = {173-185},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article_MTSR-2017_MOD1.2.pdf},
    		  doi = {https://doi.org/10.1007/978-3-319-70863-8_17}
    		}
    		
    Frédéric Duvert, Clement Jonquet, Pascal Dugénie & Stefano A. Cerri. Agent-Grid Integration Ontology, In 2nd International Workshop on Agents, Web Services and Ontologies Merging, AWeSOMe'06. Montpellier, France, November 2006. Lecture Notes in Computer Science, Vol. 4277 pp. 136-146. Springer.

    workshop

    Abstract: The integration of GRID and MAS (Multi-Agent Systems) is an active research topic. We have recently proposed the Agent-Grid Integration Language to describe a service-based integration of the GRID and MAS models. However, the complexity of the mutual integration aspects leads us to define a rigorous way to formalize the key concepts, their relations and the integration rules by means of an ontology. With this ontology, we can describe the elements, and their composition, that occur in various service exchange scenarios with agents on the Grid. The ontology could be used both to model the behaviour of GRID-MAS integrated systems and to check the consistency of these systems and their instances. A concrete scenario is illustrated.
    BibTeX:
    		@inproceedings{Duv06-AWeSOMe06,
    		  author = {Frédéric Duvert and Clement Jonquet and Pascal Dugénie and Stefano A. Cerri},
    		  title = {Agent-Grid Integration Ontology},
    		  booktitle = {2nd International Workshop on Agents, Web Services and Ontologies Merging, AWeSOMe'06},
    		  publisher = {Springer},
    		  year = {2006},
    		  volume = {4277},
    		  pages = {136-146},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article-AWeSOMe06-Duvert-Jonquet-Dugenie-Cerri.pdf}
    		}
    		
    Solène Eholié, Mike Donald Tapi Nzali, Sandra Bringay & Clement Jonquet. MuEVo, un vocabulaire multi-expertise (patient/médecin) dédié au cancer du sein, In 2ème Atelier sur l'Intelligence Artificielle et la Santé. Montpellier, France, June 2016. pp. 7.

    french

    Abstract: There is a notable lexical and semantic gap between the vocabulary of health professionals and that of patients. To our knowledge, there is no formalized resource for French linking these two levels of vocabulary. In this work, we present a SKOS formalization of a vocabulary connecting these two levels of expertise for the topic of breast cancer, as well as a method for aligning the resulting terminology, MuEVo, with reference biomedical terminologies, namely MeSH, SNOMED and MedDRA.
    BibTeX:
    		@inproceedings{Eho16-IASante,
    		  author = {Solène Eholié and Mike Donald Tapi Nzali and Sandra Bringay and Clement Jonquet},
    		  title = {MuEVo, un vocabulaire multi-expertise (patient/médecin) dédié au cancer du sein},
    		  booktitle = {2ème Atelier sur l'Intelligence Artificielle et la Santé},
    		  year = {2016},
    		  pages = {7},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article_IA-Sante-2016_MuEVo.pdf}
    		}
    		
    Solène Eholié, Mike-Donald Tapi-Nzali, Sandra Bringay & Clement Jonquet. MuEVo, a breast cancer Consumer Health Vocabulary built out of web forums, In 9th International Semantic Web Applications and Tools for Life Sciences, SWAT4LS'16. Amsterdam, The Netherlands, December 2016. pp. 10.

    conference

    Abstract: Semantically analyzing patient-generated text from a biomedical perspective is challenging because of the vocabulary gap between patients and health professionals. Medical expertise and vocabulary are well formalized in standard terminologies and ontologies, which enables semantic analysis of expert-generated text; however, resources that formalize the vocabulary of health consumers (patients and their families, laypersons in general) remain scarce. The situation is even worse if one is interested in a language other than English. In previous studies, we attempted to produce a preliminary French Consumer Health Vocabulary (CHV) by mining the language used within online public forums and Facebook groups about breast cancer. In this work, we show our effort to concretely align the vocabulary produced to standard terminologies and to represent its content (terms and mappings) using semantic web languages such as RDF and SKOS. We used a sample of 173 relations built around 64 expert concepts which have been automatically (89) or manually (11) aligned to standard biomedical terminologies, in our case MeSH, MedDRA and SNOMED Int. The resulting vocabulary, called MuEVo (Multi-Expertise Vocabulary), and the mappings are publicly available in the SIFR BioPortal French biomedical ontology repository.
    BibTeX:
    		@inproceedings{Eho16-SWAT4LS,
    		  author = {Solène Eholié and Mike-Donald Tapi-Nzali and Sandra Bringay and Clement Jonquet},
    		  title = {MuEVo, a breast cancer Consumer Health Vocabulary built out of web forums},
    		  booktitle = {9th International Semantic Web Applications and Tools for Life Sciences, SWAT4LS'16},
    		  year = {2016},
    		  pages = {10},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article-SWAT4LS-2016_MUEVO.pdf}
    		}
    		
    Amir Ghazvinian, Natasha F. Noy, Clement Jonquet, Nigam H. Shah & Mark A. Musen. What Four Million Mappings Can Tell You about Two Hundred Ontologies, In 8th International Semantic Web Conference, ISWC'09. Washington DC, USA, November 2009. Lecture Notes in Computer Science, Vol. 5823 pp. 229-242. Springer.

    conference

    Abstract: The field of biomedicine has embraced the Semantic Web probably more than any other field. As a result, there is a large number of biomedical ontologies covering overlapping areas of the field. We have developed BioPortal — an open community-based repository of biomedical ontologies. We analyzed ontologies and terminologies in BioPortal and the Unified Medical Language System (UMLS), creating more than 4 million mappings between concepts in these ontologies and terminologies based on the lexical similarity of concept names and synonyms. We then analyzed the mappings and what they tell us about the ontologies themselves, the structure of the ontology repository, and the ways in which the mappings can help in the process of ontology design and evaluation. For example, we can use the mappings to guide users who are new to a field to the most pertinent ontologies in that field, to identify areas of the domain that are not covered sufficiently by the ontologies in the repository, and to identify which ontologies will serve well as background knowledge in domain-specific tools. While we used a specific (but large) ontology repository for the study, we believe that the lessons we learned about the value of a large-scale set of mappings to ontology users and developers are general and apply in many other domains.
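    The mapping creation described above is lexical: concepts are matched when their normalized preferred names or synonyms coincide. The Python sketch below only illustrates that general idea on toy data; it is not the BioPortal/UMLS pipeline used in the paper, and all identifiers and labels are invented.

        # Minimal sketch of lexical mapping between two ontologies' label sets.
        import re
        from collections import defaultdict

        def normalize(label):
            """Lowercase, strip punctuation and collapse whitespace."""
            return " ".join(re.sub(r"[^a-z0-9 ]", " ", label.lower()).split())

        def lexical_mappings(onto_a, onto_b):
            """Return (concept_a, concept_b) pairs that share a normalized label.

            Each ontology is given as {concept_id: [preferred name, synonym, ...]}.
            """
            index_b = defaultdict(set)
            for concept_b, labels in onto_b.items():
                for label in labels:
                    index_b[normalize(label)].add(concept_b)

            mappings = set()
            for concept_a, labels in onto_a.items():
                for label in labels:
                    for concept_b in index_b.get(normalize(label), ()):
                        mappings.add((concept_a, concept_b))
            return sorted(mappings)

        # Toy ontologies with invented identifiers and labels.
        onto_a = {"A:0001": ["Myocardial infarction", "Heart attack"]}
        onto_b = {"B:0042": ["heart attack"], "B:0043": ["Stroke"]}
        print(lexical_mappings(onto_a, onto_b))  # [('A:0001', 'B:0042')]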
    BibTeX:
    		@inproceedings{Gha09-ISWC09,
    		  author = {Amir Ghazvinian and Natasha F. Noy and Clement Jonquet and Nigam H. Shah and Mark A. Musen},
    		  title = {What Four Million Mappings Can Tell You about Two Hundred Ontologies},
    		  booktitle = {8th International Semantic Web Conference, ISWC'09},
    		  publisher = {Springer},
    		  year = {2009},
    		  volume = {5823},
    		  pages = {229-242},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article-ISWC2009-Ghazvinian_paper380.pdf}
    		}
    		
    Julien Grosjean, Lina F. Soualmia, Khedidja Bouarech, Clement Jonquet & Stefan J. Darmoni. An Approach to Compare Bio-Ontologies Portals, In 26th International Conference of the European Federation for Medical Informatics, MIE'14. Istanbul, Turkey, September 2014. Studies in Health Technology and Informatics, Vol. 205 pp. 1008-1012. IOS Press.

    conference

    Abstract: Background: the main biomedical information retrieval systems are based on controlled vocabularies and most specifically on terminologies or ontologies (T/O). These classification structures allow indexing, coding, and annotating different kinds of documents. Many T/O have been created for different purposes, and finding specific concepts in the multitude of existing nomenclatures has become a problem. The NCBO (National Center for Biomedical Ontologies) BioPortal and the CISMeF (Catalogue et Index des Sites Médicaux de langue Française) HeTOP projects have been developed to tackle this issue. Objective: the present work consists in comparing both portals. Methods: we propose a set of criteria to compare bio-ontology portals in terms of goals, features, technologies and usability. Results: BioPortal and HeTOP have been compared based on the given criteria. While both portals are designed to store and make T/O available to the community and share many basic features, they are also very different, mainly because of their basic purposes. Conclusion: thanks to the comparison criteria, we can assume that a merge between BioPortal and HeTOP is possible in terms of functionalities. The main difficulties will be about merging the data repositories and applying different policies on T/O content.
    BibTeX:
    		@inproceedings{Gro14-MIE,
    		  author = {Julien Grosjean and Lina F. Soualmia and Khedidja Bouarech and Clement Jonquet and Stefan J. Darmoni},
    		  title = {An Approach to Compare Bio-Ontologies Portals},
    		  booktitle = {26th International Conference of the European Federation for Medical Informatics, MIE'14},
    		  publisher = {IOS Press},
    		  year = {2014},
    		  volume = {205},
    		  pages = {1008-1012},
    		  url = {http://www.chu-rouen.fr/tibs/wp-content/uploads/pdf/Grosjean2014a.pdf},
    		  doi = {https://doi.org/10.3233/978-1-61499-432-9-1008}
    		}
    		
    Julien Grosjean, Lina F. Soualmia, Khedidja Bouarech, Clement Jonquet & Stefan J. Darmoni. Comparing BioPortal and HeTOP: towards a unique biomedical ontology portal?, In 2nd International Work-Conference on Bioinformatics and Biomedical Engineering, IWBBIO'14. Granada, Spain, April 2014. pp. 11.

    workshop

    Abstract: The volume of data in the biomedical field constantly grows. The vast majority of information retrieval systems are based on controlled vocabularies and most specifically on terminologies or ontologies (T/O). These classification structures allow indexing, coding, and annotating various types of documents. In Health, many T/O have been created for different purposes, and it has become a problem to find specific concepts in the multitude of nomenclatures. The NCBO (National Center for Biomedical Ontologies, Stanford University) BioPortal project and the CISMeF (Catalogue et Index des Sites Médicaux de langue Française, Rouen University Hospital) HeTOP portals have been developed to tackle this issue. While both portals are designed to store and make T/O available to the community, they are also very different, mainly because of their basic purposes. The present work consists in comparing both portals and in answering the following question: is it possible to merge BioPortal and HeTOP into one unique solution to manage T/O?
    BibTeX:
    		@inproceedings{Gro14-IWBBIO,
    		  author = {Julien Grosjean and Lina F. Soualmia and Khedidja Bouarech and Clement Jonquet and Stefan J. Darmoni},
    		  title = {Comparing BioPortal and HeTOP: towards a unique biomedical ontology portal?},
    		  booktitle = {2nd International Work-Conference on Bioinformatics and Biomedical Engineering, IWBBIO'14},
    		  year = {2014},
    		  pages = {11},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article_IWBBIO14_Comparing_BioPortal-HeTOP.pdf}
    		}
    		
    Lisa Harper, Jacqueline Campbell, Ethalinda K. S. Cannon, Sook Jung, Dorrie Main, Monica Poelchau, Ramona Walls, Carson Andorf, Elizabeth Arnaud, Tanya Berardini, Clayton Birkett, Steve Cannon, James Carson, Bradford Condon, Laurel Cooper, Nathan Dunn, Chris Elsik, Andrew Farmer, Stephen Ficklin, David Grant, Emily Grau, Nic Herndon, Zhi-Liang Hu, Jodi Humann, Pankaj Jaiswal, Clement Jonquet, Marie-Angélique Laporte, Pierre Larmande, Gerard Lazo, Fiona McCarthy, Naama Menda, Christopher Mungall, Monica Munoz-Torres, Sushma Naithani, Rex Nelson, Daureen Nesdill, Carissa Park, James Reecy, Leonore Reiser, Lacey-Anne Sanderson, Taner Sen, Margaret Staton, Sabarinath Subramaniam, Marcela Karey Tello-Ruiz, Victor Unda, Deepak Unni, Liya Wang, Doreen Ware, Jill Wegrzyn, Jason Williams & Margaret Woodhouse. AgBioData Consortium Recommendations for Sustainable Genomics and Genetics Databases for Agriculture, Database. 2018. Vol. IN PRESS

    journal

    BibTeX:
    		@article{Har18-Database,
    		  author = {Lisa Harper and Jacqueline Campbell and Ethalinda KS Cannon and Sook Jung and Dorrie Main and Monica Poelchau and Ramona Walls and Carson Andorf and Elizabeth Arnaud and Tanya Berardini and Clayton Birkett and Steve Cannon and James Carson and Bradford Condon and Laurel Cooper and Nathan Dunn and Chris Elsik and Andrew Farmer and Stephen Ficklin and David Grant and Emily Grau and Nic Herndon and Zhi-Liang Hu and Jodi Humann and Pankaj Jaiswal and Clement Jonquet and Marie-Angélique Laporte and Pierre Larmande and Gerard Lazo and Fiona McCarthy and Naama Menda and Christopher Mungall and Monica Munoz-Torres and Sushma Naithani and Rex Nelson and Daureen Nesdill and Carissa Park and James Reecy and Leonore Reiser and Lacey-Anne Sanderson and Taner Sen and Margaret Staton and Sabarinath Subramaniam and Marcela Karey Tello-Ruiz and Victor Unda and Deepak Unni and Liya Wang and Doreen Ware and Jill Wegrzyn and Jason Williams and Margaret Woodhouse},
    		  title = {AgBioData Consortium Recommendations for Sustainable Genomics and Genetics Databases for Agriculture},
    		  journal = {Database},
    		  year = {2018},
    		  volume = {IN PRESS}
    		}
    		
    Nordine El Hassouni, Manuel Ruiz, Anne Toulet, Clement Jonquet & Pierre Larmande. The Agronomic Linked Data (AgroLD) project, In European conference dedicated to the future use of ICT in the agri-food sector, bioresource and biomass sector, EFITA'17, demonstration session. Montpellier, France, July 2017. pp. 257.

    poster-demo

    Abstract: Agronomy is an overarching field that consists of various areas of research such as Genetics, Plant Molecular Biology, Ecology and Earth Science. At the Institute of Computational Biology (IBC), we are currently building an RDF knowledge base, Agronomic Linked Data (AgroLD; www.agrold.org). The knowledge base is designed to integrate data from various publicly available plant-centric data sources. The aim of the AgroLD project is to provide a portal for bioinformatics and domain experts to exploit the homogenized data model towards filling the knowledge gaps. To this end, we plan to engage with stakeholders in demonstrating the advantages of the Semantic Web in answering complex domain-relevant questions that were unapproachable using traditional methods, strategically filling knowledge gaps.
    BibTeX:
    		@inproceedings{Lar17-EFITA,
    		  author = {Nordine El Hassouni and Manuel Ruiz and Anne Toulet and Clement Jonquet and Pierre Larmande},
    		  title = {The Agronomic Linked Data (AgroLD) project},
    		  booktitle = {European conference dedicated to the future use of ICT in the agri-food sector, bioresource and biomass sector, EFITA'17, demonstration session},
    		  year = {2017},
    		  pages = {257},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Demo_EFITA2017_AgroLD.pdf}
    		}
    		
    Clement Jonquet. Maitriser une technologie de gestion des ontologies et vocabulaires en France : défis et enjeux, In SemWebPro Conference. Paris, France, November 2018. pp. 2.

    french

    BibTeX:
    		@inproceedings{Jon18-SemWebPro,
    		  author = {Clement Jonquet},
    		  title = {Maitriser une technologie de gestion des ontologies et vocabulaires en France : défis et enjeux},
    		  booktitle = {SemWebPro Conference},
    		  year = {2018},
    		  pages = {2},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Abstract-SemWebPro2018_Jonquet.pdf}
    		}
    		
    Clement Jonquet. Challenges for ontology repositories and applications to biomedicine & agronomy, In 4th Annual International Symposium on Information Management and Big Data, SIMBig'17. Lima, Peru, September 2017. CEUR Workshop Proceedings, Vol. 2029 pp. 25-37.

    workshop

    Abstract: The explosion of the number of ontologies and vocabularies available in the Semantic Web makes ontology libraries and repositories mandatory to find and use them. Their functionalities span from simple ontology listings with more or less metadata description to portals with advanced ontology-based services: browse, search, visualization, metrics, annotation, etc. Ontology libraries and repositories are usually developed to address certain needs and communities. BioPortal, the ontology repository built by the US National Center for Biomedical Ontologies, relies on a domain-independent technology already reused in several projects from biomedicine to agronomy and earth sciences. In this position paper, we describe six high-level challenges for ontology repositories: metadata & selection, multilingualism, alignment, new generic ontology-based services, annotations & linked data, and interoperability & scalability. Then, we present some propositions to address these challenges and point to our previously published work and results obtained within applications (reusing NCBO technology) to biomedicine and agronomy in the context of the NCBO, SIFR and AgroPortal projects.
    BibTeX:
    		@inproceedings{Jon17-SIMBig,
    		  author = {Clement Jonquet},
    		  title = {Challenges for ontology repositories and applications to biomedicine & agronomy},
    		  booktitle = {4th Annual International Symposium on Information Management and Big Data, SIMBig'17},
    		  year = {2017},
    		  volume = {2029},
    		  pages = {25-37},
    		  note = {Keynote Speaker Paper},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Keynote_SIMBig2017_Jonquet.pdf}
    		}
    		
    Clement Jonquet. BioPortal : ontologies et ressources de données biomédicales à portée de main, In 1ère édition du Symposium sur l'Ingénierie de l'Information Médicale, SIIM'11, Démos. Toulouse, France, June 2011.

    poster-demo

    BibTeX:
    		@inproceedings{Jon11-SIIM11,
    		  author = {Clement Jonquet},
    		  title = {BioPortal : ontologies et ressources de données biomédicales à portée de main},
    		  booktitle = {1ère édition du Symposium sur l'Ingénierie de l'Information Médicale, SIIM'11, Démos},
    		  year = {2011}
    		}
    		
    Clement Jonquet. Dynamic Service Generation: Agent interactions for service exchange on the Grid, University Montpellier 2, Montpellier, France, November 2006. PhD thesis

    dissertation

    Abstract: This thesis deals with modelling dynamic service exchange. The notion of service is now at the centre of distributed system development; it plays a key role in their implementation and success. The thesis first proposes a reflection on the notion of service and introduces the concept of Dynamic Service Generation (DSG) as a different way to provide and use services in a computer-mediated context: services are dynamically constructed, provided and used by agents (human or artificial) within a community, by means of a conversation. In particular, two major characteristics of DSG are highlighted: an agent-oriented and a Grid-oriented aspect of service exchange. Therefore, the thesis proposes an integration of three research domains in Informatics: Service-Oriented Computing (SOC), Multi-Agent Systems (MAS) and GRID. The thesis contributions consist of three main aspects: the proposal of (i) a new agent representation and communication model, called STROBE, that enables agents to develop a different language for each agent they communicate with. STROBE agents are able to interpret communication messages and execute services in a given dynamic and dedicated conversation context; (ii) a computational abstraction, called i-dialogue (intertwined dialogues), that models multi-agent conversations by means of fundamental constructs of applicative/functional languages (i.e., streams, lazy evaluation and higher-order functions); (iii) a service-oriented GRID-MAS integrated model based on the representation of agent capabilities as Grid services. In this model, the concepts of GRID and MAS, the relations between them and the integration rules are semantically described by a set-theory formalization and a common graphical description language, called the Agent-Grid Integration Language (AGIL). AGIL ties the thesis results together by formalizing agent interactions for service exchange on the Grid.
    BibTeX:
    		@phdthesis{Jon06-PhD,
    		  author = {Clement Jonquet},
    		  title = {Dynamic Service Generation: Agent interactions for service exchange on the Grid},
    		  school = {University Montpellier 2},
    		  year = {2006},
    		  url = {http://www.lirmm.fr/~jonquet/research/LIRMM/PhDThesis/PhDthesis-Jonquet-V5-06-11-07.pdf}
    		}
    		
    Clement Jonquet. A framework and ontology for Semantic Grid services: an integrated view of WSMF and WSRF, University Montpellier 2 and KMi, Open University, France, May 2005. Unpublished draft research report

    report

    Abstract: This short draft paper describes some first ideas for extending the theoretical concepts of WSMF (and WSMO) to the Grid service approach. WSMF is a framework for Semantic Web Services and WSMO is the corresponding ontology. Advances in Web/Grid services have been detailed in the WSRF specification. This draft paper first proposes a framework as an extension of WSMF, and secondly details the modifications/extensions of WSMO that are required to fit with WSRF principles. This WSMF-WSRF integration can be seen as a framework and ontology for a future specification of Semantic Grid Services (SGS).
    BibTeX:
    		@techreport{Jon05-SGS-RR,
    		  author = {Clement Jonquet},
    		  title = {A framework and ontology for Semantic Grid services: an integrated view of WSMF and WSRF},
    		  school = {University Montpellier 2 and KMi, Open University},
    		  year = {2005},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Unpublished-Jonquet-UM2-KMi-IntegrationWSMF-WSRF.pdf}
    		}
    		
    Clement Jonquet. Communication agent et interprétation Scheme pour l'apprentissage au méta-niveau, University Montpellier 2, Montpellier, France, June 2003. Master Thesis

    dissertation

    Abstract: Communication between cognitive agents is a very active research area. Our work proposes a model, based on the STROBE model, that considers agents as Scheme interpreters. These agents are able to interpret messages in a given environment that includes an interpreter which learns through conversations. These interpreters can, moreover, evolve dynamically over the course of the conversations and represent the agents' knowledge at the meta level. We illustrate this theoretical model with a "teacher-student" dialogue experiment in which an agent learns a new performative by the end of the conversation. The thesis first presents the two domains our work draws on, namely Scheme evaluation and agent communication. It then presents our model and illustrates it with an experiment. Finally, it examines how this model can be effective in domains such as the Web, the Grid or e-commerce dialogues (through a constraint-based point of view). This last point is developed in particular to show how to achieve the dynamic specification of a problem by developing communicating agents. Implementation details are also provided.
    BibTeX:
    		@mastersthesis{Jon03-DEA,
    		  author = {Clement Jonquet},
    		  title = {Communication agent et interprétation Scheme pour l'apprentissage au méta-niveau},
    		  school = {University Montpellier 2},
    		  year = {2003},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/MemoireDEA-Jonquet.signets.pdf}
    		}
    		
    Clement Jonquet, Amina Annane, Khedidja Bouarech, Vincent Emonet & Soumia Melzi. SIFR BioPortal : Un portail ouvert et générique d'ontologies et de terminologies biomédicales françaises au service de l'annotation sémantique, In 16th Journées Francophones d'Informatique Médicale, JFIM'16. Genève, Suisse, July 2016. pp. 16.

    french

    Abstract: Context: The volume of biomedical data keeps growing. Despite a wide adoption of English, a significant amount of these data is in French. In the field of data integration, terminologies and ontologies play a central role in structuring biomedical data and making them interoperable. However, besides the existence of many resources in English, there are far fewer ontologies in French, and there is a crucial lack of tools and services to exploit them. This gap contrasts with the considerable amount of biomedical data produced in French, particularly in the clinical world (e.g., electronic health records). Method & Results: In this article, we present some results of the Semantic Indexing of French biomedical Resources (SIFR) project, in particular the SIFR BioPortal, an open and generic platform for hosting French biomedical ontologies and terminologies, based on the technology of the National Center for Biomedical Ontology. The portal facilitates the use and dissemination of domain ontologies by offering a set of services (search, mappings, metadata, versioning, visualization, recommendation), including semantic annotation. Indeed, the SIFR Annotator is an ontology-based annotation tool for processing French text data. A preliminary evaluation shows that the web service obtains results equivalent to those previously reported, while being public, functional and oriented towards semantic web standards. We also present new features of the ontology-based services for English and French.
    BibTeX:
    		@inproceedings{Jon16-JFIM,
    		  author = {Clement Jonquet and Amina Annane and Khedidja Bouarech and Vincent Emonet and Soumia Melzi},
    		  title = {SIFR BioPortal : Un portail ouvert et générique d'ontologies et de terminologies biomédicales françaises au service de l'annotation sémantique},
    		  booktitle = {16th Journées Francophones d'Informatique Médicale, JFIM'16},
    		  year = {2016},
    		  pages = {16},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article_JFIM2016_SIFR_Jonquet.pdf}
    		}
    		
    Clement Jonquet & Stefano A. Cerri. Characterization of the Dynamic Service Generation concept, University Montpellier 2, France, February 2006. Research report (06007).

    report

    Abstract: The goal of this position paper is to reflect on the concept of service in Informatics. In particular, we present the concept of dynamic service generation as a different way to provide services in computer-mediated contexts: services are dynamically constructed and provided (generated) by agents (human or artificial) within a community, by means of a conversation. This process allows services to be more accurate, precise, customized and personalized, to satisfy a need or wish that is not predetermined. The paper presents an overview of the concept of service from philosophy to computer science (service-oriented computing). A strict comparison with the currently popular approach, called product delivery, is made. The main result emerging from these reflections is a list of "characteristics" of dynamic service generation, intended to promote a progressive transition from product delivery to dynamic service generation systems by transforming the outlined characteristics one by one into requirements and specifications. More specifically, two major characteristics are precisely described in the paper, as they imply 80% of the other ones. They promote the substitution of an agent-oriented kernel for the current object-oriented kernel of services, as well as the Grid as the service-oriented architecture and infrastructure for service exchanges between agents.
    BibTeX:
    		@techreport{Jon06-CharacDSG,
    		  author = {Clement Jonquet and Stefano A. Cerri},
    		  title = {Characterization of the Dynamic Service Generation concept},
    		  school = {University Montpellier 2},
    		  year = {2006},
    		  number = {06007},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/RR-LIRMM-06007-Jonquet-feb2006.pdf}
    		}
    		
    Clement Jonquet & Stefano A. Cerri. i-dialogue: modeling agent conversation by streams and lazy evaluation, In International Lisp Conference, ILC'05. Stanford University, CA, USA, June 2005. pp. 219-228.

    conference

    Abstract: This paper defines and exemplifies a new computational abstraction called i-dialogue, which aims to model communicative situations such as those where an agent conducts multiple concurrent conversations with other agents. The i-dialogue abstraction is inspired both by the dialogue abstraction proposed by O'Donnell and by the STROBE model. I-dialogue models conversations among processes by means of fundamental constructs of applicative/functional languages (i.e., streams, lazy evaluation and higher-order functions). The i-dialogue abstraction is adequate for representing multi-agent concurrent asynchronous communication such as can occur in service-providing scenarios on today's Web or Grid. A Scheme implementation of the i-dialogue abstraction has been developed and is available.
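    The paper's implementation is in Scheme; as a rough illustration of the underlying idea only, the Python sketch below uses generators (which are likewise lazily evaluated) to interleave several conversation streams, in the spirit of one agent holding multiple concurrent conversations. Agent names and message contents are invented, and this is not the i-dialogue abstraction itself.

        # Minimal sketch: lazy, interleaved conversation streams with generators.
        from itertools import islice

        def conversation(interlocutor, messages):
            """A lazy stream of (interlocutor, message) pairs."""
            for m in messages:
                yield interlocutor, m

        def interleave(*streams):
            """Lazily interleave several conversation streams, round-robin."""
            streams = list(streams)
            while streams:
                still_alive = []
                for s in streams:
                    try:
                        yield next(s)
                        still_alive.append(s)
                    except StopIteration:
                        pass
                streams = still_alive

        # One agent consuming two concurrent conversations, one message at a time.
        dialogue = interleave(
            conversation("alice", ["ask(price)", "accept(offer)"]),
            conversation("bob", ["inform(availability)"]),
        )
        print(list(islice(dialogue, 3)))
        # [('alice', 'ask(price)'), ('bob', 'inform(availability)'), ('alice', 'accept(offer)')]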
    BibTeX:
    		@inproceedings{Jon05-ILC05,
    		  author = {Clement Jonquet and Stefano A. Cerri},
    		  title = {i-dialogue: modeling agent conversation by streams and lazy evaluation},
    		  booktitle = {International Lisp Conference, ILC'05},
    		  year = {2005},
    		  pages = {219-228},
    		  url = {http://www.lirmm.fr/~jonquet/publications/documents/Article-Jonquet-Cerri-ILC05.pdf}
    		}
    		
    Clement Jonquet & Stefano A. Cerri. The STROBE model: Dynamic Service Generation on the Grid, Applied Artificial Intelligence, Special issue on Learning Grid Services. October-November 2005. Vol. 19 (9-10), pp. 967-1013.

    journal

    Abstract: This article presents the STROBE model: both an agent representation and an agent communication model based on a social, that is, interaction-centred, approach. This model represents how agents may realise the interactive, dynamic generation of services on the Grid. Dynamically generated services embody a new concept of service implying a collaborative creation of knowledge, i.e., learning; services are constructed interactively between agents depending on a conversation. The approach consists of integrating selected features from Multi-Agent Systems and agent communication, language interpretation in applicative/functional programming, and e-learning/human-learning into a unique, original and simple view that privileges interactions, yet includes control. The main characteristic of STROBE agents is that they develop a language (environment + interpreter) for each of their interlocutors. The model is inscribed within a global approach, defending a shift from the classical algorithmic (control-based) view of problem solving in computing to an interaction-based view of Social Informatics, where artificial as well as human agents operate by communicating as well as by computing. The paper shows how the model may not only account for the classical communicating agent approaches, but also represent a fundamental advance in modelling societies of agents, in particular in Dynamic Service Generation scenarios such as those necessary today on the Web and proposed tomorrow for the Grid. Preliminary concrete experimentations illustrate the potential of the model; they are significant examples for a very wide class of computational and learning situations.
    BibTeX:
    		@article{Jon05-AAIJ05,
    		  author = {Clement Jonquet and Stefano A. Cerri},
    		  title = {The STROBE model: Dynamic Service Generation on the Grid},
    		  journal = {Applied Artificial Intelligence, Special issue on Learning Grid Services},
    		  year = {2005},
    		  volume = {19},
    		  number = {9-10},
    		  pages = {967-1013},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-AAIJ05-Jonquet-Cerri.pdf},
    		  doi = {https://doi.org/10.1080/08839510500234826}
    		}
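
    A central idea in the STROBE article above is that an agent maintains a dedicated language (environment + interpreter) for each interlocutor, and that this language evolves with the conversation. The toy Python sketch below is purely illustrative (the class, the message format and the 'define' convention are invented; the actual model is built on Scheme interpreters), but it conveys the per-interlocutor environment idea:

        class StrobeAgent:
            """Toy agent keeping a separate evaluation environment per interlocutor."""

            def __init__(self, name):
                self.name = name
                self.envs = {}  # interlocutor name -> dedicated environment (dict)

            def env_for(self, interlocutor):
                # Each conversation gets its own environment, created on first contact.
                return self.envs.setdefault(interlocutor, {"__builtins__": {}})

            def receive(self, interlocutor, message):
                """Interpret a message in the environment dedicated to this interlocutor.

                Messages of the form 'define x = <expr>' extend the environment
                (the agent 'learns by being told'); anything else is evaluated in it.
                """
                env = self.env_for(interlocutor)
                if message.startswith("define "):
                    name, expr = message[len("define "):].split("=", 1)
                    env[name.strip()] = eval(expr, env)
                    return f"{self.name} learned {name.strip()} from {interlocutor}"
                return eval(message, env)

        agent = StrobeAgent("strobe")
        print(agent.receive("alice", "define price = 40 + 2"))
        print(agent.receive("alice", "price * 2"))   # 84: uses alice's environment
        # agent.receive("bob", "price") would fail: bob has his own environment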
    		
    Clement Jonquet & Stefano A. Cerri. Agents as Scheme Interpreters: Enabling Dynamic Specification by Communicating, In 14th Congrès Francophone AFRIF-AFIA de Reconnaissance des Formes et Intelligence Artificielle, RFIA'04. Toulouse, France, January 2004. Vol. 2 pp. 779-788.

    french

    Abstract: In previous papers we proposed an extension and an implementation of the STROBE model, which regards Agents as Scheme interpreters. These Agents are able to interpret messages in given environments that include an interpreter which learns from the conversation and thus represents the evolution of their knowledge at the meta level. When these interpreters are non-deterministic, the dialogue consists in refining the specification of a problem through sets of constraints. This paper presents an example of dynamic service generation, as needed on the GRID, exploiting STROBE Agents equipped with a non-deterministic interpreter. It shows how to achieve the dynamic specification of a problem, then illustrates how these principles can be of interest for other applications. Implementation details are not provided here but are available.
    BibTeX:
    		@inproceedings{Jon04-RFIA04,
    		  author = {Clement Jonquet and Stefano A. Cerri},
    		  title = {Agents as Scheme Interpreters: Enabling Dynamic Specification by Communicating},
    		  booktitle = {14th Congrès Francophone AFRIF-AFIA de Reconnaissance des Formes et Intelligence Artificielle, RFIA'04},
    		  year = {2004},
    		  volume = {2},
    		  pages = {779-788},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/ArticleRFIA04-Jonquet-Cerri.pdf}
    		}
    		
    Clement Jonquet & Stefano A. Cerri. Agents Communicating for Dynamic Service Generation, In 1st International Workshop on Grid Learning Services, GLS'04. Maceio, Brazil, September 2004. pp. 39-53.

    workshop

    Abstract: This article proposes both an agent representation and an agent communication model based on a social approach. By modelling Grid services with agents, we are confident that we can realise the interactive, dynamic generation of services that is necessary in order to have learning effects on interlocutors. The approach consists of integrating features from agent communication, language interpretation and e-learning/human-learning into a unique, original and simple view that privileges interactions while still including control. The model is based on STROBE and proposes to enrich the languages of agents (environment + interpreter) by allowing agents to modify them dynamically, at run time, not only at the Data or Control level, but also at the Interpreter level (meta-level). The model is inscribed within a global approach defending a shift from the classical algorithmic (control-based) view of problem solving in computing to an interaction-based view of Social Informatics, where artificial as well as human agents operate by communicating as well as by computing. The paper shows how the model may not only account for the classical communicating-agent approaches, but also represent a fundamental advance in modelling societies of agents, in particular in dynamic service generation scenarios such as those necessary today on the Web and proposed tomorrow for the Grid. Preliminary concrete experimentations illustrate the potential of the model; they are significant examples for a very wide class of computational and learning situations.
    BibTeX:
    		@inproceedings{Jon04-GLS04,
    		  author = {Clement Jonquet and Stefano A. Cerri},
    		  title = {Agents Communicating for Dynamic Service Generation},
    		  booktitle = {1st International Workshop on Grid Learning Services, GLS'04},
    		  year = {2004},
    		  pages = {39-53},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-GLS04-Jonquet-Cerri.pdf}
    		}
    		
    Clement Jonquet & Stefano A. Cerri. Apprentissage issu de la communication pour des agents cognitifs, In 11èmes Journées Francophones sur les Systèmes Multi-Agents, JFSMA'03. Hammamet, Tunisia, November 2003. pp. 83-87. Hermès.

    french

    Abstract: Communication between cognitive Agents is a very active research field. We propose here a model, based on the STROBE model, which regards Agents as Scheme interpreters. These Agents are able to interpret the messages of a conversation in a given environment, with a given interpreter, both dedicated to the current conversation. These interpreters can moreover evolve dynamically as conversations proceed, and thus represent the knowledge of these Agents at the meta level. We propose a learning mechanism at this meta level based on communication. This short paper briefly presents our model and its interest for domains such as the Web, service generation, the GRID, etc.
    BibTeX:
    		@inproceedings{Jon03-JFSMA03,
    		  author = {Clement Jonquet and Stefano A. Cerri},
    		  title = {Apprentissage issu de la communication pour des agents cognitifs},
    		  booktitle = {11èmes Journées Francophones sur les Systèmes Multi-Agents, JFSMA'03},
    		  publisher = {Hermès},
    		  year = {2003},
    		  pages = {83-87},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/ArticleJFSMA03-Jonquet-Cerri.pdf}
    		}
    		
    Clement Jonquet & Stefano A. Cerri. Cognitive Agents Learning by Communicating, In Colloque Agents Logiciels, Coopération, Apprentissage et Activité Humaine, ALCAA'03. Bayonne, France, September 2003. pp. 29-39.

    french

    Abstract: Cognitive Agent communication is a rapidly developing research field. We propose here an extension and an implementation of the STROBE model, which regards Agents as Scheme interpreters. These Agents are able to interpret messages in a dedicated environment including an interpreter that learns from the current conversation. These interpreters evolve dynamically as conversations proceed, and thus represent evolving meta-level Agent knowledge. We illustrate this theoretical model with a "teacher-student" dialogue experiment, in which an Agent learns a new performative by the end of the conversation. Details of the implementation are not provided here, but are available.
    BibTeX:
    		@inproceedings{Jon03-ALCAA03,
    		  author = {Clement Jonquet and Stefano A. Cerri},
    		  title = {Cognitive Agents Learning by Communicating},
    		  booktitle = {Colloque Agents Logiciels, Coopération, Apprentissage et Activité Humaine, ALCAA'03},
    		  year = {2003},
    		  pages = {29-39},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/ArticleALCAA03-Jonquet-Cerri.pdf}
    		}
    		
    Clement Jonquet, Adrien Coulet, Nigam H. Shah & Mark A. Musen. Indexation et intégration de ressources textuelles à l'aide d'ontologies : application au domaine biomédical, In 21èmes Journées Francophones d'Ingénierie des Connaissances, IC'10. Nimes, France, June 2010. pp. 271-282.

    french

    Abstract: Many scientific discoveries are hindered today by the difficulty of integrating data made available in different resources. Using ontologies to index and integrate data resources is a way of leveraging domain knowledge by facilitating data search and mining. In this article we present an ontology-driven mechanism for indexing textual data resources. We detail the creation and use of an index of biomedical data resources that provides uniform access to more than twenty resources indexed with more than 200 ontologies. This index is accessible via the BioPortal Web platform of the National Center for Biomedical Ontology (NCBO): http://bioportal.bioontology.org/
    BibTeX:
    		@inproceedings{Jon10-IC10,
    		  author = {Clement Jonquet and Adrien Coulet and Nigam H. Shah and Mark A. Musen},
    		  title = {Indexation et intégration de ressources textuelles à l'aide d'ontologies : application au domaine biomédical},
    		  booktitle = {21èmes Journées Francophones d'Ingénierie des Connaissances, IC'10},
    		  year = {2010},
    		  pages = {271-282},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_IC10_Jonquet_et-al.pdf}
    		}
    		
    Clement Jonquet, Pascal Dugenie & Stefano A. Cerri. Agent-Grid Integration Language, Multiagent and Grid Systems. 2008. Vol. 4 (2), pp. 167-211.

    journal

    Abstract: The GRID and MAS (Multi-Agent Systems) communities believe in the potential of GRID and MAS to enhance each other, as these models have developed significant complementarities. Thus, both communities agree on the 'what' to do: promote an integration of GRID and MAS models. However, while the 'why' to do it has been stated and assessed, the 'how' to do it remains a research problem. This paper addresses this problem by means of a service-oriented approach. Services are exchanged (i.e., provided and used) by agents through GRID mechanisms and infrastructure. The paper first provides a set of state-of-the-art reviews of integration approaches in GRID, MAS and Service-Oriented Computing (SOC). It then proposes a model for GRID-MAS integrated systems. Concepts, the relations between them and rules are semantically described by a set-theory formalization and a common graphical description language called Agent-Grid Integration Language (AGIL). This language may be used to describe future GRID-MAS integrated systems. AGIL's concepts are directly influenced by OGSA (Open Grid Service Architecture) and the STROBE agent communication and representation model.
    BibTeX:
    		@article{Jon08-MAGS08,
    		  author = {Clement Jonquet and Pascal Dugenie and Stefano A. Cerri},
    		  title = {Agent-Grid Integration Language},
    		  journal = {Multiagent and Grid Systems},
    		  year = {2008},
    		  volume = {4},
    		  number = {2},
    		  pages = {167-211},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-MAGS06-Jonquet-Dugenie-Cerri_final.pdf}
    		}
    		
    Clement Jonquet, Pascal Dugenie & Stefano A. Cerri. Service-Based Integration of Grid and Multi-Agent Systems Models, In International Workshop on Service-Oriented Computing: Agents, Semantics, and Engineering, SOCASE'08. Estoril, Portugal, May 2008. Lecture Notes in Computer Science, Vol. 5006 pp. 56-68. Springer.

    workshop

    Abstract: This position paper addresses the question of integrating GRID and MAS (Multi-Agent Systems) models by means of a service-oriented approach. Service-Oriented Computing (SOC) tries to address many challenges in the world of computing with services. The concept of service is clearly at the intersection of GRID and MAS, and their integration makes it possible to address one of these key challenges: the implementation of dynamically generated services based on conversations. In our approach, services are exchanged (i.e., provided and used) by agents through GRID mechanisms and infrastructure. Integration goes beyond the simple interoperation of applications and standards; it has to be intrinsic to the underpinning model. We introduce here a (quite unique) integration model for GRID and MAS. This model is formalized and represented by a graphical description language called Agent-Grid Integration Language (AGIL). The integration is based on two main ideas: (i) the representation of agent capabilities as Grid services in service containers; (ii) the assimilation of the service instantiation mechanism (from GRID) with the creation of a new conversation context (from MAS). The integrated model may be seen as a formalization of agent interaction for service exchange.
    BibTeX:
    		@inproceedings{Jon08-SOCASE08,
    		  author = {Clement Jonquet and Pascal Dugenie and Stefano A. Cerri},
    		  title = {Service-Based Integration of Grid and Multi-Agent Systems Models},
    		  booktitle = {International Workshop on Service-Oriented Computing: Agents, Semantics, and Engineering, SOCASE'08},
    		  publisher = {Springer},
    		  year = {2008},
    		  volume = {5006},
    		  pages = {56-68},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-SOCASE08-Jonquet-Dugenie-Cerri_50060056.pdf}
    		}
    		
    Clement Jonquet, Pascal Dugénie & Stefano A. Cerri. Intégration orientée service des modèles Grid et Multi-Agents, In 14èmes Journées Francophones sur les Systèmes Multi-Agents, JFSMA'06. Annecy, France, October 2006. pp. 271-274. Hermès.

    french

    Abstract: This article addresses the question of integrating the GRID and MAS (Multi-Agent System) models through a service-oriented approach. The concept of service lies at the intersection of these domains, and their integration enables the realisation of dynamically generated services based on conversations. In our approach, services are exchanged (i.e., provided and used) by agents thanks to, and through, the GRID infrastructure and mechanisms. We present the key concepts of GRID (inspired by OGSA) and of MAS (inspired by the STROBE model), as well as their integration. The integrated model is formalized and represented by a graphical description language called Agent-Grid Integration Language (AGIL). This integration rests on two ideas: (i) interfacing agent capabilities as Grid services in service containers; (ii) assimilating the service instantiation mechanism (from GRID) with the creation of a dedicated conversation context (from MAS). The integrated model can be seen as a formalization of agent interactions for service exchange.
    BibTeX:
    		@inproceedings{Jon06-JFSMA06,
    		  author = {Clement Jonquet and Pascal Dugénie and Stefano A. Cerri},
    		  title = {Intégration orientée service des modèles Grid et Multi-Agents},
    		  booktitle = {14èmes Journées Francophones sur les Systèmes Multi-Agents, JFSMA'06},
    		  publisher = {Hermès},
    		  year = {2006},
    		  pages = {271-274},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-JFSMA06-Jonquet-Dugenie-Cerri.pdf}
    		}
    		
    Clement Jonquet, Esther Dzalé-Yeumo, Elizabeth Arnaud & Pierre Larmande. AgroPortal: a proposition for ontology-based services in the agronomic domain, In 3ème atelier INtégration de sources/masses de données hétérogènes et Ontologies, dans le domaine des sciences du VIVant et de l'Environnement, IN-OVIVE'15. Rennes, France, June 2015. pp. 5.

    french

    Abstract: Our project is to develop and support a reference ontology repository for the agronomic domain. By reusing the NCBO BioPortal technology, we have already designed and implemented a prototype ontology repository for plants and a few crops. We plan to turn that prototype into a real service to the community. The AgroPortal project aims at reusing the scientific outcomes and experience of the biomedical domain in the context of plant, agronomic and environmental sciences. We will offer an ontology portal featuring ontology hosting, search, versioning, visualization and comments, and we will also offer services for semantically annotating data with the ontologies, as well as for storing and exploiting ontology alignments and data annotations, all within a fully semantic-web-compliant infrastructure. The main objective of this project is to enable straightforward use of agronomy-related ontologies, sparing data managers and researchers the burden of dealing with complex knowledge-engineering issues when annotating research data. The AgroPortal project will pay specific attention to respecting the requirements of the agronomic community and the specificities of the crop domain. We will first focus on the outputs of a few existing driving agronomic use cases related to rice and wheat, with the goal of generalizing to other Crop Ontology related use cases. AgroPortal will offer a robust and stable platform that we anticipate will be highly valued by the community.
    BibTeX:
    		@inproceedings{Jon15-InOvive,
    		  author = {Clement Jonquet and Esther Dzalé-Yeumo and Elizabeth Arnaud and Pierre Larmande},
    		  title = {AgroPortal: a proposition for ontology-based services in the agronomic domain},
    		  booktitle = {3ème atelier INtégration de sources/masses de données hétérogènes et Ontologies, dans le domaine des sciences du VIVant et de l'Environnement, IN-OVIVE'15},
    		  year = {2015},
    		  pages = {5},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_InOvive2015_AgroPortal.pdf}
    		}
    		
    Clement Jonquet, Esther Dzalé-Yeumo, Elizabeth Arnaud, Pierre Larmande, Anne Toulet & Marie-Angélique Laporte. AgroPortal : A Proposition for Ontology-Based Services in the Agronomic Domain, In 23rd Plant & Animal Genome Conference, poster session. San Diego, USA, January 2016. pp. P0343.

    poster-demo

    Abstract: Our project is to develop and support a reference ontology repository for the agronomic domain. By reusing the NCBO BioPortal technology, we have already designed and implemented a prototype ontology repository for plants and a few crops. We plan to turn that prototype into a real service to the community. The AgroPortal project aims at reusing the scientific outcomes and experience of the biomedical domain in the context of plant, agronomic and environmental sciences. We will offer an ontology portal featuring ontology hosting, search, versioning, visualization and comments, and we will also offer services for semantically annotating data with the ontologies, as well as for storing and exploiting ontology alignments and data annotations, all within a fully semantic-web-compliant infrastructure. The main objective of this project is to enable straightforward use of agronomy-related ontologies, sparing data managers and researchers the burden of dealing with complex knowledge-engineering issues when annotating research data. The AgroPortal project will pay specific attention to respecting the requirements of the agronomic community and the specificities of the crop domain. We will first focus on the outputs of a few existing driving agronomic use cases related to rice and wheat, with the goal of generalizing to other Crop Ontology related use cases. AgroPortal will offer a robust and stable platform that we anticipate will be highly valued by the community.
    BibTeX:
    		@inproceedings{Jon16-PAG,
    		  author = {Clement Jonquet and Esther Dzalé-Yeumo and Elizabeth Arnaud and Pierre Larmande and Anne Toulet and Marie-Angélique Laporte},
    		  title = {AgroPortal : A Proposition for Ontology-Based Services in the Agronomic Domain},
    		  booktitle = {23rd Plant & Animal Genome Conference, poster session},
    		  year = {2016},
    		  pages = {P0343},
    		  url = {https://pag.confex.com/pag/xxiv/webprogram/Handout/Paper21605/Poster_AgroPortal_PAG2016_light.pdf}
    		}
    		
    Clement Jonquet, Marc Eisenstadt & Stefano A. Cerri. Learning agents and Enhanced Presence for generation of services on the Grid, In Towards the Learning GRID: advances in Human Learning Services. November 2005. Frontiers in Artificial Intelligence and Applications, Vol. 127 pp. 203-213. IOS Press.

    serie

    Abstract: Human learning on the Grid will be based on the synergies between advanced artificial and human agents. These synergies will be possible to the extent that conversational protocols among agents, human and/or artificial, can be adapted to the ambitious goal of dynamically generating services for human learning. In the paper we highlight how conversations may produce learning both in human and in artificial agents. The STROBE model for communicating agents and its current evolutions show how an artificial agent may "learn" dynamically (at run time) at the Data, Control and Interpreter levels, in particular exemplifying the "learning by being told" modality. In parallel, the enhanced-presence research, exemplified by Buddyspace, puts human agents in a rich communicative context where learning effects may also occur as a "serendipitous" side effect of communication.
    BibTeX:
    		@incollection{Jon05-LeGEWG-Book,
    		  author = {Clement Jonquet and Marc Eisenstadt and Stefano A. Cerri},
    		  title = {Learning agents and Enhanced Presence for generation of services on the Grid},
    		  booktitle = {Towards the Learning GRID: advances in Human Learning Services},
    		  publisher = {IOS Press},
    		  year = {2005},
    		  volume = {127},
    		  pages = {203-213},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/ArticleFAIA05-Jonquet-Eisenstadt-Cerri.pdf}
    		}
    		
    Clement Jonquet, Vincent Emonet & Mark A. Musen. Roadmap for a multilingual BioPortal, In 4th Workshop on the Multilingual Semantic Web, MSW4'15. Portoroz, Slovenia, June 2015. CEUR Workshop Proceedings, Vol. 1532 pp. 15-26.

    workshop

    Abstract: Ontology indexes and repositories are important in the realization of the Semantic Web; however, the need has clearly moved to multilingual capabilities, which are hard to offer when dealing with multiple ontologies, originally in different formats and contributed by an open community. In this paper, we present a roadmap for addressing the issues of dealing with multilingual or monolingual ontologies in BioPortal, the reference ontology repository in biomedicine, currently mostly English-oriented. We propose a set of representations to support multilingualism in the portal and to enable a complete use of the functionalities and services for any kind of ontology and data. While encouraging the community to use the best available specifications to represent multilingual content (e.g., Lemon), our objective is to handle multilingualism in a properly semantically rich and consistent manner in the ontology repository. We are currently deploying and implementing these representations in a local instance of BioPortal for French ontologies.
    BibTeX:
    		@inproceedings{Jon15-MSW4,
    		  author = {Clement Jonquet and Vincent Emonet and Mark A. Musen},
    		  title = {Roadmap for a multilingual BioPortal},
    		  booktitle = {4th Workshop on the Multilingual Semantic Web, MSW4'15},
    		  year = {2015},
    		  volume = {1532},
    		  pages = {15-26},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_MSW4_MultilingualBioPortal.pdf}
    		}
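
    One concrete ingredient of the multilingualism roadmap above is language-tagged labels, so that a portal can serve the same class in several languages. A minimal sketch with rdflib and SKOS (the class URI and labels are made up for the example; this is not BioPortal's internal representation):

        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import SKOS

        g = Graph()
        cls = URIRef("http://example.org/onto/Melanoma")  # hypothetical class

        # The same concept carries one preferred label per language.
        g.add((cls, SKOS.prefLabel, Literal("melanoma", lang="en")))
        g.add((cls, SKOS.prefLabel, Literal("mélanome", lang="fr")))

        # A multilingual portal can then pick the label matching the user's language.
        for label in g.objects(cls, SKOS.prefLabel):
            if label.language == "fr":
                print(label)  # mélanome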
    		
    Clement Jonquet, Christophe Fiorio, Philippe Papet, Stéphanie Belin-Mejean, Claudine Pastor & 'Cellule iPad des enseignants de Polytech Montpellier'. REX : Innovation pédagogique via l'utilisation de tablettes numériques à Polytech Montpellier, In 9ème conférence des Technologies de l'Information et de la Communication pour l'Enseignement, TICE'14, Session Retour d'Expérience (REX). Béziers, France, November 2014. pp. 97-106.

    french

    Abstract: The engineering school of the University of Montpellier (Polytech Montpellier) is engaged in a process of digital adoption and pedagogical innovation. In this context, since January 2013 the school has been equipping its students with personal iPad tablets in order to develop new learning and teaching methods with today's digital tools. The school's teachers are working on rethinking teaching approaches (mainly face-to-face) with the tablets and are formalizing several pedagogical scenarios experimented with the students, for example: simple presentation and note taking; interactive presentation; lab-work reports; multiple-choice quizzes; attendance tracking; etc. These (non-exclusive) scenarios serve as reference use cases for which support, training, logistics and budget can be planned to ensure the greatest success of the operation.
    BibTeX:
    		@inproceedings{Jon14-TICE,
    		  author = {Clement Jonquet and Christophe Fiorio and Philippe Papet and Stéphanie Belin-Mejean and Claudine Pastor and 'Cellule iPad des enseignants de Polytech Montpellier'},
    		  title = {REX : Innovation pédagogique via l'utilisation de tablettes numériques à Polytech Montpellier},
    		  booktitle = {9ème conférence des Technologies de l'Information et de la Communication pour l'Enseignement, TICE'14, Session Retour d'Expérience (REX)},
    		  year = {2014},
    		  pages = {97-106},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_REX_TICE2014_iPad_Polytech.pdf}
    		}
    		
    Clement Jonquet, Christophe Fiorio, Philippe Papet, Stéphanie Belin-Mejean & 'Cellule iPad des enseignants de Polytech Montpellier'. Scénarios pédagogiques numériques via l'utilisation de l'iPad par et pour les étudiants de Polytech Montpellier, In 2ème Sommet iPad en éducation. Montreal, Canada, May 2014. pp. 1.

    poster-demo

    Abstract: The engineering school of the University of Montpellier (Polytech Montpellier, 1200 students) is part of the POLYTECH network and offers ten specialties (energy, water, food, mechanics, computer science, etc.). Together with the University Montpellier 2, the school is engaged in a process of digital adoption, in particular through MOOCs and flipped-classroom teaching. Since 2012, the school has been equipping its students with personal iPad tablets in order to develop new learning and teaching methods with today's digital tools. The iPads offer new teaching possibilities, but they also help bridge the digital divide by securing long-term access to the resources (web, MOOCs, UNTs) that are indispensable today. To launch and drive the operation, the school has set up a teachers' unit (60 people) that works on rethinking teaching approaches (mainly face-to-face) with the tablets and formalizes several pedagogical scenarios experimented with the students, for example: simple presentation and note taking; interactive presentation; lab-work reports; multiple-choice quizzes; attendance tracking; etc. These (non-exclusive) scenarios serve as reference use cases for which support, training, logistics and budget can be planned to ensure the greatest success of the operation. More information: www.polytech.univ-montp2.fr/Ipad
    BibTeX:
    		@inproceedings{Jon14-SommetIPad,
    		  author = {Clement Jonquet and Christophe Fiorio and Philippe Papet and Stéphanie Belin-Mejean and 'Cellule iPad des enseignants de Polytech Montpellier'},
    		  title = {Scénarios pédagogiques numériques via l'utilisation de l'iPad par et pour les étudiants de Polytech Montpellier},
    		  booktitle = {2ème Sommet iPad en éducation},
    		  year = {2014},
    		  pages = {1},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Resume_iPadSommet_2013_Jonquet.pdf}
    		}
    		
    Clement Jonquet, Paea LePendu, Sean Falconer, Adrien Coulet, Natalya F. Noy, Mark A. Musen & Nigam H. Shah. NCBO Resource Index: Ontology-Based Search and Mining of Biomedical Resources, Web Semantics. September 2011. Vol. 9 (3), pp. 316-324. Elsevier.

    journal

    Abstract: The volume of publicly available data in biomedicine is constantly increasing. However, these data are stored in different formats and on different platforms. Integrating these data will enable us to facilitate the pace of medical discoveries by providing scientists with a unified view of this diverse information. Under the auspices of the National Center for Biomedical Ontology (NCBO), we have developed the Resource Index—a growing, large-scale ontology-based index of more than twenty heterogeneous biomedical resources. The resources come from a variety of repositories maintained by organizations from around the world. We use a set of over 200 publicly available ontologies contributed by researchers in various domains to annotate the elements in these resources. We use the semantics that the ontologies encode, such as different properties of classes, the class hierarchies, and the mappings between ontologies, in order to improve the search experience for the Resource Index user. Our user interface enables scientists to search the multiple resources quickly and efficiently using domain terms, without even being aware that there is semantics "under the hood."
    BibTeX:
    		@article{Jon11-JWS11,
    		  author = {Clement Jonquet and Paea LePendu and Sean Falconer and Adrien Coulet and Natalya F. Noy and Mark A. Musen and Nigam H. Shah},
    		  title = {NCBO Resource Index: Ontology-Based Search and Mining of Biomedical Resources},
    		  journal = {Web Semantics},
    		  publisher = {Elsevier},
    		  year = {2011},
    		  volume = {9},
    		  number = {3},
    		  pages = {316-324},
    		  note = {1st prize of Semantic Web Challenge at the 9th International Semantic Web Conference, ISWC'10, Shanghai, China},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-JWS11-ncbo_final.pdf},
    		  doi = {https://doi.org/10.1016/j.websem.2011.06.005}
    		}
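
    At its core, the Resource Index described above maps each ontology concept to the resource elements annotated with it, using the ontology semantics (e.g., the is_a hierarchy) so that a search on a general concept also retrieves elements annotated with more specific ones. A toy Python sketch with entirely invented identifiers and data:

        from collections import defaultdict

        # concept -> parent concept: a tiny is_a hierarchy (invented for illustration)
        is_a = {"melanoma": "skin cancer", "skin cancer": "cancer"}

        # direct annotations produced by concept recognition on resource metadata
        direct = {
            "GEO:GSE1000": ["melanoma"],
            "PubMed:123": ["skin cancer"],
        }

        # Build the index: each element is also indexed under the ancestors of its
        # concepts, so a search for "cancer" retrieves both elements.
        index = defaultdict(set)
        for element, concepts in direct.items():
            for c in concepts:
                while c is not None:
                    index[c].add(element)
                    c = is_a.get(c)

        print(sorted(index["cancer"]))  # ['GEO:GSE1000', 'PubMed:123']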
    		
    Clement Jonquet, Paea LePendu, Sean M. Falconer, Adrien Coulet, Natalya F. Noy, Mark A. Musen & Nigam H. Shah. NCBO Resource Index: Ontology-Based Search and Mining of Biomedical Resources, In Semantic Web Challenge, 9th International Semantic Web Conference, ISWC'10. Shanghai, China, November 2010. pp. 8.

    conference

    Abstract: The volume of publicly available data in biomedicine is constantly increasing. However, this data is stored in different formats on different platforms. Integrating this data will enable us to facilitate the pace of medical discoveries by providing scientists with a unified view of this diverse information. Under the auspices of the National Center for Biomedical Ontology, we have developed the Resource Index—a growing, large-scale index of more than twenty diverse biomedical resources. The resources include heterogeneous data from a variety of repositories maintained by different researchers from around the world. Furthermore, we use a set of 200 publicly available ontologies, also contributed by researchers in various domains, to annotate and to aggregate these descriptions. We use the semantics that the ontologies encode, such as different properties of classes, the class hierarchies, and the mappings between ontologies in order to improve the search experience for the Resource Index user. Our user interface enables scientists to search the multiple resources quickly and efficiently using domain terms, without even being aware that there is semantics under the hood.
    BibTeX:
    		@inproceedings{Jon10-SWC10,
    		  author = {Clement Jonquet and Paea LePendu and Sean M. Falconer and Adrien Coulet and Natalya F. Noy and Mark A. Musen and Nigam H. Shah},
    		  title = {NCBO Resource Index: Ontology-Based Search and Mining of Biomedical Resources},
    		  booktitle = {Semantic Web Challenge, 9th International Semantic Web Conference, ISWC'10},
    		  year = {2010},
    		  pages = {8},
    		  note = {1st prize},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/SemWebChallenge_submission_RI_final.pdf}
    		}
    		
    Clement Jonquet & Mark A. Musen. Gestion du multilinguisme dans un portail d'ontologies: étude de cas pour le NCBO BioPortal, In Terminology & Ontology : Theories and applications Workshop, TOTh'14. Brussels, Belgium, December 2014. pp. 2.

    french

    BibTeX:
    		@inproceedings{Jon14-TOTh,
    		  author = {Clement Jonquet and Mark A. Musen},
    		  title = {Gestion du multilinguisme dans un portail d'ontologies: étude de cas pour le NCBO BioPortal},
    		  booktitle = {Terminology & Ontology : Theories and applications Workshop, TOTh'14},
    		  year = {2014},
    		  pages = {2},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-TOTh2014-BioPortal.pdf}
    		}
    		
    Clement Jonquet, Mark A. Musen & Nigam H. Shah. Building a Biomedical Ontology Recommender Web Service, Biomedical Semantics. June 2010. Vol. 1 (S1), BMC.

    journal

    Abstract: Background: Researchers in biomedical informatics use ontologies and terminologies to annotate their data in order to facilitate data integration and translational discoveries. As the use of ontologies for annotation of biomedical datasets has risen, a common challenge is to identify ontologies that are best suited to annotating specific datasets. The number and variety of biomedical ontologies is large, and it is cumbersome for a researcher to figure out which ontology to use. Methods: We present the Biomedical Ontology Recommender web service. The system uses textual metadata or a set of keywords describing a domain of interest and suggests appropriate ontologies for annotating or representing the data. The service makes a decision based on three criteria. The first is coverage, or the ontologies that provide the most terms covering the input text. The second is connectivity, or the ontologies that are most often mapped to by other ontologies. The final criterion is size, or the number of concepts in the ontologies. The service scores the ontologies as a function of the scores of the annotations created using the National Center for Biomedical Ontology (NCBO) Annotator web service. We used all the ontologies from the UMLS Metathesaurus and the NCBO BioPortal. Results: We compare and contrast our Recommender with previously published efforts through an exhaustive functional comparison. We evaluate and discuss the results of several recommendation heuristics in the context of three real-world use cases. The best recommendation heuristics, rated 'very relevant' by expert evaluators, are the ones based on the coverage and connectivity criteria. The Recommender service (alpha version) is available to the community and is embedded into BioPortal.
    BibTeX:
    		@article{Jon10-JBMS10,
    		  author = {Clement Jonquet and Mark A. Musen and Nigam H. Shah},
    		  title = {Building a Biomedical Ontology Recommender Web Service},
    		  journal = {Biomedical Semantics},
    		  publisher = {BMC},
    		  year = {2010},
    		  volume = {1},
    		  number = {S1},
    		  note = {Selected in Pr. R. Altman's 2011 Year in Review at AMIA TBI.},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-JBMS10-Jonquet.pdf},
    		  doi = {https://doi.org/10.1186/2041-1480-1-S1-S1}
    		}
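
    The Recommender described above ranks ontologies by combining three criteria: coverage of the input text, connectivity (how often an ontology is mapped to by others) and size. A hedged Python sketch of one way such a combined score could look (the weights and statistics are made up; this is not the published scoring function):

        def recommend(candidates, weights=(0.6, 0.3, 0.1)):
            """Rank ontologies by a weighted sum of normalized coverage,
            connectivity and size scores (weights are illustrative only)."""
            w_cov, w_con, w_size = weights

            def norm(value, max_value):
                return value / max_value if max_value else 0.0

            max_cov = max(c["coverage"] for c in candidates.values())
            max_con = max(c["connectivity"] for c in candidates.values())
            max_size = max(c["size"] for c in candidates.values())

            scores = {
                name: w_cov * norm(c["coverage"], max_cov)
                + w_con * norm(c["connectivity"], max_con)
                + w_size * norm(c["size"], max_size)
                for name, c in candidates.items()
            }
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # Invented annotation statistics for three ontologies
        candidates = {
            "SNOMEDCT": {"coverage": 120, "connectivity": 45, "size": 300000},
            "NCIT": {"coverage": 95, "connectivity": 60, "size": 100000},
            "DOID": {"coverage": 40, "connectivity": 20, "size": 12000},
        }
        print(recommend(candidates))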
    		
    Clement Jonquet, Mark A. Musen & Nigam H. Shah. A System for Ontology-Based Annotation of Biomedical Data, In International Workshop on Data Integration in the Life Sciences, DILS'08. Evry, France, June 2008. Lecture Notes in BioInformatics, Vol. 5109 pp. 144-152. Springer.

    conference

    Abstract: We present a system for ontology-based annotation and indexing of biomedical data; the key functionality of this system is to provide a service that enables users to locate biomedical data resources related to particular ontology concepts. The system's indexing workflow processes the text metadata of diverse resource elements such as gene expression data sets, descriptions of radiology images, clinical-trial reports, and PubMed article abstracts to annotate and index them with concepts from appropriate ontologies. The system enables researchers to search biomedical data sources using ontology concepts. What distinguishes this work from other biomedical search tools is: (i) the use of ontology semantics to expand the initial set of annotations automatically generated by a concept recognition tool; (ii) the unique ability to use almost all publicly available biomedical ontologies in the indexing workflow; (iii) the ability to provide the user with integrated results from different biomedical resources in one place. We discuss the system architecture as well as our experiences during its prototype implementation (http://www.bioontology.org/tools.html).
    BibTeX:
    		@inproceedings{Jon08-DILS08,
    		  author = {Clement Jonquet and Mark A. Musen and Nigam H. Shah},
    		  title = {A System for Ontology-Based Annotation of Biomedical Data},
    		  booktitle = {International Workshop on Data Integration in the Life Sciences, DILS'08},
    		  publisher = {Springer},
    		  year = {2008},
    		  volume = {5109},
    		  pages = {144-152},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-DILS08_Jonquet_Musen_Shah_published.pdf}
    		}
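
    The first stage of the annotation workflow described above is syntactic concept recognition using concept names and synonyms; semantic expansion steps then enrich these direct annotations. The following Python sketch approximates only the recognition stage with a tiny dictionary matcher (identifiers and terms are illustrative; the real service relies on a dedicated concept recognizer over hundreds of ontologies):

        import re

        # Tiny dictionary: concept id -> names and synonyms (invented examples)
        dictionary = {
            "DOID:0001": ["melanoma", "malignant melanoma"],
            "DOID:0002": ["skin cancer", "cancer of skin"],
        }

        def recognize(text):
            """Return (concept id, matched term, span) for every dictionary hit."""
            hits = []
            for concept, terms in dictionary.items():
                for term in terms:
                    for m in re.finditer(r"\b" + re.escape(term) + r"\b", text, re.I):
                        hits.append((concept, term, m.span()))
            return hits

        metadata = "Gene expression profiling of malignant melanoma samples."
        for concept, term, span in recognize(metadata):
            print(concept, term, span)
        # A semantic-expansion step would then add ancestors of the matched
        # concepts (e.g., via is_a relations) to the set of annotations.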
    		
    Clement Jonquet, Mark A. Musen & Nigam H. Shah. Help will be provided for this task: Ontology-Based Annotator Web Service, Stanford University, Stanford, CA, USA, May 2008. Research report (BMIR-2008-1317).

    report

    Abstract: Semantic annotation is part of the vision for the semantic web. Ontologies are required for this task, and although they are in common use, there is a lack of annotation tools that are convenient for users, simple to use and easily integrated into their processes. This paper presents an ontology-based annotator web service methodology that can annotate a piece of text with ontology concepts and return annotations in OWL. Currently, the annotation workflow is based on syntactic concept recognition (using concept names and synonyms) and on a set of semantic expansion algorithms that leverage the semantics in ontologies (e.g., is_a relations). The paper also describes an implementation of this service for the life sciences and biomedicine. Our biomedical annotator service uses one of the largest sets of publicly available terminologies and ontologies. We used it to create an index of open biomedical resources. Both the deployed web service and a user interface can be accessed at http://www.bioontology.org/tools.html.
    BibTeX:
    		@techreport{Jon08-OBAReport,
    		  author = {Clement Jonquet and Mark A. Musen and Nigam H. Shah},
    		  title = {Help will be provided for this task: Ontology-Based Annotator Web Service},
    		  school = {Stanford University},
    		  year = {2008},
    		  number = {BMIR-2008-1317},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/RR-BMIR-2008-1317_OBAwebservice_report_Jonquet_Musen_Shah.pdf}
    		}
    		
    Clement Jonquet, Nigam H. Shah & Mark A. Musen. Prototyping a Biomedical Ontology Recommender Service, In Bio-Ontologies: Knowledge in Biology, SIG, ISMB-ECCB'09. Stockholm, Sweden, July 2009. pp. 65-68.

    workshop

    Abstract: As the use of ontologies for annotation of biomedical datasets rises, a common question researchers face is that of identifying which ontologies are relevant for annotating their datasets. The number and variety of biomedical ontologies is now quite large, and it is cumbersome for a scientist to figure out which ontology to (re)use in their annotation tasks. In this paper we describe an early version of an ontology recommender service, which informs the user of the ontologies most appropriate for a given dataset, and we provide results illustrating this situation. The recommender service uses a semantic-annotation-based approach and scores the ontologies according to those annotations. The prototype service can recommend ontologies from UMLS and the NCBO BioPortal and is accessible at http://bioontology.org/tools.html
    BibTeX:
    		@inproceedings{Jon09-BioSIG09,
    		  author = {Clement Jonquet and Nigam H. Shah and Mark A. Musen},
    		  title = {Prototyping a Biomedical Ontology Recommender Service},
    		  booktitle = {Bio-Ontologies: Knowledge in Biology, SIG, ISMB-ECCB'09},
    		  year = {2009},
    		  pages = {65-68},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-Bio-Ontologies2009-Jonquet-Shah-Musen.pdf}
    		}
    		
    Clement Jonquet, Nigam H. Shah & Mark A. Musen. The Open Biomedical Annotator, In American Medical Informatics Association Symposium on Translational BioInformatics, AMIA-TBI'09. San Francisco, CA, USA, March 2009. pp. 56-60.

    conference

    Abstract: The range of publicly available biomedical data is enormous and is expanding fast. This expansion means that researchers now face a hurdle in extracting the data they need from the large amounts of data that are available. Biomedical researchers have turned to ontologies and terminologies to structure and annotate their data with ontology concepts for better search and retrieval. However, this annotation process cannot be easily automated and often requires expert curators. In addition, there is a lack of easy-to-use systems that facilitate the use of ontologies for annotation. This paper presents the Open Biomedical Annotator (OBA), an ontology-based Web service that annotates public datasets with biomedical ontology concepts based on their textual metadata (www.bioontology.org). The biomedical community can use the annotator service to tag datasets automatically with ontology terms (from UMLS and NCBO BioPortal ontologies). Such annotations facilitate translational discoveries by integrating annotated data.
    BibTeX:
    		@inproceedings{Jon09-STB09,
    		  author = {Clement Jonquet and Nigam H. Shah and Mark A. Musen},
    		  title = {The Open Biomedical Annotator},
    		  booktitle = {American Medical Informatics Association Symposium on Translational BioInformatics, AMIA-TBI'09},
    		  year = {2009},
    		  pages = {56-60},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-AmiaSTB09_Jonquet_Shah_Musen.pdf}
    		}
    		
    Clement Jonquet, Nigam H. Shah & Mark A. Musen. Un service Web pour l'annotation sémantique de données biomédicales avec des ontologies, In 13èmes Journées Francophones d'Informatique Médicale, JFIM'09. Nice, France, April 2009. Informatique et Santé, Vol. 17

    french

    Abstract: The range of publicly available biomedical data is enormous and is expanding fast. This expansion means that researchers now face a hurdle in extracting the data they need from the large amounts of data that are available. Biomedical researchers have turned to ontologies and terminologies to structure and annotate their data with ontology concepts for better search and retrieval. However, this annotation process cannot be easily automated and often requires expert curators. In addition, there is a lack of easy-to-use systems that facilitate the use of ontologies for annotation. This paper presents the Open Biomedical Annotator (OBA), an ontology-based Web service that annotates public datasets with biomedical ontology concepts based on their textual metadata. The biomedical community can use the annotator service to tag datasets automatically with ontology and terminology terms (from UMLS and NCBO). We have used the annotator service internally to index several online datasets (e.g., ArrayExpress, PubMed, ClinicalTrials.gov). The index is directly queryable in the NCBO BioPortal ontology repository (www.bioontology.org). Such semantic annotations facilitate translational discoveries by integrating annotated data.
    BibTeX:
    		@inproceedings{Jon09-JFIM09,
    		  author = {Clement Jonquet and Nigam H. Shah and Mark A. Musen},
    		  title = {Un service Web pour l'annotation sémantique de données biomédicales avec des ontologies},
    		  booktitle = {13èmes Journées Francophones d'Informatique Médicale, JFIM'09},
    		  year = {2009},
    		  volume = {17},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-JFIM09-Jonquet-Shah-Musen.pdf}
    		}
    		
    Clement Jonquet, Nigam H. Shah, Cherie H. Youn, Chris Callendar, Margaret-Anne Storey & Mark A. Musen. NCBO Annotator: Semantic Annotation of Biomedical Data, In 8th International Semantic Web Conference, Poster and Demonstration Session, ISWC'09. Washington DC, USA, November 2009.

    poster-demo

    Abstract: The National Center for Biomedical Ontology Annotator is an ontology-based web service for annotation of textual biomedical data with biomedical ontology concepts. The biomedical community can use the Annotator service to tag datasets automatically with concepts from more than 200 ontologies coming from the two most important sets of biomedical ontology and terminology repositories: the UMLS Metathesaurus and NCBO BioPortal. Through annotation (or tagging) of datasets with ontology concepts, unstructured free-text data becomes structured and standardized. Such annotations contribute to creating a biomedical semantic web that facilitates translational scientific discoveries by integrating annotated data.
    BibTeX:
    		@inproceedings{Jon09-ISWC09-demo,
    		  author = {Clement Jonquet and Nigam H. Shah and Cherie H. Youn and Chris Callendar and Margaret-Anne Storey and Mark A. Musen},
    		  title = {NCBO Annotator: Semantic Annotation of Biomedical Data},
    		  booktitle = {8th International Semantic Web Conference, Poster and Demonstration Session, ISWC'09},
    		  year = {2009},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Demo-ISWC09-Jonquet.pdf}
    		}
    		
    Clement Jonquet, Anne Toulet, Elizabeth Arnaud, Sophie Aubin, Esther Dzalé-Yeumo, Vincent Emonet, John Graybeal, Mark A. Musen, Cyril Pommier & Pierre Larmande. Reusing the NCBO BioPortal technology for agronomy to build AgroPortal, In 7th International Conference on Biomedical Ontologies, ICBO'16, Demo Session. Corvallis, Oregon, USA, August 2016. CEUR Workshop Proceedings, Vol. 1747 (D202), pp. 3.

    poster-demo

    Abstract: Many vocabularies and ontologies are produced to represent and annotate agronomic data. By reusing the NCBO BioPortal technology, we have already designed and implemented an advanced prototype ontology repository for the agronomy domain. We plan to turn that prototype into a real service to the community. The AgroPortal project aims at reusing the scientific outcomes and experience of the biomedical domain in the context of plant, agronomic, food and environmental (and perhaps animal) sciences. We offer an ontology portal featuring ontology hosting, search, versioning, visualization, comments and recommendation, which enables semantic annotation as well as storing and exploiting ontology alignments, all within a fully semantic-web-compliant infrastructure. AgroPortal pays specific attention to respecting the requirements of the agronomic community in terms of ontology formats (e.g., SKOS, trait dictionaries) and supported features. In this paper, we present our prototype as well as preliminary outputs of four driving agronomic use cases. With the experience acquired in the biomedical domain, and building on top of an existing technology, we think that AgroPortal offers a robust and stable reference repository that will become highly valuable for the agronomic domain.
    BibTeX:
    		@inproceedings{Jon16-ICBO,
    		  author = {Clement Jonquet and Anne Toulet and Elizabeth Arnaud and Sophie Aubin and Esther Dzalé-Yeumo and Vincent Emonet and John Graybeal and Mark A. Musen and Cyril Pommier and Pierre Larmande},
    		  title = {Reusing the NCBO BioPortal technology for agronomy to build AgroPortal},
    		  booktitle = {7th International Conference on Biomedical Ontologies, ICBO'16, Demo Session},
    		  year = {2016},
    		  volume = {1747},
    		  number = {D202},
    		  pages = {3},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_ICBO_2016_AgroPortal.pdf}
    		}
    		
    Clement Jonquet, Anne Toulet, Elizabeth Arnaud, Sophie Aubin, Esther Dzalé Yeumo, Vincent Emonet, John Graybeal, Marie-Angélique Laporte, Mark A. Musen, Valeria Pesce & Pierre Larmande. AgroPortal: an ontology repository for agronomy, Computers and Electronics in Agriculture. January 2018. Vol. 144 pp. 126-143. Elsevier.

    journal

    Abstract: Many vocabularies and ontologies are produced to represent and annotate agronomic data. However, those ontologies are spread out, in different formats, of different sizes, with different structures and from overlapping domains. Therefore, there is a need for a common platform to receive and host them, align them, and enable their use in agro-informatics applications. By reusing the National Center for Biomedical Ontology (NCBO) BioPortal technology, we have designed AgroPortal, an ontology repository for the agronomy domain. The AgroPortal project reuses the biomedical domain's semantic tools and insights to serve agronomy, but also food, plant, and biodiversity sciences. We offer a portal that features ontology hosting, search, versioning, visualization, comment, and recommendation; enables semantic annotation; stores and exploits ontology alignments; and enables interoperation with the semantic web. AgroPortal specifically satisfies requirements of the agronomy community in terms of ontology formats (e.g., SKOS vocabularies and trait dictionaries) and supported features (offering detailed metadata and advanced annotation capabilities). In this paper, we present our platform's content and features, including the additions to the original technology, as well as preliminary outputs of five driving agronomic use cases that participated in the design and orientation of the project to anchor it in the community. By building on the experience and existing technology acquired from the biomedical domain, we can present in AgroPortal a robust and feature-rich repository of great value for the agronomic domain.
    BibTeX:
    		@article{Jon17-COMPAG,
    		  author = {Clement Jonquet and Anne Toulet and Elizabeth Arnaud and Sophie Aubin and Esther Dzalé Yeumo and Vincent Emonet and John Graybeal and Marie-Angélique Laporte and Mark A. Musen and Valeria Pesce and Pierre Larmande},
    		  title = {AgroPortal: an ontology repository for agronomy},
    		  journal = {Computers and Electronics in Agriculture},
    		  publisher = {Elsevier},
    		  year = {2018},
    		  volume = {144},
    		  pages = {126-143},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_COMPAG_1-s2.0-S0168169916309541_published.pdf},
    		  doi = {https://doi.org/10.1016/j.compag.2017.10.012}
    		}
    		
    Clement Jonquet, Anne Toulet, Elizabeth Arnaud, Sophie Aubin, Esther Dzalé Yeumo, Vincent Emonet, Valeria Pesce & Pierre Larmande. AgroPortal: an open repository of ontologies and vocabularies for agriculture and nutrition data, In GODAN Summit Open Data Research Symposium on Agriculture and Nutrition, GODAN'16. New York, NY, USA, September 2016.

    poster-demo

    BibTeX:
    		@inproceedings{Jon16-GODAN,
    		  author = {Clement Jonquet and Anne Toulet and Elizabeth Arnaud and Sophie Aubin and Esther Dzalé Yeumo and Vincent Emonet and Valeria Pesce and Pierre Larmande},
    		  title = {AgroPortal: an open repository of ontologies and vocabularies for agriculture and nutrition data},
    		  booktitle = {GODAN Summit Open Data Research Symposium on Agriculture and Nutrition, GODAN'16},
    		  year = {2016},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Submission_Abstract_GODAN_Summit_AgroPortal.pdf}
    		}
    		
    Clement Jonquet, Anne Toulet, Biswanath Dutta & Vincent Emonet. Harnessing the power of unified metadata in an ontology repository: the case of AgroPortal, Data Semantics. August 2018. pp. 1-31. Springer.

    journal

    Abstract: Like any resource, ontologies, thesauri, vocabularies and terminologies need to be described with relevant metadata to facilitate their identification, selection and reuse. For ontologies to be FAIR, there is a need for metadata authoring guidelines and for harmonization of existing metadata vocabularies: taken independently, none of them can completely describe an ontology. Ontology libraries and repositories also have an important role to play. Indeed, some metadata properties are intrinsic to the ontology (name, license, description); other information, such as community feedback or relations to other ontologies, is typically information that an ontology library should capture, populate and consolidate to facilitate the processes of identifying and selecting the right ontology(ies) to use. We have studied ontology metadata practices by: (i) analyzing the metadata annotations of 805 ontologies; (ii) reviewing the most standard and relevant vocabularies (23 in total) currently available to describe metadata for ontologies (such as Dublin Core, Ontology Metadata Vocabulary, VoID, etc.); (iii) comparing different metadata implementations in multiple ontology libraries or repositories. We have then built a new metadata model for our AgroPortal vocabulary and ontology repository, a platform dedicated to agronomy based on the NCBO BioPortal technology. AgroPortal now recognizes 346 properties from existing metadata vocabularies that can be used to describe different aspects of ontologies: intrinsic descriptions, people, dates, relations, content, metrics, community, administration, and access. We use them to populate an internal model of 127 properties implemented in the portal and harmonized for all the ontologies. We, together with AgroPortal's users, have spent a significant amount of time editing and curating the metadata of the ontologies to offer better synthesized and harmonized information and to enable new ontology identification features. Our goal was also to facilitate the comprehension of the agronomical ontology landscape by displaying diagrams and charts about all the ontologies on the portal. We have evaluated our work with a user appreciation survey, which confirms that the new features are indeed relevant and help ease the processes of identifying and selecting ontologies. This paper presents how to harness the potential of a complete and unified metadata model with dedicated features in an ontology repository; however, the new AgroPortal model is not a new vocabulary, as it relies on pre-existing ones. A generalization of this work is being studied in a community-driven standardization effort in the context of the RDA Vocabulary and Semantic Services Interest Group.
    BibTeX:
    		@article{Jon17-JODS,
    		  author = {Clement Jonquet and Anne Toulet and Biswanath Dutta and Vincent Emonet},
    		  title = {Harnessing the power of unified metadata in an ontology repository: the case of AgroPortal},
    		  journal = {Data Semantics},
    		  publisher = {Springer},
    		  year = {2018},
    		  pages = {1-31},
    		  url = {https://link.springer.com/content/pdf/10.1007%2Fs13740-018-0091-5.pdf},
    		  doi = {https://doi.org/10.1007/s13740-018-0091-5}
    		}
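
    The article above harmonizes properties from many metadata vocabularies into one internal model. The Python sketch below illustrates the general idea of collapsing external properties into unified fields (the grouping and property names are chosen for illustration; this is not AgroPortal's actual 127-property model):

        # Map properties from external vocabularies onto one harmonized internal field.
        # The external property names are common ones (Dublin Core, OMV, ...), but the
        # grouping below is illustrative only.
        HARMONIZATION = {
            "creator": ["dc:creator", "dct:creator", "omv:hasCreator", "foaf:maker"],
            "license": ["dct:license", "omv:hasLicense", "cc:license"],
            "description": ["dc:description", "dct:description", "omv:description"],
        }

        def harmonize(raw_metadata):
            """Collapse raw ontology metadata (external property -> value)
            into a unified internal record (internal field -> values)."""
            record = {}
            for field, external_props in HARMONIZATION.items():
                values = [raw_metadata[p] for p in external_props if p in raw_metadata]
                if values:
                    record[field] = sorted(set(values))
            return record

        raw = {"dc:creator": "A. Person", "omv:hasLicense": "CC-BY 4.0"}
        print(harmonize(raw))  # {'creator': ['A. Person'], 'license': ['CC-BY 4.0']}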
    		
    Clement Jonquet, Anne Toulet & Vincent Emonet. Two years after: a review of vocabularies and ontologies in AgroPortal, In International Workshop on sources and data integration in agriculture, food and environment using ontologies, IN-OVIVE'17. Montpellier, France, July 2017. pp. 13. EFITA.

    workshop

    Abstract: In mid-2014, we started the AgroPortal project (http://agroportal.lirmm.fr) with the vision of offering a vocabulary and ontology repository for agronomy and related domains such as biodiversity, plant sciences and nutrition. The prototype found good adoption, and growing interest appeared when we presented it to several interlocutors in the agronomy community (e.g., CGIAR (Bioversity International), INRA, IRD, CIRAD, IRSTEA, FAO, RDA, Planteome, EBI). We now have an advanced prototype platform, whose latest version (v1.3) was released in March 2017, that currently hosts 64 public ontologies, including 38 not present in any other such ontology repository (e.g., NCBO BioPortal), and 8 private ones. This paper presents a short review of our current use cases and of the ontologies and vocabularies hosted in AgroPortal as of May 2017. Thanks to a new ontology metadata model, we can now aggregate ontology descriptions to display information about the "landscape of agronomical ontologies", as presented here.
    BibTeX:
    		@inproceedings{Jon17-IN-OVIVE,
    		  author = {Clement Jonquet and Anne Toulet and Vincent Emonet},
    		  title = {Two years after: a review of vocabularies and ontologies in AgroPortal},
    		  booktitle = {International Workshop on sources and data integration in agriculture, food and environment using ontologies, IN-OVIVE'17},
    		  publisher = {EFITA},
    		  year = {2017},
    		  pages = {13},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_IN-OVIVE-2017_AgroPortal_ontologies_review.pdf}
    		}
    		
    Clement Jonquet, Anne Toulet, Vincent Emonet & Pierre Larmande. AgroPortal: an ontology repository for agronomy, In European conference dedicated to the future use of ICT in the agri-food sector, bioresource and biomass sector, EFITA'17, demonstration session. Montpellier, France, July 2017. pp. 261.

    poster-demo

    Abstract: Many vocabularies and ontologies are produced to represent and annotate agronomic data. There is therefore a need for a common platform to identify, host and use them in agro-informatics applications. By reusing the NCBO BioPortal technology, we have designed AgroPortal, an ontology repository for the agronomy domain. The AgroPortal project aims at reusing the scientific outcomes and experience of the biomedical domain in the context of plant sciences, agronomy, food, and biodiversity. We offer an ontology portal that features ontology hosting, search, versioning, visualization, commenting and recommendation, enables semantic annotation, and stores and exploits ontology alignments, all within a fully Semantic Web compliant infrastructure. AgroPortal specifically pays attention to the requirements of the agronomic community in terms of ontology formats (e.g., SKOS, trait dictionaries) and supported features. In this demonstration, we present our platform, currently open and accessible at http://agroportal.lirmm.fr.
    BibTeX:
    		@inproceedings{Jon17-EFITA,
    		  author = {Clement Jonquet and Anne Toulet and Vincent Emonet and Pierre Larmande},
    		  title = {AgroPortal: an ontology repository for agronomy},
    		  booktitle = {European conference dedicated to the future use of ICT in the agri-food sector, bioresource and biomass sector, EFITA'17, demonstration session},
    		  year = {2017},
    		  pages = {261},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Demo_EFITA2017_AgroPortal.pdf}
    		}
    		
    Philippe Lemoisson, Guillaume Surroca, Clement Jonquet & Stefano A. Cerri. ViewpointS: capturing formal data and informal contributions into an adaptive knowledge graph, Knowledge and Learning. May 2018. Vol. 12 (2), pp. 119-145. InderScience.

    journal

    Abstract: Formal data is expressed by means of specific languages whose syntax and semantics have to be mastered, which is an obstacle for collective intelligence. In contrast, informal knowledge relies on weak or ambiguous contributions, e.g., "likes". Reconciling the two forms of knowledge is a big challenge. We propose a brain-inspired knowledge representation approach called ViewpointS, where formal data and informal contributions are merged into an adaptive knowledge graph that is then topologically, rather than logically, explored and assessed. We first illustrate the approach within a mock-up simulation, in which the hypothesis of knowledge emerging from preference dissemination is positively tested. Then we use a real-life web dataset (MovieLens) that mixes formal data about movies with user ratings. Our results show that ViewpointS is a relevant, generic and powerful approach to capture and reconcile formal and informal knowledge and enable collective intelligence.
    BibTeX:
    		@article{Lem17-IJKL,
    		  author = {Philippe Lemoisson and Guillaume Surroca and Clement Jonquet and Stefano A. Cerri},
    		  title = {ViewpointS: capturing formal data and informal contributions into an adaptive knowledge graph},
    		  journal = {Knowledge and Learning},
    		  publisher = {InderScience},
    		  year = {2018},
    		  volume = {12},
    		  number = {2},
    		  pages = {119-145},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-IJKL-2016_Viewpoints_Lemoisson_etal_final.pdf},
    		  doi = {https://doi.org/10.1504/IJKL.2017.10012219}
    		}
    		
    Philippe Lemoisson, Guillaume Surroca, Clement Jonquet & Stefano A. Cerri. ViewpointS: When Social Ranking Meets the Semantic Web, In 30th International Florida Artificial Intelligence Research Society Conference, FLAIRS'17. Marco Island, FL, USA, May 2017. pp. 329-334. AAAI Press.

    conference

    Abstract: Reconciling the ecosystem of Semantic Web data with the ecosystem of social Web participation has been a major issue for the Web Science community. To answer this need, we propose an innovative approach called ViewpointS, where knowledge is topologically, rather than logically, explored and assessed. Both social contributions and linked data are represented by agent-resource-resource triples called "viewpoints". A "viewpoint" is the subjective declaration by an agent (human or artificial) of some semantic proximity between two resources. Knowledge resources and viewpoints form a bipartite graph called the "knowledge graph". Information retrieval is processed on demand by choosing a user's "perspective", i.e., rules for quantifying and aggregating "viewpoints", which yield a "knowledge map". This map is equipped with a topology: the more viewpoints between two given resources, the shorter the distance; moreover, the distances between resources evolve over time according to new viewpoints, in the metaphor of synaptic strengths. Our hypothesis is that these dynamics actualize an adaptive, actionable collective knowledge.
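    Illustrative sketch (not the authors' implementation): the topology described above can be mimicked with a toy knowledge map in which the length of an edge is the inverse of the number of viewpoints between two resources; the agents, resources and the inverse-count rule below are assumptions made for the example only.
    		# Illustrative sketch (not the authors' code): a tiny ViewpointS-like knowledge map.
    		# Each viewpoint is an (agent, resource, resource) triple; a simple "perspective" turns
    		# viewpoint counts into edge lengths, and distances are shortest paths on the map.
    		from collections import defaultdict
    		import heapq
    		
    		viewpoints = [  # hypothetical contributions by human or artificial agents
    		    ("alice", "movie:Alien", "genre:SciFi"),
    		    ("bob", "movie:Alien", "genre:SciFi"),
    		    ("dbpedia", "movie:Alien", "director:RidleyScott"),
    		    ("carol", "genre:SciFi", "movie:Gravity"),
    		]
    		
    		def knowledge_map(viewpoints):
    		    """Aggregate viewpoints: the more viewpoints between two resources, the shorter the edge."""
    		    counts = defaultdict(int)
    		    for _, a, b in viewpoints:
    		        counts[frozenset((a, b))] += 1
    		    edges = defaultdict(dict)
    		    for pair, n in counts.items():
    		        a, b = tuple(pair)
    		        edges[a][b] = edges[b][a] = 1.0 / n  # assumed perspective: inverse viewpoint count
    		    return edges
    		
    		def distance(edges, src, dst):
    		    """Shortest-path (Dijkstra) distance on the knowledge map."""
    		    best = {src: 0.0}
    		    queue = [(0.0, src)]
    		    while queue:
    		        d, node = heapq.heappop(queue)
    		        if node == dst:
    		            return d
    		        if d > best.get(node, float("inf")):
    		            continue
    		        for nxt, w in edges[node].items():
    		            nd = d + w
    		            if nd < best.get(nxt, float("inf")):
    		                best[nxt] = nd
    		                heapq.heappush(queue, (nd, nxt))
    		    return float("inf")
    		
    		edges = knowledge_map(viewpoints)
    		print(distance(edges, "movie:Alien", "movie:Gravity"))  # 0.5 + 1.0 = 1.5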
    BibTeX:
    		@inproceedings{Lem17-FLAIRS,
    		  author = {Philippe Lemoisson and Guillaume Surroca and Clement Jonquet and Stefano A. Cerri},
    		  title = {ViewpointS: When Social Ranking Meets the Semantic Web},
    		  booktitle = {30th International Florida Artificial Intelligence Research Society Conference, FLAIRS'17},
    		  publisher = {AAAI Press},
    		  year = {2017},
    		  pages = {329-334},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_FLAIRS2017_ViewpointS.pdf}
    		}
    		
    Paea LePendu, Natalya F. Noy, Clement Jonquet, Paul R. Alexander, Nigam H. Shah & Mark A. Musen. Optimize First, Buy Later: Analyzing Metrics to Ramp-up Very Large Knowledge Bases, In 9th International Semantic Web Conference, ISWC'10. Shanghai, China, November 2010. Lecture Notes in Computer Science, Vol. 6496 pp. 486-501. Springer.

    conference

    Abstract: As knowledge bases move into the landscape of larger ontologies and have terabytes of related data, we must work on optimizing the performance of our tools. We are easily tempted to buy bigger machines or to fill rooms with armies of little ones to address the scalability problem. Yet careful analysis and evaluation of the characteristics of our data, using metrics, often leads to dramatic improvements in performance. First, are current scalable systems scalable enough? We found that for large or deep ontologies (some as large as 500,000 classes) it is hard to say, because benchmarks obscure the load-time costs of materialization. Therefore, to expose those costs, we have synthesized a set of more representative ontologies. Second, in designing for scalability, how do we manage knowledge over time? By optimizing for data distribution and ontology evolution, we have reduced the population time, including materialization, of the NCBO Resource Index, a knowledge base of 16.4 billion annotations linking 2.4 million terms from 200 ontologies to 3.5 million data elements, from one week to less than one hour for one of the large datasets on the same machine.
    BibTeX:
    		@inproceedings{Lep10-ISWC10,
    		  author = {Paea LePendu and Natalya F. Noy and Clement Jonquet and Paul R. Alexander and Nigam H. Shah and Mark A. Musen},
    		  title = {Optimize First, Buy Later: Analyzing Metrics to Ramp-up Very Large Knowledge Bases},
    		  booktitle = {9th International Semantic Web Conference, ISWC'10},
    		  publisher = {Springer},
    		  year = {2010},
    		  volume = {6496},
    		  pages = {486-501},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-ISWC10-NCBO.pdf}
    		}
    		
    Juan Antonio Lossio-Ventura, Jiang Bian, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. A novel framework for biomedical entity sense induction, Biomedical Informatics. August 2018. Vol. 84 pp. 31-41. Elsevier.

    journal

    Abstract: Background. Rapid advancements in biomedical research have accelerated the number of relevant electronic documents published online, ranging from scholarly articles to news, blogs, and user-generated social media content. Nevertheless, this vast amount of information is poorly organized, making it difficult to navigate. Emerging technologies such as ontologies and knowledge bases (KBs) could help organize and track the information associated with biomedical research developments. A major challenge in the automatic construction of ontologies and KBs is the identification of words with their respective sense(s) from a free-text corpus. Word-sense induction (WSI) is the task of automatically inducing the different senses of a target word in different contexts. In the last two decades, there have been several efforts on WSI. However, few methods are effective in biomedicine and the life sciences. Methods. We developed a framework for biomedical entity sense induction using a mixture of natural language processing, supervised, and unsupervised learning methods, with promising results. It is composed of three main steps: (1) a polysemy detection method to determine if a biomedical entity has multiple possible meanings; (2) a clustering quality index-based approach to predict the number of senses of the biomedical entity; and (3) a method to induce the concept(s) (i.e., senses) of the biomedical entity in a given context. Results. To evaluate our framework, we used the well-known MSH WSD polysemic dataset, which contains 203 annotated ambiguous biomedical entities, each linked to 2–5 concepts. Our polysemy detection method obtained an F-measure of 98%. Second, our approach for predicting the number of senses achieved an F-measure of 93%. Finally, we induced the concepts of the biomedical entities with a clustering algorithm and then extracted the keywords of each cluster to represent the concept. Conclusions. We have developed a framework for biomedical entity sense induction with promising results. Our results can benefit a number of downstream applications, for example, helping to resolve concept ambiguities when building Semantic Web KBs from biomedical text.
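    Illustrative sketch (not the paper's pipeline): the general idea of steps (2) and (3) can be approximated with scikit-learn by clustering toy contexts of an ambiguous term, selecting the number of clusters with a quality index (silhouette here), and labelling each induced sense with the top keywords of its cluster; the contexts and parameters are made up for the example.
    		# Illustrative sketch (not the paper's implementation): induce senses of an ambiguous
    		# term by clustering its contexts, choosing the number of clusters with a quality index,
    		# and representing each induced sense by its top cluster keywords.
    		from sklearn.feature_extraction.text import TfidfVectorizer
    		from sklearn.cluster import KMeans
    		from sklearn.metrics import silhouette_score
    		
    		contexts = [  # toy contexts for the ambiguous term "cold"
    		    "patient reports cough fever and a common cold infection",
    		    "rhinovirus causes the common cold with sore throat",
    		    "the common cold resolves without antibiotics",
    		    "cold agglutinin disease destroys red blood cells",
    		    "cold agglutinins were detected in the blood sample",
    		    "hemolysis caused by cold agglutinin antibodies",
    		]
    		
    		vectorizer = TfidfVectorizer(stop_words="english")
    		X = vectorizer.fit_transform(contexts)
    		
    		# Step 2 analogue: predict the number of senses with a clustering quality index.
    		best_score, best_k, best_model = -1.0, 2, None
    		for k in range(2, 4):
    		    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    		    score = silhouette_score(X, model.labels_)
    		    if score > best_score:
    		        best_score, best_k, best_model = score, k, model
    		
    		# Step 3 analogue: represent each induced sense by the keywords closest to its centroid.
    		terms = vectorizer.get_feature_names_out()
    		for c in range(best_k):
    		    top = best_model.cluster_centers_[c].argsort()[::-1][:3]
    		    print(f"sense {c}: {[terms[i] for i in top]}")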
    BibTeX:
    		@article{Los17-JBI,
    		  author = {Juan Antonio Lossio-Ventura and Jiang Bian and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {A novel framework for biomedical entity sense induction},
    		  journal = {Biomedical Informatics},
    		  publisher = {Elsevier},
    		  year = {2018},
    		  volume = {84},
    		  pages = {31-41},
    		  url = {https://www.sciencedirect.com/science/article/pii/S1532046418301138?via%3Dihub},
    		  doi = {https://doi.org/10.1016/j.jbi.2018.06.007}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. A Way to Automatically Enrich Biomedical Ontologies, In 19th International Conference on Extending Database Technology, EDBT'16, Poster Session. Bordeaux, France, March 2016. (305), pp. 2. OpenProceedings.org.

    poster-demo

    Abstract: Biomedical ontologies play an important role in information extraction in the biomedical domain. We present a workflow for automatically updating biomedical ontologies, composed of four steps. We detail two contributions concerning concept extraction and the semantic linkage of the extracted terminology.
    BibTeX:
    		@inproceedings{Los16-EDBT,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {A Way to Automatically Enrich Biomedical Ontologies},
    		  booktitle = {19th International Conference on Extending Database Technology, EDBT'16, Poster Session},
    		  publisher = {OpenProceedings.org},
    		  year = {2016},
    		  number = {305},
    		  pages = {2},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Poster-EDBT2016_Enrich_Lossio.pdf}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. Automatic Biomedical Term Polysemy Detection, In 10th International Conference on Language Resources and Evaluation, LREC'16. Portoroz, Slovenia, May 2016. pp. 23-28. European Language Resources Association.

    conference

    Abstract: Polysemy is the capacity of a word to have multiple meanings. Polysemy detection is a first step for Word Sense Induction (WSI), which aims to find the different meanings of a term. Polysemy detection is also important for information extraction (IE) systems and for building and enriching terminologies and ontologies. In this paper, we present a novel approach to detect whether a biomedical term is polysemic, with the long-term goal of enriching biomedical ontologies. This approach is based on the extraction of new features of two kinds: (i) features extracted directly from the text dataset, and (ii) features extracted from an induced graph. Our method obtains an accuracy and an F-measure of 0.978.
    BibTeX:
    		@inproceedings{Los16-LREC,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {Automatic Biomedical Term Polysemy Detection},
    		  booktitle = {10th International Conference on Language Resources and Evaluation, LREC'16},
    		  publisher = {European Language Resources Association},
    		  year = {2016},
    		  pages = {23-28},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_LREC2016_Polysemy_Lossio.pdf}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. Biomedical Terminology Extraction: A new combination of Statistical and Web Mining Approaches, In 12th International Workshop on Statistical Analysis of Textual Data, JADT'14. Paris, France, June 2014. pp. 421-432.

    workshop

    Abstract: The objective of this work is to combine statistical and web mining methods for the automatic extraction and ranking of biomedical terms from free text. We present new extraction methods that use linguistic patterns specialized for the biomedical field, term extraction measures such as C-value, and keyword extraction measures such as Okapi BM25 and TF-IDF. We propose several combinations of these measures to improve the extraction and ranking process and investigate which combinations are most relevant in different cases. Each measure gives us a ranked list of candidate terms that we finally re-rank with a new web-based measure. Our experiments show, first, that an appropriate harmonic mean of C-value and keyword extraction measures offers better precision than either used alone, for the extraction of both single-word and multi-word terms; and second, that the best precision results are often obtained when we re-rank using the web-based measure. We illustrate our results on the extraction of English and French biomedical terms from a corpus of laboratory tests available online in both languages. The results are validated using UMLS (in English) and MeSH only (in French) as reference dictionaries.
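    Illustrative sketch (not the paper's exact measures): one way to combine a term-extraction score such as C-value with a keyword-extraction score such as TF-IDF or Okapi BM25 is a harmonic mean over pre-computed scores; the candidate terms and score values below are made-up assumptions.
    		# Illustrative sketch (not the paper's exact formulas): combine a term-extraction score
    		# and a keyword-extraction score with a harmonic mean, then rank candidate terms.
    		def harmonic_mean(a: float, b: float) -> float:
    		    return 2 * a * b / (a + b) if a + b > 0 else 0.0
    		
    		# Hypothetical, pre-computed scores for a few candidate biomedical terms.
    		c_value = {"blood pressure": 8.2, "heart rate": 6.5, "rate": 1.1}
    		tfidf = {"blood pressure": 0.42, "heart rate": 0.31, "rate": 0.05}
    		
    		combined = {t: harmonic_mean(c_value[t], tfidf[t]) for t in c_value}
    		for term, score in sorted(combined.items(), key=lambda kv: kv[1], reverse=True):
    		    print(f"{term}: {score:.3f}")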
    BibTeX:
    		@inproceedings{Los14-JADT,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {Biomedical Terminology Extraction: A new combination of Statistical and Web Mining Approaches},
    		  booktitle = {12th International Workshop on Statistical Analysis of Textual Data, JADT'14},
    		  year = {2014},
    		  pages = {421-432},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_JADT14_Lossio.pdf}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. BIOTEX: A system for Biomedical Terminology Extraction, Ranking, and Validation, In 13th International Semantic Web Conference, Demonstration, ISWC'14. Riva del Garda, Italy, October 2014. CEUR Workshop Proceedings, Vol. 1272 pp. 157-160.

    poster-demo

    Abstract: Term extraction is an essential task in domain knowledge acquisition. Although hundreds of terminologies and ontologies exist in the biomedical domain, the language evolves faster than our ability to formalize and catalog it. We may be interested in the terms and words explicitly used in our corpus in order to index or mine this corpus or just to enrich currently available terminologies and ontologies. Automatic term recognition and keyword extraction measures are widely used in biomedical text mining applications. We present BIOTEX, a Web application that implements state-of-the-art measures for automatic extraction of biomedical terms from free text in English and French.
    BibTeX:
    		@inproceedings{Los14-ISWC,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {BIOTEX: A system for Biomedical Terminology Extraction, Ranking, and Validation},
    		  booktitle = {13th International Semantic Web Conference, Demonstration, ISWC'14},
    		  year = {2014},
    		  volume = {1272},
    		  pages = {157-160},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Demo-ISWC2014_BioTex_final.pdf}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. Extraction automatique de termes combinant différentes informations, In 21ème Traitement Automatique des Langues Naturelles, TALN'14. Marseille, France, July 2014. Vol. 2 pp. 407-412.

    french

    Abstract: For a community, terminology is essential because it enables the description, exchange and retrieval of data. In many domains, the explosion of the volume of textual data requires automating the terminology extraction process, or even its enrichment. Automatic term extraction can rely on natural language processing approaches. Methods taking into account linguistic and statistical aspects, as proposed in the literature, solve some of the problems related to term extraction, such as low frequency, the complexity of extracting multi-word terms, or the human effort needed to validate candidate terms. In this context, we propose two new measures for the extraction and ranking of multi-word terms from domain-specific corpora. In addition, we show how using the Web to evaluate the importance of a candidate term improves the results in terms of precision. These experiments are carried out on the biomedical GENIA corpus using measures from the literature such as C-value.
    BibTeX:
    		@inproceedings{Los14-TALN,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {Extraction automatique de termes combinant différentes informations},
    		  booktitle = {21ème Traitement Automatique des Langues Naturelles, TALN'14},
    		  year = {2014},
    		  volume = {2},
    		  pages = {407-412},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-TALN2014_Lossio_actes.pdf}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. Integration of Linguistic and Web Information to Improve Biomedical Terminology Extraction, In 18th International Database Engineering & Applications Symposium, IDEAS'14. Porto, Portugal, July 2014. pp. 265-269. ACM.

    conference

    Abstract: Comprehensive terminology is essential for a community to describe, exchange, and retrieve data. In many domains, the explosion of text data produced has reached a level at which automatic terminology extraction and enrichment becomes mandatory. Automatic Term Extraction (or Recognition) methods use natural language processing to do so. Methods featuring linguistic and statistical aspects, as often proposed in the literature, solve some of the problems related to term extraction, such as low frequency, the complexity of multi-word term extraction, and the human effort needed to validate candidate terms. In contrast, we present two new measures for extracting and ranking multi-word terms from domain-specific corpora, covering all the aforementioned problems. In addition, we demonstrate how using the Web to evaluate the significance of a multi-word term candidate helps us outperform the precision results obtained on the biomedical GENIA corpus with previously reported measures such as C-value.
    BibTeX:
    		@inproceedings{Los14-IDEAS,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {Integration of Linguistic and Web Information to Improve Biomedical Terminology Extraction},
    		  booktitle = {18th International Database Engineering & Applications Symposium, IDEAS'14},
    		  publisher = {ACM},
    		  year = {2014},
    		  pages = {265-269},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_IDEAS14_Lossio.pdf},
    		  doi = {https://doi.org/10.1145/2628194.2628208}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. SIFR project: The Semantic Indexing of French Biomedical Data Resources, In 1st International Symposium on Information Management and Big Data, SIMBig'14. Cusco, Peru, September 2014. CEUR Workshop Proceedings, Vol. 1318 pp. 58-61.

    workshop

    Abstract: The Semantic Indexing of French Biomedical Data Resources project proposes to investigate the scientific and technical challenges in building ontology-based services to leverage biomedical ontologies and terminologies in indexing, mining and retrieval of French biomedical data.
    BibTeX:
    		@inproceedings{Los14-SimBIG,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {SIFR project: The Semantic Indexing of French Biomedical Data Resources},
    		  booktitle = {1st International Symposium on Information Management and Big Data, SIMBig'14},
    		  year = {2014},
    		  volume = {1318},
    		  pages = {58-61},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-SIMBig2014-SIFR.pdf}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. Towards a mixed approach to extract biomedical terms from text corpus, Knowledge Discovery in Bioinformatics. 2014. Vol. 4 (1), pp. 15. IGI Global.

    journal

    Abstract: The objective of this paper is to present a methodology to automatically extract and rank biomedical terms from free text. We present new extraction methods taking into account linguistic patterns specialized for the biomedical domain, statistical term extraction measures such as C-value, and statistical keyword extraction measures such as Okapi BM25 and TF-IDF. These measures are combined in order to improve the extraction process, and we investigate which combinations are the most relevant in different contexts. Experimental results show that an appropriate harmonic mean of C-value and keyword extraction measures offers better precision, both for single-word and multi-word term extraction. The experiments describe the extraction of English and French biomedical terms from a corpus of laboratory tests available online. The results are validated using UMLS (in English) and MeSH only (in French) as reference dictionaries.
    BibTeX:
    		@article{Los14-IJKDB,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {Towards a mixed approach to extract biomedical terms from text corpus},
    		  journal = {Knowledge Discovery in Bioinformatics},
    		  publisher = {IGI Global},
    		  year = {2014},
    		  volume = {4},
    		  number = {1},
    		  pages = {15},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_IJKDB_2013_Lossio_et_al.pdf},
    		  doi = {https://doi.org/10.4018/ijkdb.2014010101}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. Yet Another Ranking Function for Automatic Multiword Term Extraction, In 9th International Conference on Natural Language Processing, PolTAL'14. Warsaw, Poland, September 2014. Lecture Notes in Artificial Intelligence, Vol. 8686 pp. 52-64. Springer.

    conference

    Abstract: Term extraction is an essential task in domain knowledge acquisition. We propose two new measures to extract multiword terms from domain-specific text. The first measure is based on both linguistic and statistical information. The second measure is graph-based, allowing assessment of the importance of a multiword term within a domain. Existing measures often solve some, but not all, of the problems related to term extraction, e.g., noise, silence, low frequency, large corpora, and the complexity of the multiword term extraction process. Instead, we focus on managing the entire set of problems, e.g., detecting rare terms and overcoming the low-frequency issue. We show that the two proposed measures outperform previously reported precision results for automatic multiword extraction by comparing them with state-of-the-art reference measures.
    BibTeX:
    		@inproceedings{Los14-PolTAL,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {Yet Another Ranking Function for Automatic Multiword Term Extraction},
    		  booktitle = {9th International Conference on Natural Language Processing, PolTAL'14},
    		  publisher = {Springer},
    		  year = {2014},
    		  volume = {8686},
    		  pages = {52-64},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_PolTAL2014_Lossio.pdf}
    		}
    		
    Juan Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. Combining C-value and Keyword Extraction Methods for Biomedical Terms Extraction, In 5th International Symposium on Languages in Biology and Medicine, LBM'13. Tokyo, Japan, December 2013. pp. 45-49. Database Center for Life Science.

    conference

    Abstract: The objective of this work is to extract and rank biomedical terms from free text. We present new extraction methods that use linguistic patterns specialized for the biomedical field, term extraction measures such as C-value, and keyword extraction measures such as Okapi BM25 and TF-IDF. We propose several combinations of these measures to improve the extraction and ranking process. Our experiments show that an appropriate harmonic mean of C-value and keyword extraction measures offers better precision than either used alone, for the extraction of both single-word and multi-word terms. We illustrate our results on the extraction of English and French biomedical terms from a corpus of laboratory tests. The results are validated using UMLS (in English) and MeSH only (in French) as reference dictionaries.
    BibTeX:
    		@inproceedings{Los13-LBM13,
    		  author = {Juan Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {Combining C-value and Keyword Extraction Methods for Biomedical Terms Extraction},
    		  booktitle = {5th International Symposium on Languages in Biology and Medicine, LBM'13},
    		  publisher = {Database Center for Life Science},
    		  year = {2013},
    		  pages = {45-49},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_LBM2013_Lossio.pdf}
    		}
    		
    Juan-Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. Biomedical term extraction: overview and a new methodology, Information Retrieval, Special issue on Medical Information Retrieval. August 2015. Vol. 19 (1), pp. 59-99. Springer.

    journal

    Abstract: Terminology extraction is an essential task in domain knowledge acquisition, as well as for Information Retrieval (IR). It is also a mandatory first step for building and enriching terminologies and ontologies. As often proposed in the literature, existing terminology extraction methods feature linguistic and statistical aspects and solve some, but not all, of the problems related to term extraction, e.g., noise, silence, low frequency, large corpora, and the complexity of the multi-word term extraction process. In contrast, we propose a cutting-edge methodology to extract and rank biomedical terms that covers all the aforementioned problems. This methodology offers several measures based on linguistic, statistical, graph and web aspects. These measures extract and rank candidate terms with excellent precision: we demonstrate that they outperform previously reported precision results for automatic term extraction and work with different languages (English, French, and Spanish). We also demonstrate how the use of graphs and the web to assess the significance of a term candidate enables us to improve precision. We evaluated our methodology on the biomedical GENIA and LabTestsOnline corpora and compared it with previously reported measures.
    BibTeX:
    		@article{Los15-IRMedInfo,
    		  author = {Juan-Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {Biomedical term extraction: overview and a new methodology},
    		  journal = {Information Retrieval, Special issue on Medical Information Retrieval},
    		  publisher = {Springer},
    		  year = {2015},
    		  volume = {19},
    		  number = {1},
    		  pages = {59-99},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_IR-medinfo_Lossio_final.pdf},
    		  doi = {https://doi.org/10.1007/s10791-015-9262-2}
    		}
    		
    Juan-Antonio Lossio-Ventura, Clement Jonquet, Mathieu Roche & Maguelonne Teisseire. Prédiction de la polysémie pour un terme biomédical, In 12ème COnférence en Recherche d'Information et Applications, CORIA'15. Paris, France, March 2015. pp. 437-452.

    french

    Abstract: Polysemy is the characteristic of a term having several meanings. Polysemy prediction is a first step for Sense Induction, which finds the different meanings of a term, as well as for information extraction systems. Moreover, polysemy detection is important for building and enriching terminologies and ontologies. In this article, we present a new approach to predict whether a biomedical term is polysemic or not, with the long-term goal of enriching biomedical ontologies after disambiguating the candidate terms. This approach is based on meta-learning techniques, more precisely on meta-features. In this context, we propose the definition of new meta-features, extracted directly from the text and from a term co-occurrence graph. Our method gives very satisfactory results, with an accuracy and F-measure of 0.978.
    BibTeX:
    		@inproceedings{Los15-CORIA,
    		  author = {Juan-Antonio Lossio-Ventura and Clement Jonquet and Mathieu Roche and Maguelonne Teisseire},
    		  title = {Prédiction de la polysémie pour un terme biomédical},
    		  booktitle = {12ème COnférence en Recherche d'Information et Applications, CORIA'15},
    		  year = {2015},
    		  pages = {437-452},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_CORIA 2015_Lossio.pdf}
    		}
    		
    Marcos Martinez-Romero, Clement Jonquet, Martin J. O'Connor, John Graybeal, Alejandro Pazos & Mark A. Musen. NCBO Ontology Recommender 2.0: An Enhanced Approach for Biomedical Ontology Recommendation, Biomedical Semantics. June 2017. Vol. 8 (21), BMC.

    journal

    Abstract: Background: Ontologies and controlled terminologies have become increasingly important in biomedical research. Researchers use ontologies to annotate their data with ontology terms, enabling better data integration and interoperability across disparate datasets. However, the number, variety and complexity of current biomedical ontologies make it cumbersome for researchers to determine which ones to reuse for their specific needs. To overcome this problem, in 2010 the National Center for Biomedical Ontology (NCBO) released the Ontology Recommender, which is a service that receives a biomedical text corpus or a list of keywords and suggests ontologies appropriate for referencing the indicated terms. Methods: We developed a new version of the NCBO Ontology Recommender. Called Ontology Recommender 2.0, it uses a novel recommendation approach that evaluates the relevance of an ontology to biomedical text data according to four different criteria: (1) the extent to which the ontology covers the input data; (2) the acceptance of the ontology in the biomedical community; (3) the level of detail of the ontology classes that cover the input data; and (4) the specialization of the ontology to the domain of the input data. Results: Our evaluation shows that the enhanced recommender provides higher quality suggestions than the original approach, providing better coverage of the input data, more detailed information about their concepts, increased specialization for the domain of the input data, and greater acceptance and use in the community. In addition, it provides users with more explanatory information, along with suggestions of not only individual ontologies but also groups of ontologies to use together. It also can be customized to fit the needs of different ontology recommendation scenarios. Conclusions: Ontology Recommender 2.0 suggests relevant ontologies for annotating biomedical text data. It combines the strengths of its predecessor with a range of adjustments and new features that improve its reliability and usefulness. Ontology Recommender 2.0 recommends over 500 biomedical ontologies from the NCBO BioPortal platform, where it is openly available (both via the user interface at http://bioportal.bioontology.org/recommender, and via a Web service API).
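    Illustrative sketch (not the actual Recommender 2.0 algorithm): the four evaluation criteria can be combined into a single ranking score, for instance by a weighted aggregation; the weights, candidate ontologies and per-criterion scores below are made-up assumptions.
    		# Illustrative sketch (not the actual Recommender 2.0 code): aggregate the four criteria
    		# (coverage, acceptance, detail, specialization) into one ranking score per ontology.
    		# Weights and per-ontology scores are made-up assumptions for the example.
    		WEIGHTS = {"coverage": 0.55, "acceptance": 0.15, "detail": 0.15, "specialization": 0.15}
    		
    		candidates = {
    		    "NCIT": {"coverage": 0.80, "acceptance": 0.90, "detail": 0.70, "specialization": 0.40},
    		    "DOID": {"coverage": 0.65, "acceptance": 0.70, "detail": 0.60, "specialization": 0.85},
    		}
    		
    		def aggregate(scores: dict) -> float:
    		    """Weighted sum of the criterion scores."""
    		    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    		
    		for onto, scores in sorted(candidates.items(), key=lambda kv: aggregate(kv[1]), reverse=True):
    		    print(f"{onto}: {aggregate(scores):.3f}")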
    BibTeX:
    		@article{Mar17-JBMS,
    		  author = {Marcos Martinez-Romero and Clement Jonquet and Martin J. O'Connor and John Graybeal and Alejandro Pazos and Mark A. Musen},
    		  title = {NCBO Ontology Recommender 2.0: An Enhanced Approach for Biomedical Ontology Recommendation},
    		  journal = {Biomedical Semantics},
    		  publisher = {BMC},
    		  year = {2017},
    		  volume = {8},
    		  number = {21},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_JBMS_Recommender2_s13326-017-0128-y.pdf},
    		  doi = {https://doi.org/10.1186/s13326-017-0128-y}
    		}
    		
    Soumia Melzi & Clement Jonquet. Representing NCBO Annotator results in standard RDF with the Annotation Ontology, In 7th International Semantic Web Applications and Tools for Life Sciences - poster session, SWAT4LS'14. Berlin, Germany, December 2014. CEUR Workshop Proceedings, Vol. 1320 pp. 5. CEUR-WS.org.

    poster-demo

    Abstract: Semantic annotation is part of the Semantic Web vision. The Annotation Ontology is a model that has been proposed to represent any annotation in standard RDF. The NCBO Annotator Web service is a broadly used annotation service in the biomedical domain, offered within the BioPortal platform and giving access to 350+ ontologies. This paper presents a new output format to represent the NCBO Annotator results in RDF with the Annotation Ontology. We briefly present both technologies and describe the mappings that enable the representation. A Java library is available to parse the current JSON outputs into RDF/XML format. By rendering results in RDF, we make the annotations generated by the NCBO Annotator follow Semantic Web standards, making it possible, among other things, to offer them as linked data.
    BibTeX:
    		@inproceedings{Mel14-SWAT4LSposter,
    		  author = {Soumia Melzi and Clement Jonquet},
    		  title = {Representing NCBO Annotator results in standard RDF with the Annotation Ontology},
    		  booktitle = {7th International Semantic Web Applications and Tools for Life Sciences - poster session, SWAT4LS'14},
    		  publisher = {CEUR-WS.org},
    		  year = {2014},
    		  volume = {1320},
    		  pages = {5},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-SWAT4LS_2014-poster.pdf}
    		}
    		
    Soumia Melzi & Clement Jonquet. Scoring semantic annotations returned by the NCBO Annotator, In 7th International Semantic Web Applications and Tools for Life Sciences, SWAT4LS'14. Berlin, Germany, December 2014. CEUR Workshop Proceedings, Vol. 1320 pp. 15. CEUR-WS.org.

    conference

    Abstract: Semantic annotation using biomedical ontologies is required to enable the integration, interoperability, indexing and mining of biomedical data. When annotations are used to support semantic indexing, their scoring and ranking become as important as provenance and metadata on the annotations themselves. In the biomedical domain, one broadly used annotation service is the NCBO Annotator Web service, offered within the BioPortal platform and giving access to 350+ ontologies and terminologies. This paper presents a new scoring method for the NCBO Annotator that ranks annotation results and enables the use of such scores for better indexing of the annotated data. By using a natural language processing-based term extraction measure, C-value, we enhance the original scoring algorithm, which relies on basic match frequencies, and in addition positively discriminate multi-word term annotations. We report results obtained by comparing three different methods against a reference corpus of PubMed-MeSH manual annotations.
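    Illustrative sketch (not the paper's exact formula): the idea of frequency-based scoring with a positive discrimination of multi-word term matches can be approximated by boosting each match frequency with a term-length weight, in the spirit of C-value; the annotations and the boost function below are assumptions made for the example.
    		# Illustrative sketch (not the paper's scoring formula): score annotations by match
    		# frequency, with a boost for multi-word terms inspired by the C-value measure.
    		import math
    		from collections import Counter
    		
    		# Hypothetical annotations: (matched term, ontology class) pairs returned for a text.
    		annotations = [
    		    ("melanoma", "NCIT:C3224"),
    		    ("melanoma", "NCIT:C3224"),
    		    ("malignant melanoma", "NCIT:C3224"),
    		    ("skin", "NCIT:C12470"),
    		]
    		
    		def score(term: str, frequency: int) -> float:
    		    words = len(term.split())
    		    boost = 1.0 + math.log2(words)  # multi-word terms weigh more, as with C-value
    		    return frequency * boost
    		
    		counts = Counter(annotations)
    		ranked = sorted(
    		    ((term, cls, score(term, freq)) for (term, cls), freq in counts.items()),
    		    key=lambda x: x[2], reverse=True,
    		)
    		for term, cls, s in ranked:
    		    print(f"{cls} via '{term}': {s:.2f}")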
    BibTeX:
    		@inproceedings{Mel14-SWAT4LS,
    		  author = {Soumia Melzi and Clement Jonquet},
    		  title = {Scoring semantic annotations returned by the NCBO Annotator},
    		  booktitle = {7th International Semantic Web Applications and Tools for Life Sciences, SWAT4LS'14},
    		  publisher = {CEUR-WS.org},
    		  year = {2014},
    		  volume = {1320},
    		  pages = {15},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-SWAT4LS_2014.pdf}
    		}
    		
    Pierre Monnin, Clement Jonquet, Joel Legrand, Amedeo Napoli & Adrien Coulet. PGxO: A very lite ontology to reconcile pharmacogenomic knowledge units, In Network Tools and Applications in Biology Workshop, NETTAB'17. Palermo, Italy, October 2017. Preprints, pp. 4. PeerJ.

    workshop

    Abstract: We present in this article a lightweight ontology named PGxO and a set of rules for its instantiation, which we developed as a frame for reconciling and tracing pharmacogenomics (PGx) knowledge. PGx studies how genomic variations impact variations in drug response phenotypes. Knowledge in PGx is typically composed of units that take the form of ternary relationships (gene variant, drug, adverse event), stating that an adverse event may occur for patients having the gene variant when exposed to the drug. These knowledge units (i) are available in reference databases such as PharmGKB, (ii) are reported in the scientific biomedical literature, and (iii) may be discovered by mining clinical data such as Electronic Health Records (EHRs). Therefore, knowledge in PGx is heterogeneously described (i.e., with varying quality, granularity, vocabulary, etc.). It is consequently worthwhile to extract, and then compare, assertions from distinct resources. Using PGxO, one can represent multiple provenances for pharmacogenomic knowledge units and reconcile duplicates when they come from distinct sources.
    BibTeX:
    		@inproceedings{Mon17-NETTAB,
    		  author = {Pierre Monnin and Clement Jonquet and Joel Legrand and Amedeo Napoli and Adrien Coulet},
    		  title = {PGxO: A very lite ontology to reconcile pharmacogenomic knowledge units},
    		  booktitle = {Network Tools and Applications in Biology Workshop, NETTAB'17},
    		  publisher = {PeerJ},
    		  year = {2017},
    		  pages = {4},
    		  note = {Peer reviewed by NETTAB'17 PC},
    		  url = {https://peerj.com/preprints/3140v1.pdf}
    		}
    		
    Mark A. Musen, Nigam H. Shah, Natasha F. Noy, Benjamin Dai, Michael Dorf, Nicholas B. Griffith, James Buntrock, Clement Jonquet, Michael Montegut & Daniel L. Rubin. BioPortal: Ontologies and Data Resources with the Click of a Mouse, In American Medical Informatics Association Annual Symposium, Demonstrations, AMIA'08. Washington DC, USA, November 2008. pp. 1223-1224.

    poster-demo

    BibTeX:
    		@inproceedings{Mus08-AMIA08,
    		  author = {Mark A. Musen and Nigam H. Shah and Natasha F. Noy and Benjamin Dai and Michael Dorf and Nicholas B. Griffith and James Buntrock and Clement Jonquet and Michael Montegut and Daniel L. Rubin},
    		  title = {BioPortal: Ontologies and Data Resources with the Click of a Mouse},
    		  booktitle = {American Medical Informatics Association Annual Symposium, Demonstrations, AMIA'08},
    		  year = {2008},
    		  pages = {1223-1224}
    		}
    		
    Natalya F. Noy, Nigam H. Shah, Benjamin Dai, Michael Dorf, Nicholas B. Griffith, Clement Jonquet, Michael Montegut, Daniel L. Rubin, Cherie Youn & Mark A. Musen. BioPortal: A Web Repository for Biomedical Ontologies and Data Resources, In 7th International Semantic Web Conference, Poster and Demonstration Session, ISWC'08. Karlsruhe, Germany, October 2008. CEUR Workshop Proceedings, Vol. 401 CEUR-WS.org.

    poster-demo

    Abstract: Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural language processing, and decision support. The National Center for Biomedical Ontology is developing BioPortal, a Web-based system that serves as a repository for biomedical ontologies. BioPortal defines relationships among those ontologies and between the ontologies and online data resources such as PubMed, ClinicalTrials.gov, and the Gene Expression Omnibus (GEO). BioPortal supports not only the technical requirements for access to biomedical ontologies either via Web browsers or via Web services, but also community-based participation in the evaluation and evolution of ontology content. BioPortal enables ontology users to learn what biomedical ontologies exist, what a particular ontology might be good for, and how individual ontologies relate to one another. BioPortal is available online at http://alpha.bioontology.org.
    BibTeX:
    		@inproceedings{Noy08-ISWC08,
    		  author = {Natalya F. Noy and Nigam H. Shah and Benjamin Dai and Michael Dorf and Nicholas B. Griffith and Clement Jonquet and Michael Montegut and Daniel L. Rubin and Cherie Youn and Mark A. Musen},
    		  title = {BioPortal: A Web Repository for Biomedical Ontologies and Data Resources},
    		  booktitle = {7th International Semantic Web Conference, Poster and Demonstration Session, ISWC'08},
    		  publisher = {CEUR-WS.org},
    		  year = {2008},
    		  volume = {401},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Demo-ISWC2008-BioPortal.pdf}
    		}
    		
    Natalya F. Noy, Nigam H. Shah, Patricia L. Whetzel, Benjamin Dai, Michael Dorf, Nicholas B. Griffith, Clement Jonquet, Daniel L. Rubin, Margaret-Anne Storey, Christopher G. Chute & Mark A. Musen. BioPortal: ontologies and integrated data resources at the click of a mouse, Nucleic Acids Research. May 2009. Vol. 37 (web server), pp. 170-173.

    journal

    Abstract: Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural language processing and decision support. BioPortal (http://bioportal.bioontology.org) is an open repository of biomedical ontologies that provides access via Web services and Web browsers to ontologies developed in OWL, RDF, OBO format and Protégé frames. BioPortal functionality includes the ability to browse, search and visualize ontologies. The Web interface also facilitates community-based participation in the evaluation and evolution of ontology content by providing features to add notes to ontology terms, mappings between terms and ontology reviews based on criteria such as usability, domain coverage, quality of content, and documentation and support. BioPortal also enables integrated search of biomedical data resources such as the Gene Expression Omnibus (GEO), ClinicalTrials.gov, and ArrayExpress, through the annotation and indexing of these resources with ontologies in BioPortal. Thus, BioPortal not only provides investigators, clinicians, and developers with 'one-stop shopping' to programmatically access biomedical ontologies, but also provides support to integrate data from a variety of biomedical resources.
    BibTeX:
    		@article{Noy09-NAR09,
    		  author = {Natalya F. Noy and Nigam H. Shah and Patricia L. Whetzel and Benjamin Dai and Michael Dorf and Nicholas B. Griffith and Clement Jonquet and Daniel L. Rubin and Margaret-Anne Storey and Christopher G. Chute and Mark A. Musen},
    		  title = {BioPortal: ontologies and integrated data resources at the click of a mouse},
    		  journal = {Nucleic Acids Research},
    		  year = {2009},
    		  volume = {37},
    		  number = {web server},
    		  pages = {170-173},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-NAR09-NCBO.pdf},
    		  doi = {https://doi.org/10.1093/nar/gkp440}
    		}
    		
    Gautam K. Parai, Clement Jonquet, Rong Xu, Mark A. Musen & Nigam H. Shah. The Lexicon Builder Web service: Building Custom Lexicons from two hundred Biomedical Ontologies, In American Medical Informatics Association Annual Symposium, AMIA'10. Washington, DC, USA, November 2010.

    conference

    Abstract: Domain-specific biomedical lexicons are extensively used by researchers for natural language processing tasks. Currently these lexicons are created manually by expert curators, and there is a pressing need for automated methods to compile such lexicons. The Lexicon Builder Web service addresses this need and reduces the investment of time and effort involved in lexicon maintenance. The service has three components: Inclusion, which selects one or several ontologies (or their branches) and includes preferred names and synonym terms; Exclusion, which filters terms based on their Medline frequency, syntactic type, UMLS semantic type and match with stopwords; and Output, which aggregates information and handles compression and output formats. Evaluation demonstrates that the service has high accuracy and runtime performance. It is currently being evaluated in several use cases to establish its utility in biomedical information processing tasks. The Lexicon Builder promotes collaboration, sharing and standardization of lexicons amongst researchers by automating the creation, maintenance and cross-referencing of custom lexicons.
    BibTeX:
    		@inproceedings{Par10-LexiconBuilder,
    		  author = {Gautam K. Parai and Clement Jonquet and Rong Xu and Mark A. Musen and Nigam H. Shah},
    		  title = {The Lexicon Builder Web service: Building Custom Lexicons from two hundred Biomedical Ontologies},
    		  booktitle = {American Medical Informatics Association Annual Symposium, AMIA'10},
    		  year = {2010},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-AMIA10-LexiconBuilder.pdf}
    		}
    		
    Valeria Pesce, Jeni Tennison, Lisette Mey, Clement Jonquet, Anne Toulet, Sophie Aubin & Panagiotis Zervas. A Map of Agri-food Data Standards, Global Open Data for Agriculture and Nutrition (GODAN), February 2018. F1000 Research Technical Report (7-177).

    report

    BibTeX:
    		@techreport{Pes18-F1000,
    		  author = {Valeria Pesce and Jeni Tennison and Lisette Mey and Clement Jonquet and Anne Toulet and Sophie Aubin and Panagiotis Zervas},
    		  title = {A Map of Agri-food Data Standards},
    		  school = {Global Open Data for Agriculture and Nutrition (GODAN)},
    		  year = {2018},
    		  number = {7-177},
    		  note = {Not peer reviewed},
    		  url = {https://f1000research.com/documents/7-177},
    		  doi = {https://doi.org/10.7490/F1000RESEARCH.1115260.1}
    		}
    		
    Christophe Roeder, Clement Jonquet, Nigam H. Shah, William A. Baumgartner Jr & Lawrence Hunter. A UIMA Wrapper for the NCBO Annotator, Bioinformatics. May 2010. Vol. 26 (14), pp. 1800-1801.

    journal

    Abstract: Summary: The Unstructured Information Management Architecture (UIMA) framework and Web services are emerging as useful tools for integrating biomedical text mining tools. This note describes our work, which wraps the National Center for Biomedical Ontology (NCBO) Annotator, an ontology-based annotation service, to make it available as a component in UIMA workflows. Availability: This wrapper is freely available on the web at http://bionlp-uima.sourceforge.net/ as part of the UIMA tools distribution from the Center for Computational Pharmacology (CCP) at the University of Colorado School of Medicine. It has been implemented in Java and is supported on Mac OS X, Linux and MS Windows.
    BibTeX:
    		@article{Roe10-BioInfo10,
    		  author = {Christophe Roeder and Clement Jonquet and Nigam H. Shah and William A. Baumgartner Jr and Lawrence Hunter},
    		  title = {A UIMA Wrapper for the NCBO Annotator},
    		  journal = {Bioinformatics},
    		  year = {2010},
    		  volume = {26},
    		  number = {14},
    		  pages = {1800-1801},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_UIMA_Wrapper_BioInformatics10.pdf},
    		  doi = {https://doi.org/10.1093/bioinformatics/btq250}
    		}
    		
    Nigam H. Shah, Nipun Bhatia, Clement Jonquet, Daniel L. Rubin, Annie P. Chiang & Mark A. Musen. Comparison of concept recognizers for building the Open Biomedical Annotator, BMC Bioinformatics. September 2009. Vol. 10 (9:S14), BMC.

    journal

    Abstract: The National Center for Biomedical Ontology (NCBO) is developing a system for automated, ontology-based access to online biomedical resources. The system's indexing workflow processes the text metadata of diverse resources, such as datasets from GEO and ArrayExpress, to annotate and index them with concepts from appropriate ontologies. This indexing requires the use of a concept-recognition tool to identify ontology concepts in the resource's textual metadata. In this paper, we present a comparison of two concept recognizers: NLM's MetaMap and the University of Michigan's Mgrep. We use a number of data sources and dictionaries to evaluate the concept recognizers in terms of precision, recall, speed of execution, scalability and customizability. Our evaluations demonstrate that Mgrep has a clear edge over MetaMap for large-scale service-oriented applications. Based on our analysis, we also suggest areas of potential improvement for Mgrep. We have subsequently used Mgrep to build the Open Biomedical Annotator service. The Annotator service has access to a large dictionary of biomedical terms derived from the Unified Medical Language System (UMLS) and NCBO ontologies. The Annotator also leverages the hierarchical structure of the ontologies and their mappings to expand annotations. The Annotator service is available to the community as a REST Web service for creating ontology-based annotations of their data.
    BibTeX:
    		@article{Sha09-BMCBio09-Mgrep,
    		  author = {Nigam H. Shah and Nipun Bhatia and Clement Jonquet and Daniel L. Rubin and Annie P. Chiang and Mark A. Musen},
    		  title = {Comparison of concept recognizers for building the Open Biomedical Annotator},
    		  journal = {BMC Bioinformatics},
    		  publisher = {BMC},
    		  year = {2009},
    		  volume = {10},
    		  number = {9:S14},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-BMCBioInfo09-Mgrep_Shah-Bhatia-Jonquet.pdf},
    		  doi = {https://doi.org/10.1186/1471-2105-10-S9-S14}
    		}
    		
    Nigam H. Shah, Clement Jonquet, Annie P. Chiang, Atul J. Butte, Rong Chen & Mark A. Musen. Ontology-driven Indexing of Public Datasets for Translational Bioinformatics, BMC Bioinformatics. February 2009. Vol. 10 (2:S1), BMC.

    journal

    Abstract: The volume of publicly available genomic scale data is increasing. Genomic datasets in public repositories are annotated with free-text fields describing the pathological state of the studied sample. These annotations are not mapped to concepts in any ontology, making it difficult to integrate these datasets across repositories. We have previously developed methods to map text-annotations of tissue microarrays to concepts in the NCI thesaurus and SNOMED-CT. In this work we generalize our methods to map text annotations of gene expression datasets to concepts in the UMLS. We demonstrate the utility of our methods by processing annotations of datasets in the Gene Expression Omnibus. We demonstrate that we enable ontology-based querying and integration of tissue and gene expression microarray data. We enable identification of datasets on specific diseases across both repositories. Our approach provides the basis for ontology-driven data integration for translational research on gene and protein expression data. Based on this work we have built a prototype system for ontology based annotation and indexing of biomedical data. The system processes the text metadata of diverse resource elements such as gene expression data sets, descriptions of radiology images, clinical-trial reports, and PubMed article abstracts to annotate and index them with concepts from appropriate ontologies. The key functionality of this system is to enable users to locate biomedical data resources related to particular ontology concepts.
    BibTeX:
    		@article{Sha09-BMCBio09,
    		  author = {Nigam H. Shah and Clement Jonquet and Annie P. Chiang and Atul J. Butte and Rong Chen and Mark A. Musen},
    		  title = {Ontology-driven Indexing of Public Datasets for Translational Bioinformatics},
    		  journal = {BMC Bioinformatics},
    		  publisher = {BMC},
    		  year = {2009},
    		  volume = {10},
    		  number = {2:S1},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article-BMCBioInformatics08-Shah-Jonquet-Ontolgy-based_Indexing.pdf},
    		  doi = {https://doi.org/10.1186/1471-2105-10-S2-S1}
    		}
    		
    Nigam H. Shah, Clement Jonquet & Mark A. Musen. Ontrez project report, Stanford University, CA, USA, November 2007. Research report (BMIR-2007-1289).

    report

    BibTeX:
    		@techreport{Jon07-OntrezReport,
    		  author = {Nigam H. Shah and Clement Jonquet and Mark A. Musen},
    		  title = {Ontrez project report},
    		  school = {Stanford University},
    		  year = {2007},
    		  number = {BMIR-2007-1289},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/RR-BMIR-2007-1289_Shah-Jonquet-Musen_Ontrez_report.pdf}
    		}
    		
    Nigam H. Shah, Natasha F. Noy, Clement Jonquet, Adrien Coulet, Patricia L. Whetzel, Nicholas B. Griffith, Cherie H. Youn, Benjamin Dai, Michael Dorf & Mark A. Musen. Ontology Services for Semantic Applications in Health and Life Sciences, In American Medical Informatics Association Annual Symposium, Demonstrations, AMIA'09. Washington DC, USA, November 2009.

    poster-demo

    Abstract: Recently, researchers have turned to the Semantic Web to integrate, summarize, and interpret disparate knowledge. Ontologies provide the domain knowledge to drive such data integration and information retrieval on the Semantic Web. The successful creation of semantic applications in the health and life sciences requires services that provide software applications with access to bio-ontologies over the Web. The National Center for Biomedical Ontology, one of the seven National Centers for Biomedical Computing created under the NIH Roadmap, has developed BioPortal, which provides access to one of the largest repositories of biomedical ontologies both via Web browsers and Web services. BioPortal enables ontology users to visualize, browse and search ontologies. The BioPortal Web services allow programmatic access, download and traversal of ontologies in software applications, such as the recently released Microsoft Word Ontology add-in. The Web services also allow submission of textual metadata from public databases to automatically 'tag' the text with ontology terms, as well as allowing users to access annotated elements from public data resources.
    BibTeX:
    		@inproceedings{Sha09-AMIA09,
    		  author = {Nigam H. Shah and Natasha F. Noy and Clement Jonquet and Adrien Coulet and Patricia L. Whetzel and Nicholas B. Griffith and Cherie H. Youn and Benjamin Dai and Michael Dorf and Mark A. Musen},
    		  title = {Ontology Services for Semantic Applications in Health and Life Sciences},
    		  booktitle = {American Medical Informatics Association Annual Symposium, Demonstrations, AMIA'09},
    		  year = {2009},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Demo-AMIA09-Shah-Noy-Jonquet.pdf}
    		}
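
    The Web services mentioned above can be called programmatically. A small sketch of such a call, assuming the present-day public BioPortal REST API at data.bioontology.org (which requires a free API key) rather than the 2009 interface described in the paper; the response structure is the one currently documented and may differ:

        # Sketch of a programmatic call to the BioPortal Annotator Web service.
        import json
        import urllib.parse
        import urllib.request

        API_KEY = "YOUR_BIOPORTAL_API_KEY"            # placeholder
        ANNOTATOR_URL = "https://data.bioontology.org/annotator"

        def annotate(text, ontologies="NCIT,SNOMEDCT"):
            params = urllib.parse.urlencode({"text": text,
                                             "ontologies": ontologies,
                                             "apikey": API_KEY})
            with urllib.request.urlopen(ANNOTATOR_URL + "?" + params) as response:
                return json.loads(response.read().decode("utf-8"))

        # Each returned annotation is expected to carry the annotated ontology class
        # as JSON-LD (key "annotatedClass" with an "@id"), per the current documentation.
        for annotation in annotate("melanoma of the skin"):
            print(annotation["annotatedClass"]["@id"])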
    		
    Guillaume Surroca, Philippe Lemoisson, Clement Jonquet & Stefano A. Cerri. Subjective and generic distance in ViewpointS: an experiment on WordNet, In 6th International Conference on Web Intelligence, Mining and Semantics, WIMS'16. Nimes, France, June 2016. (11), pp. 6. ACM.

    conference

    Abstract: We first briefly recall the ViewpointS knowledge representation formalism and discuss the genericity it enables in terms of semantic distance computation. ViewpointS enables the representation and storage of individual viewpoints in a shared knowledge graph. Knowledge providers (i.e., agents) express their individual opinions by emitting viewpoints on the semantic similarity or proximity between resources of the knowledge graph, which can be either agents, documents (i.e., knowledge supports) or concepts (i.e., descriptors). In this paper, we benchmark the ViewpointS approach against other classic semantic distances (graph-based or information-content-based) in a WordNet experiment. Our goal is to demonstrate the value of keeping the subjectivity of the represented knowledge, while having a generic approach that can handle any kind of knowledge and compute similarity between any kind of object.
    BibTeX:
    		@inproceedings{Sur16-WIMS,
    		  author = {Guillaume Surroca and Philippe Lemoisson and Clement Jonquet and Stefano A. Cerri},
    		  title = {Subjective and generic distance in ViewpointS: an experiment on WordNet},
    		  booktitle = {6th International Conference on Web Intelligence, Mining and Semantics, WIMS'16},
    		  publisher = {ACM},
    		  year = {2016},
    		  number = {11},
    		  pages = {6},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_WIMS2016_Viewpoints_Surroca.pdf},
    		  doi = {https://doi.org/10.1145/2912845.2912870}
    		}
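
    For readers unfamiliar with the classic graph-based distances that ViewpointS is benchmarked against, here is a minimal sketch of one such baseline, an edge-counting (shortest-path) distance over a toy is-a hierarchy; the taxonomy below is invented and is not WordNet:

        # Edge-counting semantic distance: length of the shortest path in the taxonomy graph.
        from collections import deque

        IS_A = {                     # child -> parent (toy taxonomy)
            "cat": "feline", "feline": "mammal", "dog": "canine",
            "canine": "mammal", "mammal": "animal",
        }

        def neighbours(node):
            """Undirected neighbourhood of a concept in the is-a graph."""
            result = {IS_A[node]} if node in IS_A else set()
            result |= {child for child, parent in IS_A.items() if parent == node}
            return result

        def path_distance(a, b):
            """Breadth-first search for the number of edges between two concepts."""
            frontier, seen = deque([(a, 0)]), {a}
            while frontier:
                node, dist = frontier.popleft()
                if node == b:
                    return dist
                for nxt in neighbours(node) - seen:
                    seen.add(nxt)
                    frontier.append((nxt, dist + 1))
            return float("inf")

        print(path_distance("cat", "dog"))   # -> 4 (cat, feline, mammal, canine, dog)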
    		
    Guillaume Surroca, Philippe Lemoisson, Clement Jonquet & Stefano A. Cerri. Diffusion de systèmes de préférences par confrontation de points de vue, vers une simulation de la Sérendipité, In 26èmes Journées Francophones d'Ingénierie des Connaissances, IC'15. Rennes, France, June 2015. pp. 12.

    french

    Abstract: Today's Web is formed, among other things, of two types of content: the structured, linked data of the Semantic Web and the user contributions of the social Web. Our ambition is to offer a model to represent these contents and to jointly take advantage of them for collective learning and knowledge discovery. In particular, we wish to capture the phenomenon of Serendipity (i.e., incidental learning) with a subjective knowledge representation formalism in which a set of viewpoints forms a knowledge graph that can be interpreted in a personalized way. We establish a proof of concept of the collective learning capacity enabled by this formalism, called Viewpoints, by building a simulation of knowledge dissemination as it may exist on the Web thanks to the coexistence of linked data and user contributions. Using a behavioral model parameterized to represent various Web navigation strategies, we seek to optimize the dissemination of preference systems. Our results allow us to identify the most suitable strategies for incidental learning and to approach the notion of Serendipity. An implementation of the Viewpoints formalism kernel is available; the underlying model allows the indexing of any type of dataset.
    BibTeX:
    		@inproceedings{Sur15-IC,
    		  author = {Guillaume Surroca and Philippe Lemoisson and Clement Jonquet and Stefano A. Cerri},
    		  title = {Diffusion de systèmes de préférences par confrontation de points de vue, vers une simulation de la Sérendipité},
    		  booktitle = {26èmes Journées Francophones d'Ingénierie des Connaissances, IC'15},
    		  year = {2015},
    		  pages = {12},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_IC2015_Serendipite.pdf}
    		}
    		
    Guillaume Surroca, Philippe Lemoisson, Clement Jonquet & Stefano A. Cerri. Preference Dissemination by Sharing Viewpoints: Simulating Serendipity, In 7th International Conference on Knowledge Engineering and Ontology Development, KEOD'15. Lisbon, Portugal, November 2015. Vol. 2 (2), pp. 402-409.

    conference

    Abstract: The Web currently stores two types of content: linked data from the Semantic Web and user contributions from the social Web. Our aim is to represent simplified aspects of these contents within a unified topological model and to harvest the benefits of integrating both content types in order to prompt collective learning and knowledge discovery. In particular, we wish to capture the phenomenon of Serendipity (i.e., incidental learning) using a subjective knowledge representation formalism, in which several "viewpoints" are individually interpretable from a knowledge graph. We validate our Viewpoints approach by evidencing the collective learning capacity it enables. To that effect, we build a simulation that disseminates knowledge with linked data and user contributions, similar to the way the Web is formed. Using a behavioral model configured to represent various Web navigation strategies, we seek to optimize the distribution of preference systems. Our results outline the most appropriate strategies for incidental learning, bringing us closer to understanding and modeling the processes involved in Serendipity. An implementation of the Viewpoints formalism kernel is available. The underlying Viewpoints model allows us to abstract and generalize our current proof of concept for the indexing of any type of dataset.
    BibTeX:
    		@inproceedings{Sur15-KEOD,
    		  author = {Guillaume Surroca and Philippe Lemoisson and Clement Jonquet and Stefano A. Cerri},
    		  title = {Preference Dissemination by Sharing Viewpoints: Simulating Serendipity},
    		  booktitle = {7th International Conference on Knowledge Engineering and Ontology Development, KEOD'15},
    		  year = {2015},
    		  volume = {2},
    		  number = {2},
    		  pages = {402-409},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_KEOD2015_Viewpoints.pdf}
    		}
    		
    Guillaume Surroca, Philippe Lemoisson, Clement Jonquet & Stefano A. Cerri. Construction et évolution de connaissances par confrontation de points de vue : prototype pour la recherche d'information scientifique, In 25èmes Journées Francophones d'Ingénierie des Connaissances, IC'14. Clermont-Ferrand, France, Mai 2014. pp. 12.

    french

    Abstract: With Web 2.0, users, having become contributors, have taken a central place in the processes of knowledge consumption and production; however, the authorship of contributions is often lost when the information is indexed. VIEWPOINTS is a knowledge representation formalism centred on the individual point of view, whether human or artificial. We consider three types of knowledge objects: documents (supports), agents (emitters) and topics (descriptors). A viewpoint emitted by an agent expresses its opinion on the proximity between two objects. Viewpoints make it possible to define and compute a distance between objects that evolves over the course of interactions (queries and usage feedback) and as new viewpoints are added. A prototype search engine for scientific publication data taken from HAL-LIRMM shows how VIEWPOINTS can transparently bring out a collective intelligence from the interactions of contributing users.
    BibTeX:
    		@inproceedings{Sur14-IC,
    		  author = {Guillaume Surroca and Philippe Lemoisson and Clement Jonquet and Stefano A. Cerri},
    		  title = {Construction et évolution de connaissances par confrontation de points de vue : prototype pour la recherche d'information scientifique},
    		  booktitle = {25èmes Journées Francophones d'Ingénierie des Connaissances, IC'14},
    		  year = {2014},
    		  pages = {12},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_IC2014_Viewpoints_final.pdf}
    		}
    		
    Andon Tchechmedjiev, Amine Abdaoui, Vincent Emonet & Clement Jonquet. ICD10 Coding of Death Certificates with the NCBO and SIFR Annotator(s) at CLEF eHealth 2017 Task 1, In Working Notes of CLEF eHealth Evaluation Lab. Dublin, Ireland, September 2017. CEUR Workshop Proceedings, Vol. 1866 pp. 16.

    workshop

    Abstract: The SIFR BioPortal is an open platform to host French biomedical ontologies and terminologies based on the technology developed by the US National Center for Biomedical Ontology (NCBO). The portal facilitates the use and fostering of terminologies and ontologies by offering a set of services including semantic annotation. The SIFR Annotator (http://bioportal.lirmm.fr/annotator) is a publicly accessible, easily usable ontology-based annotation tool to process French text data and facilitate semantic indexing. The web service relies on the ontology content (preferred labels and synonyms) as well as on the semantics of the ontologies (is-a hierarchies) and their mappings. The SIFR BioPortal also offers the possibility of querying the original NCBO Annotator for English text via a dedicated proxy that extends the original functionality. In this paper, we present a preliminary performance evaluation of the generic annotation web service (i.e., not specifically customized) for coding death certificates, i.e., annotating them with ICD-10 codes. This evaluation is done against the CépiDc/CDC CLEF eHealth 2017 Task 1 manually annotated corpus. For this purpose, we have built custom SKOS vocabularies from the CépiDc/CDC dictionaries as well as the training and development corpora, for all three tasks, using a most frequent code heuristic to assign ambiguous labels. We then submitted the vocabularies to the NCBO and SIFR BioPortal and used the annotation services on the Task 1 datasets. We obtained, for our best runs on each corpus, the following results: English raw corpus (69.08% P, 51.37% R, 58.92% F1); French raw corpus (54.11% P, 48.00% R, 50.87% F1); French aligned corpus (50.63% P, 52.97% R, 51.77% F1).
    BibTeX:
    		@inproceedings{Tch17-CLEFeHealth,
    		  author = {Andon Tchechmedjiev and Amine Abdaoui and Vincent Emonet and Clement Jonquet},
    		  title = {ICD10 Coding of Death Certificates with the NCBO and SIFR Annotator(s) at CLEF eHealth 2017 Task 1},
    		  booktitle = {Working Notes of CLEF eHealth Evaluation Lab},
    		  year = {2017},
    		  volume = {1866},
    		  pages = {16},
    		  url = {http://ceur-ws.org/Vol-1866/paper_62.pdf}
    		}
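
    A minimal sketch of the "most frequent code" heuristic mentioned above (not the authors' exact implementation; the labels and ICD-10 codes are illustrative): when a dictionary label is observed with several codes in the training data, keep the code it is most often assigned to.

        # Assign each ambiguous label its most frequent code from training observations.
        from collections import Counter, defaultdict

        # Hypothetical (label, code) pairs as they might be observed in a training corpus.
        observations = [
            ("cardiac arrest", "I46.9"), ("cardiac arrest", "I46.9"),
            ("cardiac arrest", "I46.0"), ("septic shock", "A41.9"),
        ]

        counts = defaultdict(Counter)
        for label, code in observations:
            counts[label][code] += 1

        dictionary = {label: codes.most_common(1)[0][0] for label, codes in counts.items()}
        print(dictionary)   # -> {'cardiac arrest': 'I46.9', 'septic shock': 'A41.9'}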
    		
    Andon Tchechmedjiev, Amine Abdaoui, Vincent Emonet, Soumia Melzi, Jitendra Jonnagaddala & Clement Jonquet. Enhanced Functionalities for Annotating and Indexing Clinical Text with the NCBO Annotator+, Bioinformatics. January 2018. pp. 3. Oxford.

    journal

    Abstract: Summary: Second use of clinical data commonly involves annotating biomedical text with terminologies and ontologies. The National Center for Biomedical Ontology Annotator is a frequently used annotation service, originally designed for biomedical data, but not very suitable for clinical text annotation. In order to add new functionalities to the NCBO Annotator without hosting or modifying the original Web service, we have designed a proxy architecture that enables seamless extensions by pre-processing of the input text and parameters, and post-processing of the annotations. We have then implemented enhanced functionalities for annotating and indexing free text such as: scoring, detection of context (negation, experiencer, temporality), new output formats, and coarse-grained concept recognition (with UMLS Semantic Groups). In this paper, we present the NCBO Annotator+, a Web service which incorporates these new functionalities, as well as a small set of evaluation results for concept recognition and clinical context detection on two standard evaluation tasks (CLEF eHealth 2017, SemEval 2014). Availability and Implementation: The Annotator+ has been successfully integrated into the SIFR BioPortal platform, an implementation of NCBO BioPortal for French biomedical terminologies and ontologies, to annotate English text. A Web user interface is available for testing and ontology selection (http://bioportal.lirmm.fr/ncbo_annotatorplus); however, the Annotator+ is meant to be used through the Web service application programming interface (http://services.bioportal.lirmm.fr/ncbo_annotatorplus). The code is openly available, and we also provide a Docker packaging to enable easy local deployment to process sensitive (e.g., clinical) data in-house (https://github.com/sifrproject). Supplementary information: Technical details and documentation are available online.
    BibTeX:
    		@article{Tch17-BioInformatics,
    		  author = {Andon Tchechmedjiev and Amine Abdaoui and Vincent Emonet and Soumia Melzi and Jitendra Jonnagaddala and Clement Jonquet},
    		  title = {Enhanced Functionalities for Annotating and Indexing Clinical Text with the NCBO Annotator+},
    		  journal = {Bioinformatics},
    		  publisher = {Oxford},
    		  year = {2018},
    		  pages = {3},
    		  url = {https://academic.oup.com/bioinformatics/advance-article/doi/10.1093/bioinformatics/bty009/4802221},
    		  doi = {https://doi.org/10.1093/bioinformatics/bty009}
    		}
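
    The proxy architecture described above wraps an existing annotation service with pre- and post-processing rather than modifying it. A generic, hypothetical sketch of that pattern (the stubbed base annotator and its 'group' field are placeholders, not the real NCBO Annotator+ API):

        # Proxy pattern: pre-process the input, delegate to the original service, post-process the output.
        def base_annotator(text):
            # Stand-in for a call to the remote annotation Web service (fixed fake results here).
            return [{"label": "fever", "group": "DISO"},
                    {"label": "aspirin", "group": "CHEM"}]

        def annotator_proxy(text, keep_groups=("DISO",)):
            cleaned = " ".join(text.split())                 # pre-processing: normalise whitespace
            annotations = base_annotator(cleaned)            # delegate to the original service
            return [a for a in annotations if a["group"] in keep_groups]   # post-processing: filter

        print(annotator_proxy("patient  presents with fever, given aspirin"))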
    		
    Andon Tchechmedjiev & Clement Jonquet. Enrichment of French Biomedical Ontologies with UMLS Concepts and Semantic Types for Biomedical Named Entity Recognition Though Ontological Semantic Annotation, In Workshop on Language, Ontology, Terminology and Knowledge Structures, LOTKS'17. Montpellier, France, September 2017. (W17-7007), pp. 8. ACL.

    workshop

    Abstract: Medical terminologies and ontologies are a crucial resource for the semantic annotation of biomedical text. In French, there are considerably fewer resources and tools to use them than in English. Some terminologies from the Unified Medical Language System have been translated, but often the identifiers used in the UMLS Metathesaurus, which give it its huge integrated value, have been 'lost' during the process. In this work, we present our method and results in enriching seven French versions of UMLS sources with UMLS Concept Unique Identifiers and Semantic Types, based on information extracted from class labels, multilingual translation mappings and codes. We then measure the impact of the enrichment through the application of the SIFR Annotator, a service to identify ontology concepts in free text deployed within the SIFR BioPortal, a repository for French biomedical ontologies and terminologies. We use the Quaero corpus for evaluation.
    BibTeX:
    		@inproceedings{Tch17-LOTKS,
    		  author = {Andon Tchechmedjiev and Clement Jonquet},
    		  title = {Enrichment of French Biomedical Ontologies with UMLS Concepts and Semantic Types for Biomedical Named Entity Recognition Though Ontological Semantic Annotation},
    		  booktitle = {Workshop on Language, Ontology, Terminology and Knowledge Structures, LOTKS'17},
    		  publisher = {ACL},
    		  year = {2017},
    		  number = {W17-7007},
    		  pages = {8},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_LOTKS2017_Enrichment.pdf}
    		}
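
    A toy sketch of the kind of code-based enrichment described above: UMLS CUIs are attached to classes of a translated terminology by joining on shared source codes. The tables below are invented placeholders, not UMLS or SIFR BioPortal data:

        # Join class -> code with code -> CUI to enrich translated classes with UMLS identifiers.
        class_to_code = {                # French class URI -> source code (placeholders)
            "http://example.org/fr#C1": "A01.1",
            "http://example.org/fr#C2": "B02.9",
        }
        code_to_cui = {                  # source code -> UMLS Concept Unique Identifier (placeholders)
            "A01.1": "CUI:0000001",
            "B02.9": "CUI:0000002",
        }

        enriched = {uri: code_to_cui[code]
                    for uri, code in class_to_code.items() if code in code_to_cui}
        print(enriched)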
    		
    Anne Toulet, Vincent Emonet & Clement Jonquet. Modèle de métadonnées dans un portail d'ontologies, In 6èmes Journées Francophones sur les Ontologies, JFO'16. Bordeaux, France, October 2016.

    french

    Abstract: Scientific communities use a growing number of ontologies. To make them available, there are ontology portals, such as the NCBO BioPortal, which currently gathers more than 500 biomedical ontologies. But faced with this avalanche of resources, how can we find the ontology that will meet our needs? One solution is to describe each ontology with appropriate metadata. However, no metadata vocabulary exhaustive enough to meet this need exists to date. We reviewed a large number of vocabularies, such as Dublin Core, OMV, DCAT and VOID, as well as the properties implemented by the most common ontology portals. From these we produced a simplified model composed of 124 properties. We present here some examples of the use of these properties in AgroPortal, an ontology portal dedicated to agronomy, and we explain how they are managed and used for the description and identification of ontologies.
    BibTeX:
    		@inproceedings{Tou16-JFO,
    		  author = {Anne Toulet and Vincent Emonet and Clement Jonquet},
    		  title = {Modèle de métadonnées dans un portail d'ontologies},
    		  booktitle = {6èmes Journées Francophones sur les Ontologies, JFO'16},
    		  year = {2016},
    		  note = {Best paper award},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Article_JFO2016_metadata_AgroPortal.pdf}
    		}
    		
    Simon N. Twigger, Jennifer Smith, Rajni Nigam, Clement Jonquet & Mark A. Musen. Billion Data Points Trapped in International Data Repository - Daring rescue Planned!, In 3rd International Biocuration Conference, Poster presentations. Berlin, Germany, April 2009. pp. 34.

    poster-demo

    Abstract: Researchers from the Medical College of Wisconsin released plans today to rescue over 1.5 billion data points currently trapped inside NCBI's Gene Expression Omnibus (GEO) data repository. In a collaboration with the National Center for Biomedical Ontology (NCBO), the Wisconsin team led by Dr. Simon N. Twigger intends to use the NCBO's extensive ontological indexing of the GEO data to gain access to the wealth of expression data currently stored in this vital public database. "The data just wants to be free, we have to find ways to make this happen!" Twigger told reporters in a recent interview. Building on the needs of researchers using the Rat as a model system, the MCW team will explore the use of automated ontological indexing of the GEO database as a framework for subsequent data mining. The NCBO already indexes a variety of data resources with a large number of biomedical ontologies as part of its Open Biomedical Resource project. This has good coverage in the areas of disease and anatomy and will allow the identification of expression datasets measured under relevant physiological conditions or from specific organs or tissues. By itself this has great value to researchers, but the MCW team hopes to do more than provide improved searching. By combining ontological annotations from the expression datasets and samples with the gene-level expression data contained in the database, it should be possible to create many millions of new functional annotations. These will capture the tissues and organs where genes have been shown to be expressed and allow this information to be used in many other settings. Discussing the project, Twigger was enthusiastic: "There's clearly a lot of work to be done and it's not going to be easy, but there's a huge amount of data in these types of repositories and if we can find effective ways to set it free then the benefits could be enormous!" More details of the project along with preliminary data and progress to date will be presented at the 3rd International BioCurator meeting in Berlin in April.
    BibTeX:
    		@inproceedings{Twi09-BioCurator09,
    		  author = {Simon N. Twigger and Jennifer Smith and Rajni Nigam and Clement Jonquet and Mark A. Musen},
    		  title = {Billion Data Points Trapped in International Data Repository - Daring rescue Planned!},
    		  booktitle = {3rd International Biocuration Conference, Poster presentations},
    		  year = {2009},
    		  pages = {34},
    		  url = {http://projects.eml.org/sdbv/events/BiocurationMeeting/abstractbook_online.pdf}
    		}
    		
    Aravind Venkatesan, Pierre Larmande, Clement Jonquet, Manuel Ruiz & Patrick Valduriez. Facilitating efficient knowledge management and discovery in the Agronomic Sciences, In 4th Plenary Meeting of the Research Data Alliance. Amsterdam, The Netherlands, September 2014.

    poster-demo

    Abstract: The advancement of empirical and information technologies in recent years has drastically increased the amount of data in the fields of the Life Sciences and Agronomic Sciences. To understand the complexity of a given system, it is important to link (integrate) diverse datasets. A promising solution to data integration challenges is offered by Semantic Web technologies. The Semantic Web was proposed to remedy the fragmentation of all potentially useful information on the web. Currently, the biomedical domain has accepted Semantic Web technologies as a means to manage (integrate) knowledge. Although we are witnessing an increased usage of ontologies within the Agronomic Sciences, the data in this domain is highly distributed in nature. Utilizing these data resources more effectively and taking advantage of associated cross-disciplinary research opportunities poses a major challenge to both domain experts and information technologists.
    BibTeX:
    		@inproceedings{Vin14-RDA,
    		  author = {Aravind Venkatesan and Pierre Larmande and Clement Jonquet and Manuel Ruiz and Patrick Valduriez},
    		  title = {Facilitating efficient knowledge management and discovery in the Agronomic Sciences},
    		  booktitle = {4th Plenary Meeting of the Research Data Alliance},
    		  year = {2014},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Poster_RDA2014_Venkatesan_et_al.pdf}
    		}
    		
    Aravind Venkatesan, Gildas Tagny, Nordine El Hassouni, Imene Chentli, Valentin Guignon, Clement Jonquet, Manuel Ruiz & Pierre Larmande. Agronomic Linked Data: a knowledge system to enable integrative biology in Agronomy, PLoS One. 2018. Vol. IN PRESS. PLOS.

    journal

    BibTeX:
    		@article{Ven17-PLoSOne,
    		  author = {Aravind Venkatesan and Gildas Tagny and Nordine El Hassouni and Imene Chentli and Valentin Guignon and Clement Jonquet and Manuel Ruiz and Pierre Larmande},
    		  title = {Agronomic Linked Data: a knowledge system to enable integrative biology in Agronomy},
    		  journal = {PLoS One},
    		  publisher = {PLOS},
    		  year = {2018},
    		  volume = {IN PRESS}
    		}
    		
    Patricia L. Whetzel, Clement Jonquet, Cherie H. Youn, Michael Dorf, Ray Fergerson, Mark A. Musen & Nigam H. Shah. The NCBO Annotator: Ontology-Based Annotation as a Web Service, In International Conference on Biomedical Ontology, ICBO'11, Demonstration session. Buffalo, NY, USA, July 2011. pp. 302-303.

    poster-demo

    BibTeX:
    		@inproceedings{Whe11-ICBO11,
    		  author = {Patricia L. Whetzel and Clement Jonquet and Cherie H. Youn and Michael Dorf and Ray Fergerson and Mark A. Musen and Nigam H. Shah},
    		  title = {The NCBO Annotator: Ontology-Based Annotation as a Web Service},
    		  booktitle = {International Conference on Biomedical Ontology, ICBO'11, Demonstration session},
    		  year = {2011},
    		  pages = {302-303},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Demo-ICBO-2011_NCBO-Annotator.pdf}
    		}
    		
    Patricia L. Whetzel, Natasha F. Noy, Nigam H. Shah, Benjamin Dai, Michael Dorf, Nicholas B. Griffith, Clement Jonquet, Michael J. Montegut, Daniel L. Rubin, Cherie H. Youn & Mark A. Musen. BioPortal: A Web Repository for Biomedical Ontologies and Ontology-indexed Data Resources, In Pacific Symposium on Biocomputing, Poster presentations, PSB'09. Hawaii, USA, January 2009. pp. 90.

    poster-demo

    Abstract: Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, natural language processing, and decision support. The National Center for Biomedical Ontology, one of the seven National Centers for Biomedical Computing created under the NIH Roadmap, is developing BioPortal, a Web-based system that serves as a repository for biomedical ontologies. BioPortal defines relationships among those ontologies and between the ontologies and online data resources such as PubMed, ClinicalTrials.gov, and the Gene Expression Omnibus (GEO). BioPortal supports not only the technical requirements for access to biomedical ontologies either via Web browsers or via Web services, but also community-based participation in the evaluation and evolution of ontology content. BioPortal enables ontology users to learn what biomedical ontologies exist, what a particular ontology might be good for, and how individual ontologies relate to one another. The BioPortal system is available online at the following location: http://bioportal.bioontology.org/.
    BibTeX:
    		@inproceedings{Whe09-PSB09,
    		  author = {Patricia L. Whetzel and Natasha F. Noy and Nigam H. Shah and Benjamin Dai and Michael Dorf and Nicholas B. Griffith and Clement Jonquet and Michael J. Montegut and Daniel L. Rubin and Cherie H. Youn and Mark A. Musen},
    		  title = {BioPortal: A Web Repository for Biomedical Ontologies and Ontology-indexed Data Resources},
    		  booktitle = {Pacific Symposium on Biocomputing, Poster presentations, PSB'09},
    		  year = {2009},
    		  pages = {90}
    		}
    		
    Patricia L. Whetzel, Natalya F. Noy, Nigam H. Shah, Benjamin Dai, Michael Dorf, Nicholas B. Griffith, Clement Jonquet, Cherie H. Youn, Michael J. Montegut, Daniel L. Rubin, Margaret-Anne Storey, Chris G. Chute & Mark A. Musen. BioPortal: A Web Repository for Biomedical Ontologies and Data Resources, In 3rd International Biocuration Conference, Poster presentations. Berlin, Germany, April 2009. pp. 97.

    poster-demo

    Abstract: Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural language processing, and decision support. BioPortal (http://bioportal.bioontology.org) is a Web-based system that serves as a repository for biomedical ontologies developed in OWL, OBO format, RRF, or Protégé frames. Features of BioPortal include mappings between ontologies, visualization of terms and relations, and the ability to comment on individual terms within an ontology or the entire ontology. In addition, users can add information on projects that use ontologies and link these to ontologies in BioPortal as well as add reviews and ratings of ontologies. In this way, BioPortal supports community-based participation in the evaluation and evolution of ontology content. In addition to Web-based access, BioPortal provides programmatic access to the ontology content via a suite of REST web services. BioPortal also serves as a gateway to search and integrate multiple biomedical resources such as PubMed abstracts, ClinicalTrials.gov, the Gene Expression Omnibus (GEO), and ArrayExpress. The included biomedical resources are first annotated with terms from ontologies in BioPortal, and then the annotations are expanded through the use of semantic expansion components, such as is_a transitive closure and ontology mappings. The key functionality of the ontology-based indexing is to enable users to locate biomedical data resources related to particular ontology concepts. We have also made the ontology-term recognition functionality available as an automatic 'annotator' web service for public use in collaborative curation workflows or web applications. In this way, BioPortal provides investigators, curators, and developers a 'one-stop shop' where they can learn what biomedical ontologies exist, what a particular ontology might be good for, how ontologies are being used, and how individual ontologies relate to one another, as well as access biomedical ontologies programmatically for use in their own applications and data repositories. BioPortal is a product of the National Center for Biomedical Ontology, one of seven National Centers for Biomedical Computing created under the NIH Roadmap.
    BibTeX:
    		@inproceedings{Whe09-BioCurator09,
    		  author = {Patricia L. Whetzel and Natalya F. Noy and Nigam H. Shah and Benjamin Dai and Michael Dorf and Nicholas B. Griffith and Clement Jonquet and Cherie H. Youn and Michael J. Montegut and Daniel L. Rubin and Margaret-Anne Storey and Chris G. Chute and Mark A. Musen},
    		  title = {BioPortal: A Web Repository for Biomedical Ontologies and Data Resources},
    		  booktitle = {3rd International Biocuration Conference, Poster presentations},
    		  year = {2009},
    		  pages = {97},
    		  url = {http://projects.eml.org/sdbv/events/BiocurationMeeting/abstractbook_online.pdf}
    		}
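
    A minimal sketch of the is_a transitive-closure expansion mentioned above, under which a resource annotated with a concept is also indexed under all of that concept's ancestors (the toy hierarchy is invented):

        # Expand direct annotations with all ancestors reachable through is_a links.
        IS_A = {"melanoma": "skin neoplasm", "skin neoplasm": "neoplasm"}   # child -> parent

        def ancestors(concept):
            """All concepts reachable by following is_a links upward."""
            result = []
            while concept in IS_A:
                concept = IS_A[concept]
                result.append(concept)
            return result

        def expand(direct_annotations):
            expanded = set(direct_annotations)
            for concept in direct_annotations:
                expanded.update(ancestors(concept))
            return expanded

        print(expand({"melanoma"}))   # -> {'melanoma', 'skin neoplasm', 'neoplasm'}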
    		
    Patricia L. Whetzel, Natalya F. Noy, Nigam H. Shah, Benjamin Dai, Michael Dorf, Nicholas B. Griffith, Clement Jonquet, Cherie H. Youn, Daniel L. Rubin & Mark A. Musen. BioPortal: Ontologies and Integrated Data Resources at the Click of a Mouse, In 17th Annual International Conference on Intelligent Systems for Molecular Biology (ISMB'09) and the 8th European Conference on Computational Biology (ECCB'09), Poster session. Stockholm, Sweden, July 2009.

    poster-demo

    Abstract: Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural language processing, and decision support. BioPortal (http://bioportal.bioontology.org) is an open repository of biomedical ontologies that provides programmatic and web-based access to ontologies developed in OBO, OWL, Protégé frames, and RDF. BioPortal functionality includes the ability to browse, search, and visualize ontologies. The web interface also facilitates community-based participation in the evaluation and evolution of ontology content by providing features to add notes to ontology terms, to add mappings between terms, and to provide an overall review of the ontology. BioPortal also provides an integrated search of biomedical data resources such as PubMed, ClinicalTrials.gov, the Gene Expression Omnibus (GEO), and ArrayExpress, through the annotation and indexing of these resources with ontologies in BioPortal. Thus, BioPortal not only provides investigators, clinicians, and developers 'one-stop shopping' to programmatically access biomedical ontologies, but also integrates data from various biomedical resources.
    BibTeX:
    		@inproceedings{Whe09-ISMB09,
    		  author = {Patricia L. Whetzel and Natalya F. Noy and Nigam H. Shah and Benjamin Dai and Michael Dorf and Nicholas B. Griffith and Clement Jonquet and Cherie H. Youn and Daniel L. Rubin and Mark A. Musen},
    		  title = {BioPortal: Ontologies and Integrated Data Resources at the Click of a Mouse},
    		  booktitle = {17th Annual International Conference on Intelligent Systems for Molecular Biology (ISMB'09) and the 8th European Conference on Computational Biology (ECCB'09), Poster session},
    		  year = {2009}
    		}
    		
    Patricia L. Whetzel, Nigam H. Shah, Natalya F. Noy, Benjamin Dai, Michael Dorf, Nicholas B. Griffith, Clement Jonquet, Cherie H. Youn, Chris Callendar, Adrien Coulet, Daniel L. Rubin, Barry Smith, Margaret-Anne Storey, Christopher G. Chute & Mark A. Musen. BioPortal: Ontologies and Integrated Data Resources at the Click of the Mouse, In International Conference on Biomedical Ontology, ICBO'09. Buffalo, NY, USA, July 2009. pp. 197.

    poster-demo

    Abstract: BioPortal (http://bioportal.bioontology.org) is an open repository of biomedical ontologies that provides programmatic and web-based access to ontologies developed in OBO, OWL, Protégé frames, and RDF. Features include browsing, searching, and visualization of ontologies. Searching of integrated data resources is also possible through ontology-based indexing of biomedical resources with BioPortal ontologies.
    BibTeX:
    		@inproceedings{Whe09-ICBO09,
    		  author = {Patricia L. Whetzel and Nigam H. Shah and Natalya F. Noy and Benjamin Dai and Michael Dorf and Nicholas B. Griffith and Clement Jonquet and Cherie H. Youn and Chris Callendar and Adrien Coulet and Daniel L. Rubin and Barry Smith and Margaret-Anne Storey and Christopher G. Chute and Mark A. Musen},
    		  title = {BioPortal: Ontologies and Integrated Data Resources at the Click of the Mouse},
    		  booktitle = {International Conference on Biomedical Ontology, ICBO'09},
    		  year = {2009},
    		  pages = {197},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/PosterICBO09-NCBO.pdf}
    		}
    		
    Patricia L. Whetzel, Nigam H. Shah, Natalya F. Noy, Benjamin Dai, Michael Dorf, Nicholas B. Griffith, Clement Jonquet, Cherie H. Youn, Adrien Coulet, Chris Callendar, Daniel L. Rubin, Barry Smith, Margaret-Anne Storey, Christopher G. Chute & Mark A. Musen. BioPortal: Ontologies and Integrated Data Resources at the Click of a Mouse, In Bio-Ontologies: Knowledge in Biology, SIG, Poster session, ISMBECCB'09. Stockholm, Sweden, July 2009.

    poster-demo

    Abstract: Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural language processing, and decision support. BioPortal (http://bioportal.bioontology.org) is an open repository of biomedical ontologies that provides programmatic and web-based access to ontologies developed in OBO, OWL, Protégé frames, and RDF. BioPortal functionality includes the ability to browse, search, and visualize ontologies. The web interface also facilitates community-based participation in the evaluation and evolution of ontology content by providing features to add notes to ontology terms, to add mappings between terms, and to provide an overall review of the ontology. BioPortal also provides an integrated search of biomedical data resources such as PubMed, ClinicalTrials.gov, the Gene Expression Omnibus (GEO), and ArrayExpress, through the annotation and indexing of these resources with ontologies in BioPortal. Thus, BioPortal not only provides investigators, clinicians, and developers 'one-stop shopping' to programmatically access biomedical ontologies, but also integrates data from various biomedical resources.
    BibTeX:
    		@inproceedings{Whe09-BioSIG09,
    		  author = {Patricia L. Whetzel and Nigam H. Shah and Natalya F. Noy and Benjamin Dai and Michael Dorf and Nicholas B. Griffith and Clement Jonquet and Cherie H. Youn and Adrien Coulet and Chris Callendar and Daniel L. Rubin and Barry Smith and Margaret-Anne Storey and Christopher G. Chute and Mark A. Musen},
    		  title = {BioPortal: Ontologies and Integrated Data Resources at the Click of a Mouse},
    		  booktitle = {Bio-Ontologies: Knowledge in Biology, SIG, Poster session, ISMBECCB'09},
    		  year = {2009}
    		}
    		
    Patricia L. Whetzel, Nigam H. Shah, Natalya F. Noy, Clement Jonquet, Adrien Coulet, Nicholas B. Griffith, Cherie H. Youn, Michael Dorf & Mark A. Musen. Ontology Web Services for Semantic Applications, In Pacific Symposium on Biocomputing, Poster presentations, PSB'10. Hawaii, USA, January 2010.

    poster-demo

    BibTeX:
    		@inproceedings{Whe10-PSB10,
    		  author = {Patricia L. Whetzel and Nigam H. Shah and Natalya F. Noy and Clement Jonquet and Adrien Coulet and Nicholas B. Griffith and Cherie H. Youn and Michael Dorf and Mark A. Musen},
    		  title = {Ontology Web Services for Semantic Applications},
    		  booktitle = {Pacific Symposium on Biocomputing, Poster presentations, PSB'10},
    		  year = {2010},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Poster-PSB-2010_NCBO_Web_services.pdf}
    		}
    		
    Patricia L. Whetzel, Nigam H. Shah, Natalya F. Noy, Clement Jonquet, Adrien Coulet, Cherie Youn, Michael Dorf & Mark A. Musen. Ontology-based Web Services for Data Annotation and Integration, In 18th International Conference on Intelligent Systems for Molecular Biology, Technology track, ISMB'10. Boston, MA, USA, July 2010. pp. 1.

    poster-demo

    BibTeX:
    		@inproceedings{Whe10-ISMB10,
    		  author = {Patricia L. Whetzel and Nigam H. Shah and Natalya F. Noy and Clement Jonquet and Adrien Coulet and Cherie Youn and Michael Dorf and Mark A. Musen},
    		  title = {Ontology-based Web Services for Data Annotation and Integration},
    		  booktitle = {18th International Conference on Intelligent Systems for Molecular Biology, Technology track, ISMB'10},
    		  year = {2010},
    		  pages = {1},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Poster-ISMB10-NCBO.pdf}
    		}
    		
    Patricia L. Whetzel, Nigam H. Shah, Natalya F. Noy, Clement Jonquet, Adrien Coulet, Cherie Youn, Michael Dorf & Mark A. Musen. Ontology-based Web Services for Semantic Applications, In Bio-Ontologies: Semantic Applications in Life Sciences, SIG, Poster session, ISMB'10. Boston, MA, USA, July 2010.

    poster-demo

    BibTeX:
    		@inproceedings{Whe10-BioSIG10,
    		  author = {Patricia L. Whetzel and Nigam H. Shah and Natalya F. Noy and Clement Jonquet and Adrien Coulet and Cherie Youn and Michael Dorf and Mark A. Musen},
    		  title = {Ontology-based Web Services for Semantic Applications},
    		  booktitle = {Bio-Ontologies: Semantic Applications in Life Sciences, SIG, Poster session, ISMB'10},
    		  year = {2010},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Poster-BioOntologies-2010-NCBO.pdf}
    		}
    		
    Patricia L. Whetzel, Nigam H. Shah, Natasha F. Noy, Clement Jonquet, Cherie H. Youn, Paul R. Alexander, Michael Dorf & Mark A. Musen. Ontology-based Tools to Enhance the Curation Workflow, In 4th International Biocuration Conference, Poster Session. Tokyo, Japan, October 2010.

    poster-demo

    Abstract: In order to effectively search, retrieve, and analyze data, it is oftentimes curated and tagged with ontology terms. However, the amount of effort to curate the existing set of data resources is beyond the limits of purely manual curation. We present three ontology-based tools developed by the National Center for Biomedical Ontology to enhance the curation workflow: Ontology Widgets, Notes, and the Annotator. The Ontology Widgets provide a mechanism to use ontologies in Web-based forms without the need to locally parse and store the ontology, and offer a variety of functionality including term autocompletion and ontology visualization. The Ontology Widgets are implemented for all BioPortal ontologies, including those from the OBO Foundry and the Unified Medical Language System. The Notes feature of BioPortal allows structured term proposals to be submitted in order to request the addition or modification of a term in an ontology. The term proposals can be added directly via the BioPortal Web interface or programmatically via the Notes Web service. Notifications of new Notes and replies are both RSS- and email-enabled. Once the term curation process is complete, the OWL class or OBO stanza can be generated via the Notes Web service. Finally, the Annotator can be used to automatically process textual metadata to identify ontology terms found within the text. The Annotator can be accessed programmatically via the Annotator Web service and can be used with all BioPortal ontologies. In summary, the Ontology Widgets, Notes, and Annotator provide mechanisms to enhance curation by helping collect annotated data upon data submission, by facilitating ontology term curation, and by tagging unstructured textual data with ontology terms.
    BibTeX:
    		@inproceedings{Whe10-BioCuration10,
    		  author = {Patricia L. Whetzel and Nigam H. Shah and Natasha F. Noy and Clement Jonquet and Cherie H. Youn and Paul R. Alexander and Michael Dorf and Mark A. Musen},
    		  title = {Ontology-based Tools to Enhance the Curation Workflow},
    		  booktitle = {4th International Biocuration Conference, Poster Session},
    		  year = {2010},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Poster-BioCuration-2010-NCBO.pdf}
    		}
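
    A toy illustration of the term-autocompletion behaviour provided by the Ontology Widgets; the real widgets query BioPortal's services, whereas this stand-in simply matches prefixes against a small, invented label list:

        # Prefix-based term autocompletion over a local list of ontology labels.
        LABELS = ["melanoma", "melena", "meningitis", "migraine"]

        def autocomplete(prefix, labels=LABELS, limit=5):
            prefix = prefix.lower()
            return [label for label in labels if label.lower().startswith(prefix)][:limit]

        print(autocomplete("mel"))   # -> ['melanoma', 'melena']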
    		
    Esther Dzale Yeumo, Michael Alaux, Elizabeth Arnaud, Sophie Aubin, Ute Baumann, Patrice Buche, Laurel Cooper, Robert P. Davey, Richard A. Fulss, Clement Jonquet, Marie-Angélique Laporte, Pierre Larmande, Cyril Pommier, Vassilis Protonotarios, Carmen Reverte, Rosemary Shrestha, Imma Subirats, Aravind Venkatesan, Alex Whan & Hadi Quesneville. Developing data interoperability through standards: a wheat community use case, F1000 Research. December 2017. Vol. 6 (1843), F1000.

    journal

    Abstract: In this article, we present a joint effort of the wheat research community, along with data and ontology experts, to develop wheat data interoperability guidelines. Interoperability is the ability of two or more systems and devices to cooperate and exchange data, and interpret that shared information. Interoperability is a growing concern to the wheat scientific community, and agriculture in general, as the need to interpret the deluge of data obtained through high-throughput technologies grows. Agreeing on common data formats, metadata, and vocabulary standards is an important step to obtain the required data interoperability level in order to add value by encouraging data sharing, and subsequently facilitate the extraction of new information from existing and new datasets. During a period of more than 18 months, the RDA Wheat Data Interoperability Working Group (WDI-WG) surveyed the wheat research community about the use of data standards, then discussed and selected a set of recommendations based on consensual criteria. The recommendations promote standards for data types identified by the wheat research community as the most important for the coming years: nucleotide sequence variants, genome annotations, phenotypes, germplasm data, gene expression experiments, and physical maps. For each of these data types, the guidelines recommend best practices in terms of use of data formats, metadata standards and ontologies. In addition to the best practices, the guidelines provide examples of tools and implementations that are likely to facilitate the adoption of the recommendations. To maximize the adoption of the recommendations, the WDI-WG used a community-driven approach that involved the wheat research community from the start, took into account their needs and practices, and provided them with a framework to keep the recommendations up to date. We also report this approach's potential to be generalizable to other (agricultural) domains.
    BibTeX:
    		@article{Dza17-F1000,
    		  author = {Esther Dzale Yeumo and Michael Alaux and Elizabeth Arnaud and Sophie Aubin and Ute Baumann and Patrice Buche and Laurel Cooper and Robert P. Davey and Richard A. Fulss and Clement Jonquet and Marie-Angélique Laporte and Pierre Larmande and Cyril Pommier and Vassilis Protonotarios and Carmen Reverte and Rosemary Shrestha and Imma Subirats and Aravind Venkatesan and Alex Whan and Hadi Quesneville},
    		  title = {Developing data interoperability through standards: a wheat community use case},
    		  journal = {F1000 Research},
    		  publisher = {F1000},
    		  year = {2017},
    		  volume = {6},
    		  number = {1843},
    		  url = {https://f1000research.com/articles/6-1843/v2},
    		  doi = {https://doi.org/10.12688/f1000research.12234.2}
    		}
    		
    Proceedings of the 2nd International Workshop on Semantics for Biodiversity, S4BioDiv'17, Vienna, Austria, October 2017. CEUR Workshop Proceedings, Vol. 1933

    editors

    Abstract: Biodiversity research aims at comprehending the totality and variability of organisms, their morphology, genetics, life history, habitats and geographical ranges. It usually refers to biological diversity at three levels: genetics, species, and ecology. Biodiversity is an outstanding domain that deals with heterogeneous datasets and concepts generated from a large number of disciplines in order to build a coherent picture of the extent of life on Earth. The presence of such a myriad of data resources makes integrative biodiversity research increasingly important, but at the same time very challenging. It is severely hampered by the way data and information are made available and handled today. Semantic Web techniques have shown their potential to enhance data interoperability, discovery, and integration by providing common formats to achieve a formalized conceptual environment, but have not been widely applied to address open data management issues in the biodiversity domain. The 2nd International Workshop on Semantics for Biodiversity (S4BioDiv) thus aimed to bring together computer scientists and biologists working on Semantic Web approaches for biodiversity and related areas such as agriculture or agro-ecology. The goal was to exchange experiences, build a state of the art of realizations and challenges, and reuse and adapt solutions that have been proposed in other domains. The focus was on presenting challenging issues and solutions for the design of high-quality biodiversity information systems based on Semantic Web techniques. The workshop was a full-day event on October 22nd, co-located with the 16th International Semantic Web Conference (ISWC 2017), October 21-25, Vienna, Austria. In total, 13 papers presenting new research results and ongoing projects were submitted. All of these were reviewed by at least three members of the program committee. Out of the submitted contributions, 6 full papers and 4 poster papers were accepted for presentation at the workshop and publication in these proceedings. The program included two keynote talks highlighting two vital and challenging topics related to biodiversity research and Open Science in general. Alison Specht, director of the Centre for the Synthesis and Analysis of Biodiversity (CESAB), talked about "Engaging the Domain Expert: Is it just a Dream?". Oscar Corcho, full professor at the Ontology Engineering Group, ETSI Informáticos, Universidad Politécnica de Madrid, Spain, presented his thoughts "Towards Reproducible Science: A few Building Blocks from my Personal Experience". To stimulate interdisciplinary debate, the workshop encompassed a one-hour panel discussing controversial topics in the field. We would like to thank the ISWC workshop chairs Aidan Hogan and Valentina Presutti for their kind support. We are also grateful to the workshop's program committee. We very much appreciate the financial support kindly provided by the Collaborative Research Centre AquaDiva (CRC 1076) funded by the Deutsche Forschungsgemeinschaft (DFG). Finally, we thank all authors who submitted their work to the workshop.
    BibTeX:
    		@proceedings{Alg17-S4bioDiv,
    		  title = {Proceedings of the 2nd International Workshop on Semantics for Biodiversity, S4BioDiv'17},
    		  year = {2017},
    		  volume = {1933},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Proceedings_S4BioDiv-2017.pdf}
    		}
    		
    Proceedings of the 1st International Workshop on Semantics for Biodiversity, S4BioDiv'13, Montpellier, France, May 2013.

    editors

    Abstract: Semantic Web standards, tools, ontologies and related technologies have considerably matured in recent years. Nowadays, accessing a wide catalogue of biological, social, environmental, and ecological data sources helps stakeholders working on biodiversity to answer their complex questions. Will real-time access to web resources effectively support the definition of strategies to conserve and manage biodiversity? How might Semantic Web technologies help us to handle the complex and heterogeneous big data related to biodiversity? The workshop aims to identify the key challenges faced by the bioinformatics community, discuss potential solutions, and identify the opportunities emerging from the trans-disciplinary interactions between Plant Science and Informatics experts. Therefore, we expect the bioinformatics experts to explain how they apply Semantic Web standards and tools to their scientific topic, from biology, agriculture, agro-ecology, genomics and environmental studies to the social and citizen sciences. Research papers presenting various aspects of Semantic Web technologies applied to biodiversity data, ranging from position papers to descriptions of implemented systems and their evaluation, will be selected. We are particularly interested in the use of semantic technologies to design and develop systems that support research on agricultural and wild biodiversity conservation and management for food security. Examples may include strategies for resilience to climate change pressures, sustainability of productive ecosystems, ecosystem services monitoring or territory management, rather than systems supporting only research on climate patterns or soil erosion.
    BibTeX:
    		@proceedings{Lar13-S4bioDiv,
    		  title = {Proceedings of the 1st International Workshop on Semantics for Biodiversity, S4BioDiv'13},
    		  year = {2013},
    		  url = {http://www.lirmm.fr/ jonquet/publications/documents/Proceedings_S4BioDiv-2013.pdf}
    		}
    		
                                                             Clement Jonquet's homepage - 27/07/2017