Dissertations / Theses on the topic 'Recherche documentaire automatisée – Bénin'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Recherche documentaire automatisée – Bénin.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Mahoussi, Wenceslas Ghanousmeid Gbétohou. "Analyse des pratiques informationnelles dans le champ juridique au Bénin." Electronic Thesis or Diss., Paris 8, 2017. http://www.theses.fr/2017PA080042.
Full text
Understanding, describing and documenting the information behaviour of Beninese lawyers, in both the academic and professional fields, is the main objective of this thesis, which positions informational practices within the information sciences, particularly the study of users in context. The work mobilised both theoretical and empirical studies. Four theoretical studies were conducted to understand the legal and judicial context in Benin, developments in information and communication technologies (ICT) in the legal and judicial sector, models of information practices, and the theories specific to the field of law. These theoretical approaches were confronted with four empirical studies, two quantitative and two qualitative. For the quantitative part, 375 students and 60 teacher-researchers from the law faculties of the universities of Benin were surveyed. The qualitative studies consisted of semi-structured interviews with 35 magistrates of courts and tribunals in the southern part of the country and 15 lawyers in Cotonou. These studies show that Beninese lawyers make use of information in the course of their professional activities in order to solve legal problems. They turn first to printed sources, namely books and legal works; then to electronic sources, in this case the Internet; and finally they consult their colleagues. The criteria governing the choice of these information sources are primarily the accessibility and availability of information, its relevance and usefulness, and its content. All the lawyers surveyed share information but face several barriers to accessing it, such as the excessive cost of certain legal works, unreliable electric power, Internet connection outages, and the obsolescence of certain documents.
Soualmia, Lina Fatima. "Etude et évaluation d'approches multiples d'expansion de requêtes pour une recherche d'information intelligente : application au domaine de la santé sur l'internet." Rouen, INSA, 2004. http://www.theses.fr/2004ISAM0018.
Full text
Pérenon, Pascal. "Profil d'utilisateur et métadonnées associés dans un système de recherche d'information scientifique." Lyon 1, 2004. http://www.theses.fr/2004LYO10261.
Full textBellot, Patrice. "Méthodes de classification et de segmentation locales non supervisées pour la recherche documentaire." Avignon, 2000. http://lia.univ-avignon.fr/fileadmin/documents/Users/Intranet/fich_art/LIA--142-These.pdf.
Full text
Statistical information retrieval systems can process natural language queries (whatever the language) over large, heterogeneous corpora. IR software computes similarities between a user's query and the documents of the target corpus and, according to these similarity values, returns a ranked list of documents. This list is often so long that users cannot explore all the retrieved documents; some are relevant but badly ranked and thus never seen. The retrieved documents deal with several themes, a few of which are distant from the theme of the query, either because the query is not clearly expressed or because the IR software failed to recognise the theme. Thematic classification of the retrieved documents is one way to organise them: it helps users navigate the list according to the global themes of the clusters and thus reach relevant documents faster. If the classification is applied to the paragraphs or sentences of the documents, it groups together extracts (segments) dealing with the same theme. Two extracts from a document are about different themes if they belong to different clusters; classification thus leads to segmentation. From this segmentation, similarity values between the query and the segments can be computed, providing users with a new ranked list in which the segments considered relevant are proposed directly. This allows long documents, in which the searched theme is only one of several, to be better ranked. The ranked segments may in turn be clustered to obtain a coarser segmentation, and so on: every segmentation is linked to a classification, and from any segmentation a classification can be performed. The SIAC information retrieval system was created to evaluate the methods described in this dissertation.
In the first chapter, I describe how SIAC computes the list of documents to be clustered and segmented. In the second chapter, a classification method combining hierarchical classification with a K-Means-like algorithm is presented and evaluated on the TREC-7 corpora and queries. In the third chapter, I propose a new classification method based on unsupervised decision trees, evaluated on the French corpora of the Amaryllis'99 campaign. The last chapter describes a segmentation algorithm using the classification method of the third chapter; this segmentation method is also evaluated on the Amaryllis'99 corpora.
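The K-Means-style clustering of retrieved documents that this abstract describes can be illustrated with a minimal sketch. This is not Bellot's SIAC implementation: the cosine-based assignment, the toy term-frequency vectors and all function names here are assumptions for illustration only.

```python
import math
import random
from collections import Counter

def tf_vector(text, vocab):
    """Term-frequency vector of a text over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def kmeans(vectors, k, iters=20, seed=0):
    """Bare-bones K-Means: assign each vector to its most similar
    centroid, then recompute centroids as cluster means."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for vec in vectors:
            best = max(range(k), key=lambda i: cosine(vec, centroids[i]))
            clusters[best].append(vec)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = [sum(col) / len(members)
                                for col in zip(*members)]
    return clusters
```

Bellot's actual method additionally combines this with a hierarchical pass and applies the clustering down to paragraph and sentence level to obtain thematic segments.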
Defude, Bruno. "Etude et réalisation d'un système intelligent de recherche d'informations : le prototype Iota." Grenoble INPG, 1986. http://tel.archives-ouvertes.fr/tel-00321461.
Full text
Sellami, Maher. "Smard : un système multibase d'aide à la recherche documentaire." Montpellier 2, 1988. http://www.theses.fr/1988MON20151.
Full text
Loupy, Claude de. "Evaluation de l'apport de connaissances linguistiques en désambiguïsation sémantique et recherche documentaire." Avignon, 2000. http://www.theses.fr/2000AVIGA001.
Full text
Conductier, Bruno. "Recherche conceptuelle d'informations dans les banques de données en ligne : application au projet INDUSCOPE." Aix-Marseille 3, 1994. http://www.theses.fr/1994AIX30076.
Full text
Habchi, Khaled. "Le bilinguisme et les systèmes d'information documentaire en Tunisie." Bordeaux 3, 2007. http://www.theses.fr/2007BOR30044.
Full text
The impact of Arabic-French bilingualism is perceptible at all levels of Tunisian society, particularly in administration, culture and education. Library information systems (LIS) are no exception: this bilingualism shapes the composition of their collections and their methods of processing and disseminating information. A thorough review of the bilingual environment in which Tunisian LIS have evolved shows deficiencies due mainly to the two facets of the library systems, Arabic and French, with an obvious imbalance in favour of the French side. The obstacles identified relate to the human, material and technical means involved in applying standards and procedures for bilingual information description, analysis and management. As an alternative, the study proposes a LIS prototype to bypass these difficulties. This prototype, largely inspired by current international library standardisation and by progress in ICT, is built around three major features: cataloguing, indexing, and automated information management and access. The proposed cataloguing sub-system relies on standards and MAchine-Readable Cataloguing formats adapted to the bilingual context and inspired by new cataloguing concepts such as the entity-relationship approach "FRBR". The indexing feature adapts international instruments to the national context through an effort of Arabisation. Computerised library data management is carried out with multilingual library systems in conformity with library and computer standards. As for the retrieval functionality, it overcomes language barriers by means of interfaces that allow a request formulated in one language to retrieve information in several languages.
Guezouli, Larbi. "Gestion de documents multimédia et recherche d'informations dans un système collaboratif." Paris 7, 2007. http://www.theses.fr/2007PA077002.
Full text
Searching for and managing multimedia documents rely on an information retrieval system able to locate, within a large database, the set of data satisfying a request. Our thesis deals more specifically with textual and video documents. For textual documents, combining a linguistic approach (normalisation and lemmatisation) with a statistical approach simplifies the search process: the statistical approach runs a quick search over the corpus to filter the documents and extract the relevant ones, and the approach applied to the remaining documents is based on the roots of meaningful linguistic units. Searching video documents requires pre-processing of every document: video segmentation identifies the representative frames of each document, and to save time the search itself is performed on the pre-processed documents. Once the textual and video documents have been selected and prepared, a similarity score is computed for every document with respect to the query document. This computation depends on the positions of the linguistic units and frames, on their neighbourhood and frequency, and on the sizes of the documents. The model proposed in the thesis shows that the combination of these approaches yields an efficient, robust and precise information retrieval system.
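The two-pass strategy this abstract evokes, a quick statistical filter followed by a finer score, can be sketched roughly as follows. This is a toy illustration, not the thesis's actual similarity computation (which also uses positions and neighbourhoods); the function names, the bare word-overlap filter and the length-normalised score are all assumptions.

```python
from collections import Counter

def quick_filter(query_terms, documents, min_overlap=1):
    """First statistical pass: keep only documents sharing at least
    min_overlap terms with the query, before any costlier scoring."""
    q = {t.lower() for t in query_terms}
    return [d for d in documents
            if len(q & set(d.lower().split())) >= min_overlap]

def score(query_terms, document):
    """Toy relevance score: query-term frequency normalised by
    document length, so longer documents are not unduly favoured."""
    words = document.lower().split()
    counts = Counter(words)
    return sum(counts[t.lower()] for t in query_terms) / len(words)
```

In practice the surviving documents would then be ranked by the finer score, mirroring the filter-then-score pipeline the abstract describes.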
Bassano, Jean-Claude. "DIALECT, un système expert pour la recherche documentaire." Paris 11, 1986. http://www.theses.fr/1986PA112008.
Full text
The aim of the project, with the experimental system DIALECT, is to improve retrieval effectiveness and to suggest some flexible uses for information retrieval systems. This research involves the integration of artificial intelligence (AI) and information retrieval (IR) techniques. The quality of the services usually provided by automatic information retrieval systems has been found inadequate. The particular problem to be solved here by the "expert system" in the information retrieval environment is automatic request reformulation: the replacement of the starting entries by a variety of "equivalent" sentence formulations. The request is automatically developed and transformed in order to retrieve additional documents. The system uses rules, meta-rules and linguistic models as meta-rules, or another expert level. Starting from a request in natural language, the method must first retrieve some of the more relevant bibliographic information, necessarily natural-language texts. Rules of a linguistic model then construct reformulation rules from the previously selected sentences of these texts, so the first document retrieval steps act as the selection of candidate reformulation rules. Query reformulation and the search process are highly interactive: they act as rule selection and rule setting in the "expert system". The system goes through a cycle: query (re)formulation, then retrieval of candidate rules, which are explicitly "built" only at this point. Thus the basic information provided in the "knowledge base", which supplies the inference rules capable of generating new facts from existing ones, is the document collection itself. Simultaneously, a set of relevant documents is proposed. This dynamic process is repeated until it reaches the point where it is entitled to stop.
This incorporation of linguistic procedures and a "query reformulation" knowledge-based expert system in a retrieval setting increases the effectiveness of the system, but it also brings added benefits in the form of new and more sophisticated services. The following extensions of the "standard" retrieval service are provided: (1) a natural language front-end allows users to interact with the system in French in their initial information request; (2) friendly interfaces give users a large degree of flexibility in choosing how to interact with the system. The system may operate under several modes: (i) a "casual user" mode provides a fully transparent process that decides on its own any opportunity to improve the request; the user simply submits his query in French and lets the system search and display the relevant available information as a list of retrieved documents and/or portions of documents. (ii) An "expert assistant-documentalist" mode allows the trained user to break into the process more frequently if he wants to improve and control its returns. Such an improvement may consist in redefining some elements of the "semantic classes", adding or removing propositions, or using underlying models of the internal representations such as informative indexes and/or bibliographic information. For such a user, the system leads the dialogue: another function of the query reformulation part of the system is then to assist the user in producing complex Boolean descriptions of the required documents. The system is useful for consultation by expert users, but it can also train inexperienced ones. (iii) A "specialist" mode provides tracks allowing designers (linguists, analysts, etc.) to oversee the operating processes and to break in, by means of a specialized language, in order to modify the rules.
Grivolla, Jens. "Apprentissage et décision automatique en recherche documentaire : prédiction de difficulté de requêtes et sélection de modèle de recherche." Avignon, 2006. http://www.theses.fr/2006AVIG0142.
Full text
This thesis is centred on information retrieval, with a focus on those queries that are particularly difficult for current retrieval systems to handle. In the application and evaluation settings we worked with, a user expresses his information need as a natural language query. There are different approaches for treating such queries, but current systems typically use a single approach for all of them, without taking into account the specific properties of each query. However, it has been shown that the performance of one strategy relative to another can vary greatly depending on the query. We approached this problem by proposing methods to automatically identify the queries that will pose particular difficulties to the retrieval system, in order to allow a specific treatment. This research topic was very new and barely beginning to be explored at the start of my work, but has received much attention in recent years. We developed a number of quality-predictor functions that obtain results comparable to those recently published by other research teams. However, the ability of individual predictors to accurately classify queries by level of difficulty remains rather limited. The major originality of our work lies in the combination of these different measures: using automatic classification methods with corpus-based training, we obtained quite reliable predictions from measures that individually are far less discriminant. We also adapted our approach to other application settings, with very encouraging results, developing a method for the selective application of query expansion techniques as well as for the selection of the most appropriate retrieval model for each query.
Dinet, Jérôme. "La recherche documentaire informatisée à l'école : compréhension des difficultés des élèves et approche cognitive des processus de sélection des références." Poitiers, 2002. http://www.theses.fr/2002POIT5004.
Full text
Mouloudi, Hassina. "Personnalisation de requêtes et visualisations OLAP sous contraintes." Tours, 2007. http://www.theses.fr/2007TOUR4029.
Full text
Personalization is extensively used in information retrieval and databases: it helps users cope with the diversity and volume of the information they access. A data warehouse stores large volumes of consolidated, historized multidimensional data for analysis, and is designed in particular to support complex decision queries (OLAP queries) whose results are displayed as cross tables. These results can be very large and often cannot be visualized entirely on the display device (PDA, mobile phone, etc.). This work studies the personalization of information for a user querying a data warehouse with OLAP queries. A state of the art of personalization in relational databases allows us to establish its principal characteristics and adapt them to the context of data warehouse exploitation through OLAP queries. We first propose a formalization of the concept of visualizations of OLAP query results, and show how visualizations can be built and manipulated. We then propose a method for personalizing visualizations based on a user profile (including preferences and constraints). Our method corresponds to the formal definition of a personalization operator added to the query language for visualizations; this operator can be implemented by transforming either the query or the query result. We propose an implementation of this operator, used as the basis of a prototype that lets a user obtain his preferred visualization when querying the data warehouse from a mobile device. This prototype allows us to validate our approach and check its effectiveness.
Al-Hajj, Moustafa. "Extraction et formalisation de la sémantique des liens hypertextes dans des documents culturels, scientifiques et techniques." Tours, 2007. http://www.theses.fr/2007TOUR4023.
Full text
The use of hypertext links on the web makes sites more attractive and easier to read, and allows sites to be enriched with information from other sites. However, these links also create difficulties for readers and search engines. Hypertext links carry semantic information which, if completely formalized, could be exploited by programs to improve navigation and information retrieval, and would take its place in the emergence of the semantic web. In this thesis, we propose an original methodology for the formal extraction of hypertext link semantics. The suggested method has been tested on the links of a corpus, with the RDF formalism used to represent link semantics. An ontology of links specific to the field of biographies of famous people was built from the extracted link semantics and then represented in RDFS. Supervised learning tools and keyword-based characterization of web pages were used to assist the formal extraction of semantics.
Reymond, David. "Dynamique informationnelle d'une ressource Web : apport sémantique de la taxinomie : étude webométrique des sites des universités françaises." Bordeaux 3, 2007. http://www.theses.fr/2007BOR30076.
Full text
The industrialisation of content generation and distribution has transformed the Web into an uncontrolled, semi-structured storehouse of data formatted in an indexing language. Qualitative methods are too limited to encompass hypertextuality and the volume of publications. We therefore propose a panorama of selected disciplines dealing with the processing of vast amounts of hypertext data, and show the limits of dynamic evaluation and of the semantics of statistical data characterisation. Our contribution is organised on three levels. First, a method to construct a concise representation of the content of a website: the context of hypertext production is integrated in the creative process, which in our experimental field is achieved by using the text supporting the structural hyperlinks of the site's homepage, since the navigation text on this page is strongly representative of the site's underlying contents. Second, a tool to measure the dynamic evolution of the content of a single HTML page: applied to a homepage, this tool characterizes its evolution and stores the published "events"; the taxonomy is successfully tested to classify textual events, and the results also show the time-dependent relevance of the taxonomy. Third, a systemic approach applied to a corpus of French academic websites to explain the current situation regarding scientometric evaluation: our analysis combines results from qualitative and quantitative collections into a case study of webometric tools. We show that using dynamically-built tools on a concise representation of published data tends to fill the semantic gap between traditional binary data measurement and the contents measured.
Villey-Migraine, Marjolaine. "Multimédia et carto-géographie : Ergonomie des interfaces de navigation hypermédia dans les systèmes documentaires." Paris 2, 2003. http://www.theses.fr/2003PA020016.
Full text
Morneau, Maxime. "Recherche d'information sémantique et extraction automatique d'ontologie du domaine." Thesis, Université Laval, 2006. http://www.theses.ulaval.ca/2006/23828/23828.pdf.
Full text
It can prove difficult, even for a small organization, to find information among hundreds or even thousands of electronic documents. Most often, companies wanting to improve information retrieval on their intranet use the methods employed by search engines on the Internet. These techniques rest on statistical methods and make it possible to evaluate neither the semantics contained in the user's requests nor that in the documents. Certain methods have been developed to extract this semantics and thus improve the answers given to requests; however, most of these techniques were conceived to be applied to the entire World Wide Web rather than to a particular field of knowledge, such as corporate data. It can be useful to employ domain-specific ontologies to link a query to related documents and thus better answer it. This thesis presents our approach, which uses the Text-To-Onto software to automatically create an ontology describing a particular field. This ontology is then used by the Sesei software, a semantic filter for conventional search engines. This method improves the relevance of the documents returned to the user.
Zarri, Gian Piero. "Utilisation de techniques relevant de l'intelligence artificielle pour le traitement de données biographiques complexes." Paris 11, 1985. http://www.theses.fr/1985PA112342.
Full text
The aim of this thesis is to provide a general description of RESEDA, an "intelligent" information retrieval system dealing with biographical data and using techniques borrowed from knowledge engineering and artificial intelligence (AI). All the system's "knowledge" is represented in purely declarative form, both in the "fact database" and in the "rule base"; the fact database contains the data, in the usual sense of the word, that the system has to retrieve. Together, the fact and rule bases make up RESEDA's "knowledge base". Information in the knowledge base is depicted using a single knowledge representation language ("metalanguage"), which makes use of quantified variables when describing data in the rule base; the metalanguage is a particularly powerful realization of an AI-type "case grammar". For reasons of computational efficiency, low-level ("level zero") inferencing (retrieval) is carried out in RESEDA using only the resources of the system's match machine. This machine owes a large part of its power to the judicious use of temporal data in efficiently indexing the fact database. Only high-level inferences require the creation of real "inference engines". RESEDA's inference engine has the general characteristics of (a) being "event driven" in its initialization and (b) solving problems by constructing a "choice tree", traversed depth-first with systematic backtracking. The high-level inference operations implemented in the system, which rely on information in the rule base and make use of the inference engine, are known as "transformations" and "hypotheses". The "hypotheses" enable new causal relationships to be established between events in the fact database that are a priori totally disjoint; the system is thus equipped with an, albeit elementary, learning capability.
Névéol, Aurélie. "Automatisation des tâches documentaires dans un catalogue de santé en ligne." Rouen, INSA, 2005. http://www.theses.fr/2005ISAM0013.
Full text
Bigi, Brigitte. "Contribution à la modélisation du langage pour des applications de recherche documentaire et de traitement de la parole." Avignon, 2000. http://www.theses.fr/2000AVIG0125.
Full text
Anane, Afaf. "L'impact des technologies de l'information et de la communication sur les stratégies de diffusion de l'information dans les centres de culture scientifique et technique." Bordeaux 3, 2005. http://www.theses.fr/2005BOR30055.
Full text
Recent years have been marked by an upheaval in the industrial, economic and social landscape at the world level. The liberalization of telecommunications, the spectacular development of the Internet and the progressive networking of companies and society all reveal a single phenomenon: the advent of the information society. Once integrated, ICT transform ways of working in several fields. Science centres are among the institutions whose strategies for disseminating scientific and technical information have undergone considerable change. They try to disseminate this information by integrating new tools that ensure better assimilation and facilitate the public's acquisition of these technologies. However, science centres receive the public as a whole and must face its diversity. The integration of multimedia as a dissemination tool appears to satisfy the organizers but not all categories of visitors, which obliges the actors to adopt innovative strategies driven by demand rather than supply. Innovation then enters the debate and gives rise to new means of appropriating knowledge. The influence of science and technology on science centres no longer needs demonstrating; they are already in an era of innovation and progress, and the revolution is all the more radical as the acceleration of scientific and technological advance occurs in synergy with a systematic redefinition of the strategies of actors who must henceforth engage in a process of competitiveness and effectiveness, which have become key factors of failure or success.
Kuramoto, Hélio. "Proposition d'un système de recherche d'information assistée par ordinateur : avec application à la langue portugaise." Lyon 2, 1999. http://theses.univ-lyon2.fr/documents/lyon2/1999/hkuramoto.
Full text
In this thesis, we propose a model to address problems typically faced by users of information indexing and retrieval systems (IRS) applied to full-text databases. Through a discussion of these problems, we arrive at a solution formerly proposed by the SYDO group: the use of nominal phrases (or nominal groups) as descriptors, instead of the words generally used by traditional IRS. In order to verify the feasibility of this proposition, we have developed a prototype of an IRS with a full-text database.
Fiset, Réjeane. "Faire une recherche en sixième année : analyse d'une expérience." Doctoral thesis, Université Laval, 1987. http://hdl.handle.net/20.500.11794/29253.
Full text
Bannour, Ines. "Recherche d'information sémantique : Graphe sémantico-documentaire et propagation d'activation." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD024/document.
Full text
Semantic information retrieval (SIR) aims to propose models that rely, beyond statistical calculations, on the meaning and semantics of the words of the vocabulary, in order to better represent documents relevant to a user's needs and to better retrieve them. The aim is therefore to go beyond the classical, purely statistical « bag of words » approaches, based on string matching and the analysis of word frequencies and their distributions in the text. To do this, existing SIR approaches exploit external semantic resources (thesauri, ontologies, etc.) to inject knowledge into classical IR models (such as the vector space model), either to disambiguate the vocabulary or to enrich the representation of documents and queries. These are usually adaptations of the classical IR models that move to a « bag of concepts » approach accounting for synonymy; the semantic resources thus exploited are « flattened », and the calculations are generally confined to semantic similarity measures. In order to better exploit semantics in IR, we propose a new model that unifies, in a coherent and homogeneous way, numerical (distributional) and symbolic (semantic) information without sacrificing the power of either analysis. The semantic-documentary network thus modelled is translated into a weighted graph, and the matching mechanism is provided by spreading activation in this graph. The new model can answer queries expressed as keywords, concepts or even example documents.
The propagation algorithm has the merit of preserving the well-tested characteristics of classical information retrieval models while allowing a better consideration of semantic models and their richness.Depending on whether semantics is introduced in the graph or not, this model makes it possible to reproduce a classical IR or provides, in addition, some semantic functionalities. The co-occurrence in the graph then makes it possible to reveal an implicit semantics which improves the precision by solving some semantic ambiguities. The explicit exploitation of the concepts as well as the links of the graph allow the resolution of the problems of synonymy, term mismatch, semantic coverage, etc. These semantic features, as well as the scaling up of the model presented, are validated experimentally on a corpus in the medical field
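The spreading-activation matching described in this abstract can be illustrated with a minimal sketch over a weighted graph. This is not the thesis's implementation: the graph, node names, edge weights, decay factor and threshold below are all invented for the example.

```python
# Minimal spreading-activation sketch over a weighted graph.
# Nodes are query terms, concepts and documents; edge weights are
# similarity/association strengths (all values here are invented).
graph = {
    "heart attack": {"myocardial infarction": 0.9, "doc1": 0.4},
    "myocardial infarction": {"doc1": 0.8, "doc2": 0.6},
    "doc1": {},
    "doc2": {},
}

def spread(seeds, graph, decay=0.7, iterations=2, threshold=0.05):
    """Propagate activation from seed nodes through weighted edges."""
    activation = dict.fromkeys(graph, 0.0)
    for s in seeds:
        activation[s] = 1.0
    for _ in range(iterations):
        new = dict(activation)
        for node, act in activation.items():
            if act < threshold:
                continue  # prune weak activations
            for neighbour, weight in graph[node].items():
                new[neighbour] += act * weight * decay
        activation = new
    return activation

scores = spread(["heart attack"], graph)
# Documents are ranked by final activation; doc1 is reachable both
# directly and via the synonym node, so it outranks doc2.
ranked = sorted((n for n in scores if n.startswith("doc")),
                key=scores.get, reverse=True)
print(ranked)  # ['doc1', 'doc2']
```

When the semantic edge (here, the synonym link) is removed, the same propagation degenerates to a purely co-occurrence-based ranking, which mirrors the abstract's point that the model reproduces classical IR when no semantics is injected into the graph.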
Ben, Romdhane Mohamed. "Navigation dans l'espace textuel : accès à l'information scientifique." Lyon 3, 2001. http://www.theses.fr/2001LYO31008.
Full text
Bento, Pereira Suzanne. "Indexation multi-terminologique de concepts en santé." Rouen, 2008. http://www.theses.fr/2008ROUES019.
Full text
Information retrieval and decision support systems need fast, accurate access to document content and efficient medical knowledge processing. Indexing (describing documents with keywords) enables both knowledge access and knowledge processing. In the medical domain, an increasing number of resources are available in electronic format, and there is a growing need for automatic solutions that facilitate knowledge access and indexing. The objective of this PhD work is the implementation of an automatic multi-terminology, multi-document and multi-task indexing help system, namely F-MTI (French Multi-Terminology Indexer). It uses natural language processing methods to produce indexing propositions for medical documents. We applied it to resource indexing in a French online health catalogue, CISMeF, to therapeutic data indexing for drug medication, and to diagnosis and health procedure indexing for patient medical records.
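The core of multi-terminology indexing of the kind this abstract describes can be sketched as matching terminology entries against document text, preferring the longest term on a given span. This is a deliberately simplified illustration, not F-MTI's actual method; the terms and codes below are invented, not real MeSH or CISMeF content.

```python
import re

# Toy terminologies: term -> (terminology, code). Real systems use
# MeSH, ICD-10, etc.; these entries are invented for illustration.
terminologies = {
    "diabetes": ("MeSH-like", "D003920"),
    "insulin": ("MeSH-like", "D007328"),
    "type 2 diabetes": ("ICD-like", "E11"),
}

def index_document(text):
    """Return indexing propositions: terminology codes found in text.

    Longest terms are matched first, so 'type 2 diabetes' wins over
    the shorter 'diabetes' on the same span.
    """
    propositions = []
    remaining = text.lower()
    for term in sorted(terminologies, key=len, reverse=True):
        if re.search(r"\b" + re.escape(term) + r"\b", remaining):
            propositions.append((term, *terminologies[term]))
            remaining = remaining.replace(term, " ")  # consume the span
    return propositions

doc = "Management of type 2 diabetes with insulin therapy."
print(index_document(doc))
```

A real indexer adds lemmatization, acronym expansion and synonym handling on top of this matching step, but the longest-match-first principle is the same.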
Hernandez, Nathalie. "Ontologies de domaine pour la modélisation du contexte en recherche d'Information." Toulouse 3, 2006. http://www.theses.fr/2006TOU30012.
Full text
Abbaci, Faïza. "Méthodes de sélection de collections dans un environnement de recherche d'informations distribuée." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2003. http://tel.archives-ouvertes.fr/tel-00849898.
Full text
Le, Crosnier Hervé. "Systèmes d'accès à des ressources documentaires : vers des anté-serveurs intelligents." Phd thesis, Aix-Marseille 3, 1990. http://tel.archives-ouvertes.fr/tel-00004654.
Full text
Gueret, Christophe. "Navigateurs internet intelligents : algorithmes de fourmis artificielles pour la diffusion d'informations dans un réseau P2P." Tours, 2006. http://www.theses.fr/2006TOUR4020.
Full text
In this thesis, we propose the PIAF architecture (Personnal Intelligent Framework Agent), whose objective is to provide users with a non-intrusive, autonomous, general-purpose environment for exchanging information. The problems of spreading information between users and of optimising the network topology are addressed with an artificial-ant algorithm. Artificial pheromones deposited on connections between peers during transfers build up a global memory of the exchanges and allow shared centres of interest to be detected. Compared with existing solutions, the advantage of our algorithm is that it frees users from defining profiles: they need neither subscribe to a diffusion channel nor declare their centres of interest in order to exchange information.
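The pheromone mechanism this abstract describes (deposits on peer connections during transfers, slowly evaporating so that frequently used routes stand out) can be sketched as follows. This is not PIAF's actual code: the deposit amount, evaporation rate and peer names are invented for the example.

```python
# Toy pheromone table: connection (peer_a, peer_b) -> pheromone level.
# Transfers deposit pheromone; evaporation slowly forgets old routes,
# so frequently used connections emerge as shared centres of interest.
pheromones = {}

def record_transfer(a, b, deposit=1.0):
    """Reinforce the connection used for an information transfer."""
    key = tuple(sorted((a, b)))
    pheromones[key] = pheromones.get(key, 0.0) + deposit

def evaporate(rate=0.1):
    """Global forgetting step, applied periodically."""
    for key in pheromones:
        pheromones[key] *= (1.0 - rate)

def best_peer(a):
    """Route new information towards the strongest-scented neighbour."""
    candidates = {k: v for k, v in pheromones.items() if a in k}
    if not candidates:
        return None
    key = max(candidates, key=candidates.get)
    return key[0] if key[1] == a else key[1]

# alice exchanges repeatedly with bob, once with carol; no profile
# is ever declared, yet routing converges on the shared interest.
for _ in range(3):
    record_transfer("alice", "bob")
record_transfer("alice", "carol")
evaporate()
print(best_peer("alice"))  # bob: the most reinforced connection
```

The point of the evaporation step is that the memory is adaptive: a connection that stops being used fades away, so the network topology keeps tracking current interests rather than historical ones.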
Baziz, Mustapha. "Indexation conceptuelle guidée par ontologie pour la recherche d'information." Toulouse 3, 2005. http://www.theses.fr/2005TOU30265.
Full text
This thesis deals with the use of ontologies in information retrieval. More precisely, we aim at representing textual information (documents and queries) by means of concepts rather than a bag of single words. This conceptual representation is based on matching documents and queries against an ontology. Two main propositions are developed within this framework. The first, DocCore, represents information by means of semantic networks (called Document Semantic Cores) in which the nodes are the "most salient" concepts extracted from the document and the arcs carry semantic similarity values between these nodes. In the second approach, DocTree, we use the concept hierarchy provided by the subsumption (is-a) links of an ontology to describe a document or a query by means of sub-trees. A prototype is built and the two approaches are successfully used in the IR process.
Rey, Christophe. "Découverte des meilleures couvertures d'un concept en utilisant une terminologie." Clermont-Ferrand 2, 2004. http://www.theses.fr/2004CLF22550.
Full text
Lamirel, Jean-Charles. "Application d'une approche symbolico-connexionniste pour la conception d'un système documentaire hautement interactif : le prototype NOMAD." Nancy 1, 1995. http://www.theses.fr/1995NAN10423.
Full text
Delacroix, Quentin. "Un système pour la recherche plein texte et la consultation hypertexte de documents techniques." Clermont-Ferrand 2, 1999. http://www.theses.fr/1999CLF2A001.
Full text
Attias, Mimoun. "Technologies du savoir et de l'information : effets et impact sur la division internationale du travail." Paris 10, 1986. http://www.theses.fr/1986PA100091.
Full text
This research aims at understanding the economic stakes raised by the convergence of informatics and telecommunications made possible by the rapid development of information technology. It comprises two parts. The first is devoted to the examination of the concept of information and its economic nature. It offers a critical appraisal of the so-called information economy while also providing a comprehensive approach to the recent development of information technology within the framework of advanced capitalism. The second part addresses the topic of transborder data flows, focusing mainly on their economic dimension. These flows are viewed in relation to a world economy marked by the predominance of transnational economic systems. Particular attention is devoted to the use of telematics networks by transnational companies. We also stress the existence of new markets and products generated by the penetration of data processing systems into the world economy. A final stage studies the impact of international data transmission on developing economies and briefly discusses the computerization of the Third World.
Moreau, Fabienne. "Revisiter le couplage traitement automatique des langues et recherche d'information." Phd thesis, Université Rennes 1, 2006. http://tel.archives-ouvertes.fr/tel-00524514.
Full text
Berrani, Sid-Ahmed. "Recherche approximative de plus proches voisins avec contrôle probabiliste de la précision ; application à la recherche d'images par le contenu." Phd thesis, Université Rennes 1, 2004. http://tel.archives-ouvertes.fr/tel-00532854.
Full text
Ali, Khodor. "Nouvelles technologies de diffusion du savoir dans les CDI d'établissements de l'enseignement secondaire." Paris 8, 2008. http://www.theses.fr/2008PA083752.
Full text
This thesis deals with the school libraries of four departments of Paris and its nearby suburbs (the Paris department and three surrounding ones: Hauts-de-Seine, Seine-Saint-Denis and Val-de-Marne). We focused our study on the online documentation searches that pupils carry out in secondary-school libraries using documentation software. The main question we try to answer is: "To what extent does the use of documentation software help secondary school pupils in their learning in the French education system?". Through observation, questionnaires and interviews, and through surveys about the use of ICT in school libraries, we attempted to assess the impact of documentation software on secondary school pupils. In addition to the documentary research, three field studies were carried out in secondary schools: interviews with school librarians (20) and questionnaires submitted to librarians (121) and to pupils (362). This work involved 428 secondary schools in total.
Ceausu-Dragos, Valentina. "Définition d'un cadre sémantique pour la catégorisation de données textuelles : application à l'accidentologie." Paris 5, 2007. http://www.theses.fr/2007PA05S001.
Full text
Knowledge engineering requires the application of techniques for knowledge extraction, modeling and formalization. The work presented concerns the definition of a semantic portal for text categorization. This portal relies on several techniques developed for semantic resource engineering. From a theoretical point of view, this semantic portal allows: (i) knowledge extraction from texts; (ii) semantic resource construction; (iii) finding correspondences between semantic resources and textual corpora; and (iv) aligning semantic resources. The core of this portal is a knowledge system allowing text categorization in accidentology. This system implements a reasoning mechanism supported by domain knowledge, and it offers a solution for the automatic exploitation of accident scenarios.
Jacquemin, Bernard. "Construction et interrogation de la structure informationnelle d'une base documentaire en français." Phd thesis, Université de la Sorbonne nouvelle - Paris III, 2003. http://tel.archives-ouvertes.fr/halshs-00003957.
Full text
Ettaleb, Mohamed. "Approche de recommandation à base de fouille de données et de graphes étiquetés multi-couches : contributions à la RI sociale." Electronic Thesis or Diss., Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0588.
Full text
In general, the purpose of a recommendation system is to assist users in selecting relevant items from a wide range of candidates. Given the explosion in the number of academic publications (books, articles, etc.) available online, providing a personalized recommendation service is becoming a necessity. Automatic book recommendation based on a query is an emerging theme that raises many open scientific challenges. It combines several issues related to information retrieval and data mining in order to assess how appropriate it is to recommend a given book. This assessment must take into account not only the query but also the user profile (reading history, interests, notes and comments associated with previous readings) and the entire collection to which the document belongs. Two main avenues are addressed in this thesis to deal with the problem of automatic book recommendation: identifying the user's intentions from a query, and recommending relevant books according to the user's needs.
Abi, Chahine Carlo. "Indexation et recherche conceptuelles de documents pédagogiques guidées par la structure de Wikipédia." Phd thesis, INSA de Rouen, 2011. http://tel.archives-ouvertes.fr/tel-00635978.
Full text
Norman, Christopher. "Systematic review automation methods." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS028.
Full text
Recent advances in artificial intelligence have seen limited adoption in systematic reviews, and much of the systematic review process remains manual, time-consuming, and expensive. Authors conducting systematic reviews face issues throughout the process: it is difficult and time-consuming to search and retrieve studies, collect data, write manuscripts, and perform statistical analyses. Screening automation has been suggested as a way to reduce the workload, but uptake has been limited by a number of issues, including licensing, steep learning curves, lack of support, and mismatches with existing workflows. There is a need to better align current methods with the needs of the systematic review community. Diagnostic test accuracy studies are seldom indexed in an easily retrievable way, and suffer from variable terminology and missing or inconsistently applied database labels. Methodological search queries to identify diagnostic studies therefore tend to have low accuracy and are discouraged for use in systematic reviews. Consequently, there is a particular need for alternative methods to reduce the workload in systematic reviews of diagnostic test accuracy. In this thesis we explore the hypothesis that automation can make the systematic review process quicker and less expensive, provided we can identify and overcome the barriers to its adoption. Automated methods have the opportunity to make the process cheaper as well as more transparent, accountable, and reproducible.
Lamprier, Sylvain. "Vers la conception de documents composites : extraction et organisation de l'information pertinente." Phd thesis, Université d'Angers, 2008. http://tel.archives-ouvertes.fr/tel-00417551.
Full text
Lesbegueries, Julien. "Plate-forme pour l'indexation spatiale multi-niveaux d'un corpus territorialisé." Phd thesis, Université de Pau et des Pays de l'Adour, 2007. http://tel.archives-ouvertes.fr/tel-00258534.
Full text
We propose a multi-level spatial information retrieval method that indexes a raw textual corpus. This method, which extracts information from a corpus and interprets it, improves the effectiveness of information retrieval systems whenever a query carries a spatial connotation. The interpretation also makes it possible to recover the context in which the spatial information was used. In particular, it allows text units to be indexed by associating them with contexts such as itineraries, local descriptions or comparisons of places.
Mimouni, Nada. "Interrogation d'un réseau sémantique de documents : l'intertextualité dans l'accès à l'information juridique." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCD084/document.
Full text
A collection of documents is generally represented as a set of documents, but this simple representation does not take into account the cross-references between documents, which often define their context of interpretation. This standard document model is poorly suited to professional uses in specialized domains, where documents are related by many and varied references and access tools need to handle this complexity. We propose two models based on formal and relational concept analysis and on semantic web techniques. Applied to documentary objects, these two models represent and query, in a unified way, document content descriptors and document relations.
Noël, Romain. "Contribution à la veille stratégique : DOWSER, un système de découverte de sources Web d’intérêt opérationnel." Thesis, Rouen, INSA, 2014. http://www.theses.fr/2014ISAM0011/document.
Full text
The constant growth of the Web in recent years has made the discovery of new sources of information on a given topic more difficult. This is a prominent problem for Experts in Intelligence Analysis (EIA), who must search for pages on specific and sensitive topics. Because of their lack of popularity, or because they are poorly indexed due to their sensitive content, these pages are hard to find with traditional search engines. In this work, we describe a new Web source discovery system called DOWSER. The goal of this system is to provide users with new sources of information related to their needs without considering the popularity of a page, unlike classic information retrieval tools. The expected result is a balance between relevance and originality, in the sense that the pages sought are not necessarily popular. DOWSER is based on a user profile that focuses its exploration of the Web so as to collect and index only related Web documents.
Raymond, David Colin. "Synchronous environments for distance learning : combinning network and collaborative approaches." Toulouse 3, 2006. http://www.theses.fr/2006TOU30032.
Full text