To see the other types of publications on this topic, follow the link: Domain expert.

Dissertations / Theses on the topic 'Domain expert'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Domain expert.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Pierre, Mattias. "Combining ensembles of domain expert markings." Thesis, Umeå University, Department of Computing Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-34405.

Full text
Abstract:

Breast cancer is diagnosed in more than 6300 Swedish women every year. Mammograms, which are X-ray images of breasts, are taken as part of a nationwide screening process and are analyzed for anomalies by radiologists. This analysis process could be made more efficient by using computer-aided image analysis to assist quality control of the mammograms. However, the development of such image analysis methods requires what is called a “ground truth”. The ground truth is used as a key in algorithm development and represents the true information in the depicted object. Mammograms are 2D projections of deformed 3D objects, and in these cases the ground truth is almost impossible to procure. Instead a surrogate ground truth is constructed. ALGSII, a novel method for ranking shapes within a given set, was recently developed for measuring the level of agreement among ensembles of markings of glandular tissue in mammograms produced by experts. It was hypothesized in this thesis that the ALGSII measure could be used to construct a surrogate truth based on the markings from domain experts. Markings from segmentations of glandular tissue, performed by 5 different field experts on 162 mammograms, comprised the working data for this thesis project. An algorithm was developed that, given a fixed set of markings, takes an initial shape and modifies it iteratively until it becomes the “optimal shape”: the shape with the highest level of agreement in the group of markings according to the ALGSII measure. The algorithm was optimized with regard to the rate of accepted shape changes and computational complexity. The developed algorithm was successful in producing an optimal shape, according to the definition of maximizing the ALGSII measure, in 100% of the cases tested. The algorithm showed stability for the given data set, and its performance was significantly increased by the implemented optimizations.
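The iterative procedure this abstract describes is, in outline, a greedy agreement-maximizing search. A minimal sketch under stated assumptions: the `agreement` function below is a hypothetical stand-in (mean Jaccard overlap) for the ALGSII measure, whose actual definition is not given here, and shapes are simplified to sets of pixel coordinates.

```python
def agreement(shape, markings):
    # Hypothetical stand-in for the ALGSII measure: the mean Jaccard
    # overlap between a candidate shape and each expert marking, with
    # shapes represented as sets of pixel coordinates.
    scores = []
    for m in markings:
        union = len(shape | m)
        scores.append(len(shape & m) / union if union else 0.0)
    return sum(scores) / len(scores)

def optimize_shape(initial, markings, neighbours, max_iters=1000):
    # Greedy search: repeatedly move to the best-scoring neighbouring
    # shape; stop when no neighbour improves the agreement score.
    best, best_score = initial, agreement(initial, markings)
    for _ in range(max_iters):
        improved = False
        for cand in neighbours(best):
            score = agreement(cand, markings)
            if score > best_score:
                best, best_score, improved = cand, score, True
        if not improved:
            break
    return best, best_score
```

Here `neighbours` would enumerate candidate shape modifications (e.g. single-pixel additions and removals); the actual shape modifications and the optimizations for acceptance rate and computational cost are described in the thesis itself.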

APA, Harvard, Vancouver, ISO, and other styles
2

Wouters, Laurent. "Multi-domain expert-user modeling infrastructure." Paris 6, 2013. http://www.theses.fr/2013PA066200.

Full text
Abstract:
This work has been realized in an industrial context at the European Aeronautics Defense and Space Company (EADS). EADS is researching new ways to assess the safety of the overall human-machine system, i.e., the aircraft, pilots and operating procedures as a whole. These safety assessments are conducted throughout the design cycle of the product, from the preliminary design to the detailed design and up to the certification phase. EADS is trying to perform these thorough safety assessments much earlier in the development cycle, when only models are available, thus phasing in a model-driven approach to the problem. An issue is then how to enable the collaboration of experts from multiple domains (cockpit, procedures, and cognitive psychology) so that they can build a common model artifact that can be leveraged in the safety assessment of the overall human-machine system. This work considers that experts in each domain must be provided with a domain-specific modeling environment, giving them access to a common model artifact through a domain-specific notation. This thesis identifies and considers three issues in this regard. First, the domain-specific modeling languages need to be semantically aligned so that the common model artifact can be consistently expressed. Second, multiple domain-specific visual notations need to be produced for the same underlying common model artifact. Third, domain experts' modeling activities need to be supported as well as possible, and thus the provided domain-specific notations need to be as close as possible to the existing practices in the domains. This thesis then proposes the xOWL Infrastructure as an integrated solution to these three issues.
APA, Harvard, Vancouver, ISO, and other styles
3

Chronister, Julie Anne. "A domain-independent framework for structuring knowledge in the OFMspert architecture." Thesis, Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/25752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nasir, M. L. "Combining domain expert knowledge with neural networks for predicting corporate bankruptcies." Thesis, De Montfort University, 2000. http://hdl.handle.net/2086/10715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Goswami, Madan Gopal. "An approach towards the development of an expert system for paediatric problem domain." Thesis, University of North Bengal, 1999. http://hdl.handle.net/123456789/1039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nasuti, Frank W. "Knowledge Acquisition using Multiple Domain Experts in the Design and Development of an Expert System for Disaster Recovery Planning." NSUWorks, 2000. http://nsuworks.nova.edu/gscis_etd/746.

Full text
Abstract:
The increasing dependence of organizations on data processing to perform the basic functions of corporate America, together with recent disasters such as earthquakes, tornadoes and hurricanes, has awakened management to the realization that they require Disaster Recovery Plans (DRP) and Business Resumption Services (BRS). To address these needs, organizations frequently consult with outsiders to help them develop disaster recovery and business resumption plans. Although consultants and vendors specializing in disaster recovery planning are available, their number is limited and the quality of their services may be questionable. In addition, the information-gathering process by consultants is time-consuming and in most cases requires the use of multiple vendor experts, as well as various resources within the customer's organization. This research proposed, as a solution to address these deficiencies, the design and development of an expert system to assist in the determination of an organization's needs for disaster recovery and business resumption services, as well as the evaluation of existing plans. This research resulted in the design of an expert system for disaster recovery planning. It included the knowledge acquisition processes necessary to elicit information from multiple domain experts. The specific goals of this research were: (1) knowledge acquisition specific to the problems of using multiple domain experts; (2) design and development of a prototype expert system for disaster recovery planning; and (3) validation of the prototype expert system.
APA, Harvard, Vancouver, ISO, and other styles
7

Sánchez, David. "Domain ontology learning from the web: an unsupervised, automatic and domain independent approach." Saarbrücken VDM Verlag Dr. Müller, 2007. http://d-nb.info/991459016/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lyon, Bruce. "Teraphim: a domain-independent framework for constructing blackboard-based expert systems in Prolog." Online version of thesis, 1987. http://hdl.handle.net/1850/8858.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Okoli, Justin. "Expert knowledge elicitation in the firefighting domain and the implications for training novices." Thesis, Middlesex University, 2016. http://eprints.mdx.ac.uk/22940/.

Full text
Abstract:
Background/Purpose: Experienced fireground commanders are often required to make important decisions in time-pressured and dynamic environments that are characterized by a wide range of task constraints. The nature of these environments is such that firefighters are sometimes faced with novel situations that challenge their expertise and therefore necessitate making knowledge-based as opposed to rule-based decisions. The purpose of this study is to elicit the tacitly held knowledge which largely underpins expert competence when managing non-routine fire incidents. Design/Methodology/Approach: The study utilized a formal knowledge elicitation tool known as the critical decision method (CDM). The CDM method was preferred to other cognitive task analysis (CTA) methods as it is specifically designed to probe the cognitive strategies of domain experts with reference to a single incident that was both challenging and memorable. Thirty experienced firefighters and one staff development officer were interviewed in depth across different fire stations in the UK and Nigeria (UK=15, Nigeria=16). The interview transcripts were analyzed using the emergent themes analysis (ETA) approach. Findings: Findings from the study revealed 42 salient cues that were sought by experts at each decision point. A critical cue inventory (CCI) was developed, and cues were categorized into five distinct types based on the kind of information each cue provided to an incident commander. The study also developed a decision-making model, the information filtering and intuitive decision making (IFID) model, which describes how the experienced firefighters were able to make difficult fireground decisions amidst multiple informational sources without having to deliberate on their courses of action. The study also compiled and indexed the elicited tacit knowledge into a competence assessment framework (CAF) with which the competence of future incident commanders could potentially be assessed.
Practical Implications: Through the knowledge elicitation process, training needs were identified, and the practical implications for transferring the elicited experts' knowledge to novice firefighters were also discussed. The four-component instructional design model aided the conceptualization of the CDM outputs for training purposes. Originality/Value: Although it is widely believed that experts perform exceptionally well in their domains of practice, the difficulty still lies in finding how best to unmask expert (tacit) knowledge, particularly when it is intended for training purposes. Since tacit knowledge operates in the unconscious realm, articulating and describing it has been shown to be challenging even for experts themselves. This study is therefore timely, since its outputs can facilitate the development of training curricula for novices, who then will not have to wait for real fires to occur before learning new skills. This holds true particularly in this era, in which the rate of real fires, and therefore the opportunity to gain experience, has been in decline. The current study also presents and discusses insights based on the cultural differences that were observed between the UK and the Nigerian fire services.
APA, Harvard, Vancouver, ISO, and other styles
10

Alshayji, Sameera. "The development of a fuzzy expert system to help top decision makers in political and investment domains." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/6977.

Full text
Abstract:
The world’s increasing interconnectedness and the recent increase in the number of notable regional and international events pose greater and greater challenges for political decision-making, especially the decision to strengthen bilateral economic relationships between friendly nations. Typically, such critical decisions are influenced by certain factors and variables that are based on heterogeneous and vague information spread across different domains. A serious problem that the decision-maker faces is the difficulty of building efficient political decision support systems (DSS) with heterogeneous factors. One must take many factors into account, for example, language (natural or human language), the availability, or lack thereof, of precise data (vague information), and possible consequences (rule conclusions). The basic concept is a linguistic variable whose values are words rather than numbers and are therefore closer to human intuition. A common language is thus needed to describe such information, which requires human knowledge for interpretation. To achieve robustness and efficiency of interpretation, we need to apply a method that can generate high-level knowledge and information integration. Fuzzy logic is based on natural language and is tolerant of imprecise data; its greatest strength lies in its ability to handle such data, making it perfectly suited for this situation. In this thesis, we propose to use ontology to integrate the scattered information resources from the political and investment domains. The process started with understanding each concept and extracting key ideas and relationships between sets of information by constructing object-paradigm ontology. Re-engineering according to the object paradigm (OP) provided quality for the developed ontology, where conceptualization can provide a more expressive, reusable object and temporal ontology. Fuzzy logic was then integrated with the ontology.
A fuzzy ontology membership value reflecting the strength of an inter-concept relationship was used consistently to represent pairs of concepts across the ontology. Each concept is assigned a fixed numerical value representing the concept's consistency. Concept consistency is computed as a function of the strength of all the relationships associated with the concept. Fuzzy expert systems enable one to weigh the consequences (rule conclusions) of certain choices based on vague information. Rule conclusions follow from rules composed of two parts: the if antecedent (input) and the then consequent (output). With fuzzy expert systems, one uses Fuzzy Logic Toolbox graphical user interface (GUI) tools to build up a fuzzy inference system (FIS) to aid in decision-making. This research includes four main phases to develop a prototype architecture for an intelligent DSS that can help top political decision makers.
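The if-then mechanism this abstract describes can be illustrated with a toy Mamdani-style rule. The variable names, scales, and membership functions below are purely illustrative assumptions, not taken from the thesis:

```python
def triangular(x, a, b, c):
    # Triangular membership function: rises from zero at a to a peak
    # of 1 at b, then falls back to zero at c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rule_strength(stability, attractiveness):
    # Toy rule: IF political stability is high AND market attractiveness
    # is high THEN the recommendation fires with the AND (minimum) of the
    # two membership degrees, on hypothetical 0-1 input scales.
    mu_stability = triangular(stability, 0.5, 1.0, 1.5)
    mu_attractiveness = triangular(attractiveness, 0.5, 1.0, 1.5)
    return min(mu_stability, mu_attractiveness)
```

A full FIS would aggregate many such rules and defuzzify the result into a single recommendation value; this sketch shows only how vague inputs yield a graded, rather than binary, rule conclusion.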
APA, Harvard, Vancouver, ISO, and other styles
11

Piat, Guilhem Xavier. "Incorporating expert knowledge in deep neural networks for domain adaptation in natural language processing." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG087.

Full text
Abstract:
Current state-of-the-art Language Models (LMs) are able to converse, summarize, translate, solve novel problems, reason, and use abstract concepts at a near-human level. However, to achieve such abilities, and in particular to acquire "common sense" and domain-specific knowledge, they require vast amounts of text, which are not available in all languages or domains. Additionally, their computational requirements are out of reach for most organizations, limiting their potential for specificity and their applicability in the context of sensitive data. Knowledge Graphs (KGs) are sources of structured knowledge which associate linguistic concepts through semantic relations. These graphs are sources of high-quality knowledge which pre-exist in a variety of otherwise low-resource domains, and are denser in information than typical text. By allowing LMs to leverage these information structures, we could remove the burden of memorizing facts from LMs, reducing the amount of text and computation required to train them and allowing us to update their knowledge with little to no additional training by updating the KGs, therefore broadening their scope of applicability and making them more democratizable. Various approaches have succeeded in improving Transformer-based LMs using KGs. However, most of them unrealistically assume the problem of Entity Linking (EL), i.e. determining which KG concepts are present in the text, is solved upstream. This thesis covers the limitations of handling EL as an upstream task. It goes on to examine the possibility of learning EL jointly with language modeling, and finds that while this is a viable strategy, it does little to decrease the LM's reliance on in-domain text. Lastly, this thesis covers the strategy of using KGs to generate text in order to leverage LMs' linguistic abilities, and finds that even naïve implementations of this approach can result in measurable improvements on in-domain language processing.
APA, Harvard, Vancouver, ISO, and other styles
12

Yoon, Changwoo. "Domain-specific knowledge-based informational retrieval model using knowledge reduction." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Ramiah, Sivanes Pari. "Exploring and supporting expert and novice reasoning in a complex and uncertain domain : resolving labour disputes." Thesis, University of Surrey, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.590930.

Full text
Abstract:
This research aimed to explore and support the reason-based decision-making processes of experts and novices in a complex and uncertain domain: resolving labour disputes. Naturalistic Decision Making (NDM) has investigated the role of expertise in complex and uncertain domains that are often time pressured. NDM models typically focus on fast decisions while explaining the reasoning processes behind slower decisions less well. While there is much research on expertise, experts' reasoning on complex problems is less well understood. This research therefore aimed to look at experts' reasoning in slower, reason-based decisions. The first empirical chapter examined how complex labour judgements were made by testing a Mental Model Theory (MMT) of probabilistic reasoning. This was followed by a second empirical chapter, in which participants' (labour officers') thought processes were elicited using a think-aloud protocol. Based on these findings, the thesis then progressed to develop a reasoning aid to support reasoning, followed by an evaluation of any changes in reasoning processes and outcomes in the third empirical chapter. The final empirical chapter validated the efficiency of the reasoning aid. Six scenarios were developed to replicate typical labour cases and used in studies to assess reasoning processes on a realistic task. Participants for each study numbered 42, 22, 28 and 82 respectively. The data for Studies 1 and 4 were analysed quantitatively, and the verbal protocols for Studies 2 and 3 were analysed qualitatively. Verbal protocols were recorded and transcribed, then transcripts were coded based on participants' reasoning processes. Differences between experienced and less-experienced officers were also tested. Study 1 provided mixed evidence of reasoning according to MMT, finding that experienced and less-experienced officers were not significantly different. In Study 2 the data were analysed using six higher-order codes proposed by Toulmin et al.
(1979) and each protocol was drawn into an argument map. This showed that experienced officers drew more accurate conclusions, omitted less evidence and offered more justifications than less-experienced officers. The reasoning aid used in Study 3 improved less-experienced officers' reasoning such that conclusion accuracy was the same as that of experienced officers. However, Study 4 revealed that, while the reasoning aid had no impact on the reasoning processes, the level of experience had a significant effect. This research provides a good description of participants' reason-based decision making. Toulmin's argument analysis approach provides a unique contribution to understanding reasoning in this realistic and complex task. Although the reasoning aid reduces the differences between experienced and less-experienced officers, experience still plays a crucial role in ensuring correct outcomes.
APA, Harvard, Vancouver, ISO, and other styles
14

Chueh, Henry C. "Integration of expert knowledge into computer-controlled databases in the medical domain : HEMAVID, a case study." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/29202.

Full text
Abstract:
Thesis (M.S.)--Harvard University--Massachusetts Institute of Technology Division of Health Sciences and Technology, Program in Medical Engineering and Medical Physics, 1989.
Includes bibliographical references (leaves [165]-[172]).
by Henry C. Chueh.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
15

Blazek, Rick. "Author-Statement Citation Analysis Applied as a Recommender System to Support Non-Domain-Expert Academic Research." NSUWorks, 2007. http://nsuworks.nova.edu/gscis_etd/416.

Full text
Abstract:
This study will investigate the use of citation indexing to provide expert recommendations to domain-novice researchers. Prioritizing the result-set returned from an electronic academic library query is both an essential task and a significant start-up burden for a domain-novice researcher. Current literature reveals many attempts to provide recommender systems in support of research. However, these systems rely on some form of relevance feedback from the user. The domain-novice researcher is unable to satisfy this expectation. Additional research demonstrates that a network of expert recommendations is available in each collection of academic documents. A power distribution, Lotka's law, has been found to be an attribute of the citation network found in large collections of academic domain documents. The issue under study is whether the network of recommendations found in a relatively small collection of academic documents reveals a citation density that conforms to the distribution pattern of large collections. This study will use a descriptive, comparative methodology to answer this question. The study will use Lotka's law to form a predicted density and distribution for comprehensive domain collections. Next, the study will calculate an actual concentration and distribution from a sample population. The sample population will be a result-set returned from a general query to an academic collection. The two indexes and distributions will be statistically compared to ascertain whether the actual density is equivalent to the predicted. If the sample set does not conform to normative Lotkian density, it will demonstrate an unnatural bias and therefore not qualify as an appropriate set of recommendations for guiding domain novice research. The null hypothesis is that the actual density will be statistically equal to the predicted index. 
If this expectation is met, the result will be a set of expert recommendations that is user-independent for providing domain-relevant expert prioritization. A recommender system based on such recommendations would significantly improve the early research tasks of a domain novice by overcoming the identified start-up problem. It would remove the burden of expertise required when a domain novice seeks to effectively use the result set from a novice query. This experiment will test an alternative hypothesis by isolating smaller subsets of the sample and testing the citation density of each using a factorial orthogonal design. This experiment will attempt to determine the minimal population size valid for the predicted density index. It is anticipated that a sample size below the lower bound for distribution validity will be unambiguously identified by actual indexes significantly below that of the standard.
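Lotka's law, the power distribution this abstract relies on, predicts that the number of authors credited with n publications falls off roughly as 1/n². A small sketch of the expected proportions an observed sample would be tested against (an illustrative helper, not code from the study):

```python
def lotka_expected(max_n):
    # Lotka's law: authors with n publications occur in proportion
    # to 1/n^2. Normalizing over n = 1..max_n yields the expected
    # fraction of all authors found at each publication count.
    weights = [1.0 / n**2 for n in range(1, max_n + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

For example, the law predicts four times as many single-publication authors as two-publication authors; the study's comparison of predicted and actual citation densities amounts to testing observed counts against proportions like these.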
APA, Harvard, Vancouver, ISO, and other styles
16

Zhao, Wang. "Domain knowledge transformation (DKT) for conceptual design of mechanical systems /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9841351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Fick, David. "A virtual machine framework for domain-specific languages." Diss., Pretoria : [S.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-10192007-163559/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Cass, Kimberly Ann. "Assessing the usefulness of domain and methodological tutorials for novice users employing an expert system as an advice-giving tool." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184607.

Full text
Abstract:
The purpose of this dissertation is to examine the impact of domain and methodological tutorials on the attitude and performance of end-users who are neither well-versed in the domain area nor well-versed with an expert system which is designed to assist them in solving software selection tasks. With respect to these tasks and the mechanism for accomplishing them, the end-users can be categorized as "non-technical users." The design of this experiment was a 2 x 2 full factorial laboratory experiment employing eighty novice users as subjects. Each of the experimental subjects was randomly assigned to one of the four treatment groups corresponding to receipt or lack of receipt of tutorials concerning the problem domain and the methodology employed by an expert system. The results of this research indicate that there is a significant interaction between receiving the application and expert system tutorial videos: better performance, in terms of correct categorization of problems, was observed in subjects who saw either both or neither video, whereas worse performance was observed in subjects who saw only one video. In general, the video treatments were unrelated to a variety of attitude measures applied to the subjects. However, it was found that prior attitudes towards the use of computers were significantly related to the majority of the (posttest) attitude measures. Further, the general pattern was for attitudes towards computers to improve as a result of undergoing the experimental process, with the viewing of the expert system video being significant in the level of improvement.
APA, Harvard, Vancouver, ISO, and other styles
19

Eseryel, Deniz. "Expert conceptualizations of the domain of instructional design: an investigative study on the deep assessment methodology for complex problem-solving outcomes." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2006. http://proquest.umi.com/login?COPT=REJTPTU0NWQmSU5UPTAmVkVSPTI=&clientId=3739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Basole, Rahul C. "Modeling and Analysis of Complex Technology Adoption Decisions: An Investigation in the Domain of Mobile ICT." Diss., Available online, Georgia Institute of Technology, 2006. http://etd.gatech.edu/theses/available/etd-06162006-142751/.

Full text
Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2007.
Rouse, William, Committee Chair ; DeMillo, Richard, Committee Member ; Cross, Steve, Committee Member ; Cummins, Michael, Committee Member ; Vengazhiyil, Roshan, Committee Member.
APA, Harvard, Vancouver, ISO, and other styles
21

Perez-Torrents, Joël. "Gérer la collaboration entre l’expert métier et l’Intelligence Artificielle : Deux études de cas dans le système de soins." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. https://theses.hal.science/tel-04780193.

Full text
Abstract:
Artificial Intelligence (AI) leverages machine learning methods to automate the creation of complex statistical models. These AI tools can perform tasks with expert-level proficiency, yet the precise way results are generated often remains opaque, posing challenges for their integration into organizations. This thesis explores the collaboration between domain experts and AI tools within the healthcare system. In the healthcare system, human-AI collaboration is particularly critical due to the professional and moral responsibilities inherent in medical decisions and their inherent uncertainty. While AI tools promise to address the tensions in this system by offering increased personalization of care at lower costs, they also raise concerns, and their real-world adoption remains to be fully realized. Our empirical approach includes two case studies illustrating these dynamics. RADO focuses on the use of an AI tool by radiologists for mammographic analysis, aiming to enhance their activities. KOVAK examines how a medical research team uses AI tools to analyze patient cohort data. A first analytical framework observes how experts leverage their knowledge to incorporate AI results, thereby demonstrating an engaged collaboration. A second identifies the dual nature of AI tool use, as both a way to optimize an activity and a way to generate learning. A third, based on Peirce's work on pragmatic inquiry, considers the AI tool as a partner in the knowledge construction process. We propose our collaboration model, EMC2 (Expert Machine Collaborative Community). It integrates various modes of managing expert-AI collaboration, thus facilitating better integration of this collaboration within organizations. This thesis contributes to the literature on human-AI collaboration models by defining management modes derived from an empirical approach. It also contributes to the literature on AI tool usage by specifying interrogative practices and by applying the concept of pragmatic inquiry.
APA, Harvard, Vancouver, ISO, and other styles
22

Selcuk, Dogan Gonca Hulya. "Expert Finding In Domains With Unclear Topics." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614259/index.pdf.

Full text
Abstract:
Expert finding is an Information Retrieval (IR) task used to identify the experts an organization needs. Finding the right experts is a notable problem in many commercial, educational, and governmental organizations: it is crucial when seeking referees for a paper submitted to a conference or when looking for a consultant for a software project, and it is equally important to find similar experts when the selected expert is absent or unavailable. Traditional expert finding methods are modeled on three components: a supporting document collection, a list of candidate experts, and a set of pre-defined topics. In reality, pre-defined topics are often not available. In this study, we propose an expert finding system that generates a semantic layer between domains and experts using Latent Dirichlet Allocation (LDA). A traditional expert finding method (a voting approach) is used as the baseline for matching domains and experts. When similar experts are needed, the system recommends experts matching the qualities of the selected experts. The proposed model is applied to a semi-synthetic data set as a proof of concept, where it performs better than the baseline method, and to the projects of the Technology and Innovation Funding Programs Directorate (TEYDEB) of The Scientific and Technological Research Council of Turkey (TÜBİTAK).
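The recommendation step this abstract describes, representing each expert by a topic distribution and ranking similar experts, can be sketched as follows. This is an invented illustration, not code from the thesis: the three-topic vectors and expert names are stand-ins for what LDA would actually produce over the experts' documents.

```python
import math

# Invented illustration: each expert is a topic distribution (stand-ins for
# LDA output), and similar experts are ranked by cosine similarity.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

EXPERTS = {
    "expert_a": [0.80, 0.15, 0.05],
    "expert_b": [0.75, 0.20, 0.05],
    "expert_c": [0.05, 0.10, 0.85],
}

def similar_experts(name, k=1):
    """Return the k experts whose topic profiles best match the given one."""
    target = EXPERTS[name]
    ranked = sorted(
        ((cosine(target, vec), other) for other, vec in EXPERTS.items() if other != name),
        reverse=True,
    )
    return [other for _, other in ranked[:k]]

print(similar_experts("expert_a"))  # expert_b has the closest topic profile
```

In the real system the profiles would come from fitting LDA on the supporting document collection; the cosine ranking shown here is one plausible similarity choice, not necessarily the one used in the thesis.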
APA, Harvard, Vancouver, ISO, and other styles
23

Bangabash, Subhasish, and Srimanta Panda. "Machine Learning - Managerial Perspective : A Study to define concepts and highlight challenges in a product-based IT Organization." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18835.

Full text
Abstract:
The purpose of this research is to understand the main managerial challenges that arise in the context of Machine Learning. This research aims to explore the core concepts of Machine Learning and provide a shared conceptual foundation that helps managers overcome possible obstacles while implementing Machine Learning. The main research question is therefore: What are the phases and the main challenges of managing a Machine Learning project in a product-based IT organization? The focus is on the main concepts of Machine Learning and on identifying challenges during each phase through a literature review and qualitative data collected from interviews conducted with professionals. The research aims to position itself in the field of research that draws on inputs from consultants and management professionals who are either associated with Machine Learning or planning to start such initiatives. In this research paper we introduce the ACDDT (Agile-Customer-Data-Domain-Technology) model framework for managers. This framework is centered on the main challenges in Machine Learning project phases when dealing with customer, data, domain, and technology. In addition, the framework also provides key inputs to managers for managing those challenges and possibly overcoming them.
APA, Harvard, Vancouver, ISO, and other styles
24

Inchamnan, Wilawan. "A framework for analysing creative activity within puzzle-game play experiences." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/82214/1/Wilawan_Inchamnan_Thesis.pdf.

Full text
Abstract:
This thesis analyzes creative processes that can be fostered through computer gaming. Outcomes from the research build on our knowledge of how computer games foster creative thinking. The research proposes guidelines that build upon our understanding of the relationship between the creative processes that players undertake during a game and the components of the game that allow these processes to occur. These guidelines may be used in the game design process to better facilitate creative gameplay activity. A significant research contribution is the ability to create games that facilitate creative thinking through engaging interactions with technology.
APA, Harvard, Vancouver, ISO, and other styles
25

Shah, Darsh J. (Darsh Jaidip). "Multi-source domain adaptation with mixture of experts." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121741.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 35-37).
We propose a mixture-of-experts approach for unsupervised domain adaptation from multiple sources. The key idea is to explicitly capture the relationship between a target example and different source domains. This relationship, expressed by a point-to-set metric, determines how to combine predictors trained on various domains. The metric is learned in an unsupervised fashion using meta-training. Experimental results on sentiment analysis and part-of-speech tagging demonstrate that our approach consistently outperforms multiple baselines and can robustly handle negative transfer.
by Darsh J. Shah.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
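The point-to-set metric in the abstract above can be illustrated with a toy sketch, not the thesis implementation: each source domain's weight comes from the target example's distance to that domain's example set (here, nearest-example Euclidean distance through a softmax), so predictors trained on nearby domains dominate the combination. All domain data and predictors below are invented.

```python
import math

# Toy sketch: weight per-domain predictors by a point-to-set metric
# (distance from the target example to the domain's nearest example),
# softmax-normalised so closer domains dominate.

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def point_to_set(x, examples):
    return min(euclidean(x, e) for e in examples)

def moe_predict(x, domains):
    """domains: list of (examples, predictor) pairs, one per source domain."""
    dists = [point_to_set(x, ex) for ex, _ in domains]
    weights = [math.exp(-d) for d in dists]
    total = sum(weights)
    return sum((w / total) * predict(x) for w, (_, predict) in zip(weights, domains))

# Two invented source domains with opposite predictors.
dom_a = ([(0.0, 0.0), (0.1, 0.0)], lambda x: 1.0)
dom_b = ([(5.0, 5.0), (5.1, 5.0)], lambda x: -1.0)

print(moe_predict((0.2, 0.1), [dom_a, dom_b]))  # dominated by dom_a, close to +1
```

In the thesis the relationship is learned with meta-training rather than fixed as an exponential of raw distance; the sketch only shows how a point-to-set metric turns into mixture weights.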
APA, Harvard, Vancouver, ISO, and other styles
26

Jayatilleke, Gaya Buddhinath, and buddhinath@gmail com. "A Model Driven Component Agent Framework for Domain Experts." RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080222.162529.

Full text
Abstract:
Industrial software systems are becoming more complex with a large number of interacting parts distributed over networks. Due to the inherent complexity in the problem domains, most such systems are modified over time to incorporate emerging requirements, making incremental development a suitable approach for building complex systems. In domain specific systems it is the domain experts as end users who identify improvements that better suit their needs. Examples include meteorologists who use weather modeling software, engineers who use control systems and business analysts in business process modeling. Most domain experts are not fluent in systems programming and changes are realised through software engineers. This process hinders the evolution of the system, making it time consuming and costly. We hypothesise that if domain experts are empowered to make some of the system changes, it would greatly ease the evolutionary process, thereby making the systems more effective. Agent Oriented Software Engineering (AOSE) is seen as a natural fit for modeling and implementing distributed complex systems. With concepts such as goals and plans, agent systems support easy extension of functionality that facilitates incremental development. Further agents provide an intuitive metaphor that works at a higher level of abstraction compared to the object oriented model. However agent programming is not at a level accessible to domain experts to capitalise on its intuitiveness and appropriateness in building complex systems. We propose a model driven development approach for domain experts that uses visual modeling and automated code generation to simplify the development and evolution of agent systems. Our approach is called the Component Agent Framework for domain-Experts (CAFnE), which builds upon the concepts from Model Driven Development and the Prometheus agent software engineering methodology. 
CAFnE enables domain experts to work with a graphical representation of the system, which is easier to understand and work with than textual code. The model of the system, updated by domain experts, is then transformed to executable code using a transformation function. CAFnE is supported by a proof-of-concept toolkit that implements the visual modeling, model driven development and code generation. We used the CAFnE toolkit in a user study where five domain experts (weather forecasters) with no prior experience in agent programming were asked to make changes to an existing weather alerting system. Participants were able to rapidly become familiar with CAFnE concepts, comprehend the system's design, make design changes and implement them using the CAFnE toolkit.
APA, Harvard, Vancouver, ISO, and other styles
27

Almeida, Reinaldo de Figueirêdo. "Análise de domínio na aquisição de conhecimentos: ontologias para sistemas computacionais." Faculdade de Educação, 2017. http://repositorio.ufba.br/ri/handle/ri/22595.

Full text
Abstract:
A partir do alinhamento entre as Semióticas desenvolvidas pelos filósofos e pensadores, Charles Sanders Peirce, Gilles Deleuze e Félix Guattari, e da atualização teórica para a atividade de Análise de Domínio, baseada nos pressupostos defendidos pelos pesquisadores da Royal School of Library and Information Science, da Dinamarca, com destaque para Birger Hjørland e Torkild Thellefsen, esta Tese disserta sobre os aspectos de cognição a serem observados para determinar o significado num universo do discurso referente a fatos de um domínio, com o objetivo de aumentar o grau de aproximação entre as realidades, dos fatos, entendida e significada. Deste modo, é feito um aprofundamento no processo de aquisição do conhecimento, com a crítica à abordagem atomista e estruturalista, na qual, termos e relações do universo de discurso são especificados a partir de uma relação direta entre signo e significado, de uma concepção onde a expressão supera o conteúdo, e a dimensão espaço prevalece sobre a dimensão tempo no processo de significância. O ambiente de estudo usado é aquele referente às ontologias computacionais, bases de conhecimentos apoiadas sobre redes semânticas e semióticas de frames, concentrado nas fases que vão do entendimento da realidade de um domínio até aquela onde a significância dos termos e relações é tratada a fim de se obter os seus respectivos significados. A pesquisa, na sua fase experimental, dentro do referencial proposto, analisou as etapas de desenvolvimento da ontologia EDXL-RESCUER, contrapondo as hipóteses tratadas na Tese e o processo de desenvolvimento da ontologia, tendo como resultados, uma abordagem crítica e uma fundamentação teórica correspondente, complementada por uma metodologia para Análise de Domínio capaz de atuar numa dimensão pós-estruturalista. 
O método de pesquisa aplicado é qualitativo, exploratório, envolvendo atualização do estado da arte para os conceitos apresentados, a partir da análise de um projeto de construção de ontologia.
ABSTRACT: From the alignment between the semiotics developed by the philosophers Charles Sanders Peirce, Gilles Deleuze and Félix Guattari, and the theoretical update of the Domain Analysis activity, based on the assumptions defended by the researchers of the Royal School of Library and Information Science, Denmark, notably Birger Hjørland and Torkild Thellefsen, this thesis discusses the aspects of cognition to be observed in order to determine the signified in a universe of discourse concerning the facts of a domain, with the aim of increasing the degree of approximation between the realities of the facts as understood and as signified. In this way, the process of knowledge acquisition is examined in depth, with a critique of the atomist and structuralist approach, in which terms and relations of the universe of discourse are specified from a direct relation between sign and signified, a conception in which expression exceeds content and the space dimension prevails over the time dimension in the process of significance. The study environment is that of computational ontologies, knowledge bases supported on semantic networks and semiotic frames, focused on the phases that go from the understanding of the reality of a domain to the one in which the significance of terms and relations is treated in order to obtain their respective signifieds. In its experimental phase, within the proposed framework, the research analyzed the stages of development of the EDXL-RESCUER ontology, contrasting the hypotheses treated in the thesis with the ontology's development process. The results are a critical approach and a corresponding theoretical foundation, complemented by a methodology for Domain Analysis capable of acting in a post-structuralist dimension. The applied research method is qualitative and exploratory, involving an update of the state of the art for the presented concepts based on the analysis of an ontology construction project.
APA, Harvard, Vancouver, ISO, and other styles
28

Ramdass, Dennis L. "Designing Bayesian networks for highly expert-involved problem diagnosis domains." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53169.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (leaves 55-56).
Systems for diagnosing problems in highly complicated problem domains have traditionally been very difficult to design. Such problem diagnosis systems have often been restricted to primarily rule-based methods in cases where machine learning for probabilistic methods is made difficult by limited available training data. The probabilistic diagnostic methods that do not require a substantial amount of training data usually require considerable expert involvement in design. This thesis proposes a model which balances the amount of expert involvement needed against the complexity of design in cases where training data for machine learning is limited. This model aims to use a variety of techniques and methods to translate, and augment, experts' qualitative knowledge of their problem diagnosis domain into quantitative parameters for a Bayesian network model, which can be used to design effective and efficient problem diagnosis systems.
by Dennis L. Ramdass.
M.Eng.
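As a hedged illustration of the kind of expert-elicited Bayesian network the thesis discusses, consider a minimal two-node fault/symptom model. The prior and conditional probabilities below are made-up stand-ins for expert-supplied parameters; the posterior follows from Bayes' rule over the single parent node.

```python
# Illustrative only: a two-node diagnosis network (Fault -> Symptom) whose
# numbers stand in for expert-elicited parameters. Posterior belief in the
# fault follows from Bayes' rule.

P_FAULT = 0.01            # assumed prior probability of the fault
P_SYM_GIVEN_FAULT = 0.90  # assumed P(symptom | fault)
P_SYM_GIVEN_OK = 0.05     # assumed false-positive rate, P(symptom | no fault)

def posterior_fault(symptom_observed):
    """P(fault | evidence) by enumeration over the single parent node."""
    if symptom_observed:
        num = P_SYM_GIVEN_FAULT * P_FAULT
        den = num + P_SYM_GIVEN_OK * (1.0 - P_FAULT)
    else:
        num = (1.0 - P_SYM_GIVEN_FAULT) * P_FAULT
        den = num + (1.0 - P_SYM_GIVEN_OK) * (1.0 - P_FAULT)
    return num / den

print(round(posterior_fault(True), 4))  # observing the symptom raises belief ~15x
```

Real diagnosis networks have many fault and evidence nodes, and eliciting their conditional probability tables from experts is exactly the difficulty the thesis addresses; this fragment only shows how elicited numbers become posterior beliefs.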
APA, Harvard, Vancouver, ISO, and other styles
29

Lelebina, Olga. "La gestion des experts en entreprise : dynamique des collectifs de professionnels et offre de parcours." Thesis, Paris, ENMP, 2014. http://www.theses.fr/2014ENMP0027/document.

Full text
Abstract:
Dans un monde économique caractérisé par la complexification des technologies associée à la mondialisation des marchés, les connaissances techniques et la capacité d’innovation sont des sources primordiales d'avantage compétitif et de différenciation. Ces enjeux sont souvent associés à une figure particulière au sein des organisations : celle de l’expert. En effet, c’est souvent sur ces acteurs que repose la responsabilité de fournir des prestations technologiques de haut niveau et d’être force de proposition pour des solutions innovantes. L'important est alors de les reconnaître et de leur offrir des conditions propices au développement de l’expertise et de l’innovation afin notamment de les retenir sur des temps longs au sein de l’entreprise. Face à ces enjeux le modèle de la double échelle s’est depuis longtemps répandu dans les entreprises technologiques et industrielles, proposant une trajectoire alternative (parcours technique) à celle du management. Néanmoins, cette solution, tout en étant un modèle de référence dans la gestion des experts, n’a pas souvent apporté la satisfaction recherchée, ni pour les personnes en charge de sa mise en place, ni pour les personnes ciblées. En partant de ce paradoxe de la double échelle et en se basant sur les résultats d'une recherche-intervention au sein d’une entreprise technologique, cette thèse a permis de proposer une nouvelle problématisation de la gestion des experts en entreprise, qui ne se limite pas à la reconnaissance des experts déjà présents, mais intègre également les enjeux d’anticipation des besoins futurs en expertise et la création des nouveaux domaines. Elle propose ainsi un cadre d’analyse qui intègre trois volets d’action : la reconnaissance des experts, le renouvellement de l’expertise et la création de nouvelles expertises. Chaque axe d’action a été instrumenté par la proposition d’un outil ou d’un dispositif gestionnaire, expérimenté et validé sur le terrain de thèse
In a business world characterized by the increasing complexity of technologies and the globalization of markets, technical knowledge and innovation become crucial assets and a primary source of competitive advantage. These issues are often associated with a particular figure within organizations: that of the expert. Indeed, these people are usually considered a source of technological excellence and innovative solutions. In order to retain these key people, it becomes crucial to value their expertise and to offer adequate conditions for the development of their knowledge and their innovation potential. As a response to this challenge, the dual ladder model was developed and soon became recognized as a primary solution for the management of experts in technological and industrial companies. This model proposes an alternative career path (the technical ladder) to the traditional managerial path. However, this solution, although a reference model in the management of experts, has often satisfied neither those in charge of its implementation nor the targeted individuals. Inspired by this paradox of the dual ladder model and building on the results of a longitudinal intervention-research, this thesis proposes a new problematization of expert management in organisations. We argue that not only should recognition policies for current experts be taken into consideration, but also the anticipation of future needs in expertise and the creation of new expertise areas. The thesis thus proposes an analytical framework that incorporates three lines of action: recognition policies for experts, strategic renewal of expertise, and creation of new expertise domains. Each line is supported by an associated management tool, tested and validated in our fieldwork.
APA, Harvard, Vancouver, ISO, and other styles
30

Melvin, David G. "An investigation of hybrid systems for reasoning in noisy domains." Thesis, University of Aberdeen, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296622.

Full text
Abstract:
This thesis discusses aspects of the design, implementation and theory of expert systems constructed in a novel way using techniques derived from several existing areas of Artificial Intelligence research. In particular, it examines the philosophical and technical aspects of combining techniques derived from traditional rule-based methods for knowledge representation with others taken from connectionist (more commonly described as Artificial Neural Network) approaches, into one homogeneous architecture. Several issues of viability are addressed, in particular why an increase in system complexity should be warranted. The kinds of gains that such hybrid systems can achieve, in terms of their applicability to general problem solving and their ability to continue working in the presence of noise, are discussed. The first aim of this work has been to assess the potential benefits of building systems from modular components, each of which is constructed using a different internal architecture. The objective has been to progress the state of knowledge of the operational capabilities of a specific system. A hybrid architecture containing multiple neural nets and a rule-based system has been designed, implemented and analysed. In the course of, and as an aid to, the development of the system, an extensive simulation work-bench has been constructed. The overall system, despite its increased internal complexity, provides many benefits, including ease of construction and improved noise tolerance combined with explanation facilities. In terms of undesirable features inherited from the parent techniques, the losses are low. The project has proved successful in its stated aims and has contributed a working hybrid system model and experimental results derived from comparing this new approach with the two primary existing techniques.
APA, Harvard, Vancouver, ISO, and other styles
31

Nassiet, Didier. "Contribution à la méthodologie de développement des systèmes experts : application au domaine du diagnostic technique." Valenciennes, 1987. https://ged.uphf.fr/nuxeo/site/esupversions/e69df552-6494-4941-97a3-31d5454a5860.

Full text
Abstract:
A description of several methods and tools, followed by the development of a methodology for building expert systems based on the concept of a "development tetrahedron". The methodology is applied to a feasibility study of an expert system supporting manufacturing control of radiotelephones.
APA, Harvard, Vancouver, ISO, and other styles
32

Soon, Lisa. "Knowledge Renewal and Knowledge Creation in Export Trading." Thesis, Griffith University, 2007. http://hdl.handle.net/10072/367002.

Full text
Abstract:
This thesis examines how tacit knowledge about export trading is tapped and collectively used in a web portal by a community of practice. Working with a design for application software, such as a web portal, requires an understanding of the application software domain. This research focuses on an export trading knowledge portal for use by an export trading community. The community comprises members involved in export activities. The research adopts three theories useful in the design of the portal. First, theory of domain analysis specifies an application software knowledge domain and explores the thoughts and discourse of the user community. Second, activity theory is used to understand the inherent knowledge in human interactions and the resultant human activity system in relation to the portal use. Third, the theory of organisational knowledge creation is used to explore how knowledge conversion processes take place in the human interactions in the portal. The knowledge captured and collectively used in the portal is beneficial to members for their work purposes. It is argued that tacit export knowledge is exchanged through human interactions. Thus, it is critical to understand what tacit knowledge can be captured and managed in the portal and how this can be done. It is argued that effectively managed knowledge can help members and their organisation to achieve export success. This research is important, as export creates revenues and stimulates economic growth in both the exporting firms and the exporting country. It is particularly important for members involved in export activities who make use of the captured tacit knowledge at work. The principal research questions of this thesis are: what constitutes export knowledge, and how does portal technology help members use and exchange knowledge? 
From these main questions, the sub-questions are: (1) what portal features can help export trading members interact; (2) what portal features can help export trading members seek and use important useful resources; and (3) how can members’ previous version of knowledge be renewed and new knowledge created when the collective knowledge in the portal is used?
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith Business School
Full Text
APA, Harvard, Vancouver, ISO, and other styles
33

Whitley, Edgar A. "Embedding expert systems in semi-formal domains : examining the boundaries of the knowledge base." Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/1/.

Full text
Abstract:
This thesis examines the use of expert systems in semi-formal domains. The research identifies the main problems with semi-formal domains and proposes and evaluates a number of different solutions to them. The thesis considers the traditional approach to developing expert systems, which sees domains as being formal, and notes that it continuously faces problems that result from informal features of the problem domain. To circumvent these difficulties experience or other subjective qualities are often used but they are not supported by the traditional approach to design. The thesis examines the formal approach and compares it with a semiformal approach to designing expert systems which is heavily influenced by the socio-technical view of information systems. From this basis it examines a number of problems that limit the construction and use of knowledge bases in semi-formal domains. These limitations arise from the nature of the problem being tackled, in particular problems of natural language communication and tacit knowledge and also from the character of computer technology and the role it plays. The thesis explores the possible mismatch between a human user and the machine and models the various types of confusion that arise. The thesis describes a number of practical solutions to overcome the problems identified. These solutions are implemented in an expert system shell (PESYS), developed as part of the research. The resulting solutions, based on non-linear documents and other software tools that open up the reasoning of the system, support users of expert systems in examining the boundaries of the knowledge base to help them avoid and overcome any confusion that has arisen. In this way users are encouraged to use their own skills and experiences in conjunction with an expert system to successfully exploit this technology in semi-formal domains.
APA, Harvard, Vancouver, ISO, and other styles
34

Rabajoie, Pierre. "Contribution a l'elaboration d'un systeme expert dans le domaine des desordres acido-basiques et hydro-electrolytiques." Rennes 1, 1994. http://www.theses.fr/1994REN1M048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lima, Helano Póvoas de. "Uma abordagem para construção de sistemas fuzzy baseados em regras integrando conhecimento de especialistas e extraído de dados." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/7224.

Full text
Abstract:
Empresa Brasileira de Pesquisa Agropecuária (EMBRAPA)
Historically, since Mamdani proposed his model of fuzzy rule-based system, a lot has changed in the construction process of this type of model. For a long time, research efforts were directed towards the automatic construction of accurate models starting from data, making fuzzy systems almost mere function approximators. Realizing that this approach strayed from the original concept of fuzzy theory, researchers' attention more recently turned to the automatic construction of more interpretable models. However, such models, although interpretable, might not make sense to the expert. This work proposes an interactive methodology for constructing fuzzy rule-based systems that aims to integrate knowledge extracted from experts and induced from data, hoping to contribute to the solution of the mentioned problem. The approach consists of six steps: feature selection, fuzzy partition definition, expert rule base definition, genetic learning of the rule base, rule base conciliation, and genetic optimization of the fuzzy partitions. The optimization and learning steps use multiobjective genetic algorithms with custom operators for each task. A software tool was implemented to support the application of the approach, offering graphical and command-line interfaces as well as a software library. The efficiency of the approach was evaluated in a case study where a fuzzy rule-based system was constructed to support the evaluation of the reproductive fitness of Nelore bulls. The result was compared to fully manual and fully automatic construction methodologies, and the accuracy was also compared to that of classical classification algorithms.
Historicamente, desde que Mamdani propôs seu modelo de sistema fuzzy baseado em regras, muita coisa mudou no processo de construção deste tipo de modelo. Durante muito tempo, os esforços de pesquisa foram direcionados à construção automática de sistemas precisos partindo de dados, tornando os sistemas fuzzy quase que meros aproximadores de função. Percebendo que esta abordagem fugia do conceito original da teoria fuzzy, mais recentemente, as atenções dos pesquisadores foram voltadas para a construção automática de modelos mais interpretáveis. Entretanto, tais modelos, embora interpretáveis, podem ainda não fazer sentido para o especialista. Este trabalho propõe uma abordagem interativa para construção de sistemas fuzzy baseados em regras, que visa ser capaz de integrar o conhecimento extraído de especialistas e induzido de dados, esperando contribuir para a solução do problema mencionado. A abordagem é composta por seis etapas. Seleção de atributos, definição das partições fuzzy das variáveis, definição da base de regras do especialista, aprendizado genético da base de regras, conciliação da base de regras e otimização genética da base de dados. As etapas de aprendizado e otimização utilizaram algoritmos genéticos multiobjetivo com operadores customizados para cada tarefa. Uma ferramenta de software foi implementada para subsidiar a aplicação da abordagem, oferecendo interfaces gráfica e de linha de comando, bem como uma biblioteca de software. A eficiência da abordagem foi avaliada por meio de um estudo de caso, onde um sistema fuzzy baseado em regras foi construído visando oferecer suporte à avaliação da aptidão reprodutiva de touros Nelore. O resultado foi comparado às metodologias de construção inteiramente manual e inteiramente automática, bem como a acurácia foi comparada a de algoritmos clássicos para classificação.
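The six steps described above centre on a Mamdani-type rule base. As a rough, hypothetical illustration of the kind of model being constructed (the variable, partitions and the two rules below are invented, not taken from the thesis), a minimal Mamdani-style inference cycle in pure Python might look like:

```python
# Minimal Mamdani-style fuzzy inference sketch (pure Python).
# The variable, partitions and rules are invented for illustration.

def tri(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Fuzzy partitions for one input (a score in [0, 10]) and one output.
low, high = tri(-1, 0, 5), tri(5, 10, 11)
poor, good = tri(-1, 0, 5), tri(5, 10, 11)

def infer(score, samples=101):
    """Max-min inference with centroid defuzzification over [0, 10]."""
    # Rule 1: IF score IS low  THEN fitness IS poor
    # Rule 2: IF score IS high THEN fitness IS good
    w1, w2 = low(score), high(score)
    num = den = 0.0
    for i in range(samples):
        y = 10.0 * i / (samples - 1)
        mu = max(min(w1, poor(y)), min(w2, good(y)))  # clipped union
        num += y * mu
        den += mu
    return num / den if den else 0.0
```

Here `infer(10)` fires only the second rule at full strength and returns the centroid of the clipped `good` set, about 8.4; the genetic learning and optimization steps in the thesis search over exactly this kind of structure, the partition parameters and the rule definitions.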
APA, Harvard, Vancouver, ISO, and other styles
36

Costa, Nilson Santos. "Modelagem e Construção de uma ferramenta de autoria para um Sistema Tutorial Inteligente." Universidade Federal do Maranhão, 2002. http://tedebc.ufma.br:8080/jspui/handle/tede/306.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
One of the difficulties in building an Intelligent Tutoring System (ITS) is the definition of the domain model. This must be the result of knowledge acquisition from experts, usually teachers. This thesis presents the definition, modelling and implementation of an authoring tool. The tool uses cognitive measures to order a set of knowledge within an intelligent tutoring system. Through it, the knowledge (domain) of an expert is made available to the Mathnet system, by editing or creating expert-authored content for a specific domain. These measures make it possible to automate a good curricular choice for the next stage to be worked on with the learner, simulating the teacher's experience, in order to minimize learning time and enable the use of various pedagogical strategies. Thus, the modelling and implementation of the authoring tool serve as a mechanism for creating and testing domain knowledge.
Uma das maiores dificuldades na construção de um Sistema Tutorial Inteligente (STI) é a construção do Modelo de Domínio. Este deve ser o resultado da aquisição de conhecimentos a partir dos especialistas, geralmente professores da área e de pedagogia. Este trabalho apresenta a definição, modelagem e implementação de uma ferramenta de autoria. Esta ferramenta, denominada de autoria, terá medidas cognitivas para ordenar um conjunto de conhecimento para um Sistema Tutor Inteligente (STI). Com esta ferramenta, será disponibilizado o conhecimento (domínio) de um especialista para o sistema Mathnet1. Isto ocorrerá mediante a edição ou criação de uma autoria do especialista em um domínio específico (conhecimento). Por meio destas medidas será possível a automação de uma boa escolha curricular para a próxima etapa a ser trabalhada com o aprendiz (aluno), simulando a experiência do professor, a fim de minimizar o tempo de aprendizado e possibilitar a utilização de diversas estratégias pedagógicas. Desta maneira, a modelagem e implementação da ferramenta de autoria servirão como mecanismo de criação e testes de conhecimento (domínio).
APA, Harvard, Vancouver, ISO, and other styles
37

Herbert, Margaret E. "The perceived nature and function of planning by domain-related experts: Academic, business, lay, and teacher." Thesis, University of Ottawa (Canada), 1996. http://hdl.handle.net/10393/9867.

Full text
Abstract:
It is the underlying postulate of the present research that planning is a skill which enjoys a common mental representation and vocabulary across groups in society. Should this be the case, it may be beneficial to focus instruction on planning, as a vehicle for teaching and operationalizing metacognitive skills. Exploration of this postulate necessitates a fuller understanding of how both the nature and the function of planning are perceived by various representative groups. A deeper awareness of planning knowledge held by domain-related experts has potential to reveal similarities and dissimilarities of mental representations of planning. Toward this end, four groups of experts in academics from cognitive science, business, everyday life-planning, and teaching were provided with input from a telephone survey of the general public through a Delphi Methodology. The reiterative, three round process was undertaken by the participants to explore what is meant by the term planning, how it is done, and where and when it is used. This self-correcting technique permitted each group to generate an excellent and precise definition of planning, to clearly define the terminology used, and to articulate the functions of planning and their contextual applications. The Delphi Method achieved unusually high degrees of agreement both within and across cells as to the nature and function of planning. Additional comments from respondents brought elaboration of their perceptions to light for the purposes of understanding and comparison. There was a remarkable correspondence between the most recent literature on planning and the knowledge base of the respondents. The data confirm that there is a striking convergence, within and across the sampled groups about what planning is and the purposes for which it is used. 
For all cells, definitions of planning achieved a solid consensus and an impressive 97% agreement on all components of the definition and their descriptors, 89% on the function statements, and 95% for the 159 function descriptors. The analysis of data highlighted areas of similarity and dissimilarity of perceptions on aspects of planning. Cell differences between the Academic, Business, Lay, and Teacher cells were observed and reported. The findings substantiate the proposed use of planning and its vocabulary to assist learners to discuss, comprehend, and employ the underlying metacognitive thinking skills involved for problem solving in school and in real life situations throughout the life span. This valuable connector between ages and contexts has been under-utilized to date. The results of the present study justify planning as an umbrella process for instruction in metacognition. (Abstract shortened by UMI.)
APA, Harvard, Vancouver, ISO, and other styles
38

McKee, Courtney Holmes. "Characterization of the nuclear import and export signals of the E7 protein of human papillomavirus type 11." Thesis, Boston College, 2011. http://hdl.handle.net/2345/1957.

Full text
Abstract:
Thesis advisor: Junona Moroianu
The E7 protein of low risk HPV11 has been shown to interact with multiple proteins, including pRb, in both the cytoplasm and the nucleus. High risk HPV16 E7 and low risk HPV11 E7 share a novel nuclear import pathway independent of karyopherins but dependent on the GTPase Ran (Angeline et al., 2003; Knapp et al., 2009; Piccioli et al., 2010). We continued to analyze the nucleocytoplasmic transport of HPV11 E7 in vivo through transfection assays in HeLa cells with EGFP-HPV11 E7 wild type and mutant fusion constructs. We found that nuclear localization of HPV11 E7 is mediated by a nuclear localization signal located in the C-terminal domain, which contains a unique zinc-binding domain. Mutations of cysteine residues that interfered with zinc-binding clearly disrupted the nuclear localization of the EGFP-11cE7 and EGFP-11E7 mutants. These data suggest that the integrity of the zinc-binding domain is essential for the nuclear localization of HPV11 E7. In addition, we discovered that HPV11 E7 has a leucine-rich C-terminal nuclear export signal (NES) (76IRQLQDLLL84) mediating the nuclear export of HPV11 E7 in a CRM1-dependent manner.
Thesis (BS) — Boston College, 2011
Submitted to: Boston College. College of Arts and Sciences
Discipline: Biology Honors Program
Discipline: Biology
APA, Harvard, Vancouver, ISO, and other styles
39

Larrechea, Michel. "Méthodologie de modélisation des connaissances dans le domaine du diagnostic technique." Bordeaux 1, 1995. http://www.theses.fr/1995BOR10651.

Full text
Abstract:
The work presented in this thesis belongs to the field of knowledge-based systems applied to the technical diagnosis of complex systems. The objective is to define a cognitive modelling methodology for the development of second-generation knowledge-based systems that assist repair technicians. To this end, we show the usefulness of a complete modelling of the two types of knowledge used by such systems. We thus propose several models of, on the one hand, the physical system being diagnosed and, on the other hand, the expertise, structured according to the problem-solving methods (PSMs) used. We thereby define a conceptual modelling framework based on structuring knowledge into three interdependent levels. Two of them depend on the PSM used, while the third contains the knowledge specific to an application, notably the aforementioned models of the physical system. One of these models is central: it predicts the system's behaviour through the combined use of quantitative and qualitative information. This framework constitutes our main contribution, and we pair it with a process for putting it into practice.
APA, Harvard, Vancouver, ISO, and other styles
40

Busque-Carrier, Mathieu. "Création d'un modèle de valeurs de travail avec items validés auprès d'experts du domaine de l'orientation." Mémoire, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/6945.

Full text
Abstract:
This master's thesis concerns the development and validation of a list of items for measuring work values. A review of the work-value models available in the scientific literature reveals several problems: many models do not appear exhaustive, and some are obsolete or lack satisfactory empirical support for their validity. The models reviewed belong to the tradition of vocational psychology. The review also identified psychometric questionnaires that measure the theoretical models surveyed. On this basis, a model comprising 24 work values was created, and six items were written for each value. To assess the relevance and clarity of these items, nine experts from fields such as career development, psychometrics and motivational psychology completed an online questionnaire evaluating the items. They were also invited to identify facets of work values that had not been covered and to reword items that needed clarification. The content validity index and the free-marginal multirater kappa were computed to assess the relevance of the work values; the content validity index was computed for the relevance of the items. To assess the clarity of the work values, the percentage of expert agreement and the free-marginal multirater kappa were computed; for the clarity of the items, the percentage of expert agreement was computed.
Regarding item relevance, the results show that 127 of the 144 items written (88.1%) are good indicators of their respective work value. Moreover, inter-rater agreement on the relevance of eight work values is substantial or almost perfect. Thirteen work values obtained moderate inter-rater agreement on item relevance, meaning that they would require minor modifications. Three work values obtained fair or slight inter-rater agreement, meaning that, according to the experts, major modifications should be made to them. Regarding item clarity, 52 of the 144 items (36.1%) were judged clearly worded and requiring no modification. As for the clarity of the work values, nine would require minor modifications and fifteen major ones. The experts also proposed facets that could have been assessed and suggested rewordings of items. The results identify several shortcomings of the items and suggest avenues for improvement, which will raise their quality. These modifications will eventually allow the creation of a psychometric questionnaire that can be studied empirically, notably to verify the factorial structure of the theoretical model. The theoretical model would thus be current, tend toward exhaustiveness, and be empirically validated, addressing the three shortcomings previously identified in the currently available work-value models.
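The free-marginal multirater kappa used in this study (Randolph's formulation) is straightforward to compute. The sketch below, with invented rating counts rather than the nine experts' actual data, assumes each of r raters assigns every item to one of q categories:

```python
# Free-marginal multirater kappa (Randolph's formulation) for n items
# rated by r raters into q categories. The rating counts used with it
# are invented for illustration.

def free_marginal_kappa(counts, q):
    """counts[i][c] = number of raters who put item i in category c."""
    r = sum(counts[0])                     # raters per item
    n = len(counts)
    # Observed agreement: mean proportion of agreeing rater pairs per item.
    po = sum(
        sum(c * (c - 1) for c in item) / (r * (r - 1))
        for item in counts
    ) / n
    pe = 1.0 / q                           # chance agreement, free margins
    return (po - pe) / (1 - pe)
```

With nine raters and two categories, a unanimous item contributes 1 to the observed agreement while a 5–4 split contributes only 32/72 ≈ 0.44; the fixed chance term 1/q is what distinguishes the free-marginal variant from Fleiss' kappa, which estimates chance agreement from the observed category margins.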
APA, Harvard, Vancouver, ISO, and other styles
41

Emero, Michael F. "Using naturally occurring texts as a knowledge acquisition resource for knowledge base design : developing a knowledge base taxonomy on microprocessors /." Master's thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-02162010-020204/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Brena, Ramón, Paul Jacquet, Jacques Mossière, Laurent Trilling, and Philippe Jorrand. "Synthèse de programmes : connaissances et déduction dans les domaines d'application." S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00333349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Baerisch, Stefan [Verfasser]. "Model-driven test case construction by domain experts in the context of software system families / Stefan Bärisch." Kiel : Universitätsbibliothek Kiel, 2009. http://d-nb.info/1019865997/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Jani, Divyang. "Structure and function of Sus1, Cdc31 and the Sac3 CID domain in mRNA export and gene-gating." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Hellmann, Jonathon David. "DataSnap: Enabling Domain Experts and Introductory Programmers to Process Big Data in a Block-Based Programming Language." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/54544.

Full text
Abstract:
Block-based programming languages were originally designed for educational purposes. Due to their low requirements for a user's programming capability, such languages have great potential to serve both introductory programmers in educational settings as well as domain experts as a data processing tool. However, the current design of block-based languages fails to address critical factors for these two audiences: 1) domain experts do not have the ability to perform crucial steps: import data sources, perform efficient data processing, and visualize results; 2) the focus of online assignments towards introductory programmers on entertainment (e.g. games, animation) fails to convince students that computer science is important, relevant, and related to their day-to-day experiences. In this thesis, we present the design and implementation of DataSnap, which is a block-based programming language extended from Snap!. Our work focuses on enhancing the state of the art in block-based programming languages for our two target audiences: domain experts and introductory programmers. Specifically, in this thesis we: 1) provide easy-to-use interfaces for big data import, processing, and visualization methods for domain experts; 2) integrate relevant social media, geographic, and business-related data sets into online educational platforms for introductory programmers and enable teachers to develop their own real-time and big-data access blocks; and 3) present DataSnap in the Open edX online courseware platform along with customized problem definition and a dynamic analysis grading system. Stemming from our research contributions, our work encourages the further development and utilization of block-based languages towards a broader audience range.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
46

Messai, Nizar Napoli Amedeo. "Analyse de concepts formels guidée par des connaissances de domaine Application à la découverte de ressources génomiques sur le Web /." S. l. : Nancy 1, 2009. http://www.scd.uhp-nancy.fr/docnum/SCD_T_2009_0018_MESSAI.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Nassiet, Didier. "Contribution à la méthodologie de développement des systèmes experts application au domaine du diagnostic technique /." Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37608383n.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Abd-el-Kader, Yann. "Conception et exploitation d'une base de métadonnées de traitements informatiques, représentation opérationnelle des connaissances d'expert : application au domaine géographique." Caen, 2006. http://www.theses.fr/2006CAEN2038.

Full text
Abstract:
L’information géographique est construite, analysée et transformée par des traitements informatiques. A l’Institut Géographique National (IGN), les utilisateurs et les développeurs ont besoin d’aide pour rechercher, connaître et partager ces traitements. Le but de notre travail est de fournir cette aide. Les documentations existantes ne permettent pas toujours de répondre de façon satisfaisante aux besoins identifiés : elles sont éparses, aux formats hétérogènes et ne décrivent pas les données avec toute la finesse souhaitée. Ces documentations sont également en général statiques : elles ne peuvent fournir des modes d’emploi adaptés aux contextes d’utilisation particuliers (caractéristiques des données, environnement, connaissances de l’utilisateur). Or, puisque toutes les réponses aux requêtes des utilisateurs ne peuvent être stockées à l’avance, il faut que des mécanismes de dérivation de l’information soient mis en oeuvre. Face à ce problème, nous soutenons la thèse qu’une solution peut être de recourir à des métadonnées à la structure et au contenu contrôlés, conformes à un modèle à la fois approprié à la spécificité des traitements géographiques (description fine des propriétés des données avant et après traitements, illustrations) et propre à une représentation opérationnelle des connaissances d’expert. Nous montrons l’intérˆet de suivre une double approche en développant d’une part un système d’information documentaire (SI) dédié à la consultation et la saisie des métadonnées, d’autre part un système à base de connaissances (SBC) dédié à la simulation des raisonnements de l’expert et reposant sur les langages standard du Web Sémantique RDF, OWL et SWRL
Geographical information is built, analyzed and transformed by computing programs. At the French National Mapping Agency (IGN), developers and users need help finding, understanding and sharing these programs. The goal of our work is to provide this help, knowing that existing documentation is scattered, comes in heterogeneous formats and does not describe the data at the desired level of detail. This documentation is also static: it cannot provide instructions adapted to the context of use (characteristics of the data, environment, user knowledge). However, since all the answers to users' requests cannot be stored in advance, mechanisms for deriving information must be implemented. Facing this problem, we propose to exploit metadata with controlled structure and contents, conforming to a model that is at once appropriate to the specificity of geographical computing programs (precise description of the data's properties before and after processing, illustrations) and designed for operational knowledge representation. We show the value of following a twofold approach. On one hand, we build a documentary Information System (IS) for metadata consultation and acquisition; on the other hand, we build a Knowledge-Based System (KBS) dedicated to the simulation of the expert's reasoning. The KBS is based on the standard Semantic Web languages RDF, OWL and SWRL.
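The KBS described here couples RDF triples with SWRL rules. Stripped of namespaces and OWL typing, the operational idea can be sketched as forward chaining over subject-predicate-object triples in plain Python; the vocabulary and facts below are invented for illustration and are not IGN's actual metadata model:

```python
# Toy forward chaining over (subject, predicate, object) triples,
# mimicking the shape of a SWRL rule. The vocabulary (outputScale,
# targetScale, suitableFor) and the facts are invented.

facts = {
    ("GeneralizeRoads", "inputScale",  "1:25000"),
    ("GeneralizeRoads", "outputScale", "1:100000"),
    ("MapSeries",       "targetScale", "1:100000"),
}

def apply_rule(facts):
    """Rule: T outputScale S  ∧  M targetScale S  ⇒  T suitableFor M."""
    derived = set(facts)
    for (t, p1, s) in facts:
        if p1 != "outputScale":
            continue
        for (m, p2, s2) in facts:
            if p2 == "targetScale" and s2 == s:
                derived.add((t, "suitableFor", m))
    return derived

closure = apply_rule(facts)
```

A real implementation would express the same implication in SWRL over OWL classes and run it in a rule engine; the point here is only the shape of the inference: join two triple patterns on a shared variable, then assert a new triple.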
APA, Harvard, Vancouver, ISO, and other styles
49

Znaidi, Eya. "Contribution à l'analyse et l'évaluation des requêtes expertes : cas du domaine médical." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30054/document.

Full text
Abstract:
La recherche d'information nécessite la mise en place de stratégies qui consistent à (1) cerner le besoin d'information ; (2) formuler le besoin d'information ; (3) repérer les sources pertinentes ; (4) identifier les outils à exploiter en fonction de ces sources ; (5) interroger les outils ; et (6) évaluer la qualité des résultats. Ce domaine n'a cessé d'évoluer pour présenter des techniques et des approches permettant de sélectionner à partir d'un corpus de documents l'information pertinente capable de satisfaire le besoin exprimé par l'utilisateur. De plus, dans le contexte applicatif du domaine de la RI biomédicale, les sources d'information hétérogènes sont en constante évolution, aussi bien du point de vue de la structure que du contenu. De même, les besoins en information peuvent être exprimés par des utilisateurs qui se caractérisent par différents profils, à savoir : les experts médicaux comme les praticiens, les cliniciens et les professionnels de santé, les utilisateurs néophytes (sans aucune expertise ou connaissance du domaine) comme les patients et leurs familles, etc. Plusieurs défis sont liés à la tâche de la RI biomédicale, à savoir : (1) la variation et la diversité du besoin en information, (2) différents types de connaissances médicales, (3) différences de compé- tences linguistiques entre experts et néophytes, (4) la quantité importante de la littérature médicale ; et (5) la nature de la tâche de RI médicale. Cela implique une difficulté d'accéder à l'information pertinente spécifique au contexte de la recherche, spécialement pour les experts du domaine qui les aideraient dans leur prise de décision médicale. Nos travaux de thèse s'inscrivent dans le domaine de la RI biomédicale et traitent les défis de la formulation du besoin en information experte et l'identification des sources pertinentes pour mieux répondre aux besoins cliniques. 
Concernant le volet de la formulation et l'analyse de requêtes expertes, nous proposons des analyses exploratoires sur des attributs de requêtes, que nous avons définis, formalisés et calculés, à savoir : (1) deux attributs de longueur en nombre de termes et en nombre de concepts, (2) deux facettes de spécificité terme-document et hiérarchique, (3) clarté de la requête basée sur la pertinence et celle basée sur le sujet de la requête. Nous avons proposé des études et analyses statistiques sur des collections issues de différentes campagnes d'évaluation médicales CLEF et TREC, afin de prendre en compte les différentes tâches de RI. Après les analyses descriptives, nous avons étudié d'une part, les corrélations par paires d'attributs de requêtes et les analyses de corrélation multidimensionnelle. Nous avons étudié l'impact de ces corrélations sur les performances de recherche d'autre part. Nous avons pu ainsi comparer et caractériser les différentes requêtes selon la tâche médicale d'une manière plus généralisable. Concernant le volet lié à l'accès à l'information, nous proposons des techniques d'appariement et d'expansion sémantiques de requêtes dans le cadre de la RI basée sur les preuves cliniques
The research topic of this document deals with a particular setting of medical information retrieval (IR), referred to as expert-based information retrieval. We were interested in information needs expressed by medical domain experts such as practitioners and physicians. It is well known in the IR area that expressing queries that accurately reflect information needs is a difficult task, in general as well as specialized domains, and even for expert users. Thus, identifying the intention hidden behind the queries users submit to a search engine is a challenging issue. Moreover, the increasing amount of health information available from various sources such as government agencies, non-profit and for-profit organizations, and internet portals presents opportunities and issues for improving the delivery of health-care information to medical professionals, patients and the general public. One critical issue is understanding users' search strategies and tactics in order to bridge the gap between their intention and the delivered information. In this thesis we focus, more particularly, on two main aspects of expert medical information needs, namely: - Understanding the users' intents behind their queries, which is critically important for gaining better insight into how to select relevant results. While many studies have investigated how users in general carry out exploratory health searches in digital environments, few have focused on how queries are formulated, specifically by domain-expert users. We address domain-expert health search through the analysis of query attributes, namely length, specificity and clarity, using appropriate proposed measures built according to different sources of evidence.
In this respect, we undertake an in-depth statistical analysis of queries issued from IR evaluation campaigns, namely the Text REtrieval Conference (TREC) and the Conference and Labs of the Evaluation Forum (CLEF), devoted to different medical tasks within controlled evaluation settings. - We address the issue of answering PICO (Population, Intervention, Comparison and Outcome) clinical queries formulated within the Evidence-Based Medicine framework. The contributions of this part include (1) a new algorithm for query elicitation based on the semantic mapping of each facet of the query to a reference terminology, and (2) a new document ranking model based on a prioritized aggregation operator. We tackle the issue of retrieving the best evidence that fits a PICO question, which is an underexplored research area. We propose a new document ranking algorithm that relies on semantic query expansion leveraged by each question facet. The expansion is moreover bounded by the local search context to better discard irrelevant documents. The experimental evaluation carried out on the CLIREC dataset shows the benefit of our approaches.
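The abstract names a prioritized aggregation operator without defining it. One classical choice is Yager's prioritized averaging, in which each facet's weight is induced by how well the higher-priority facets are satisfied; the sketch below uses this operator with invented facet scores, and is not claimed to be the thesis' exact model:

```python
# Sketch of Yager-style prioritized averaging over PICO facet scores,
# ordered by priority (Population > Intervention > Comparison > Outcome).
# Both the operator choice and the example scores are illustrative.

def prioritized_avg(scores):
    """scores: per-facet satisfaction in [0, 1], highest priority first.
    A weak high-priority score damps the weight of every facet below it."""
    weights, w = [], 1.0
    for s in scores:
        weights.append(w)
        w *= s                     # importance inherited from above
    total = sum(weights)
    return sum(wi * si for wi, si in zip(weights, scores)) / total

# Ranking two candidate documents by their four facet scores:
doc_a = prioritized_avg([0.9, 0.8, 0.7, 0.6])   # strong on Population
doc_b = prioritized_avg([0.4, 0.9, 0.9, 0.9])   # weak on Population
```

Under this operator the document weak on the top-priority Population facet (`doc_b`) is penalized even though its lower facets score well, matching the intuition that Comparison or Outcome matches cannot compensate for the wrong population.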
APA, Harvard, Vancouver, ISO, and other styles
50

He, Xiong-wei. "Apport de l'analyse de raisonnement dans la conception d'un tutoriel intelligent indépendant du domaine d'expertise." Lyon, INSA, 1992. http://www.theses.fr/1992ISAL0037.

Full text
Abstract:
Cette thèse aborde le problème de l'utilisation à des fins pédagogiques des connaissances contenues dans un système expert. Nous avons étudié les différents aspects de la conception d'un tutoriel intelligent capable d'organiser l'apprentissage à partir d'une base de connaissances (BC) sans être lié à un domaine particulier. Grâce à une analyse préalable de la BC, on peut dégager un ensemble de sous-arbres pouvant être considérés comme des cas déduits du savoir de l'expert. Nous avons introduit le concept de chemin de raisonnement pour représenter le raisonnement que ferait l'expert par rapport à un cas donné. L'ensemble des chemins de raisonnement d'une BC constitue la base d'apprentissage. Dans le tutoriel que nous avons baptisé PEDASYS, trois modes d' apprentissage sont conçus : 1) le mode Exploration permet à l'élève de découvrir à sa guise, l'ensemble des connaissances mises en œuvre dans une BC. 2) le mode question offre à l'apprenant la possibilité de poser des question sur des sujets Qui l'intéressent. 3) le mode Exercice utilise la technique de l'étude de cas permettant à l'élève d'effectuer un apprentissage plus systématique. Dans PEDASYS, un modèle de l'élève est construit au fur et à mesure que l'apprentissage progresse. Ce modèle est utilisé pour sélectionner un nouveau cas à étudier par l'élève. C'est ici qu'interviennent les stratégies pédagogiques telles que : trouver Un contre-exemple, faire travailler plus sur une règle particulière correspondant à un point faible de l'élève, sélectionner un cas Qui attire le plus son attention, etc. .
This thesis addresses the problem of using the knowledge base of an expert system for tutoring purposes. We have studied different aspects of the design of a domain-independent tutoring system capable of organizing learning courses from a knowledge base (KB). Through a preliminary analysis of the knowledge base, we can obtain a set of reasoning trees which can be considered as a set of cases deduced from the expert's knowledge. We have introduced the concept of a reasoning path to represent the expert's reasoning in a particular case. The set of reasoning paths of a KB can be used as the material for learning. In our tutoring system, named PEDASYS, three learning modes have been designed: 1) the Exploration mode lets the learner freely discover all the expert knowledge in the KB; 2) the Query mode offers the learner the possibility to ask questions about subjects of interest to him; 3) the Training mode uses the case-study method to organize more systematic learning. In PEDASYS, a student model is constructed and updated as learning progresses. This model is used to choose new cases to study. Pedagogical strategies come into play here, for instance: finding a counter-example, having the learner work more on a particular rule corresponding to one of his weak points, or choosing the case that best holds his attention.
APA, Harvard, Vancouver, ISO, and other styles