Dissertations / Theses on the topic 'Medical Text'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Medical Text.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Drusiani, Alberto. "Deep Learning Text Classification for Medical Diagnosis." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17281/.

Full text
Abstract:
The ICD coding is the international standard for the classification of diseases and related disorders, drawn up by the World Health Organization. It was introduced to simplify the exchange of medical data, to speed up statistical analyses and to make insurance reimbursements efficient. The manual classification of ICD-9-CM codes still requires a human effort that implies a considerable waste of resources, and for this reason several methods have been presented over the years to automate the process. In this thesis an approach is proposed for the automatic classification of medical diagnoses into ICD-9-CM codes using recurrent neural networks, in particular the LSTM module, and exploiting word embeddings. The results were satisfactory, as we were able to obtain better accuracy than Support Vector Machines, the most widely used traditional method. Furthermore, we have shown the effectiveness of domain-specific embedding models compared to general ones.
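The kind of model this abstract describes can be illustrated with a minimal PyTorch sketch: an embedding layer feeds an LSTM, and the final hidden state is projected onto the set of ICD-9-CM codes. This is not the thesis's code; the vocabulary size, dimensions and number of codes below are placeholder assumptions.

```python
import torch
import torch.nn as nn

class DiagnosisClassifier(nn.Module):
    """Toy LSTM text classifier in the spirit of the approach above."""
    def __init__(self, vocab_size=20000, embed_dim=100, hidden_dim=128, n_codes=500):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_codes)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)      # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)     # final hidden state per sequence
        return self.out(h_n[-1])              # logits over the ICD-9 code set

# A batch of 4 diagnosis texts, each encoded as 30 token ids.
logits = DiagnosisClassifier()(torch.randint(1, 20000, (4, 30)))
print(logits.shape)  # torch.Size([4, 500])
```

In practice the embedding layer would be initialised with pretrained word vectors (the thesis found domain-specific embeddings to outperform general ones) and the logits trained with cross-entropy against the gold code.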
APA, Harvard, Vancouver, ISO, and other styles
2

Sætre, Rune. "GeneTUC: Natural Language Understanding in Medical Text." Doctoral thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-545.

Full text
Abstract:

Natural Language Understanding (NLU) is a 50-year-old research field, but its application to molecular biology literature (BioNLU) is less than 10 years old. After the complete human genome sequence was published by the Human Genome Project and Celera in 2001, there has been an explosion of research, shifting the NLU focus from domains like news articles to the domain of molecular biology and medical literature. BioNLU is needed, since almost 2000 new articles are published and indexed every day, and biologists need to know about existing knowledge regarding their own research. So far, BioNLU results are not as good as in other NLU domains, so more research is needed to solve the challenges of creating useful NLU applications for biologists.

The work in this PhD thesis is a “proof of concept”. It is the first to show that an existing Question Answering (QA) system can be successfully applied in the hard BioNLU domain, after the essential challenge of unknown entities is solved. The core contribution is a system that discovers and classifies unknown entities and relations between them automatically. The World Wide Web (through Google) is used as the main resource, and the performance is almost as good as other named entity extraction systems, but the advantage of this approach is that it is much simpler and requires less manual labor than any of the other comparable systems.

The first paper in this collection gives an overview of the field of NLU and shows how the Information Extraction (IE) problem can be formulated with Local Grammars. The second paper uses Machine Learning to automatically recognize protein names based on features from the GSearch Engine. In the third paper, GSearch is substituted with Google, and the task is to extract all unknown names belonging to one of 273 biomedical entity classes, such as genes, proteins and processes. After getting promising results with Google, the fourth paper shows that this approach can also be used to retrieve interactions or relationships between the named entities. The fifth paper describes an online implementation of the system, and shows that the method scales well to a larger set of entities.

The final paper concludes the “proof of concept” research, and shows that the performance of the original GeneTUC NLU system has increased from handling 10% of the sentences in a large collection of abstracts in 2001, to 50% in 2006. This is still not good enough to create a commercial system, but it is believed that another 40% performance gain can be achieved by importing more verb templates into GeneTUC, just as nouns were imported during this work. Work has already begun on this, in the form of a local Master's thesis.

APA, Harvard, Vancouver, ISO, and other styles
3

Neamatullah, Ishna. "Automated de-identification of free-text medical records." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/41622.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 62-64).
This paper presents a de-identification study at the Harvard-MIT Division of Health Science and Technology (HST) to automatically de-identify confidential patient information from text medical records used in intensive care units (ICUs). Patient records are a vital resource in medical research. Before such records can be made available for research studies, protected health information (PHI) must be thoroughly scrubbed according to HIPAA specifications to preserve patient confidentiality. Manual de-identification on large databases tends to be prohibitively expensive, time-consuming and prone to error, making a computerized algorithm an urgent need for large-scale de-identification purposes. We have developed an automated pattern-matching de-identification algorithm that uses medical and hospital-specific information. The current version of the algorithm has an overall sensitivity of around 0.87 and an approximate positive predictive value of 0.63. In terms of sensitivity, it performs significantly better than one human de-identifier (0.81) but not quite as well as a consensus of two human de-identifiers (0.94). The algorithm will be published as open-source software, and the de-identified medical records will be incorporated into HST's Multi-Parameter Intelligent Monitoring for Intensive Care (MIMIC II) physiologic database.
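Pattern matching over known PHI formats, as described above, can be sketched in a few lines. This toy example is not the published algorithm; real HIPAA de-identification covers many more PHI categories and draws on hospital-specific name and ID lists.

```python
import re

# Hypothetical patterns for three PHI categories; a real scrubber needs many more.
PHI_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(note: str) -> str:
    """Replace each PHI match with a bracketed category placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

print(scrub("Pt seen 03/12/2006, MRN: 4521789, callback 617-555-0199."))
# -> Pt seen [DATE], [MRN], callback [PHONE].
```

The sensitivity and positive predictive value reported above then follow from counting how many true PHI instances such patterns catch, and how many non-PHI tokens they scrub by mistake.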
by Ishna Neamatullah.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
4

Savkov, Aleksandar Dimitrov. "Deciphering clinical text : concept recognition in primary care text notes." Thesis, University of Sussex, 2017. http://sro.sussex.ac.uk/id/eprint/68232/.

Full text
Abstract:
Electronic patient records, containing data about the health and care of a patient, are a valuable source of information for longitudinal clinical studies. The General Practice Research Database (GPRD) has collected patient records from UK primary care practices since the late 1980s. These records contain both structured data (in the form of codes and numeric values) and free text notes. While the structured data have been used extensively in clinical studies, there are significant practical obstacles in extracting information from the free text notes. The main obstacles are data access restrictions, due to the presence of sensitive information, and the specific language of medical practitioners, which renders standard language processing tools ineffective. The aim of this research is to investigate approaches for computer analysis of free text notes. The research involved designing a primary care text corpus (the Harvey Corpus) annotated with syntactic chunks and clinically-relevant semantic entities, developing a statistical chunking model, and devising a novel method for applying machine learning for entity recognition based on chunk annotation. The tools produced would facilitate reliable information extraction from primary care patient records, needed for the development of clinically-related research. The three medical concept types targeted in this thesis could contribute to epidemiological studies by enhancing the detection of co-morbidities, and better analysing the descriptions of patient experiences and treatments. The main contributions of the research reported in this thesis are: guidelines for chunk and concept annotation of clinical text, an approach to maximising agreement between human annotators, the Harvey Corpus, a method for using a standard part-of-speech tagging model in clinical text chunking, and a novel approach to recognising clinically relevant medical concepts.
APA, Harvard, Vancouver, ISO, and other styles
5

Shu, Jennifer (Jennifer J. ). "Free text phrase encoding and information extraction from medical notes." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/37064.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 87-90).
The Laboratory for Computational Physiology is collecting a large database of patient signals and clinical data from critically ill patients in hospital intensive care units (ICUs). The data will be used as a research resource to support the development of an advanced patient monitoring system for ICUs. Important pathophysiologic events in the patient data streams must be recognized and annotated by expert clinicians in order to create a "gold standard" database for training and evaluating automated monitoring systems. Annotating the database requires, among other things, analyzing and extracting important clinical information from textual patient data such as nursing admission and progress notes, and using the data to define and document important clinical events during the patient's ICU stay. Two major text-related annotation issues are addressed in this research. First, the documented clinical events must be described in a standardized vocabulary suitable for machine analysis. Second, an advanced monitoring system would need an automated way to extract meaning from the nursing notes, as part of its decision-making process. The thesis presents and evaluates methods to code significant clinical events into standardized terminology and to automatically extract significant information from free-text medical notes.
by Jennifer Shu.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
6

Civera, Saiz Jorge. "An evaluation of alternative strategies for clustering genes from medical text." Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/8664.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Leong, Elaine. "Medical recipe collections in seventeenth-century England : knowledge, text and gender." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.432177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Leroy, Gondy, Hsinchun Chen, Jesse D. Martinez, Shauna Eggers, Ryan R. Falsey, Kerri L. Kislin, Zan Huang, et al. "Genescene: Biomedical Text And Data Mining." Wiley Periodicals, Inc, 2005. http://hdl.handle.net/10150/105791.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
To access the content of digital texts efficiently, it is necessary to provide more sophisticated access than keyword-based searching. Genescene provides biomedical researchers with research findings and background relations automatically extracted from text and experimental data. These provide a more detailed overview of the information available. The extracted relations were evaluated by qualified researchers and are precise. A qualitative ongoing evaluation of the current online interface indicates that this method of searching the literature is more useful and efficient than keyword-based searching.
APA, Harvard, Vancouver, ISO, and other styles
9

Finch, Dezon K. "TagLine: Information Extraction for Semi-Structured Text Elements In Medical Progress Notes." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4321.

Full text
Abstract:
Text analysis has become an important research activity in the Department of Veterans Affairs (VA). Statistical text mining and natural language processing have been shown to be very effective for extracting useful information from medical documents. However, neither of these techniques is effective at extracting the information stored in semi-structured text elements. A prototype system (TagLine) was developed as a method for extracting information from the semi-structured portions of text using machine learning. Features for the learning machine were suggested by prior work, as well as by examining the text and selecting those attributes that help distinguish the various classes of text lines. The classes were derived empirically from the text and guided by an ontology developed by the Consortium for Health Informatics Research (CHIR), a nationwide research initiative focused on medical informatics. Decision trees and Levenshtein approximate string matching techniques were tested and compared on 5,055 unseen lines of text. The performance of the decision tree method was found to be superior to the fuzzy string match method on this task. Decision trees achieved an overall accuracy of 98.5 percent, while the string match method only achieved an accuracy of 87 percent. Overall, the results for line classification were very encouraging. The labels applied to the lines were used to evaluate TagLine's performance for identifying the semi-structured text elements, including tables, slots and fillers. Results for slots and fillers were impressive, while the results for tables were also acceptable.
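Line classification of this kind can be sketched with scikit-learn: each line of a note is turned into a dictionary of surface features and fed to a decision tree. The feature set and labels here are hypothetical stand-ins for the attributes and classes the thesis derived.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

def line_features(line):
    """A few illustrative surface cues that separate structured lines from prose."""
    return {
        "length": len(line),
        "n_digits": sum(c.isdigit() for c in line),
        "n_colons": line.count(":"),
        "starts_upper": line[:1].isupper(),
    }

lines = ["BP: 120/80", "Patient resting comfortably in bed.",
         "Temp: 98.6", "Ambulating well, no complaints voiced."]
labels = ["slot_filler", "prose", "slot_filler", "prose"]

vec = DictVectorizer()
X = vec.fit_transform(line_features(l) for l in lines)
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
print(clf.predict(vec.transform([line_features("HR: 72")])))
```

The alternative the thesis compares against, Levenshtein approximate string matching, would instead label a new line by its edit distance to prototype lines of each class.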
APA, Harvard, Vancouver, ISO, and other styles
10

Alanazi, Saad. "A Named Entity Recognition system applied to Arabic text in the medical domain." Thesis, Staffordshire University, 2017. http://eprints.staffs.ac.uk/3129/.

Full text
Abstract:
Currently, 30-35% of the global population uses the Internet. Furthermore, there is a rapidly increasing number of non-English-language internet users, accompanied by an also increasing amount of unstructured text online. One area replete with underexploited online text is the Arabic medical domain, and one method that can be used to extract valuable data from Arabic medical texts is Named Entity Recognition (NER). NER is the process by which a system can automatically detect and categorise Named Entities (NE). NER has numerous applications in many domains, and medical texts are no exception. NER applied to the medical domain could assist in detection of patterns in medical records, allowing doctors to make better diagnoses and treatment decisions, enabling medical staff to quickly assess a patient's records and ensuring that patients are informed about their data, to name just a few examples. However, all these applications would require a very high level of accuracy. To improve the accuracy of NER in this domain, new approaches need to be developed that are tailored to the types of named entities to be extracted and categorised. In an effort to solve this problem, this research applied Bayesian Belief Networks (BBN) to the process. BBN, a probabilistic model for prediction of random variables and their dependencies, can be used to detect and predict entities. The aim of this research is to apply BBN to the NER task to extract relevant medical entities such as disease names, symptoms, treatment methods, and diagnosis methods from modern Arabic texts in the medical domain. To achieve this aim, a new corpus related to the medical domain has been built and annotated. Our BBN approach achieved 96.60% precision, 90.79% recall, and a 93.60% F-measure for the disease entity, while for the treatment method entity it achieved 69.33% precision, 70.99% recall, and a 70.15% F-measure. For the diagnosis method and symptom categories, our system achieved precision of 84.91% and 71.34%, recall of 53.36% and 49.34%, and F-measures of 65.53% and 58.33%, respectively. Our BBN strategy achieved good accuracy for NEs in the categories of disease and treatment method. However, the average word length of the other two NE categories observed, diagnosis method and symptom, may have had a negative effect on their accuracy. Overall, the application of BBN to Arabic medical NER is successful, but more development is needed to improve accuracy to a standard at which the results can be applied to real medical systems.
APA, Harvard, Vancouver, ISO, and other styles
11

Frunza, Oana Magdalena. "Personalized Medicine through Automatic Extraction of Information from Medical Texts." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22724.

Full text
Abstract:
The wealth of medical-related information available today gives rise to a multidimensional source of knowledge. Research discoveries published in prestigious venues, electronic health records data, discharge summaries, clinical notes, etc., all represent important medical information that can assist in the medical decision-making process. The challenge that comes with accessing and using such vast and diverse sources of data lies in the ability to distil and extract reliable and relevant information. Computer-based tools that use natural language processing and machine learning techniques have proven to help address such challenges. This work proposes automatic, reliable solutions for solving tasks that can help achieve personalized medicine, a medical practice that brings together general medical knowledge and case-specific medical information. Phenotypic medical observations, along with data coming from test results, are not enough when assessing and treating a medical case. Genetic, lifestyle, background and environmental data also need to be taken into account in the medical decision process. This thesis's goal is to prove that natural language processing and machine learning techniques represent reliable solutions for solving important medical-related problems. From the numerous research problems that need to be answered when implementing personalized medicine, the scope of this thesis is restricted to four, as follows: 1. Automatic identification of obesity-related diseases by using only textual clinical data; 2. Automatic identification of relevant abstracts of published research to be used for building systematic reviews; 3. Automatic identification of gene functions based on textual data of published medical abstracts; 4. Automatic identification and classification of important medical relations between medical concepts in clinical and technical data. This thesis's investigation of automatic solutions for achieving personalized medicine through information identification and extraction focused on individual problems that can later be linked in a puzzle-building manner. A diverse representation technique that follows a divide-and-conquer methodological approach proves to be the most reliable solution for building automatic models that solve the above-mentioned tasks. The methodologies that I propose are supported by in-depth research experiments and thorough discussions and conclusions.
APA, Harvard, Vancouver, ISO, and other styles
12

Nääs, Johanna, and Emma Thurfjell. "Språklig komplexitet och narrativ struktur i skriven text : En jämförelse mellan elever med och utan lässvårigheter." Thesis, Umeå universitet, Logopedi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-148330.

Full text
Abstract:
Background: In year 3 of the Swedish compulsory school, the national tests in Swedish are taken, and one of the subtests is to write a narrative text. When narrative texts are analysed in research, linguistic complexity and narrative structure can be examined. Investigating which parts of writing may be difficult is of value for adapting teaching to different groups of pupils. Aim: The aim of the present study was to examine linguistic complexity and narrative structure in texts written by pupils with and without reading difficulties. Method: 63 pupils participated in the study. They were divided into four groups, three of which have some type of reading difficulty: decoding difficulties, reading comprehension difficulties, or mixed reading difficulties (difficulties with both decoding and reading comprehension). Linguistic complexity and narrative structure were analysed in the texts the pupils had written in year 3. An analysis of variance was performed to examine whether there was any difference between the four groups. Results: The results showed a significant difference between pupils with and without reading difficulties on the productivity measures number of words and number of utterances. There was also a significant difference in narrative structure between the group with reading comprehension difficulties and the group without reading difficulties. Conclusions: No general conclusions can be drawn from the results, but they can provide some guidance on how teaching can be adapted for different groups of pupils. For the parts of writing in which the pupil groups differ, support measures for specific groups may be of value so that all pupils reach the goals of the Swedish compulsory school.
Lässvårigheter, språklig förmåga och skolresultat i tidiga skolår (Reading difficulties, language ability and school results in the early school years)
APA, Harvard, Vancouver, ISO, and other styles
13

Rios, Anthony. "Deep Neural Networks for Multi-Label Text Classification: Application to Coding Electronic Medical Records." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/71.

Full text
Abstract:
Coding Electronic Medical Records (EMRs) with diagnosis and procedure codes is an essential task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors can lead to greater patient-side financial burden and misinterpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. Therefore, it is necessary to develop automated diagnosis and procedure code recommendation methods that can be used by professional medical coders. The main difficulty with developing automated EMR coding methods is the nature of the label space. The standardized vocabularies used for medical coding contain over 10,000 codes. The label space is large, and the label distribution is extremely unbalanced: most codes occur very infrequently, a few codes occur several orders of magnitude more often than others, and some codes never occur in the training dataset at all. In this work, we present three methods to handle the large unbalanced label space. First, we study how to augment EMR training data with biomedical data (research articles indexed on PubMed) to improve the performance of standard neural networks for text classification. PubMed indexes more than 23 million citations, and many of the indexed articles contain relevant information about diagnosis and procedure codes. Therefore, we present a novel method of incorporating this unstructured PubMed data using transfer learning. Second, we combine ideas from metric learning with recent advances in neural networks to form a novel neural architecture that better handles infrequent codes. And third, we present new methods to predict codes that have never appeared in the training dataset. Overall, our contributions constitute advances in neural multi-label text classification with potential consequences for improving EMR coding.
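EMR coding is a multi-label problem: one note can carry several codes at once. A minimal baseline sketch (deliberately far simpler than the neural, transfer-learning and zero-shot methods contributed above) trains one binary classifier per code over TF-IDF features; the notes and codes below are invented toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy stand-ins for discharge summaries and their diagnosis codes.
notes = ["chest pain and shortness of breath",
         "fracture of left femur after a fall",
         "shortness of breath with wheezing"]
codes = [["786.50", "786.05"], ["821.01"], ["786.05", "493.90"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(codes)               # one binary indicator column per code
vec = TfidfVectorizer()
X = vec.fit_transform(notes)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(vec.transform(["sudden chest pain"]))
print(mlb.inverse_transform(pred))
```

The label-space problems the thesis targets show up immediately in such a baseline: a per-code classifier cannot be trained for a code with no positive training examples, which is exactly the zero-shot case the third contribution addresses.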
APA, Harvard, Vancouver, ISO, and other styles
14

Nicholls, Joseph Anthony. "Text search : information-seeking strategies using paper and CD-ROM versions of a medical textbook." Thesis, University of Leeds, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Bhooshan, Neha. "Classification of semantic relations in different syntactic structures in medical text using the MeSH hierarchy." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33111.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (leaf 38).
Two different classification algorithms are evaluated in recognizing semantic relationships of different syntactic compounds. The compounds, which include noun-noun, adjective-noun, noun-adjective, noun-verb, and verb-noun, were extracted from a set of doctors' notes using a part-of-speech tagger and a parser. Each compound was labeled with a semantic relationship, and each word in the compound was mapped to its corresponding entry in the MeSH hierarchy. MeSH includes only medical terminology, so it was extended to include everyday, non-medical terms. The two classification algorithms, neural networks and a classification tree, were trained and tested on the data set for each type of syntactic compound. Models representing different levels of MeSH were generated and fed into the neural networks. Both algorithms performed better than random guessing, and the classification tree performed better than the neural networks in predicting the semantic relationship between phrases from their syntactic structure.
by Neha Bhooshan.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
16

Fioramonte, Amy. "A Study of Pragmatic Competence: International Medical Graduates' and Patients' Negotiation of the Treatment Phase of Medical Encounters." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5478.

Full text
Abstract:
Despite advances in medical technologies, interpersonal communication remains the primary tool physicians use to exchange information, make diagnoses, and treat patients (Cameron & Williams, 1997; Groopman, 2007; Ong, de Haes, Hoos, & Lammes, 1995). In the medical encounter effective communication between physician and patient is essential so that beneficial health and wellbeing outcomes are achieved for patients. Taking a discourse analytic approach, this study examined interactions occurring between international medical graduate (IMG) residents, attending physicians, and patients during the treatment advice phase of the supervised medical encounter. The aim of the study was to examine the co-constructed nature of the delivery and receipt of treatment advice and the ways in which physicians and patients managed interpersonal relations through the negotiated activity. The theoretical framework of pragmatic competence was utilized to underpin the study. Physician-patient interactions served as the primary data source. Medical encounter interactions between five different IMG residents and 31 patients were observed and audio-recorded. Observations and a post-medical encounter survey completed by patients served as secondary data sources. The analysis of the data revealed that this medical speech activity embedded within the medical encounter was realized through the use of a variety of discourse strategies and contributions from multiple participants as they attended to the interpersonal and transactional goals associated with the delivery and receipt of treatment advice. Findings provided insights into how multi-party discourse worked to jointly construct and negotiate treatment recommendations. Findings indicated that IMG residents utilized indirect advice giving strategies. Additionally, both IMG residents and patients utilized interrogatives in various ways to engage actively in the treatment decision-making process. Finally, the data revealed how the participants attended to each other's face needs as they worked to enhance, maintain, or challenge face through the dynamic process of negotiating relationships.
APA, Harvard, Vancouver, ISO, and other styles
17

Sethi, Iccha. "Clinician Decision Support Dashboard: Extracting value from Electronic Medical Records." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/41894.

Full text
Abstract:
Medical records are rapidly being digitized to electronic medical records. Although Electronic Medical Records (EMRs) improve administration, billing, and logistics, an open research problem remains as to how doctors can leverage EMRs to enhance patient care. This thesis describes a system that analyzes a patient's evolving EMR in context with available biomedical knowledge and the accumulated experience recorded in various text sources, including the EMRs of other patients. The aim of the Clinician Decision Support (CDS) Dashboard is to provide interactive, automated, actionable EMR text-mining tools that help improve both the patient and clinical care staff experience. The CDS Dashboard, in a secure network, helps physicians find de-identified electronic medical records similar to their patient's medical record, thereby aiding them in diagnosis, treatment, prognosis and outcomes. It is of particular value in cases involving complex disorders, and it also allows physicians to explore relevant medical literature, recent research findings, clinical trials and medical cases. A pilot study done with medical students at the Virginia Tech Carilion School of Medicine and Research Institute (VTC) showed that 89% of them found the CDS Dashboard to be useful in aiding patient care for doctors and 81% of them found it useful for aiding medical students pedagogically. Additionally, over 81% of the medical students found the tool user friendly. The CDS Dashboard is constructed using a multidisciplinary approach including computer science, medicine, biomedical research, and human-machine interfacing. Our multidisciplinary approach, combined with the high usability scores obtained from VTC, indicated the CDS Dashboard has a high potential value to clinicians and medical students.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
18

Christiansen, Ammon J. "Finding Relevant PDF Medical Journal Articles by the Content of Their Figures as well as Their Text." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/872.

Full text
Abstract:
This work addresses the need for an alternative to keyword-based search for sifting through large collections of PDF medical journal articles for literature review purposes. Despite users' best efforts to form precise and accurate queries, it is often difficult to guess the right keywords to find all the related articles while finding a minimum number of unrelated ones. Failure to find relevant, related research during literature review results in wasted research time and effort, in addition to missing significant work in the related area, which could affect the quality of the research being conducted. The purpose of this work is to explore the benefits of a retrieval system for professional journal articles in PDF format that supports hybrid queries composed of both text and images. PDF medical journal articles contain formatting and layout information that imply the structure and organization of the document. They also contain figures and tables rich with content and meaning. Stripping a PDF into “full-text” for indexing purposes disregards these important features. Specifically, this work investigated the following: (1) what effect the incorporation of a document's embedded figures into the query (in addition to its text) has on retrieval performance (precision) compared to plain keyword-based search; (2) how current text-based document-query similarity methods can be enhanced by using formatting and font-size information as a structure and organization model for a PDF document; (3) whether to use the standard Euclidean distance function or the matrix distance function for content-based image retrieval; (4) how to convert a pure-layout PDF document into a structured, formatted, reflowable XML representation; (5) what document views (such as a term frequency cloud, a document outline, or a document's figures) would help users wade through search results to quickly select those that are worth a closer look. While the results of the experiments were unexpectedly worse than their baselines of comparison (see the conclusion for a summary), the experimental methods are valuable in showing others which directions have already been pursued, why they did not work, and what remaining problems need to be solved in order to achieve the goal of improving literature review through use of a hybrid text and image retrieval system.
APA, Harvard, Vancouver, ISO, and other styles
19

Hozayen, Ghada. "A study of the discussion section of the medical research article : a genre based approach to text analysis." Thesis, University of Birmingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Hellström, Karlsson Rebecca. "Aiding Remote Diagnosis with Text Mining." Thesis, KTH, Människa och Kommunikation, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215760.

Full text
Abstract:
The topic of this thesis is how text mining could be used on patient-reported symptom descriptions, and how it could be used to aid doctors in their diagnostic process. Healthcare delivery today struggles to provide care to remote settings, and costs are increasing together with the aging population. It is unknown how much text mining on patient descriptions can aid doctors. Investigating whether text mining can aid doctors by presenting additional information, based on what patients with descriptions similar to the current patient's have written, could be relevant to many settings in healthcare. It has the potential to improve the quality of care in remote settings and increase the number of patients treated with the limited resources available. In this work, patient texts were represented using the Bag-of-Words model and clustered using the k-means algorithm. The final clustering model used 41 clusters, and the ten most important words for each cluster centroid were used as representative words for the cluster. An experiment was then performed to gauge how doctors were aided in their diagnostic process when patient texts were paired with these additional words. The results were that the words aided doctors in difficult patient cases and that the clustering algorithm can be used to provide the current patient with specific follow-up questions.
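The pipeline described (Bag-of-Words vectors, k-means clustering, top-ten centroid terms) maps directly onto scikit-learn. A small sketch under assumed toy data; the thesis used 41 clusters over real symptom descriptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for patient-reported symptom descriptions.
texts = ["headache and fever for two days",
         "itchy rash on left arm",
         "high fever and sore throat",
         "red itchy skin rash spreading"]

vectorizer = CountVectorizer()          # Bag-of-Words representation
X = vectorizer.fit_transform(texts)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The most important words per cluster are those with the largest centroid weights.
terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(km.cluster_centers_):
    top = [terms[j] for j in centroid.argsort()[::-1][:10]]
    print(f"cluster {i}: {top}")
```

Showing a doctor the top terms of the cluster a new patient text falls into (via km.predict) is the mechanism by which the thesis pairs a description with additional words.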
APA, Harvard, Vancouver, ISO, and other styles
21

Ellis, Olga. "Voice vs. Text Chats: Their Efficacy for Learning Probing Questions by Non-Native Speaking Medical Professionals in Online Courses." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/228635.

Full text
Abstract:
Through an English for Specific Purposes (ESP): Communication in Nursing online course, the present study examines the efficacy of synchronous voice-based and text-based chats as instructional and communicative modes in learning to use open questions for probing in therapeutic dialogues by non-native speaking (NNS) participants, students of a nursing college at a major university in the Philippines. The study draws on a wealth of research findings in online education, ESP online course design, text-based vs. voice-based synchronous chats and their place in online learning, the efficacy and application of text- and voice-based communicative practices in online courses designed for NNS students, and issues related to medical discourse, humanization, and patient-centeredness of communicative encounters (e.g., between a nurse/provider and a patient/client). The study examines the following questions: (1) which interactional mode - voice or text - provides for better learning of probing questions by NNS medical professionals through noticing of their use in therapeutic dialogues and situations typical of everyday healthcare-related communicative settings in an online course; (2) what evidence there is to suggest that the skill of using open questions for probing in role-plays of therapeutic dialogues by NNS medical professionals developed through text-based practices in an online course might transfer to their speech and vice versa; (3) which interactional mode - voice or text - is perceived by the online-course participants as more effective for learning to use probing questions in therapeutic dialogues and healthcare-related communicative encounters. The results of the analyses supported many of the hypotheses for both research conditions. More specifically, they supported the predicted efficacy of both forms of online instruction and communication - voice-based and text-based - in the learning of probing techniques by the online course participants; furthermore, a possibility of two-way language-skill transfer - from text to speech and from speech to text - was suggested in learning a second language online through synchronous chat sessions. Although more research is necessary in the above-mentioned areas of language learning in the context of online education, the findings of the present study are highly suggestive of effective implementation of voice-based and text-based synchronous chats in ESP online course designs for NNS students.
APA, Harvard, Vancouver, ISO, and other styles
22

Jabbour, Georgette. "Corpus linguistics, contextual collocation and ESP syllabus creation : a text-analysis approach to the study of medical research articles." Thesis, University of Birmingham, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Isenius, Niklas, Sumithra Velupillai, and Maria Kvist. "Initial Results in the Development of SCAN : a Swedish Clinical Abbreviation Normalizer." Stockholms universitet, Institutionen för data- och systemvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-82245.

Full text
Abstract:
Abbreviations are common in clinical documentation, as this type of text is written under time-pressure and serves mostly for internal communication. This study attempts to apply and extend existing rule-based algorithms that have been developed for English and Swedish abbreviation detection, in order to create an abbreviation detection algorithm for Swedish clinical texts that can identify and suggest definitions for abbreviations and acronyms. This can be used as a pre-processing step for further information extraction and text mining models, as well as for readability solutions. Through a literature review, a number of heuristics were defined for automatic abbreviation detection. These were used in the construction of the Swedish Clinical Abbreviation Normalizer (SCAN). The heuristics were: a) freely available external resources: a dictionary of general Swedish, a dictionary of medical terms and a dictionary of known Swedish medical abbreviations, b) maximum word lengths (from three to eight characters), and c) heuristics for handling common patterns such as hyphenation. For each token in the text, the algorithm checks whether it is a known word in one of the lexicons, and whether it fulfills the criteria for word length and the created heuristics. The final algorithm was evaluated on a set of 300 Swedish clinical notes from an emergency department at the Karolinska University Hospital, Stockholm. These notes were annotated for abbreviations, a total of 2,050 tokens. This set was annotated by a physician accustomed to reading and writing medical records. The algorithm was tested in different variants, where the word lists were modified, heuristics adapted to characteristics found in the texts, and different combinations of word lengths. The best performing version of the algorithm achieved an F-Measure score of 79%, with 76% recall and 81% precision, which is a considerable improvement over the baseline where each token was only matched against the word lists (51% F-measure, 87% recall, 36% precision). Not surprisingly, precision results are higher when the maximum word length is set to the lowest (three), and recall results higher when it is set to the highest (eight). Algorithms for rule-based systems, mainly developed for English, can be successfully adapted for abbreviation detection in Swedish medical records. System performance relies heavily on the quality of the external resources, as well as on the created heuristics. In order to improve results, part-of-speech information and/or local context is needed for disambiguation. In the case of Swedish, compounding also needs to be handled.
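The heuristics enumerated above translate into a short candidate filter: a token is flagged as a possible abbreviation only if it is absent from the word lists and within the length bound. This is a sketch of the idea, not SCAN itself; the word lists here are tiny placeholders for the full Swedish dictionaries the system used.

```python
# Placeholder lexicons; SCAN used a general Swedish dictionary, a medical
# dictionary, and a list of known Swedish medical abbreviations.
GENERAL_WORDS = {"patient", "inkom", "och", "med"}
MEDICAL_TERMS = {"hjärtinfarkt"}
KNOWN_ABBREVS = {"ekg": "elektrokardiogram", "pat": "patient"}
MAX_LEN = 6  # SCAN was evaluated with maximum word lengths from three to eight

def detect_abbreviations(tokens):
    hits = []
    for tok in tokens:
        word = tok.lower().strip(".,:;")
        if word in GENERAL_WORDS or word in MEDICAL_TERMS:
            continue                      # a known full word, not an abbreviation
        if word in KNOWN_ABBREVS:
            hits.append((tok, KNOWN_ABBREVS[word]))   # suggest a definition
        elif 0 < len(word) <= MAX_LEN and word.isalpha():
            hits.append((tok, None))                  # candidate, no expansion known
    return hits

print(detect_abbreviations("pat inkom med ekg-förändringar".split()))
# -> [('pat', 'patient')]  (the hyphenated compound fails the isalpha/length checks)
```

As the evaluation above shows, the trade-off is governed by the maximum word length: a low bound favours precision, a high bound favours recall.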
APA, Harvard, Vancouver, ISO, and other styles
24

Håkansson, Susanne. "Health and Place : Terminology, proper nouns and titles of cited publications in the translation of a text on medical geology." Thesis, Linnaeus University, School of Language and Literature, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-8227.

Full text
Abstract:

This essay deals with some of the difficulties that translation of a technical text may present, more specifically the handling of terminology, proper nouns and titles of cited publications. For this purpose, a text dealing with medical geology, taken from Essentials of Medical Geology (Selinus et al., 2005), was translated and analysed.

Medical geology is an interdisciplinary science and hence contains terminology from several different scientific areas. The present study includes terminology from the fields of medicine and geochemistry in the analysis. The preferred and predominant translation procedure was literal translation (Munday, 2001:57). Many source text terms have synonyms in the target language. With the intention of preserving and transferring the level of technical style into the target text, terms were analysed and classified as belonging to one of three levels of technical style: academic, professional and popular (Newmark, 1988:151). The handling of proper nouns connected to medicine and geology was also included in the analysis. One common procedure is to use a translation which is established in the target language. The present study discusses the strategies used when no such established translation was found. The procedure of using a recognised translation was discussed in connection with the handling of titles of cited publications referred to in the source text.

APA, Harvard, Vancouver, ISO, and other styles
25

Bustos, Aurelia. "Extraction of medical knowledge from clinical reports and chest x-rays using machine learning techniques." Doctoral thesis, Universidad de Alicante, 2019. http://hdl.handle.net/10045/102193.

Full text
Abstract:
This thesis addresses the extraction of medical knowledge from clinical text using deep learning techniques. In particular, the proposed methods focus on cancer clinical trial protocols and chest x-ray reports. The main results are a proof of concept of the capability of machine learning methods to discern which statements are regarded as inclusion or exclusion criteria in short free-text clinical notes, and a large-scale chest x-ray image dataset labeled with radiological findings, diagnoses and anatomic locations. Clinical trials provide the evidence needed to determine the safety and effectiveness of new medical treatments. These trials are the basis employed for clinical practice guidelines and greatly assist clinicians in their daily practice when making decisions regarding treatment. However, the eligibility criteria used in oncology trials are too restrictive. Patients are often excluded on the basis of comorbidity, past or concomitant treatments, or the fact they are over a certain age, and those patients that are selected do not, therefore, mimic clinical practice. This signifies that the results obtained in clinical trials cannot be extrapolated to patients whose clinical profiles were excluded from the clinical trial protocols. The efficacy and safety of new treatments for patients with these characteristics are not, therefore, defined. Given the clinical characteristics of particular patients, their type of cancer and the intended treatment, discovering whether or not they are represented in the corpus of available clinical trials requires the manual review of numerous eligibility criteria, which is impracticable for clinicians on a daily basis. In this thesis, a large medical corpus comprising all cancer clinical trial protocols published by competent authorities in the last 18 years was used to extract medical knowledge in order to help automatically learn patients' eligibility for these trials. For this, a model is built to automatically predict whether short clinical statements are considered inclusion or exclusion criteria. A method based on deep neural networks is trained on a dataset of 6 million short free texts to classify them as eligible or not eligible. For this, pretrained word embeddings were used as inputs in order to predict whether or not short free-text statements describing clinical information were considered eligible. The semantic reasoning of the word-embedding representations obtained was also analyzed, making it possible to identify equivalent treatments for a type of tumor by analogy with the drugs used to treat other tumors. Results show that representation learning using deep neural networks can be successfully leveraged to extract medical knowledge from clinical trial protocols and potentially assist practitioners when prescribing treatments. The second main task addressed in this thesis is related to knowledge extraction from medical reports associated with radiographs. Conventional radiology remains the most performed technique in radiodiagnosis services, with a percentage close to 75% (Radiología Médica, 2010). In particular, chest x-ray is the most common medical imaging exam, with over 35 million taken every year in the US alone (Kamel et al., 2017). They allow for inexpensive screening of several pathologies including masses, pulmonary nodules, effusions, cardiac abnormalities and pneumothorax.
For this task, all the chest x-rays that had been interpreted and reported by radiologists at the Hospital Universitario de San Juan (Alicante) from Jan 2009 to Dec 2017 were used to build a novel large-scale dataset in which each high-resolution radiograph is labeled with its corresponding metadata, radiological findings and pathologies. This dataset, named PadChest, includes more than 160,000 images obtained from 67,000 patients, covering six different position views and additional information on image acquisition and patient demography. The free-text reports written in Spanish by radiologists were labeled with 174 different radiographic findings, 19 differential diagnoses and 104 anatomic locations organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. For this, a subset of the reports (27%) was manually annotated by trained physicians, whereas the remaining set was automatically labeled with deep supervised learning methods using attention mechanisms fed with the text reports. The labels generated were then validated on an independent test set, achieving a 0.93 Micro-F1 score. To the best of our knowledge, this is one of the largest public chest x-ray databases suitable for training supervised models on radiographs, and also the first to contain radiographic reports in Spanish. The PadChest dataset can be downloaded on request from http://bimcv.cipf.es/bimcv-projects/padchest/. PadChest is intended for training image classifiers based on deep learning techniques to extract medical knowledge from chest x-rays. It is essential that automatic radiology reporting methods can be integrated, in a clinically validated manner, into radiologists' workflow in order to help specialists improve their efficiency and enable safer and actionable reporting. Computer vision methods capable of identifying the large spectrum of thoracic abnormalities (and also normality) need to be trained on comprehensively labeled large-scale x-ray datasets such as PadChest. The development of these computer vision tools, once clinically validated, could serve to fulfill a broad range of unmet needs. Beyond implementing and obtaining results for both clinical trials and chest x-rays, this thesis studies the nature of health data, the novelty of applying deep learning methods to obtain large-scale labeled medical datasets, and the relevance of these applications in medical research, which have contributed to its extramural diffusion and worldwide reach. This thesis describes this journey so that the reader is guided across multiple disciplines, from engineering to medicine and on to ethical considerations in artificial intelligence applied to medicine.
APA, Harvard, Vancouver, ISO, and other styles
26

Daidoji, Keiko. "What a household with sick persons should know : expressions of body and illness in a medical text of early nineteenth-century Japan." Thesis, SOAS, University of London, 2009. http://eprints.soas.ac.uk/29267/.

Full text
Abstract:
This thesis assesses the image and expressions of the body and illness in Japan during the Edo period (1603-1867) by examining a text on the cultivation of life, Byoka suchi (What a household with sick persons should know). A unique feature of Byoka suchi is its use of script combining Chinese characters and Japanese readings in the form of furigana. Furigana are conventionally employed to signal the pronunciation of Chinese characters, but the furigana in Byoka suchi function as a means of giving the author's translation, into the everyday native language, of medical terms traditionally written in Chinese characters, which were originally foreign to Japanese. This thesis particularly scrutinises the gap between the Chinese medical terms and their furigana glosses, as it shows how Chinese medicine was transmitted and imbibed by a Japanese physician in order to facilitate understanding by lay readers who had not made a formal study of medicine. The thesis consists of three main parts: the first part reviews the intellectual background of the cultivation-of-life culture in both China and Japan, with reference to some of the relevant insights of previous studies. The second part explores how the author's view of body and illnesses can be reconstructed from a close examination of furigana in the text. The third part is devoted to the translation of the first fascicle of Byoka suchi, which concisely represents the author's basic views on medicine, body and illnesses. The translation with meta-commentary will enable us to appreciate the effect of the Chinese character-furigana combinations, as well as to examine the essence of the physiology, pathology and medical ethics of the text. The principal contribution of this research to the field lies in reassessing how the Edo views of body and illnesses deviated from their Chinese counterparts. As a conceptual study, it will also shed light on the uses of special features of Japanese script in transmitting technical concepts into more colloquial and popular language.
APA, Harvard, Vancouver, ISO, and other styles
27

Williams, Laura Elizabeth. "Painful transformations : a medical approach to experience, life cycle and text in British Library, Additional MS 61823, 'The Book of Margery Kempe'." Thesis, University of Exeter, 2016. http://hdl.handle.net/10871/24288.

Full text
Abstract:
This thesis interprets The Book of Margery Kempe using a medieval medical approach. Through an interdisciplinary methodology based on a medical humanities framework, the thesis explores the significance of Kempe’s painful experiences through a broad survey of the human life cycle, as understood in medieval culture. In exploring the interplay of humoral theory, medical texts, religious instruction and life cycle taxonomies, it illustrates the porousness of medicine and religion in the Middle Ages and the symbiotic relationship between spiritual and corporeal health. In an age when the circulation of medical texts in the English vernacular was increasing, scholastic medicine not only infiltrated religious houses but also translated into lay praxis. Ideas about the moral and physical nature of the human body were thus inextricably linked, based on the popular tradition of Christus medicus. For this reason, the thesis argues that Margery Kempe’s pain, experience and controversial performances amongst her euen-cristen were interpreted in physiological and medical terms by her onlookers, as ‘pain-interpreters’. It also offers a new transcription of the recipe from B.L. Add. MS 61823, f.124v, and argues for its importance as a way of reading the text as an ‘illness narrative’ which depicts Margery Kempe’s spiritual journey from sickness to health. The chapters examine Kempe’s humoral constitution and predisposition to mystical perceptivity, her crying, her childbearing and married years, her menopausal middle age of surrogate reproductivity, and her elderly life stage. Medical texts such as the Trotula, the Sekenesse of Wymmen and the Liber Diversis Medicinis help to shed light on the ways in which medieval women’s bodies were understood. The thesis concludes that, via a ‘pain surrogacy’ hermeneutic, Kempe is brought closer to a knowledge of pain which is transformational, just as she transforms through the stages of the life cycle.
APA, Harvard, Vancouver, ISO, and other styles
28

Assefa, Shimelis G. "Human concept cognition and semantic relations in the unified medical language system: A coherence analysis." Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc4008/.

Full text
Abstract:
There is almost universal agreement among scholars in information retrieval (IR) research that knowledge representation needs improvement. As a core component of an IR system, improvement of the knowledge representation system has so far involved manipulation of this component based on principles such as the vector space model, the probabilistic approach, inference networks, and language modeling, yet the required improvement is still far from fruition. One promising approach that is highly touted to offer a potential solution exists in the cognitive paradigm, where knowledge representation practice should involve, or start from, modeling the human conceptual system. This study, based on two related cognitive theories (the theory-based approach to concept representation and the psychological theory of semantic relations), ventured to explore the connection between the human conceptual model and the knowledge representation model (represented by samples of concepts and relations from the unified medical language system, UMLS). Guided by these cognitive theories and based on related and appropriate data-analytic tools, such as nonmetric multidimensional scaling, hierarchical clustering, and content analysis, this study aimed to conduct an exploratory investigation to answer four related questions. Divided into two groups, a total of 89 research participants took part in two sets of cognitive tasks. The first group (49 participants) sorted 60 food names into categories, followed by simultaneous description of the derived categories to explain the rationale for category judgment. The second group (40 participants) sorted 47 semantic relations (the nonhierarchical associative types) into 5 categories known a priori. Three datasets resulted from the cognitive tasks: food-sorting data, relation-sorting data, and free and unstructured text of category descriptions. Using the data-analytic tools mentioned, data analysis was carried out and important results and findings were obtained that offer plausible explanations to the four research questions. Major results include the following: (a) through discriminant analysis, category members were predicted consistently 70% of the time; (b) the categorization bases are largely simplified rules, naïve explanations, and feature-based; (c) individuals' theoretical explanations remain valid and stay stable across category members; (d) the human conceptual model can be fairly reconstructed in a low-dimensional space where 93% of the variance in the dimensional space is accounted for by the subjects' performance; (e) participants consistently classify 29 of the 47 semantic relations; and (f) individuals perform better on the functional and spatial dimensions of the semantic relations classification task and perform poorly on the conceptual dimension.
APA, Harvard, Vancouver, ISO, and other styles
29

Walker, Briana Shanise. "Rethinking Document Classification: A Pilot for the Application of Text Mining Techniques To Enhance Standardized Assessment Protocols for Critical Care Medical Team Transfer of Care." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1496760037827537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Strallhofer, Daniel, and Jonatan Ahlqvist. "Classifying Urgency : A Study in Machine Learning for Classifying the Level of Medical Emergency of an Animal’s Situation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231476.

Full text
Abstract:
This paper explores the use of Naive Bayes as well as Linear Support Vector Machines to classify a text based on the level of medical emergency. The primary source of test data is an online veterinary service's customer records. The aspects explored are whether a single text gives enough information for a medical decision to be made and whether there are alternative data-gathering processes that would be preferable. Past research has shown that text classifiers based on Naive Bayes and SVMs can often give good results. We show how to optimize the results so that important decisions can be made with these classifications as a basis. Optimal data-gathering procedures are part of this optimization process. The business applications of such a venture are also discussed, since implementing such a system in an online medical service may affect customer flow, goodwill, cost/revenue, and online competitiveness.
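To make the classification setup concrete: the following minimal sketch (all training texts, labels, and the scikit-learn tooling are illustrative assumptions, not drawn from the thesis's customer data) shows how a bag-of-words Naive Bayes urgency classifier of the kind the abstract describes can be assembled.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training texts with urgency labels (0 = routine, 1 = urgent).
texts = [
    "my dog ate chocolate and is vomiting",
    "cat has a small scratch on its ear",
    "puppy is lethargic and will not drink water",
    "how often should I trim my rabbit's claws",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a multinomial Naive Bayes classifier;
# swapping MultinomialNB for sklearn.svm.LinearSVC gives the SVM variant.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["kitten swallowed a string and is choking"]))
```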
APA, Harvard, Vancouver, ISO, and other styles
31

Schnackenberg, Andrew K. "Symbolizing Institutional Change: Media Representations and Legality in the Payday Loan and Medical Marijuana Industries." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405090956.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Lindell, Klara. "A standard case of subtitling: A comparative analysis of the subtitling of Scrubs and House M.D. with a focus on medical terminology." Thesis, Stockholms universitet, Engelska institutionen, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-78325.

Full text
Abstract:
Toury’s first law of translation states that translation of all kinds entails standardisation and, moreover, that the more peripheral the status of a text is, the more standardised its translation will be. As such, the purpose of this essay was to investigate to what extent this holds true in the Swedish translations of the two US television hospital shows Scrubs and House M.D. Though they are both hospital shows, House M.D. – considering its unprecedented genre hybridity and unusual plot structure – is a less mainstream and, as such, more peripheral example of a hospital show than Scrubs, which is more mainstream and, consequently, has a more central position within this genre. The object of study concerned the genre-specific register shared by the two shows – i.e. the medical terminology – which was extracted together with its Swedish subtitles and analysed, mainly in terms of coupled pairs. The analysis consisted of identifying which translation strategies had been adopted in the transfer of terminology and, moreover, what these strategies had entailed in terms of semantic increase, decrease, or correspondence between the original texts and their translations. From this it was concluded that both texts were indeed standardised and, additionally, that House M.D. – as the more peripheral of the two texts – had undergone a more extensive standardisation than Scrubs – being the text holding a more central status.
APA, Harvard, Vancouver, ISO, and other styles
33

Samuel, Jarvie John. "Elicitation of Protein-Protein Interactions from Biomedical Literature Using Association Rule Discovery." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc30508/.

Full text
Abstract:
Extracting information from a mass of data is a tedious task, and the scenario is no different in proteomics. Volumes of research papers are published on the study of various proteins in several species, their interactions with other proteins, and the identification of proteins as possible biomarkers for disease. It is challenging for biologists to keep track of these developments manually by reading through the literature. Several tools have been developed by computational linguists to assist in the identification, extraction, and hypothesis generation of proteins and protein-protein interactions from biomedical publications and protein databases. However, these tools are confronted with the challenges of term variation, term ambiguity, access only to abstracts, and inconsistencies in the time-consuming manual curation of protein and protein-protein interaction repositories. This work attempts to mitigate these challenges by extracting protein-protein interactions in humans and eliciting possible interactions using association rule mining on full text, abstracts, and figure captions available from publicly accessible biomedical literature databases. Two such databases are used in this study: the Directory of Open Access Journals (DOAJ) and PubMed Central (PMC). A corpus is built from articles retrieved by search terms. A dataset of more than 38,000 protein-protein interactions from the Human Protein Reference Database (HPRD) is cross-referenced to validate discovered interacting pairs. An optimally sized set of possible binary protein-protein interactions is generated and made available for validation by clinicians or biologists. A significant change in the number of new associations was found by altering the thresholds for the support and confidence metrics. This study narrows the limitations biologists face in keeping pace with the discovery of protein-protein interactions by manually reading the literature and in their need to validate each and every possible interaction.
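The abstract's support and confidence thresholds can be illustrated with a minimal Apriori-style pass over toy "transactions" of proteins co-mentioned in an article; the protein names, thresholds, and data below are invented for illustration, and the sketch omits the thesis's corpus building and HPRD cross-referencing.

```python
from itertools import combinations

# Hypothetical transactions: the set of proteins co-mentioned in each article.
articles = [
    {"TP53", "MDM2", "BRCA1"},
    {"TP53", "MDM2"},
    {"BRCA1", "BARD1"},
    {"TP53", "MDM2", "BARD1"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support, min_confidence = 0.5, 0.8
proteins = sorted(set().union(*articles))

# Frequent pairs: the 2-itemset pass of Apriori.
frequent_pairs = [frozenset(p) for p in combinations(proteins, 2)
                  if support(frozenset(p), articles) >= min_support]

# Keep rules A -> B whose confidence = support(A u B) / support(A) is high.
for pair in frequent_pairs:
    a, b = sorted(pair)
    for lhs, rhs in ((a, b), (b, a)):
        conf = support(pair, articles) / support(frozenset([lhs]), articles)
        if conf >= min_confidence:
            print(f"{lhs} -> {rhs} (support={support(pair, articles):.2f}, "
                  f"confidence={conf:.2f})")
```

Raising min_support or min_confidence shrinks the rule set, which is the effect the abstract reports when the thresholds are altered.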
APA, Harvard, Vancouver, ISO, and other styles
34

Colepícolo, Eliane [UNIFESP]. "Epistemologia da Informática em Saúde: entre a teoria e a prática." Universidade Federal de São Paulo (UNIFESP), 2008. http://repositorio.unifesp.br/handle/11600/9468.

Full text
Abstract:
CONTEXT. The aim of this research is to understand the epistemology of the field of Health Informatics (HI) through a comparative study of the theoretical and practical aspects of this discipline. MATERIALS AND METHODS. The study was divided into 3 stages: a statistical study, a terminological study, and an epistemological study. The statistical study involved developing and using a robot to extract metadata from scientific articles in the PubMed database, as well as text mining of the article abstracts, which were used for statistics and subsequent analysis. The terminological study aimed at developing a thesaurus specialized in HI, here named EpistemIS, which, integrated with MeSH, served as the basis for the statistical study. The epistemological study began with an examination of the metaconcepts of human action and thought (MAPHs): art, technique, science, technology, and technoscience. Next, an epistemological method, based on the works of Mário Bunge, was developed for the epistemological classification of concepts in the field drawn from the EpistemIS thesaurus. An opinion survey of the field's scientific community was conducted through a web questionnaire. RESULTS. The following were obtained: a characterization of the MAPHs, maps systematizing knowledge in HI, epistemological and MAPH classifications of HI, a map of knowledge in HI, and a community consensus on the epistemology of HI. Finally, statistics were computed concerning the epistemological and MAPH classifications in HI and the integration between the analysis corpus (437,289 PubMed articles) and the EpistemIS thesaurus. CONCLUSION. Based on theoretical and practical arguments, it was concluded that Health Informatics is a technoscience concerned with solving problems in the domains of the Life Sciences, Health Sciences, and Health Care, through interdisciplinary scientific research and the development of technology for use in society.
APA, Harvard, Vancouver, ISO, and other styles
35

Andersson, Kronlid Maja, and Hanna Björklund. "Narrativ förmåga i återberättande hos elever med svag textförståelse i åk 2." Thesis, Umeå universitet, Logopedi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-136475.

Full text
Abstract:
Background. Much of children's spontaneous communication is in the form of storytelling. By examining micro- and macrostructures in retelling, different aspects of linguistic ability can be mapped. Children with poor text comprehension show weakness in many linguistic areas, and an appropriate analysis can serve as a basis for identifying individuals who need support, as well as guidance for intervention. Aim. The first part of this study investigates micro- and macrostructural differences in narrative retelling between students with and without poor text comprehension. The second part investigates narrative retelling by students identified with poor narrative quality. Methods. In the first part of the study, retellings from second-grade students were analyzed quantitatively at micro- and macrostructural levels using SALT, NSS, and Informationsled (information units). In the second part, a qualitative analysis of the retellings based on the same measures was performed. Results. In part 1, no significant differences between the groups were observed. However, students with poor text comprehension tended to have a higher linguistic productivity in their retellings, while students with good text comprehension tended to have more complex structures. In part 2, it was found that the students with poor narrative quality tended to summarize the story and exclude parts. Conclusions. Students with language difficulties are not a homogeneous group. Hence, an intervention needs to be adapted to the needs of the student. Further, the results indicate that even a short group intervention has the potential to equalize possible differences in narrative retelling between students with and without poor text comprehension.
Tidig intensivsatsning i avkodning och läsförståelse (Early intensive intervention in decoding and reading comprehension)
APA, Harvard, Vancouver, ISO, and other styles
36

Miller, P. W. "Vocabulary and reading medical texts." Thesis, Swansea University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638194.

Full text
Abstract:
In both first language and foreign language contexts there has been considerable debate as to the exact nature of the relationship between readers' levels of vocabulary knowledge and their ability to comprehend texts. This research examines this relationship in detail and in particular addresses three specific issues: a) How important is vocabulary knowledge for Spanish speakers reading medical texts in English?; b) What levels of vocabulary knowledge are required by these readers for this task?; and c) What other factors influence the levels of vocabulary knowledge needed for successful text comprehension? Six experiments were carried out to investigate these questions. The results revealed that the importance of vocabulary knowledge varied considerably across the different experiments. No support was found for the contention that it is possible to establish for all readers and texts fixed levels of vocabulary which must be attained to ensure successful text comprehension. Three factors were identified as having a major impact on the levels of vocabulary knowledge needed by readers. These included the actual reading comprehension test method used to assess text comprehension, the levels of background knowledge available to the readers and the degree of domain-specificity of the texts being read. Although these three groups of variables each exerted an effect on the levels of vocabulary knowledge required for text comprehension individually, their effect can only be fully understood if the clear interaction which takes place between them is also taken into account. These results clearly confirm current theoretical models which see text comprehension as a complex process involving the interaction of multiple components. They also serve to underline the need to critically evaluate notions which at first sight seem commonsensical but which on closer analysis do not capture the full complexity of the processes being analysed.
APA, Harvard, Vancouver, ISO, and other styles
37

Cederblad, Gustav. "Finding Synonyms in Medical Texts : Creating a system for automatic synonym extraction from medical texts." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149643.

Full text
Abstract:
This thesis describes the work of creating an automatic system for identifying synonyms and semantically related words in medical texts. Prior to this work, as part of the project E-care@home, medical texts had been classified as either lay or specialized by both a lay annotator and an expert annotator. The lay annotator, in this case, is a person without any medical knowledge, whereas the expert annotator has professional knowledge in medicine. Using these texts made it possible to create co-occurrence matrices from which related words could be identified. Fifteen medical terms were chosen as system input. The Dice similarity of these words, computed over a context window of ten words around them, was calculated. As output, five candidate related terms for each medical term were returned. Only unigrams were considered. The candidate related terms were evaluated using a questionnaire in which 223 healthcare professionals rated similarity on a scale from one to five. A Fleiss kappa test showed that the agreement among these raters was 0.28, which is fair agreement. The evaluation further showed a significant correlation between the human ratings and the relatedness score (Dice similarity): words with higher Dice similarity tended to get a higher human rating. However, the Dice similarity interval in which the words got the highest average human rating was 0.35-0.39. This result means there is much room for improving the system. Further development of the system should remove the unigram limitation and expand the corpus to provide a more accurate and reliable result.
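The relatedness score named in the abstract can be sketched with one common set-based formulation of the Dice coefficient over shared context words; the toy tokens, window handling, and tokenization below are assumptions for illustration rather than the thesis's implementation.

```python
from collections import defaultdict

def context_sets(tokens, targets, window=10):
    """For each target term, collect the set of words seen within
    a +/- `window`-word context around its occurrences."""
    contexts = defaultdict(set)
    for i, tok in enumerate(tokens):
        if tok in targets:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            contexts[tok].update(tokens[lo:i] + tokens[i + 1:hi])
    return contexts

def dice(a, b, contexts):
    """Dice coefficient of two terms' context sets: 2|A & B| / (|A| + |B|)."""
    ca, cb = contexts[a], contexts[b]
    if not ca or not cb:
        return 0.0
    return 2 * len(ca & cb) / (len(ca) + len(cb))

# Hypothetical toy input; the real system used annotated medical texts.
tokens = ("the patient has diabetes and high blood sugar and is "
          "treated with insulin for the diabetes").split()
contexts = context_sets(tokens, targets={"diabetes", "insulin"})
print(dice("diabetes", "insulin", contexts))
```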
APA, Harvard, Vancouver, ISO, and other styles
38

Nelsson, Mikael. "Deep learning for medical report texts." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-356140.

Full text
Abstract:
Data within the medical sector is often stored as free-text entries. This is especially true for report texts, which are written after an examination. To be able to automatically gather data from these texts, they need to be analyzed and classified to show what findings the examinations had. This thesis compares three state-of-the-art deep learning approaches to classifying short medical report texts. This is done for two types of examinations, so the concept of transfer learning plays a role in the evaluation. An optimal model should learn concepts that are applicable to more than one type of examination, since we can expect the texts to be similar. The two data sets from the examinations are also of different sizes, and both have an uneven distribution among the target classes. One of the models is based on techniques traditionally used for language processing using deep learning. The two other models are based on techniques usually used for image recognition and classification. The latter models prove to be the best across the different metrics, not least in the sense of transfer learning, as they improve the results when learning from both types of examinations. This becomes especially apparent for the least frequent class in the smaller data set, as none of the models correctly predict this class without using transfer learning.
APA, Harvard, Vancouver, ISO, and other styles
39

Bigeard, Elise. "Détection et analyse de la non-adhérence médicamenteuse dans les réseaux sociaux." Thesis, Lille 3, 2019. http://www.theses.fr/2019LIL3H026.

Full text
Abstract:
Drug non-compliance refers to situations where the patient does not follow instructions from medical authorities when taking medication. Such situations include taking too much (overuse) or too little (underuse) of a medication, drinking contraindicated alcohol, or attempting suicide with medication. According to [HAYNES 2002], increasing drug compliance may have a bigger impact on public health than any other medical improvement. However, non-compliance data are difficult to obtain, since non-adherent patients are unlikely to report their behaviour to their healthcare providers. This is why we use data from social media to study drug non-compliance. Our study is applied to French-speaking forums. First we collect a corpus of messages written by users of medical forums. We build vocabularies of medication and disorder names as used by patients. We use these vocabularies to index medications and disorders in the corpus. Then we use supervised learning and information retrieval methods to detect messages about non-compliance. With machine learning, we obtain 0.433 F-measure, with up to 0.421 precision or 0.610 recall. With information retrieval, we reach 0.8 precision on the first ten results. After that, we study the content of the non-compliance messages. We identify various non-compliance situations and patients' motivations. We identify 3 main motivations: self-medication, seeking an effect other than the one the medication was prescribed for, and being in a situation of addiction or habituation. Self-medication is an umbrella for several situations: avoiding an adverse effect, adjusting the medication's effect, underusing a medication seen as useless, and taking decisions without a doctor's advice. Non-compliance can also happen due to error or carelessness, without any particular motivation. Our work provides several kinds of results: a corpus annotated with non-compliance messages, a classifier for the detection of non-compliance messages, a typology of non-compliance situations, and an analysis of the causes of non-compliance.
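The retrieval figure quoted above (0.8 precision on the first ten results) is precision@k with k = 10; a minimal sketch of that metric follows, with an invented ranking used purely for illustration.

```python
def precision_at_k(ranked_relevance, k=10):
    """Precision@k: the fraction of the top-k retrieved items that are
    relevant; `ranked_relevance` is a ranked list of booleans."""
    top = ranked_relevance[:k]
    return sum(top) / len(top) if top else 0.0

# Hypothetical ranking where 8 of the first 10 retrieved messages really
# describe non-compliance, matching the 0.8 reported in the abstract.
print(precision_at_k([True] * 8 + [False] * 2, k=10))  # 0.8
```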
APA, Harvard, Vancouver, ISO, and other styles
40

Barbosa, Alexandre Nunes. "Descoberta de conhecimento aplicado à base de dados textual de saúde." Universidade do Vale do Rio dos Sinos, 2012. http://www.repositorio.jesuita.org.br/handle/UNISINOS/4559.

Full text
Abstract:
This study proposes a process for investigating the content of a database comprising descriptive and pre-structured data related to the health domain, more particularly the area of Rheumatology. For the investigation of the database, three sets of interest were composed. The first was formed by one class of descriptive content related only to the area of Rheumatology in general and another whose content belongs to other areas of medicine. The second and third sets were constituted after statistical analysis of the database: one formed by the descriptive content associated with the five most frequent ICD codes, and another formed by descriptive content associated with the three most frequent ICD codes related exclusively to the area of Rheumatology. These sets were pre-processed with classic techniques such as stopword removal and stemming. In order to extract patterns whose interpretation results in knowledge production, association and classification techniques were applied to the sets of interest, aiming to relate the textual content that describes symptoms of diseases to the pre-structured content that defines the diagnosis of these diseases. These techniques were implemented by applying the Support Vector Machines classification algorithm and the Apriori algorithm for extracting association rules. For the development of this process, theoretical references concerning data mining were researched, including the selection and review of scientific publications on text mining related to Electronic Medical Records, focusing on the content of the databases used, the pre-processing and mining techniques employed in the literature, and the reported results. The classification technique used in this study reached over 80% accuracy, demonstrating the algorithm's capacity to correctly label health data related to the field of interest. Associations between textual content and pre-structured content were also found which, according to expert analysis, may raise questions about the use of certain ICD codes at the data's place of origin.
APA, Harvard, Vancouver, ISO, and other styles
41

Weidow, Rebecka, and Jennie Olsén. "Framtagning av förnyat material för Hörtröskel för tal-test." Thesis, Örebro universitet, Institutionen för hälsovetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-85794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Albitar, Shereen. "De l'usage de la sémantique dans la classification supervisée de textes : application au domaine médical." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4343/document.

Full text
Abstract:
The main interest of this research is the effect of using semantics in the process of supervised text classification. This effect is evaluated through an experimental study on documents related to the medical domain, using the UMLS (Unified Medical Language System) as a semantic resource. The evaluation follows four scenarios involving semantics at different steps of the classification process: the first scenario incorporates a conceptualization step where text is enriched with corresponding concepts from UMLS; the second and third scenarios concern enriching the vectors that represent texts as bags of concepts (BOC) with similar concepts; the last scenario considers using semantics during class prediction, where concepts, as well as the relations between them, are involved in decision making. We test the first scenario using three popular classification techniques: Rocchio, NB, and SVM. We choose Rocchio for the other scenarios for its extensibility with semantics. Experimental results demonstrated significant improvement in classification performance using conceptualization before indexing. Moderate improvements are reported using conceptualized text representations with semantic enrichment after indexing, or with semantic text-to-text similarity measures for prediction.
APA, Harvard, Vancouver, ISO, and other styles
43

Keightley, Sofia. "Changements syntaxiques, modulations et adaptations dans un texte medical." Thesis, Linnéuniversitetet, Institutionen för språk (SPR), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-26779.

Full text
Abstract:
This study examines which measures can be taken when translating a medical text in order to make it subtler, more personal, and to draw the reader's attention to the essential information. In doing so, we have to keep in mind the importance of making sure that the text will work in its new cultural situation. The study also brings up the other factors that have to be respected and taken into account in order to place the text correctly in its new cultural and moral context. We have used Vinay and Darbelnet's methodology to help with both the translation and the subsequent analysis. In order to carry out the translation in the best way, we have also consulted other theories related to medical and technical translation. This study shows that a translator can, with the right knowledge and small means, make a text suitable for a target culture. However, it is important to know that language skills alone are not enough: it is also important to know the subject of the text and the circumstances in which it will be used.
APA, Harvard, Vancouver, ISO, and other styles
44

Blackman, Nicole Jill-Marie. "Criteria for demonstrating the efficacy of a medical test." Thesis, De Montfort University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.246528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Michelle W. "Comparison of natural language processing algorithms for medical texts." Thesis (M. Eng.), Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100298.

Full text
Abstract:
With large corpora of clinical texts now available, natural language processing (NLP) is a growing field being explored to extract useful patient information. NLP applications in clinical medicine are especially important in domains where the clinical observations are crucial to define and diagnose the disease. There are a variety of different systems that attempt to match words and word phrases to medical terminologies. Because of differences in annotation datasets and a lack of common conventions, many of the systems yield conflicting results. The purpose of this thesis project is (1) to create a visual representation of how different concepts compare to each other when using various annotators and (2) to improve upon the NLP methods to yield terms with better fidelity to what clinicians are trying to express.
APA, Harvard, Vancouver, ISO, and other styles
46

Canlas, Joel. "Creating software libraries to improve medical device testing of the Pacing System Analyzer (PSA) at St. Jude Medical." DigitalCommons@CalPoly, 2011. https://digitalcommons.calpoly.edu/theses/599.

Full text
Abstract:
Software testing, specifically in the medical device field, has become increasingly complex over the last decade. Technological enhancements for simulating clinical scenarios and advancements in communicating with medical devices have created the need for better testing strategies and methodologies. Medical device companies have typically depended on manual testing processes to fulfill Food and Drug Administration (FDA) submission requirements, particularly for Class III devices, which are life-supporting, life-sustaining devices. At St. Jude Medical, software testing of Class III devices such as implantable cardioverter-defibrillators (ICDs), pacemakers, and pacing analyzers is given top priority to ensure the highest quality in each product. High emphasis is placed on making software testing easier to perform and on catching more software errors in each device. A significant stride in testing has been automating the process, which provides software verification teams with the tools they need to successfully test and deliver high-quality products. By creating software libraries that handle communication with the other interfaces needed to test medical devices, test engineers can focus on fully testing device requirements without being concerned with how each test interacts with the device or any other testing tools. The main focus is a specific St. Jude Medical device known as the Pacing System Analyzer (PSA). The PSA device is used to demonstrate how verification engineers can benefit from software libraries and how the testing process and test development can be fully automated. New technologies and standards are created to simulate clinical scenarios and to communicate with new devices. The goal is to use software engineering principles to create standard test libraries that sustain these changes while still allowing testers to focus on finding issues in each device.
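The layering idea described above can be sketched as follows; every class, method, and command string here is hypothetical and invented for illustration, not St. Jude Medical's actual API.

```python
class DeviceLink:
    """Low-level communication channel to the device under test;
    the transport is stubbed out in this illustration."""
    def send(self, command: str) -> str:
        return f"ACK:{command}"  # stand-in for real device telemetry

class PSATestLibrary:
    """High-level operations a verification engineer scripts against,
    hiding all communication details behind the library boundary."""
    def __init__(self, link: DeviceLink):
        self._link = link

    def set_pacing_rate(self, bpm: int) -> bool:
        return self._link.send(f"SET_RATE {bpm}").startswith("ACK")

    def measure_impedance(self) -> str:
        return self._link.send("MEASURE_IMPEDANCE")

# A test script stays declarative: no transport code in sight.
library = PSATestLibrary(DeviceLink())
assert library.set_pacing_rate(70)
print(library.measure_impedance())
```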
APA, Harvard, Vancouver, ISO, and other styles
47

Mays, Patricia Faye. "Seal strength models for medical device trays." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Radovanovic, Aleksandar. "Concept Based Knowledge Discovery from Biomedical Literature." Thesis, Online access, 2009. http://etd.uwc.ac.za/usrfiles/modules/etd/docs/etd_gen8Srv25Nme4_9861_1272229462.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Al-Muhammad, Muhammad. "Patterns of textual cohesion in medical textbook discourse in English and Arabic." Thesis, University of Surrey, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Karlsson, Pernilla, and Maria Rytiniemi. "En normeringsstudie av ”The timed water swallow test” (TWST) för personer över 60 år." Thesis, Umeå universitet, Logopedi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-161231.

Full text
Abstract:
Background. The Timed Water Swallow Test (TWST) is a screening tool for dysphagia that provides a measure of a person's swallowing capacity (ml/s). Previous normative studies have found that a swallowing capacity below 10 ml/s is an indicator of swallowing dysfunction in people under 70. Whether this cut-off is representative for people over 70 is, however, somewhat unclear, since normative values for the older population are lacking. As dysphagia is more common among older people, reliable normative values are important for the TWST to be interpreted with greater confidence. Aim. The aim of the present study was to contribute normative TWST values for people over 60. Method. The study comprised 118 participants from the normal population, aged 62-92. The participants' swallowing capacity was tested with the TWST. The participants also filled in a health questionnaire to give an indication of any subjective swallowing problems. Swallowing capacity (ml/s) was analyzed statistically to examine possible differences and associations between swallowing capacity, sex, and age. Results. The study showed that, at the group level, the older population had a swallowing capacity above 10 ml/s for both women and men. Furthermore, both men and women in all age groups showed a median swallowing capacity above 10 ml/s, with the exception of women in the oldest age group (90+). A total of 15 participants showed a swallowing capacity below 10 ml/s. In addition, men showed higher swallowing capacity than women in all age groups. Conclusions. The findings of this study indicate that a cut-off of 10 ml/s is largely representative at the group level for the older population. Since some participants showed a swallowing capacity below the cut-off, further research is needed to establish a representative cut-off for the older population with certainty.
APA, Harvard, Vancouver, ISO, and other styles