Dissertations on the topic "LDH test"

To view other types of publications on this topic, follow the link: LDH test.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 44 dissertations for your research on the topic "LDH test".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if these details are available in the record's metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Balášová, Patricie. "Příprava a charakterizace moderních krytů ran." Master's thesis, Vysoké učení technické v Brně. Fakulta chemická, 2021. http://www.nusl.cz/ntk/nusl-449701.

Abstract:
This diploma thesis is focused on the study of bioactive wound dressings. Hydrogel, lyophilized, and nanofiber wound dressings were prepared. The hydrogel and lyophilized dressings were based on two polysaccharides, alginate and chitosan; the nanofiber dressings were prepared by spinning polyhydroxybutyrate. All prepared dressings were enriched with bioactive substances representing analgesics (ibuprofen), antibiotics (ampicillin), and enzymes (collagenase). All of these active substances were incorporated into the hydrogel and lyophilized dressings, whereas the nanofiber dressings were prepared only with ibuprofen and ampicillin. The theoretical part deals with the anatomy and function of human skin, explains the process of wound healing, and introduces the modern wound dressings currently available. The next chapter of the theoretical part covers the materials used for preparing wound dressings (alginate, chitosan, polyhydroxybutyrate) and the active substances used in the experimental part of this thesis. The theoretical part also presents the methods used to prepare nanofiber dressings and the cytotoxicity testing methods used in this work. The first stage of the experimental part was focused on preparing the dressings described above. Their morphological changes over time and the gradual release of the incorporated active substances into a model environment were then monitored. The gradual release of ampicillin was monitored not only spectrophotometrically but also by ultra-high-performance chromatography. For dressings incorporating collagenase, the final proteolytic activity of the enzyme was also monitored. The effect of the active substances was observed on three selected microorganisms: Escherichia coli, Staphylococcus epidermidis, and Candida glabrata. The cytotoxic effect of the active substances on a human keratinocyte cell line was monitored by the MTT test and the LDH test. A test for monitoring the rate of wound healing, a scratch test, was also performed.
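The MTT and LDH tests mentioned above are colorimetric cytotoxicity assays; the LDH test quantifies membrane damage through lactate dehydrogenase released into the culture medium. The abstract does not give the evaluation formula; a conventional way of expressing LDH-release results, stated here only as an assumption, is

\[ \text{Cytotoxicity}\ (\%) = \frac{A_{\text{sample}} - A_{\text{spontaneous}}}{A_{\text{maximum}} - A_{\text{spontaneous}}} \times 100 \]

where A denotes the measured LDH activity (absorbance), the spontaneous control is untreated cells, and the maximum control is fully lysed cells.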
2

Nelson, Stephanie Anne. "Associations Between Intelligence Test Scores and Test Session Behavior in Children with ADHD, LD, and EBD." ScholarWorks @ UVM, 2008. http://scholarworks.uvm.edu/graddis/159.

Abstract:
Individually administered intelligence tests are a routine component of psychological assessments of children who may meet criteria for Attention-Deficit/Hyperactivity Disorder (ADHD), learning disorders (LD), or emotional and behavioral disorders (EBD). In addition to providing potentially useful test scores, the individual administration of an intelligence test provides an ideal opportunity for observing a child’s behavior in a standardized setting, which may contribute clinically meaningful information to the assessment process. However, little is known about the associations between the test scores and test session behavior of children with these disorders. This study examined patterns of test scores and test session observations in groups of children with ADHD, LD, and EBD who were administered the Stanford-Binet Intelligence Scales, Fifth Edition (SB5), as well as in control children from the SB5 standardization sample. Three hundred and twelve children receiving special education services for ADHD (n = 50), LD (n = 234), or EBD (n = 28), together with 100 children from the SB5 standardization sample, were selected from a data set of children who were administered both the SB5 and the Test Observation Form (TOF; a standardized rating form for assessing behavior during cognitive or achievement testing of children). The groups were then compared on SB5 scores and TOF scores, and associations between test scores and TOF scores in children with ADHD, LD, and EBD and normal controls were also examined. The results indicated that children with ADHD, LD, and EBD and normal control children differed on several SB5 and TOF scales. Control children scored higher on all of the SB5 scales than children with LD, and higher on many of the SB5 scales than children with ADHD and EBD. Children with EBD demonstrated the most problem behavior during testing, followed by children with ADHD, while children with LD were similar to control children with respect to test session behavior. In addition, several combinations of test scores and test session behavior were able to predict diagnostic group status. Overall, the results of this investigation suggest that test scores and behavioral observations during testing can and should be important components of multi-informant, multi-method assessment of children with ADHD, LD, and EBD.
3

Harrysson, Mattias. "Neural probabilistic topic modeling of short and messy text." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189532.

Abstract:
Exploring massive amounts of user-generated data through topics offers a new way to find useful information. The topics are assumed to be “hidden” and must be “uncovered” by statistical methods such as topic modeling. However, user-generated data is typically short and messy, e.g. informal chat conversations, heavy use of slang words, and “noise” such as URLs or other forms of pseudo-text. This type of data is difficult to process for most natural language processing methods, including topic modeling. This thesis attempts to find the approach that objectively gives the better topics from short and messy text in a comparative study. The compared approaches are latent Dirichlet allocation (LDA), Re-organized LDA (RO-LDA), a Gaussian Mixture Model (GMM) with distributed representations of words, and a new approach based on previous work, named Neural Probabilistic Topic Modeling (NPTM). It could only be concluded that NPTM has a tendency to achieve better topics on short and messy text than LDA and RO-LDA; GMM, on the other hand, could not produce any meaningful results at all. The results are less conclusive since NPTM suffers from long running times, which prevented enough samples from being obtained for a statistical test.
4

Testa, Luca. "Contribution to the Built-In Self-Test for RF VCOs." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14011/document.

Abstract:
This work deals with the study and realization of Built-In Self-Tests (BIST) for RF VCOs (Voltage Controlled Oscillators). The increasing complexity of RF integrated circuits is creating an obstacle to the correct measurement of the main RF blocks of any transceiver: some nodes are not accessible, the voltage excursion of the signals is getting lower and lower, and high-frequency signals cannot be driven off the die without major degradation. Common test techniques therefore become very expensive and time-consuming. Wafer sort is addressed first. The proposed solution is the implementation of a BIST strategy able to discriminate between faulty and good circuits during the wafer test. The chosen methodology is the structural (fault-oriented) test. A fault coverage campaign is carried out to find the quantity to monitor on-chip that maximizes the probability of finding all possible physical defects in the VCO. The analysis reveals that fault coverage is maximized if the peak-to-peak output voltage is monitored. The complete on-chip characterization of the VCO is then addressed, for chip validation and process monitoring. The information to be extracted on-chip concerns the amplitude of the signal, the consumption of the VCO, the frequency of oscillation, its conversion gain (voltage-to-frequency), and possibly some information on the phase noise. A silicon demonstrator for wafer-sort purposes is implemented in the ST CMOS 65 nm process. It includes a 3.5 GHz VCO, an LDO, a temperature- and supply-voltage-independent voltage reference, a peak-to-peak voltage detector and a comparator. The Vpp detector outputs a DC voltage that is compared to a predefined acceptance boundary, and the BIST outputs a logic pass/fail signal. Attention is then turned to a proposed architecture for an on-chip frequency meter able to measure the RF frequency with high accuracy. Behavioral simulations using VHDL-AMS lead to the conclusion that a TDC (Time-to-Digital Converter) is the best solution for this goal, which also opens the road to measuring long-term jitter with the same TDC.
5

Adkins, Jason Michael. "Politics from the Pulpit: A Critical Test of Elite Cues in American Politics." Kent State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=kent1531927892623716.

6

Rarivomanana, Jens A., and Gabrièle Saucier. "Système CADOC : génération fonctionnelle de test pour les circuits complexes." S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00319028.

7

Potter, Mark D. "Using Graphic Organizers with Scriptural Text: Ninth-Grade Latter-Day Saint (LDS) Students’ Comprehension of Doctrinal Readings and Concepts." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/1027.

Abstract:
This study investigated the effect of instruction that included graphic organizers on LDS seminary students’ ability to understand scriptural text and to identify doctrines in scriptural text, utilizing a repeated-measures, quasi-experimental design involving 209 ninth-grade student participants. The participants were randomly assigned by class to one of two groups: participants in the treatment group received instruction using graphic organizers with the standard curriculum, and participants in the comparison group received instruction using only the standard curriculum. Three different measures were employed to assess the effectiveness of the graphic organizer intervention: (a) a multiple-choice test of LDS doctrines and principles; (b) a test of identifying doctrines and principles in text; and (c) a student perception survey. Results of the ANOVA for the multiple-choice test indicated no significant difference between instructional groups in the ability to recall facts from the class instruction and the class text, F(1, 205) = 1.60, p = .21, partial η² = .21. Results of the ANOVA for the identifying-doctrines-and-principles-in-text test, measuring transferability of the skills learned while studying the Doctrine and Covenants to a different text containing some of the same doctrines and principles, also indicated no significant difference between groups, F(1, 196) = 1.93, p = .17. The results of the student perception survey were positive; most students felt confident about their ability to comprehend scriptural text, but were slightly less confident about their ability to identify doctrines and principles in the text. The participants in this study were generally positive in their willingness to learn about and use graphic organizers. Overall, the results of this study indicated that graphic organizers did not significantly impact students’ ability to identify doctrines and principles in scriptural text or to learn concepts from scriptural text.
8

Alsadhan, Majed. "An application of topic modeling algorithms to text analytics in business intelligence." Thesis, Kansas State University, 2014. http://hdl.handle.net/2097/17580.

Abstract:
Master of Science
Department of Computing and Information Sciences
Doina Caragea
William H. Hsu
In this work, we focus on the task of clustering businesses in the state of Kansas based on the content of their websites and their business listing information. Our goal is to cluster the businesses and overcome the challenges facing current approaches, such as data noise, a low number of clustered businesses, and the lack of an evaluation approach. We propose an LSA-based approach that analyzes the businesses’ data and clusters the businesses using the Bisecting K-Means algorithm. In this approach, we analyze the businesses’ data using LSA and produce business representations in a reduced space. We then use these representations to cluster the businesses by applying the Bisecting K-Means algorithm. We also apply an existing LDA-based approach to cluster the businesses and compare the results with our proposed LSA-based approach at the end. We evaluate the results using a human-expert-based evaluation procedure, and we visualize the clusters produced in this work using Google Earth and Tableau. According to our evaluation procedure, the LDA-based approach performed slightly better than the LSA-based approach. However, the LDA-based approach had some limitations: a low number of clustered businesses, and the inability to produce a hierarchical tree for the clusters. With the LSA-based approach, we were able to cluster all the businesses and produce a hierarchical tree for the clusters.
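The abstract does not name an implementation; the following is a minimal sketch of the LSA-plus-Bisecting-K-Means pipeline described above, assuming Python with scikit-learn ≥ 1.1 (which provides BisectingKMeans) and toy documents in place of the business websites:

# Reduce tf-idf vectors to an LSA concept space, then cluster with bisecting k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import BisectingKMeans

docs = [
    "plumbing repair and drain cleaning services",
    "licensed plumber for water heater repair",
    "family dentistry teeth cleaning and implants",
    "dental clinic offering orthodontics and whitening",
]
X = TfidfVectorizer().fit_transform(docs)                               # term-document matrix
X_lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)  # reduced LSA space
labels = BisectingKMeans(n_clusters=2, random_state=0).fit_predict(X_lsa)
print(labels)  # two clusters: plumbing vs. dental businesses

Because bisecting k-means splits one cluster at a time, the sequence of splits also yields the kind of hierarchical tree the abstract mentions.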
9

Svensson, Karin, and Johan Blad. "Exploring NMF and LDA Topic Models of Swedish News Articles." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-429250.

Abstract:
The ability to automatically analyze and segment news articles by their content is a growing research field. This thesis explores the unsupervised machine learning method of topic modeling applied to Swedish news articles, generating topics to describe and segment the articles. Specifically, the algorithms non-negative matrix factorization (NMF) and latent Dirichlet allocation (LDA) are implemented and evaluated. Their usefulness in the news media industry is assessed by their ability to serve as a uniform categorization framework for news articles. This thesis fills a research gap by studying the application of topic modeling to Swedish news articles and contributes by showing that this can yield meaningful results. It is shown that Swedish text data requires extensive preparation for successful topic models and that nouns exclusively, and especially common nouns, are the most suitable words to use. Furthermore, the results show that both NMF and LDA are valuable as content analysis tools and categorization frameworks, but they have different characteristics and are hence optimal for different use cases. Lastly, the conclusion is that topic models have issues, since they can generate unreliable topics that could be misleading for news consumers, but they can nonetheless be powerful methods for analyzing and segmenting articles efficiently on a grand scale by organizations internally. The thesis project is a collaboration with one of Sweden’s largest media groups, and its results have led to a topic modeling implementation for large-scale content analysis to gain insight into readers’ interests.
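Neither implementation details nor code are given in the listing; as a minimal illustrative sketch of the two compared algorithms (assuming Python with scikit-learn, and English toy documents standing in for the preprocessed Swedish nouns):

# Fit NMF on tf-idf weights and LDA on raw counts, then print top words per topic.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation

docs = [
    "election party government vote",
    "government minister election debate",
    "football match goal league",
    "league season goal coach",
]
tfidf_vec = TfidfVectorizer()
nmf = NMF(n_components=2, random_state=0).fit(tfidf_vec.fit_transform(docs))
count_vec = CountVectorizer()
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(count_vec.fit_transform(docs))

for name, model, vec in [("NMF", nmf, tfidf_vec), ("LDA", lda, count_vec)]:
    words = vec.get_feature_names_out()
    for k, topic in enumerate(model.components_):
        print(name, k, [words[i] for i in topic.argsort()[-3:][::-1]])  # top 3 words per topic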
10

Giehl, Nina Caprice. "Interferenz eines homogenen Tests für LDL-Cholesterin durch Lipoprotein-X." Supervisor: H. G. Wahl. Marburg : Philipps-Universität Marburg, 2012. http://d-nb.info/1028072619/34.

11

Ljungberg, Lucas. "Using unsupervised classification with multiple LDA derived models for text generation based on noisy and sensitive data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255010.

Abstract:
Creating models that generate contextual responses to input queries is a difficult problem, and it is even more difficult when the available data contains noise and sensitive information. Finding models or methods to handle such issues is important in order to put the data to productive use. This thesis proposes a model based on a cooperating pair of topic models with differing tasks (LDA and GSDMM) to alleviate the problematic properties of the data. The model is tested on a real-world dataset with these difficulties as well as on a dataset without them. The goal is to 1) examine the behaviour of the two topic models to see whether their topical representation of the data is of use as input or output to other models, and 2) find out which properties can be alleviated as a result. The results show that topic modeling can represent the semantic information of documents well enough to produce well-behaved input data for other models, which can also deal well with large vocabularies and noisy data. The topical clustering of the response data is sufficient for a classification model to predict the context of a response, from which valid responses can be created.
12

Viatkin, Aleksandr. "Development of a Test Bench for Multilevel Cascaded H-Bridge Converter with Self-Balancing Level Doubling Network." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14974/.

Abstract:
This Master's degree project was developed during an exchange program between the University of Bologna and the Technical University of Munich (TUM). The research activity was conducted at the Institute of Energy Conversion Technology (a TUM department) in collaboration with Prof. Dr.-Ing. Hans-Georg Herzog and his research team. A symmetric 3-phase Cascaded H-Bridge Multilevel Inverter (CHBMLI), available in the TUM laboratory, is reconfigured to operate as proposed using a Level Doubling Network (LDN). The LDN takes the form of a 3-phase half-bridge inverter that shares a common DC bus connected to a floating capacitor. This configuration almost doubles the number of output voltage levels. The LDN concept has an inherent self-balancing capability that keeps the voltage across the LDN capacitor at a nearly constant value without any closed-loop control, while the network neither consumes nor supplies any power apart from the losses in the circuit. The proposed topology preserves the merit of the CHBMLI modular structure and improves the inverter's overall reliability, with fewer switching devices and fewer required isolated DC sources compared with the standard CHBMLI topology. It therefore significantly improves power quality and reduces the average device switching frequency, while minimizing the cost and size of the power filter. Operation of the circuit is extensively verified by simulation in the MATLAB/Simulink framework and by experiments performed on a grid-connected 3-phase five-level laboratory prototype, specifically built as part of this Master's thesis. This work is a first step towards studying the proposed topology; nevertheless, it provides a baseline for future analyses of the architecture and its possible variations.
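The claim that the LDN "almost doubles" the number of levels can be quantified; in the level-doubling-network literature (stated here as an assumption, not spelled out in this abstract), the LDN inserts a half-voltage step between the existing levels, so

\[ N_{\text{LDN}} = 2\,N_{\text{CHB}} - 1, \qquad N_{\text{CHB}} = 2k + 1 \;\Rightarrow\; N_{\text{LDN}} = 4k + 1, \]

where k is the number of H-bridge cells per phase; k = 1 gives the five-level prototype described above.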
13

Malan, Gunce. "Do Personality Tests have a place in Academic Preparation of Undergradute Hospitality Students." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4533.

Abstract:
This descriptive study poses questions and discussion regarding the use of personality tests in predicting the future job performance of current undergraduate hospitality students. A gap exists between the perception of the skills and competencies of high performers and the perception of hospitality students (Berezina et al., 2011; Malan, Berezina & Cobanoglu, 2012). The purpose of this study is to investigate whether personality tests help predict the success of students in their preferred job setting as compared to current high performers (managers). The use of personality tests increased substantially after 1988, when the government banned the use of polygraphs (Employee Polygraph Protection Act, 1988, as cited in Stabile, 2002). Although there is no right or wrong answer to personality test questions, the answers allow employers to form a better idea of whether there is a sufficient fit between the applicant and the position sought. To compare the personality types of successful hotel managers and hospitality students, and to determine whether the hospitality curriculum needs to be customized in order to produce graduates who will fit the right type of positions, a convenience sample was drawn from a hotel management company's managers and from hospitality students at a university in the southeastern USA. The sample for this study was 175 managers and 150 students. With responses from 144/175 managers (82% response rate) and 76/150 students (51% response rate), the main findings show a significant difference between managers and students, indicating that current hospitality students and current managers have different perceptions of the hospitality industry. Since current students will work in the industry in the future, this difference needs to be addressed by both curricular and extra-curricular activities. There are also significant differences among the LDP scores of managerial positions (general manager, assistant general manager, and director of sales). This could indicate that promoting these individuals from one position to another within the company might not be a good fit, since the positions differ from one another.
14

Ponweiser, Martin. "Latent Dirichlet Allocation in R." WU Vienna University of Economics and Business, 2012. http://epub.wu.ac.at/3558/1/main.pdf.

Abstract:
Topic models are a new research field within the computer-science areas of information retrieval and text mining. They are generative probabilistic models of text corpora inferred by machine learning, and they can be used for retrieval and text mining tasks. The most prominent topic model is latent Dirichlet allocation (LDA), which was introduced in 2003 by Blei et al. and has since sparked the development of other topic models for domain-specific purposes. This thesis focuses on LDA's practical application. Its main goal is the replication of the data analyses from the 2004 LDA paper "Finding scientific topics" by Thomas Griffiths and Mark Steyvers within the framework of the R statistical programming language and the R package topicmodels by Bettina Grün and Kurt Hornik. The complete process, including extraction of a text corpus from the PNAS journal's website, data preprocessing, transformation into a document-term matrix, model selection, model estimation, and presentation of the results, is fully documented and commented. The outcome closely matches the analyses of the original paper, so the research by Griffiths and Steyvers can be reproduced. Furthermore, this thesis demonstrates the suitability of the R environment for text mining with LDA. (author's abstract)
Series: Theses / Institute for Statistics and Mathematics
15

Pellegrinotti, Idico Luiz 1946. "Analise comparativa das atividades da lactatodesidrogenase (LDH) e creatinafosfoquinase (CPK) no soro e na saliva de individuos treinados em (atletismo, futebol e voleibol) e não treinados submetidos ao teste de Cooper." [s.n.], 1987. http://repositorio.unicamp.br/jspui/handle/REPOSIP/289646.

Abstract:
Advisor: Alcides Guimarães
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Odontologia de Piracicaba
The behaviour of LDH and CPK in the saliva and serum of trained and untrained persons submitted to the Cooper test has been studied in this work. 37 male subjects were distributed into two groups. GI: 14 untrained persons. GII: 23 trained persons, distributed into 3 subgroups: II1 - 06 persons trained in athletics; II2 - 08 persons trained in soccer; II3 - 09 persons trained in volleyball. The activities of LDH and CPK were determined in the two groups at three times: A - after a rest period; B - 1 minute after submission to the Cooper test; and C - 3 hours after the same test. The maximum VO2 was also measured. The correlation between the LDH activities obtained in saliva and blood allows us to conclude that the enzymatic activities in saliva can be considered an indicator of the same activities in blood. LDH activity also proved to be an acceptable indicator of the specificity of the kind of training and of anaerobic glycolysis. CPK activity seemed to be a good indicator of the organic adaptation to physical training.
Master's degree
Physiology
Master in Buco-Dental Biology and Pathology
16

Dwyer, Eleanor A. "Price, Perceived Value and Customer Satisfaction: A Text-Based Econometric Analysis of Yelp! Reviews." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/scripps_theses/715.

Abstract:
We examine the antecedents of customer satisfaction in the restaurant sector, paying particular attention to perceived value and price level. Using Latent Dirichlet Allocation, we extract latent topics from the text of Yelp! reviews, then analyze the relationship between these topics and satisfaction, measured as the difference between review rating and user average review rating.
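The satisfaction measure defined above is straightforward to compute; a minimal sketch assuming Python with pandas (column names are illustrative, not from the thesis):

# satisfaction = review rating minus that reviewer's own average rating
import pandas as pd

reviews = pd.DataFrame({
    "user":  ["u1", "u1", "u2", "u2", "u2"],
    "stars": [5, 3, 2, 4, 3],
})
reviews["satisfaction"] = reviews["stars"] - reviews.groupby("user")["stars"].transform("mean")
print(reviews)  # positive values mean the review is above the user's own baseline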
17

Riley, Owen G. "Termediator-II: Identification of Interdisciplinary Term Ambiguity Through Hierarchical Cluster Analysis." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4030.

Abstract:
Technical disciplines are evolving rapidly, leading to changes in their associated vocabularies, and confusion in interdisciplinary communication occurs due to this evolving terminology. Two causes of confusion are multiple definitions (overloaded terms) and synonymous terms; the formal names for these two problems are polysemy and synonymy. Termediator-I, a web application built on top of a collection of glossaries, used definition count as a measure of term confusion in an attempt to identify confusing cross-disciplinary terms. As more glossaries were added to the collection, this measure became ineffective. This thesis provides a measure of term polysemy: term polysemy is effectively measured by semantically clustering the text concepts, or definitions, of each term and counting the number of resulting clusters. Hierarchical clustering uses a measure of proximity between the text concepts; three such measures are evaluated: cosine similarity, latent semantic indexing, and latent Dirichlet allocation. Two linkage types for determining cluster proximity during the hierarchical clustering process are also evaluated: complete linkage and average linkage. An attempt to crowdsource a viable clustering threshold by public consensus through a web application was unsuccessful. An alternative metric of polysemy, the convergence value, is identified and tested as a viable clustering threshold. Six resulting lists of terms ranked by cluster count based on convergence values are generated, one for each combination of similarity measure and linkage type. Each combination produces a competitive list, and no combination can be determined to be clearly superior. Semantic clustering successfully identifies polysemous terms, but each combination of similarity measure and linkage type provides slightly different results.
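As an illustrative sketch of the polysemy measure described above, assuming Python with scipy and scikit-learn (the glossary data and the convergence-value threshold of the thesis are not reproduced; the cut level below is arbitrary):

# Hierarchically cluster one term's definitions and count the resulting sense clusters.
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from sklearn.feature_extraction.text import TfidfVectorizer

definitions = [
    "a network port used for sending data",
    "an endpoint port for sending data between hosts",
    "a sweet fortified wine from Portugal",
]
X = TfidfVectorizer().fit_transform(definitions).toarray()
D = pdist(X, metric="cosine")        # pairwise cosine distances between definitions
Z = linkage(D, method="average")     # or method="complete" for complete linkage
labels = fcluster(Z, t=0.7, criterion="distance")  # cut the tree at an arbitrary threshold
print(len(set(labels)), "sense clusters")  # expect 2: networking vs. wine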
18

Shokat, Imran. "Computational Analyses of Scientific Publications Using Raw and Manually Curated Data with Applications to Text Visualization." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78995.

Abstract:
Text visualization is a field dedicated to the visual representation of textual data using computer technology. A large number of visualization techniques are available, and it is becoming harder for researchers and practitioners to choose the optimal technique for a particular task among the existing ones. To overcome this problem, the ISOVIS Group developed an interactive survey browser for text visualization techniques. ISOVIS researchers gathered papers that describe text visualization techniques or tools and categorized them according to a taxonomy, with several categories manually assigned to each visualization technique. In this thesis, we aim to analyze the dataset of this browser. We carried out several analyses to find temporal trends and correlations of the categories present in the browser dataset, and compared these categories with a computational approach. Our results show that some categories have become more popular than before, whereas others have declined in popularity. Cases of positive and negative correlation between various categories were found and analyzed. Comparisons between the manually labeled dataset and the results of computational text analyses were presented to the experts, with an opportunity to refine the dataset. The data analyzed in this thesis project is specific to the text visualization field; however, the methods used in the analyses can be generalized to other datasets of scientific literature surveys or, more generally, other manually curated collections of textual documents.
19

Rarivomanana, Jens A. "Système CADOC : génération fonctionnelle de test pour les circuits complexes." Phd thesis, Grenoble INPG, 1985. http://tel.archives-ouvertes.fr/tel-00319028.

Abstract:
The CADOC system is a computer-aided design tool for VLSI circuits based on the CADOC-LD language. The CADOC-LD language is presented in light of a study of the hardware description language CHDL. An application built on the CADOC-LD language uses timed symbolic execution and artificial intelligence techniques.
20

Malan, Rencia. "Optimalisering van leerbekwaamhede by graad nege-leerders : 'n vergelyking van enkele vakdidaktiese meetinstrumente." Diss., Pretoria : [s.n.], 2001. http://upetd.up.ac.za/thesis/available/etd-09192003-131325/.

21

Aguirre, Castillo José. "Optimisation of the bottom stirring praxis in a LD-LBE converter : Investigations and tests on phosphorous removal, nitrogen as stirring gas, and slopping." Thesis, Uppsala universitet, Oorganisk kemi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-265159.

Abstract:
The LD process, named after the cities of Linz and Donawitz, is used to convert pig iron into crude steel by blowing oxygen on top of the pig iron. An LD-LBE converter (Lance Bubbling Equilibrium) also stirs the melt through a bottom stirring system. Bottom stirring in an LD-LBE converter is believed to have a positive effect on phosphorus removal in its own right. Previous studies have shown that temperature and slag composition are the main factors affecting phosphorus removal: phosphorus binds to the slag more easily at low temperature and to slag with certain levels of dissolved calcium (a process additive). Different praxes were tested and better dephosphorisation was reached; the effect of bottom stirring on the dissolution of calcium additives is a possible explanation for the results and mechanisms presented in this study. The study also aimed to investigate the use of nitrogen as stirring gas instead of argon. Nitrogen is removed from the steel during the formation of carbon oxide gases. Nitrogen was used in varying amounts as stirring gas during the first half of the oxygen blow and proved safe to use as long as there was a high carbon content in the melt. However, using nitrogen beyond half of the blow proved risky for nitrogen-sensitive steels, even in small amounts, since there is not enough carbon left to degas the steel of nitrogen. Slopping happens when gas formed by the LD process is trapped in the slag: the slag level rises and sometimes floods the converter, resulting in yield losses. The influence of bottom stirring on slopping was studied, leading to the conclusion that slopping cannot be avoided by simply improving the bottom stirring. Although some verification studies remain to be done, if the suggestions based on the results of this thesis were employed, savings in oxygen and stirring gas consumption could be made, not least improvements in iron yield.
Iron-ore-based steel production begins with iron ore being charged into a blast furnace together with coke, lime and additives; out comes pig iron with high carbon and sulphur contents. The pig iron is transported to the steel plant in so-called torpedo cars. In some steel plants, e.g. SSAB Special Steels in Oxelösund, the pig iron is desulphurised in the torpedo car; in other plants desulphurisation is done in separate ladles. The sulphur is removed with, among other things, calcium carbide, which binds to the sulphur. The low-sulphur pig iron must then be cleaned of carbon to become steel, which is done in the LD (Linz-Donawitz) converter. The LD converter is charged with liquid pig iron that has a carbon content of 4.5 percent and a temperature of around 1350 degrees. The pig iron is cooled by adding roughly 20 percent scrap. An oxygen lance is then lowered into the converter above the melt and refining starts. The lance blows oxygen at ultrasonic speed, oxidising part of the iron as well as the carbon, silicon, manganese, phosphorus and other impurities in the pig iron. Carbon leaves the converter in the form of carbon monoxide gas, while the other oxidised impurities, together with iron oxide, form a so-called slag that floats on top of the melt. Slag formers are also added to improve the uptake of impurities into the slag. The process lasts about 17 minutes and is highly dependent on the slag that forms. During the process the melt is stirred by gases flushed through the bottom of the converter; the stirring evens out the composition and temperature of the melt. When no more carbon needs to be removed, the process is stopped. The steel temperature is then about 1700 degrees and the carbon content close to 0.05 percent. The steel is transferred to a ladle to separate it from the slag, is further refined in processes where the composition is adjusted to meet customer requirements, and is then cast into strands for transport to rolling mills or customers. This study deals with bottom stirring during the LD process at SSAB Special Steels's plant in Oxelösund. Stirring takes place through eight porous plugs in the bottom of the converter, blowing argon or nitrogen; the gas flow through the plugs is adjusted via a valve system, using preset programs during the blow. The primary function of the bottom stirring is to relieve the oxygen lance: without it, the lance must blow "harder" on the steel to remove carbon. This relief is why the process is also called LD-LBE, where LBE stands for Lance Bubbling Equilibrium. Bottom stirring is believed to have a positive effect on the removal of phosphorus from the steel. It is known from earlier work that temperature and slag composition are the main factors affecting dephosphorisation: phosphorus is taken up by the slag more easily at low temperatures and by slag with higher lime contents. Different stirring programs were tested and better phosphorus removal was reached. Bottom stirring proved to have positive effects that are theoretically linked to lime dissolution, and two possible explanatory mechanisms were found. The study also investigated the use of nitrogen as stirring gas instead of argon, since nitrogen is economically advantageous compared to argon. Nitrogen is dissolved in the pig iron charged into the converter and is removed from the steel during, and by means of, the carbon refining. It proved safe to use nitrogen from the start up to half of the oxygen blow for nitrogen-sensitive steel grades, after which argon was used; nitrogen used late in the blow was shown to give higher nitrogen contents.
Slopping is a rapid increase in slag volume that occurs when gas formed during refining of the melt is trapped in the slag and makes it "boil over". Slopping results in economic losses, since the slag leaving the converter during slopping is rich in iron. The possible influence of bottom stirring on slopping was studied, and it turned out that slopping cannot be avoided merely by optimising the bottom stirring.
22

Westin, Elin M. "Welds in the lean duplex stainless steel LDX 2101 : effect of microstructure and weld oxide on corrosion properties." Licentiate thesis, KTH, Materials Science and Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9299.

Abstract:
Duplex stainless steels are a very attractive alternative to austenitic grades due to their higher strength and good corrosion performance. The austenitic grades can often be welded autogenously, while the duplex grades normally require the addition of filler metal. This is to counteract segregation of important alloying elements and to give sufficient austenite formation to prevent precipitation of chromium nitrides, which could have a negative effect on impact toughness and pitting resistance. The corrosion performance of the recently developed lean duplex stainless steel LDX 2101 is higher than that of 304 and can reach the level of 316. This thesis summarises pitting resistance tests performed on laser and gas tungsten arc (GTA) welded LDX 2101. It is shown here that this material can be welded autogenously, but additions of filler metal and of nitrogen in the shielding gas, as well as the use of hybrid methods, increase austenite formation and pitting resistance by further suppressing the formation of chromium nitride precipitates in the weld metal. If the weld metal austenite formation is sufficient, the chromium nitride precipitates in the heat-affected zone (HAZ) could cause local pitting; however, this was not seen in this work. Instead, pitting occurred 1–3 mm from the fusion line, in the parent metal rather than in the high temperature HAZ (HTHAZ). This is suggested here to be controlled by the heat tint, and the effect of residual weld oxides on the pitting resistance is studied. The composition and thickness of the weld oxide formed on LDX 2101 and 2304 were determined using X-ray photoelectron spectroscopy (XPS). The heat tint on these lean duplex grades proved to contain significantly more manganese than has been reported for standard austenitic stainless steels in the 300 series. A new approach to heat tint formation is consequently presented: evaporation of material from the weld metal and subsequent deposition on the weld oxide are suggested to contribute to weld oxide formation. This is supported by element loss in LDX 2101 weld metal, and nitrogen additions to the GTA shielding gas further increase the evaporation.
23

Westin, Elin M. "Welds in the lean duplex stainless steel LDX 2101 : effect of microstructure and weld oxides on corrosion properties." Licentiate thesis, Stockholm : Industriell teknik och management, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9299.

24

Vecchi, Federica. "Analisi automatica della corporate reputation attraverso il topic modeling." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8384/.

Abstract:
This work discusses the importance of text analysis by computational means and presents the most widely used technique for this type of analysis: topic modeling. Some of the most commonly used algorithms are indicated and the main goals are described. It also introduces web mining for extracting information from the web, focusing on a particular technique called web scraping. The last section of the work describes a case study on the subject of privatization, divided into three phases: first, the search for documents and articles to analyze from the newspaper La Repubblica; second, the analysis of the collected documents using the MALLET software; and finally, the analysis of the topics produced by the program, to which labels are assigned in order to identify the sub-topics present in the documents of the collection.
25

Rossi, Espagnet Alberto. "Techno-Economic Assessment of Thermal Energy Storage integration into Low Temperature District Heating Networks." Thesis, KTH, Energiteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191485.

Abstract:
Thermal energy storage (TES) systems are technologies with the potential to enhance the efficiency and flexibility of the coming 4th generation low temperature district heating (LTDH). Their integration would enable the creation of smarter, more efficient networks, benefiting both utilities and end consumers. This study aims to develop a comparative assessment of TES systems, both latent and sensible heat based. First, a techno-economic analysis of several TES systems is conducted to evaluate their suitability for integration into LTDH. Then, potential scenarios of TES integration are proposed and analysed in a case study of an active LTDH network. This is complemented with a review of current DH legislation focused on the Swedish case, with the aim of taking the present situation into account, along with changes that may favour some technologies over others. The results of the analysis show that sensible heat storage is still preferred to latent heat when coupled with LTDH: the cost per kWh stored is still at least 15% higher for latent heat in systems below 5 MWh of storage capacity, although such systems require just half the volume. However, the cost of latent heat storage systems is expected to decline in the future, making them more competitive. From a system perspective, introducing TES into the network increases flexibility, which lowers heat production costs through load shifting: production units with lower marginal heat production costs run for longer periods and with higher efficiency, reducing the operating hours of the more expensive units during peak load conditions. In the case study, savings in the magnitude of 0.5k EUR/year are achieved through this operational strategy, with an investment cost of 2k EUR to purchase a water tank. These results may also be extended to the case where heat generation is replaced by renewable, intermittent energy sources, increasing profits and reducing fuel consumption and, consequently, emissions. This study represents a step forward in the development of a more efficient DH system through the integration of TES, which will play a crucial role in future smart energy systems.
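As a rough plausibility check on the figures above (simple payback, ignoring discounting and maintenance, which the abstract does not detail):

\[ \text{payback} = \frac{\text{investment}}{\text{annual savings}} = \frac{2000\ \text{EUR}}{500\ \text{EUR/year}} = 4\ \text{years} \]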
26

Di, Fiore Silvia. "La dimensione discorsiva della Politica di Coesione. Confronto fra Content Analysis e Topic Modeling." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17284/.

Abstract:
This work compares two types of analysis applied to a collection of articles from Il Sole 24 Ore concerning "Cohesion Policy": content analysis and topic modeling. The techniques used for both methods of analysis are described, and for topic modeling the most widely used algorithms and the corresponding software are indicated. The final section presents the case study, reporting the results obtained from a qualitative analysis of print articles provided by the Regione Puglia and the results of a topic-modeling analysis of the same articles in digital form using the MALLET software. Both analyses yielded a set of topics, to which labels were assigned to identify the sub-topics present in the documents of the collection. Finally, the topics emerging from the two types of analysis are compared.
27

Raposo, Carlos Olympio Lima. "Estudo experimental de compactação e expansão de uma escória de aciaria LD para uso em pavimentação." Universidade Federal do Espírito Santo, 2005. http://repositorio.ufes.br/handle/10/6184.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Steel slag, or BOF (Basic Oxygen Furnace) slag, is a by-product generated in integrated steel plants. The use of this material in bases and sub-bases of pavements may present technical, economical and environmental advantages compared to natural aggregates. However, problems like expansibility and the lack of technical criteria for its acceptance limit the use of steel slags in pavements. The expansibility of steel slags is mainly caused by hydration of free calcium and magnesium oxides (CaO and MgO). The purpose of this study is to evaluate the compaction and expansibility of a steel slag using laboratory tests, and thereby contribute to the definition of technical criteria for evaluating the expansibility of this material for use in bases and sub-bases of pavements. The steel slag of this study originates from an integrated steel plant located in Vitória, Espírito Santo, Brazil. In this work, the laboratory compaction tests were conducted using standard and modified effort, and for the expansibility tests three methods were used: PTM-130/78, JIS A 5015/92 and ASTM D 4792/00. The compaction tests of the steel slag did not yield an optimum water content, showing the same characteristic as granular materials. Statistical analysis of the compaction tests did not show significant differences between the two procedures (with and without reuse of material), between the two efforts of compaction (standard and modified) or between the two different samples (with and without treatment for expansion reduction). The statistical analysis of the expansibility tests using method PTM-130/78 showed that the water content of compaction was not statistically significant in the expansion results; that the influence of temperature on the expansion results was statistically significant; and that the influence of the effort of compaction on the expansion results was statistically significant, with the modified effort producing lower expansion values than the standard effort. A technical criterion for acceptance of lots of steel slag is also proposed here, using the PTM-130/78 test method. The criterion includes a sampling procedure, a statistically based methodology to calculate the minimum number of specimens, and a maximum limit of 3% expansion determined by the PTM-130/78 test method.
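PTM-130/78 reports expansion as the relative height increase of a compacted specimen; as a generic definition (the exact conditioning procedure is specified in the standard, not here), the proposed acceptance criterion reads

\[ E\ (\%) = \frac{h_{\text{final}} - h_{\text{initial}}}{h_{\text{initial}}} \times 100 \le 3\% . \]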
28

Wei, Zhihua. "The research on chinese text multi-label classification." Thesis, Lyon 2, 2010. http://www.theses.fr/2010LYO20025/document.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Text Classification (TC), an important field in information technology, has many valuable applications. Facing a sea of information resources, the objects of TC have become more complicated and diverse, and research on effective, practical TC technology is challenging. More and more researchers consider multi-label TC better suited to many applications. This thesis analyses the difficulties and problems in multi-label TC and in Chinese text representation, building on a large body of algorithms for single-label and multi-label TC. Addressing the high dimensionality of the feature space, the sparse distribution of text representations, and the poor performance of multi-label classifiers, it puts forward corresponding algorithms from several angles. To tackle the "curse of dimensionality" that arises when Chinese texts are represented by n-grams, a two-step feature selection algorithm is constructed, combining the filtering of rare features within classes with the selection of discriminative features across classes. The proper value of n, the strategy for feature weighting, and the correlation among features are also examined through extensive experiments, contributing several useful conclusions to research on n-gram representation of Chinese texts. In view of a disadvantage of the Latent Dirichlet Allocation (LDA) model, namely its arbitrary revision of variables during smoothing, a new smoothing strategy based on Tolerance Rough Sets (TRS) is put forward: it first constructs tolerance classes over the global vocabulary and then assigns values to out-of-vocabulary (OOV) words in each class according to their tolerance class. To improve the performance of multi-label classifiers and reduce computational complexity, a TC method based on the LDA model is applied to Chinese text representation: topics are extracted statistically from the texts, which are then represented by topic vectors. This shows competitive performance on both English and Chinese corpora. To further enhance multi-label classifiers, a compound classification framework is proposed that partitions the text space by computing upper and lower approximations, decomposing a multi-label TC problem into several single-label TC problems and several multi-label TC problems with fewer labels than the original: an unknown text is classified by a single-label classifier when it falls into the lower approximation of some class, and otherwise by the corresponding multi-label classifier. Finally, an application system, TJ-MLWC (Tongji Multi-label Web Classifier), was designed. It retrieves results directly from search engines and classifies them in real time using an improved Naïve Bayes classifier, making browsing more convenient: users can immediately locate the texts they are interested in according to the class information given by TJ-MLWC.
La thèse est centrée sur la Classification de texte, domaine en pleine expansion, avec de nombreuses applications actuelles et potentielles. Les apports principaux de la thèse portent sur deux points : Les spécificités du codage et du traitement automatique de la langue chinoise : mots pouvant être composés de un, deux ou trois caractères ; absence de séparation typographique entre les mots ; grand nombre d’ordres possibles entre les mots d’une phrase ; tout ceci aboutissant à des problèmes difficiles d’ambiguïté. La solution du codage en «n-grams »(suite de n=1, ou 2 ou 3 caractères) est particulièrement adaptée à la langue chinoise, car elle est rapide et ne nécessite pas les étapes préalables de reconnaissance des mots à l’aide d’un dictionnaire, ni leur séparation. La classification multi-labels, c'est-à-dire quand chaque individus peut être affecté à une ou plusieurs classes. Dans le cas des textes, on cherche des classes qui correspondent à des thèmes (topics) ; un même texte pouvant être rattaché à un ou plusieurs thème. Cette approche multilabel est plus générale : un même patient peut être atteint de plusieurs pathologies ; une même entreprise peut être active dans plusieurs secteurs industriels ou de services. La thèse analyse ces problèmes et tente de leur apporter des solutions, d’abord pour les classifieurs unilabels, puis multi-labels. Parmi les difficultés, la définition des variables caractérisant les textes, leur grand nombre, le traitement des tableaux creux (beaucoup de zéros dans la matrice croisant les textes et les descripteurs), et les performances relativement mauvaises des classifieurs multi-classes habituels
文本分类是信息科学中一个重要而且富有实际应用价值的研究领域。随着文本分类处理内容日趋复杂化和多元化,分类目标也逐渐多样化,研究有效的、切合实际应用需求的文本分类技术成为一个很有挑战性的任务,对多标签分类的研究应运而生。本文在对大量的单标签和多标签文本分类算法进行分析和研究的基础上,针对文本表示中特征高维问题、数据稀疏问题和多标签分类中分类复杂度高而精度低的问题,从不同的角度尝试运用粗糙集理论加以解决,提出了相应的算法,主要包括:针对n-gram作为中文文本特征时带来的维数灾难问题,提出了两步特征选择的方法,即去除类内稀有特征和类间特征选择相结合的方法,并就n-gram作为特征时的n值选取、特征权重的选择和特征相关性等问题在大规模中文语料库上进行了大量的实验,得出一些有用的结论。针对文本分类中运用高维特征表示文本带来的分类效率低,开销大等问题,提出了基于LDA模型的多标签文本分类算法,利用LDA模型提取的主题作为文本特征,构建高效的分类器。在PT3多标签分类转换方法下,该分类算法在中英文数据集上都表现出很好的效果,与目前公认最好的多标签分类方法效果相当。针对LDA模型现有平滑策略的随意性和武断性的缺点,提出了基于容差粗糙集的LDA语言模型平滑策略。该平滑策略首先在全局词表上构造词的容差类,再根据容差类中词的频率为每类文档的未登录词赋予平滑值。在中英文、平衡和不平衡语料库上的大量实验都表明该平滑方法显著提高了LDA模型的分类性能,在不平衡语料库上的提高尤其明显。针对多标签分类中分类复杂度高而精度低的问题,提出了一种基于可变精度粗糙集的复合多标签文本分类框架,该框架通过可变精度粗糙集方法划分文本特征空间,进而将多标签分类问题分解为若干个两类单标签分类问题和若干个标签数减少了的多标签分类问题。即,当一篇未知文本被划分到某一类文本的下近似区域时,可以直接用简单的单标签文本分类器判断其类别;当未知文本被划分在边界域时,则采用相应区域的多标签分类器进行分类。实验表明,这种分类框架下,分类的精确度和算法效率都有较大的提高。本文还设计和实现了一个基于多标签分类的网页搜索结果可视化系统(MLWC),该系统能够直接调用搜索引擎返回的搜索结果,并采用改进的Naïve Bayes多标签分类算法实现实时的搜索结果分类,使用户可以快速地定位搜索结果中感兴趣的文本。
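As an illustration of the thesis's core pipeline, representing texts by LDA topic vectors and feeding them to a multi-label classifier, here is a minimal Python sketch using scikit-learn. The toy corpus, the two labels, and the use of character n-grams (standing in for Chinese n-gram features) are assumptions; the thesis's rough-set machinery is not reproduced here.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

docs = ["stock markets fell on inflation fears",
        "the team won the championship final",
        "central bank policy moved bond markets",
        "injury forces star player out of the final"]
# Multi-label targets: each row marks membership in (finance, sports)
Y = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])

# Character n-grams stand in for the thesis's Chinese n-gram features
counts = CountVectorizer(analyzer="char", ngram_range=(1, 2)).fit_transform(docs)
# Represent each document by its LDA topic vector
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

# One binary classifier per label, trained on the topic vectors
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(topics, Y)
print(clf.predict(topics))
```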
29

Nymark, Marianne Kristine. "Taxonomy of the Rufous-naped lark (Mirafra africana) complex based on song analysis." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-435322.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The Rufous-naped lark Mirafra africana complex consists of 22 subspecies spread across the African continent. Several of the subspecies have recently been suggested as candidates for treatment as separate species. In this study, a comparative analysis was performed on the songs of seven of the subspecies: M. a. africana, M. a. athi, M. a. grisescens, M. a. kabalii, M. a. nyikae, M. a. transvaalensis and M. a. tropicalis. The results showed that M. a. athi, M. a. kabalii and M. a. nyikae are all highly divergent from each other as well as from the other four subspecies. In contrast, M. a. tropicalis, M. a. grisescens, M. a. africana and M. a. transvaalensis are not clearly separable from each other. Based on these results, I suggest that M. a. athi, M. a. kabalii and M. a. nyikae be classified as separate species, with M. a. africana, M. a. tropicalis, M. a. grisescens and M. a. transvaalensis forming a fourth species (M. africana sensu stricto). Finally, I conclude that more studies are needed on the subspecies of the Mirafra africana complex.
30

Uys, J. W. "A framework for exploiting electronic documentation in support of innovation processes." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/1449.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Thesis (PhD (Industrial Engineering))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: The crucial role of innovation in creating sustainable competitive advantage is widely recognised in industry today. Likewise, the importance of having the required information accessible to the right employees at the right time is well-appreciated. More specifically, the dependency of effective, efficient innovation processes on the availability of information has been pointed out in literature. A great challenge is countering the effects of the information overload phenomenon in organisations in order for employees to find the information appropriate to their needs without having to wade through excessively large quantities of information to do so. The initial stages of the innovation process, which are characterised by free association, semi-formal activities, conceptualisation, and experimentation, have already been identified as a key focus area for improving the effectiveness of the entire innovation process. The dependency on information during these early stages of the innovation process is especially high. Any organisation requires a strategy for innovation, a number of well-defined, implemented processes and measures to be able to innovate in an effective and efficient manner and to drive its innovation endeavours. In addition, the organisation requires certain enablers to support its innovation efforts which include certain core competencies, technologies and knowledge. Most importantly for this research, enablers are required to more effectively manage and utilise innovation-related information. Information residing inside and outside the boundaries of the organisation is required to feed the innovation process. The specific sources of such information are numerous. Such information may further be structured or unstructured in nature. However, an ever-increasing ratio of available innovation-related information is of the unstructured type. Examples include the textual content of reports, books, e-mail messages and web pages. This research explores the innovation landscape and typical sources of innovation-related information. In addition, it explores the landscape of text analytical approaches and techniques in search of ways to more effectively and efficiently deal with unstructured, textual information. A framework that can be used to provide a unified, dynamic view of an organisation's innovation-related information, both structured and unstructured, is presented. Once implemented, this framework will constitute an innovation-focused knowledge base that will organise and make accessible such innovation-related information to the stakeholders of the innovation process. Two novel, complementary text analytical techniques, Latent Dirichlet Allocation and the Concept-Topic Model, were identified for application with the framework. The potential value of these techniques as part of the information systems that would embody the framework is illustrated. The resulting knowledge base would cause a quantum leap in the accessibility of information and may significantly improve the way innovation is done and managed in the target organisation.
AFRIKAANSE OPSOMMING: Die belangrikheid van innovasie vir die daarstel van „n volhoubare mededingende voordeel word tans wyd erken in baie sektore van die bedryf. Ook die belangrikheid van die toeganklikmaking van relevante inligting aan werknemers op die geskikte tyd, word vandag terdeë besef. Die afhanklikheid van effektiewe, doeltreffende innovasieprosesse op die beskikbaarheid van inligting word deurlopend beklemtoon in die navorsingsliteratuur. „n Groot uitdaging tans is om die oorsake en impak van die inligtingsoorvloedverskynsel in ondernemings te bestry ten einde werknemers in staat te stel om inligting te vind wat voldoen aan hul behoeftes sonder om in die proses deur oormatige groot hoeveelhede inligting te sif. Die aanvanklike stappe van die innovasieproses, gekenmerk deur vrye assosiasie, semi-formele aktiwiteite, konseptualisering en eksperimentasie, is reeds geïdentifiseer as sleutelareas vir die verbetering van die effektiwiteit van die innovasieproses in sy geheel. Die afhanklikheid van hierdie deel van die innovasieproses op inligting is besonder hoog. Om op „n doeltreffende en optimale wyse te innoveer, benodig elke onderneming „n strategie vir innovasie sowel as „n aantal goed gedefinieerde, ontplooide prosesse en metingskriteria om die innovasieaktiwiteite van die onderneming te dryf. Bykomend benodig ondernemings sekere innovasie-ondersteuningsmeganismes wat bepaalde sleutelaanlegde, -tegnologiëe en kennis insluit. Kern tot hierdie navorsing, benodig organisasies ook ondersteuningsmeganismes om hul in staat te stel om meer doeltreffend innovasie-verwante inligting te bestuur en te gebruik. Inligting, gehuisves beide binne en buite die grense van die onderneming, word benodig om die innovasieproses te voer. Die bronne van sulke inligting is veeltallig en hierdie inligting mag gestruktureerd of ongestruktureerd van aard wees. „n Toenemende persentasie van innovasieverwante inligting is egter van die ongestruktureerde tipe, byvoorbeeld die inligting vervat in die tekstuele inhoud van verslae, boeke, e-posboodskappe en webbladsye. In hierdie navorsing word die innovasielandskap asook tipiese bronne van innovasie-verwante inligting verken. Verder word die landskap van teksanalitiese benaderings en -tegnieke ondersoek ten einde maniere te vind om meer doeltreffend en optimaal met ongestruktureerde, tekstuele inligting om te gaan. „n Raamwerk wat aangewend kan word om „n verenigde, dinamiese voorstelling van „n onderneming se innovasieverwante inligting, beide gestruktureerd en ongestruktureerd, te skep word voorgestel. Na afloop van implementasie sal hierdie raamwerk die innovasieverwante inligting van die onderneming organiseer en meer toeganklik maak vir die deelnemers van die innovasieproses. Daar word verslag gelewer oor die aanwending van twee nuwerwetse, komplementêre teksanalitiese tegnieke tot aanvulling van die raamwerk. Voorts word die potensiele waarde van hierdie tegnieke as deel van die inligtingstelsels wat die raamwerk realiseer, verder uitgewys en geillustreer.
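One way to picture the framework's use of Latent Dirichlet Allocation is topic-based retrieval over an innovation knowledge base: documents are mapped to topic vectors and a query is routed to the topically closest documents. The sketch below, with an invented four-document corpus, illustrates this with scikit-learn; the Concept-Topic Model is not implemented here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical innovation-related documents (reports, e-mails, web pages)
corpus = ["prototype battery cell failed thermal test",
          "customer survey shows demand for lighter casing",
          "new electrolyte chemistry improves battery life",
          "competitor filed a patent on casing materials"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)           # each document as a topic vector

# Route a new query to the most topically related documents
q = lda.transform(vec.transform(["casing material options"]))
sims = cosine_similarity(q, doc_topics).ravel()
for i in sims.argsort()[::-1]:
    print(f"{sims[i]:.2f}  {corpus[i]}")
```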
31

Le, Thi Khuyen. "Sparse precision matrix estimation in high dimension and application to medical imaging : hypothesis testing on some particular graphical models of GLASSO." Thesis, Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0129.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Notre recherche exploite deux caractéristiques principales du modèle GLASSO: la parcimonie et la monotonie. En se basant sur la parcimonie de ce modèle, nous proposons d'adapter la méthode d'analyse discriminante linéaire (ADL) en grande dimension, en incorporant une estimation parcimonieuse de la matrice de précision sur l'ensemble de la population. Afin d'améliorer la performance de cette méthode, nous proposons de réduire la dimension de données en sélectionnant des composantes connexes plus discriminantes. Cette sélection est basée sur une forme spéciale de la matrice de précision qui est diagonale par blocs. En particulier, chaque bloc est correspondant à une composante connexe dans le modèle GLASSO. En se basant sur la méthode d'analyse discriminante factorielle, nous définissons la capacité discriminante de chaque composante. Ensuite, la sélection est restreinte uniquement sur les variables dans les composantes dont les capacités discriminantes sont plus grandes. Tous ces méthodes sont appliquées sur les données réelles issues de l'imagerie cérébrale tomographie pour la classification de certains groupes de patients atteints comme fibromyalgie, dépression, ou de la maladie d'Alzheimer. Par ailleurs, en se basant sur la monotonie du modèle GLASSO, nous proposons un test de la signification des composantes sur des modèles d'intersection et d'union de GLASSO. Ces tests nous permettent de déterminer le modèle le plus parcimonieux qui contient toutes les composantes connexes du vrai modèle. En particulier, ces tests convergent sur des lois exponentielles avec des nombres raisonnables de variables et d’observations
Our research exploits two main characteristics of the GLASSO model: sparsity and monotony. Based on the sparsity of this model, we propose to adapt the linear discriminant analysis (LDA) method to high dimension by using a sparse estimated precision matrix for the whole population. To improve the performance of this method, we propose to reduce the dimension of the data by selecting the most discriminant connected components. This selection relies on the block-diagonal form of the precision matrix, in which each block corresponds to a connected component of the GLASSO model. Inspired by the factorial discriminant analysis method, we define a discriminant capacity for each component; the selection is then restricted to the variables within the components whose discriminant capacities are the largest. Our adapted LDA and connected-component selection methods are applied to real data from PET brain imaging for classifying patient groups such as fibromyalgia, depression, or Alzheimer's disease. In addition, based on the monotony of the GLASSO model, we formulate a significance test for the connected components on the intersection as well as the union of GLASSO models. These tests allow us to determine the sparsest estimated model which contains all the components of the real model; the test statistics converge to an exponential distribution for reasonable numbers of observations and variables.
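A minimal sketch of the first step described above, estimating a sparse precision matrix with the graphical lasso and reading off the connected components of its graph, might look as follows in Python. The regularisation value and the synthetic data are arbitrary, and the discriminant-capacity ranking of components is not implemented.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))            # stand-in for voxel/ROI features

model = GraphicalLasso(alpha=0.3).fit(X)  # sparse precision estimate
P = model.precision_

# The zero pattern of the precision matrix defines the GLASSO graph;
# its connected components give the block-diagonal structure.
adj = csr_matrix((np.abs(P) > 1e-8).astype(int) - np.eye(P.shape[0], dtype=int))
n_comp, labels = connected_components(adj, directed=False)
print(n_comp, labels)                     # component index of each variable
```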
32

Eriksson, Olle. "Studies on Premenstrual Dysphoria." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5812.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
33

Scheu, Julia. "Ut pictura philosophia." Doctoral thesis, Humboldt-Universität zu Berlin, Kultur-, Sozial- und Bildungswissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17801.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Die Untersuchung widmet sich der visuellen Thematisierung autoreferentieller Fragestellungen zur Genese sowie den Grundlagen und Zielen von Malerei in der italienischen Druckgraphik des ausgehenden 16. und 17. Jahrhunderts. Erstmals wird diese bildliche Auseinandersetzung mit abstrakten kunsttheoretischen Inhalten zum zentralen Untersuchungsgegenstand erklärt und anhand von vier hinsichtlich ihrer ikonographischen Dichte herausragenden druckgraphischen Beispielen - Federico Zuccaris Lamento della pittura, Pietro Testas Liceo della pittura, Salvator Rosas Genio di Salvator Rosa und Carlo Marattas Scuola del Disegno – vergleichend analysiert. Neben der Rekonstruktion der Entstehungszusammenhänge befasst sich die Analyse mit dem Verhältnis von Text und Bild, offenen Fragen der Ikonographie, der zeitgenössischen Verlagssituation sowie dem Adressatenkreis und somit schließlich der Motivation für jene komplexen bildlichen Reflexionen über Malerei. Als zentrale Gemeinsamkeit der kunsttheoretischen Blätter, welche im Kontext der römischen Akademiebewegung entstanden sind, konnte das Bestreben, die Malerei im Sinne einer Metawissenschaft über das neuzeitliche Wissenschaftspanorama hinauszuheben, erschlossen werden. Anhand einer umfassenden Neubewertung der einzigartigen Ikonographien wird erstmals aufgezeigt, dass dem Vergleich zwischen Malerei und Philosophie als der Mutter aller Wissenschaften in der visuellen Kunsttheorie des 17. Jahrhunderts eine vollkommen neuartige Bedeutung zukommt. Dieser hat neue Spielräume für die bildliche Definition des künstlerischen Selbstverständnisses eröffnet, die der traditionelle, aus dem Horazschen Diktum „Ut pictura poesis“ hervorgegangene Vergleich zwischen Malerei und Dichtung nicht in ausreichender Form bereit hielt. Folglich thematisiert die vorliegende Untersuchung auch die Frage nach dem spezifischen reflexiven Potenzial des Bildes, seiner medialen Autonomie und seiner möglichen Vorrangstellung gegenüber dem Medium der Sprache.
The study deals with the pictorial examination of self-referential topics relating to the genesis, fundamentals and aims of painting in Italian printmaking of the late 16th and 17th centuries. For the first time, research focuses on the pictorial treatment of abstract art-theoretical content, analysed comparatively through four examples that are exceptional in their iconographic density: the Lamento della pittura by Federico Zuccari, the Liceo della pittura by Pietro Testa, the Genio di Salvator Rosa by Salvator Rosa and the Scuola del Disegno by Carlo Maratta. Besides reconstructing their histories of origin, the study examines the relationship of image and text, open problems of iconography, the contemporary publishing situation, the target audience of these prints, and finally the motivation for such complex visual reflections on painting. The essential commonality of these art-theoretical prints, all of which arose within the context of the Roman academy movement, was found to be the ambition to elevate painting, as a kind of meta-science, above the panorama of the early modern sciences. By means of an extensive re-evaluation of each sheet's unique iconography, it is shown for the first time that the comparison between painting and philosophy, as the mother of all sciences, attained a completely new significance within the pictorial art theory of the 17th century. This novel comparison opened a wider range of possibilities for the visual definition of the artists' self-conception than the traditional comparison between painting and poetry, derived from Horace's dictum "Ut pictura poesis", could provide. Accordingly, the study also addresses the question of the specific reflexive capability of images, their medial autonomy and their potential primacy over the medium of language.
34

"Monitoring for Reliable and Secure Power Management Integrated Circuits via Built-In Self-Test." Master's thesis, 2019. http://hdl.handle.net/2286/R.I.54959.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Power management circuits are employed in most electronic integrated systems, including applications for automotive, IoT, and smart wearables. Oftentimes, these power management circuits become a single point of system failure, and since they are present in most modern electronic devices, they become a target for hardware security attacks. Digital circuits are typically more prone to security attacks than analog circuits, but malfunctions in digital circuitry can affect the analog performance and parameters of power management circuits. This research studies the effect such attacks have on the analog performance of power circuits, specifically linear and switching power regulators/converters. Apart from security attacks, these circuits suffer performance degradation due to temperature, aging, and load stress. Power management circuits usually consist of regulators or converters that regulate the load's supply voltage by employing a feedback loop, and the stability of the feedback loop is a critical parameter in the system design. Oftentimes, the passive components employed in these circuits shift in value under varying conditions and may cause instability within the power converter. Therefore, variations in the passive components, as well as malicious hardware security attacks, can degrade regulator performance and affect the system's stability. Traditional ways of measuring phase margin, which indicates system stability, require the converter to be placed in open loop and hence cannot be used while the system is deployed in the field under normal operation. Aging of components and security attacks may occur after the power management systems have completed post-production test and have been deployed, and they may not cause catastrophic failure of the system, making them difficult to detect. These two issues of component variation and security attack can be detected during normal operation over the product lifetime if the frequency response of the power converter can be monitored in-situ and in-field. This work presents a method to monitor the phase margin (stability) of a power converter without affecting its normal mode of operation by injecting white noise or a pseudo-random binary sequence (PRBS). Furthermore, this work investigates the analog performance parameters, including phase margin, that are affected by various digital hacks on the control circuitry associated with power converters. A case study of potential hardware attacks is completed for a linear low-dropout regulator (LDO).
Dissertation/Thesis
Master's Thesis, Electrical Engineering, 2019
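The monitoring idea in this abstract, injecting a PRBS and extracting the loop's frequency response during normal operation, can be sketched in a few lines of Python. The script below is a simplified illustration only: a stand-in integrator-plus-pole open-loop transfer function (with invented corner frequencies) is excited by a binary sequence, the response is estimated from cross- and auto-spectra, and the phase margin is read at the unity-gain crossover.

```python
import numpy as np
from scipy import signal

fs = 1e6                                     # monitor sample rate (assumed)
rng = np.random.default_rng(1)
prbs = np.sign(rng.standard_normal(2**15))   # PRBS-like binary excitation

# Stand-in open-loop dynamics: integrator plus one pole (values invented)
wu, wp = 2 * np.pi * 20e3, 2 * np.pi * 100e3
sysd = signal.cont2discrete(([wu * wp], [1.0, wp, 0.0]), 1 / fs, method="bilinear")
y = signal.lfilter(np.squeeze(sysd[0]), sysd[1], prbs)

# H1 estimate of the loop response from cross- and auto-spectra
f, Pxy = signal.csd(prbs, y, fs=fs, nperseg=4096)
_, Pxx = signal.welch(prbs, fs=fs, nperseg=4096)
H = Pxy / Pxx

k = np.argmin(np.abs(np.abs(H) - 1.0))       # unity-gain crossover bin
pm = 180 + np.degrees(np.angle(H[k]))
print(f"crossover ~{f[k] / 1e3:.1f} kHz, phase margin ~{pm:.0f} deg")
```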
35

Wu, Jian-Sheng, and 吳建生. "Prediction of stock price trend from news articles:using text mining and LDA algorithm." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/8a8agg.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Master's Program, Institute of Information Management
Academic year 103 (2014-15)
Most past work on predicting stock rises and falls either focuses on keywords or relies on technical and fundamental analysis, rather than studying how stocks are affected by specific topics in relevant news reports. This thesis investigates weekly prices of individual stocks in the food, semiconductor, and computer-peripheral categories on cnYES, and extracts the topics of news reports using Latent Dirichlet Allocation (LDA) and text mining. Keywords derived from news topics provide the basis for analysing and reasoning about those topics. Articles about foods, semiconductors, and computer peripherals dated from September 2014 to February 2015 are used as training data to build a set of topic models. For each topic, an expected value of the probability of its appearance in news reports is computed, yielding a predictive value every other day. Receiver Operating Characteristic (ROC) curves are used to evaluate the predicted results, and the best-performing results are adopted as the lookup table and threshold values for prediction. When a new article arrives, the probabilities that it relates to the relevant topics and the corresponding expected value are calculated and used as the predictive value for a table lookup; the closest value found in the lookup table is taken as the prediction. The results show that the food category performs best. Because of the flood of information, the other categories may contain erroneous or outdated information that cannot serve as new topics for judging the chance of a stock's rise or fall; consequently, the semiconductor and computer-peripheral categories do not perform well.
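The ROC-based step, turning a topic-derived predictive value into an up/down decision threshold, can be illustrated with scikit-learn. The eight weekly values and labels below are invented; Youden's J statistic is used as one common way to choose the threshold, though the thesis does not specify its selection rule.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical weekly data: x = topic-based predictive value,
# y = 1 if the stock rose that week
x = np.array([0.21, 0.35, 0.48, 0.52, 0.60, 0.71, 0.33, 0.66])
y = np.array([0,    0,    1,    0,    1,    1,    0,    1])

fpr, tpr, thresholds = roc_curve(y, x)
print("AUC:", auc(fpr, tpr))

# Youden's J picks the threshold that best separates up- from down-weeks
best = thresholds[np.argmax(tpr - fpr)]
print("decision threshold:", best)
print("predict rise for new value 0.58:", 0.58 >= best)
```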
36

Rosiello, Fernandina. "Relatório de estágio nas Edições Piaget Lda." Master's thesis, 2015. http://hdl.handle.net/10362/16153.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This report presents my work during the internship at Edições Piaget, undertaken as the non-teaching component of the Master's in Text Editing. It describes the duties delegated to and performed by me over the course of the internship, as well as the whole process of producing a book. The main tasks included: learning the history and activity of the publisher and its catalogue of works; and text preparation and proofreading, as part of the circuit of work activities carried out. A specific editorial project was also developed, for which I was responsible, whose main objective was the editing of a text on Contos com Música (Tales with Music). In this respect, the report describes the simulation of the real activity, from the selection of the texts to the production of a book, an exercise whose main elements are included in the annexes.
37

Yang, Ting-Hsuan, and 楊庭瑄. "Applying Techniques of Text Mining on Trading Investment Strategy:an LDA Approach to Distinguish the Topics." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/bmm33k.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Master's thesis
National Tsing Hua University
Department of Quantitative Finance
Academic year 105 (2016-17)
Sentiment analysis has attracted considerable attention in recent years and can be applied in many fields, for example network-security detection, prediction of presidential elections, and recommendation systems on shopping websites. This thesis applies sentiment analysis to trading strategies, using Federal Reserve articles to predict stock returns. Moreover, it uses the Latent Dirichlet Allocation (LDA) topic model to uncover the latent topics in Federal Reserve articles, with the goal of identifying the topics that most influence stock returns. Finally, the research aims to frame a profitable trading strategy based on these results. The thesis is inspired by the work of Tetlock (2007) and Tetlock, Saar-Tsechansky, and Macskassy (2008). First, the LDA topic model is used to classify words according to different topics. Second, paragraphs irrelevant to finance are eliminated in order to assess financial sentiment precisely and apply it to the trading strategy. Finally, derivatives are added to the trading strategy to hedge losses from incorrect sentiment predictions, and the performance of the modified strategy is examined.
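A toy version of the sentiment-to-signal step might look as follows; the word lists, the thresholds, and the topic weight (standing in for an LDA topic loading) are all invented for illustration.

```python
# A toy signal: per-document sentiment from a word list, weighted by how
# much the document loads on a "monetary policy" topic (weight assumed).
POS = {"growth", "expansion", "strong", "improved"}
NEG = {"recession", "risk", "weak", "decline"}

def sentiment(text):
    """Net positive-minus-negative word count, normalised by length."""
    words = text.lower().split()
    return (sum(w in POS for w in words) - sum(w in NEG for w in words)) / max(len(words), 1)

def trade_signal(text, topic_weight):
    """Long (+1) / short (-1) / flat (0) from topic-weighted sentiment."""
    s = sentiment(text) * topic_weight
    return 1 if s > 0.01 else (-1 if s < -0.01 else 0)

doc = "committee sees strong growth but notes downside risk"
print(trade_signal(doc, topic_weight=0.8))
```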
38

JHENG, YU-JIE, and 鄭宇傑. "A Comparative Study of Automatic Text Labeling Using Von Neumann Kernel and LDA Topic Model." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/74450264834068611898.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Master's thesis
National Taipei University
Graduate Institute of Information Management
Academic year 103 (2014-15)
There are tools and techniques capable of grouping vast document collections into cohesive clusters based on relatedness or similarity metrics between documents. The resulting clusters need to be properly labeled to facilitate a fast, holistic comprehension of the main themes or topics they carry. Various systems have employed theoretical or empirical approaches to label document clusters automatically. Our study applies Latent Dirichlet Allocation (LDA) to obtain the most likely keywords for the topics in the document clusters; the keywords are then composed into key phrases that serve as representative labels for the clusters. The appropriateness of the labels is evaluated using the evaluative framework proposed by Treeratpituk. We found that the LDA-based automatic labeling system generates proper cluster labels. We also compared the effectiveness of the LDA-based system with our home-grown kernel-based system: in most cases in our experiments, the LDA-based system generated better cluster labels than the kernel-based system.
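The labeling step itself, taking the most likely LDA keywords per topic and composing them into a label phrase, is easy to sketch with scikit-learn; the four toy documents and the choice of three keywords per label are assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["neural networks learn image features",
        "deep learning models classify images",
        "court ruling cites patent law precedent",
        "judge reviews intellectual property claim"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[::-1][:3]]   # most likely keywords
    print(f"cluster {k} label: {' '.join(top)}")         # composed into a phrase
```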
39

Zaplatílková, Lucie. "Vztah fyzické zdatnosti a studijního prospěchu žáků ZŠ." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-415576.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Name: Relation between physical fitness and study results of primary school students. Goal: The goal of the thesis is to examine the physical fitness of primary school students, compare it with their study results, and determine whether there is any relationship between these two variables. Methods: Physical fitness is tested with the tests and norms of Unifittest (6-60) (Měkota, Kovář, 2002), and study results are collected with questionnaires; the answers are then matched with each student's Unifittest results and processed with statistical methods. Results: The Unifittest (6-60) results showed an above-average or well above-average level of fitness in 89% of individuals; only 11% of participants reached an average score. The best performance was found among 9th-grade girls and 7th-grade boys. The Spearman correlation coefficient showed a medium-strong relation between the Unifittest (6-60) results and grades in Czech language, foreign language, and mathematics in two categories: 6th-grade girls (rₛ = 0.51) and 8th-grade girls (rₛ = 0.56). The other categories showed very weak correlations. Keywords: Unifittest, physical fitness, studying, youth, sit-ups, pull-ups, long jump, Cooper test
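The statistical step reported here is a Spearman rank correlation between fitness scores and grades; a minimal Python sketch with invented scores (Czech grades: 1 is best) is shown below.

```python
from scipy.stats import spearmanr

# Hypothetical scores: Unifittest (6-60) totals and mathematics grades (1 = best)
fitness = [22, 18, 25, 15, 20, 27, 16, 24]
grades = [2, 3, 1, 4, 2, 1, 3, 2]

rho, p = spearmanr(fitness, grades)
# Negative rho here: fitter pupils tend to have better (numerically lower) grades
print(f"r_s = {rho:.2f}, p = {p:.3f}")
```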
40

Koštial, Martin. "Získavanie a analýza dát pre oblasť crowdfundingu." Master's thesis, 2019. http://www.nusl.cz/ntk/nusl-428891.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The thesis deals with the acquisition of crowdfunding data and its analysis. The theoretical part describes available technologies and algorithms for data analysis. In the practical part, the data collection is implemented, and data mining and text mining algorithms are applied to the collected data.
41

HSIANG, CHUANG KAI, and 莊凱翔. "The prediction of trend toward stock price by text mining and sentiment analysis on social media: Using SVM and LDA Algorithm." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/mqp258.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Master's Program, Institute of Information Management
Academic year 106 (2017-18)
In recent years, text mining has been widely applied. This paper uses text mining to explore social media content and classification algorithms to predict future stock trends. Latent Dirichlet Allocation, sentiment analysis, and other text mining methods are applied to social media content collected from the Internet; in addition, the study uses technical indicators, including Williams %R, the Psychological Line, and On Balance Volume, among others, to predict stock prices on the Taiwan stock market. LDA is used to derive topics from the social media content, and sentiment analysis of that content yields sentiment scores. A support vector machine is then trained to obtain the accuracy of the social-media-based prediction and to compare how sentiment features and topic vectors affect the correct-classification rate.
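A compact sketch of the indicator-plus-SVM part of the pipeline is given below; the random-walk prices, the 14-day Williams %R window, and the train/test split are invented, and the text-mining features are omitted for brevity.

```python
import numpy as np
import pandas as pd
from sklearn.svm import SVC

rng = np.random.default_rng(0)
px = pd.Series(100 + rng.standard_normal(300).cumsum())       # toy prices
vol = pd.Series(rng.integers(1000, 5000, 300).astype(float))  # toy volumes

# Williams %R over a 14-day window
hh, ll = px.rolling(14).max(), px.rolling(14).min()
willr = -100 * (hh - px) / (hh - ll)

# On Balance Volume: cumulative volume signed by the day's price change
obv = (np.sign(px.diff()).fillna(0) * vol).cumsum()

X = pd.concat([willr, obv], axis=1).dropna()
y = (px.shift(-1) > px).astype(int).loc[X.index]   # next-day up/down label

clf = SVC(kernel="rbf").fit(X.iloc[:-50], y.iloc[:-50])
print("test accuracy:", clf.score(X.iloc[-50:], y.iloc[-50:]))
```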
42

Correia, Acácio Filipe Pereira Pinto. "Towards Preemptive Text Edition using Topic Matching on Corpora." Master's thesis, 2016. http://hdl.handle.net/10400.6/6368.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Nowadays, the results of scientific research are only recognized when published in papers for international journals or magazines of the respective area of knowledge. This perspective reflects the importance of having the work reviewed by peers. The revision encompasses a thorough analysis of the work performed, including the quality of the writing and whether the study advances the state of the art, among other details. For these reasons, with the publishing of the document, other researchers have an assurance of the high quality of the study presented and can, therefore, make direct use of the findings in their own work. The publishing of documents creates a cycle of information exchange responsible for speeding up the progress behind the development of new techniques, theories and technologies, resulting in added value for the entire society. Nonetheless, the existence of a detailed revision of the content sent for publication requires additional effort and dedication from its authors. They must make sure that the manuscript is of high quality, since sending a document with mistakes conveys an unprofessional image of the authors, which may result in rejection by the journal or magazine. The objective of this work is to develop an algorithm capable of assisting in the writing of this type of document, by proposing suggestions of possible improvements or corrections according to its specific context. The general idea of the proposed solution is for the algorithm to calculate suggestions of improvements by comparing the content of the document being written to that of similar published documents in the field. In this context, a study was performed on Natural Language Processing (NLP) techniques used in the creation of models for representing a document and its subjects. NLP provides the tools for creating models to represent documents and identify their topics; the main concepts include n-grams and topic modeling. The study also included an analysis of work performed in the field of academic writing: the structure and contents of this type of document, some of the characteristics common to high-quality articles, and the tools developed with the objective of helping in their writing. The developed algorithm derives from the combination of several tools backed by a collection of documents, as well as the logic connecting all components, implemented in the scope of this Master's. The collection of documents consists of the full text of articles from different areas, including Computer Science, Physics and Mathematics, among others. The topics of these documents were extracted and stored in order to be fed to the algorithm. By comparing the topics extracted from the document under analysis with those from the documents in the collection, it is possible to select its closest documents and use them for the creation of suggestions. Through a set of tools for syntactic analysis, synonym search and morphological realization, the algorithm is capable of proposing word replacements that are more commonly utilized in a given field of knowledge. Both objective and subjective tests were conducted on the algorithm. They demonstrate that, in some cases, the algorithm proposes suggestions which bring the terms used in the document closer to the terms most utilized in the state of the art of a defined scientific field.
This points towards the idea that usage of the algorithm should improve the quality of the documents, as they become more similar to the ones already published. Even though the improvements to the documents are minimal, they should be understood as a lower bound for the real utility of the algorithm. This statement is partially justified by the existence of several parsing errors in both the training and test sets, resulting from the parsing of the PDF files of the original articles, which can be improved in a production system. The main contributions of this work include the presentation of the study performed on the state of the art, the design and implementation of the algorithm, and the text editor developed as a proof of concept. The analysis of the specificity of the context, which results from the tests performed on different areas of knowledge, and the large collection of documents gathered during this Master's program are also important contributions of this work.
Hoje em dia, a realização de uma investigação científica só é valorizada quando resulta na publicação de artigos científicos em jornais ou revistas internacionais de renome na respetiva área do conhecimento. Esta perspetiva reflete a importância de que os estudos realizados sejam validados por pares. A validação implica uma análise detalhada do estudo realizado, incluindo a qualidade da escrita e a existência de novidades, entre outros detalhes. Por estas razões, com a publicação do documento, outros investigadores têm uma garantia de qualidade do estudo realizado e podem, por isso, utilizar o conhecimento gerado para o seu próprio trabalho. A publicação destes documentos cria um ciclo de troca de informação que é responsável por acelerar o processo de desenvolvimento de novas técnicas, teorias e tecnologias, resultando na produção de valor acrescido para a sociedade em geral. Apesar de todas estas vantagens, a existência de uma verificação detalhada do conteúdo do documento enviado para publicação requer esforço e trabalho acrescentado para os autores. Estes devem assegurar-se da qualidade do manuscrito, visto que o envio de um documento defeituoso transmite uma imagem pouco profissional dos autores, podendo mesmo resultar na rejeição da sua publicação nessa revista ou ata de conferência. O objetivo deste trabalho é desenvolver um algoritmo para ajudar os autores na escrita deste tipo de documentos, propondo sugestões para melhoramentos tendo em conta o seu contexto específico. A ideia genérica para solucionar o problema passa pela extração do tema do documento a ser escrito, criando sugestões através da comparação do seu conteúdo com o de documentos científicos antes publicados na mesma área. Tendo em conta esta ideia e o contexto previamente apresentado, foi realizado um estudo de técnicas associadas à área de Processamento de Linguagem Natural (PLN). O PLN fornece ferramentas para a criação de modelos capazes de representar o documento e os temas que lhe estão associados. Os principais conceitos incluem n-grams e modelação de tópicos (topic modeling). Para concluir o estudo, foram analisados trabalhos realizados na área dos artigos científicos, estudando a sua estrutura e principais conteúdos, sendo ainda abordadas algumas características comuns a artigos de qualidade e ferramentas desenvolvidas para ajudar na sua escrita. O algoritmo desenvolvido é formado pela junção de um conjunto de ferramentas e por uma coleção de documentos, bem como pela lógica que liga todos os componentes, implementada durante este trabalho de mestrado. Esta coleção de documentos é constituída por artigos completos de algumas áreas, incluindo Informática, Física e Matemática, entre outras. Antes da análise de documentos, foi feita a extração de tópicos da coleção utilizada. Deste forma, ao extrair os tópicos do documento sob análise, é possível selecionar os documentos da coleção mais semelhantes, sendo estes utilizados para a criação de sugestões. Através de um conjunto de ferramentas para análise sintática, pesquisa de sinónimos e realização morfológica, o algoritmo é capaz de criar sugestões de substituições de palavras que são mais comummente utilizadas na área. Os testes realizados permitiram demonstrar que, em alguns casos, o algoritmo é capaz de fornecer sugestões úteis de forma a aproximar os termos utilizados no documento com os termos mais utilizados no estado de arte de uma determinada área científica. 
Isto constitui uma evidência de que a utilização do algoritmo desenvolvido pode melhorar a qualidade da escrita de documentos científicos, visto que estes tendem a aproximar-se daqueles já publicados. Apesar dos resultados apresentados não refletirem uma grande melhoria no documento, estes deverão ser considerados uma baixa estimativa ao valor real do algoritmo. Isto é justificado pela presença de inúmeros erros resultantes da conversão dos documentos pdf para texto, estando estes presentes tanto na coleção de documentos, como nos testes. As principais contribuições deste trabalho incluem a partilha do estudo realizado, o desenho e implementação do algoritmo e o editor de texto desenvolvido como prova de conceito. A análise de especificidade de um contexto, que advém dos testes realizados às várias áreas do conhecimento, e a extensa coleção de documentos, totalmente compilada durante este mestrado, são também contribuições do trabalho.
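The word-replacement step, proposing a synonym that is more common in the field's published corpus, can be sketched with NLTK's WordNet; the frequency table is invented, and the sketch assumes the WordNet data has been downloaded (`nltk.download("wordnet")`). Syntactic analysis and morphological realization are omitted.

```python
from collections import Counter
from nltk.corpus import wordnet  # assumes nltk.download("wordnet") was run

# Term counts from published papers in the target field (toy numbers)
field_counts = Counter({"evaluate": 120, "assess": 40, "examine": 15})

def suggest(word):
    """Propose a synonym that is more frequent in the field's corpus."""
    synonyms = {l.name().replace("_", " ")
                for s in wordnet.synsets(word) for l in s.lemmas()}
    ranked = sorted(synonyms, key=lambda w: -field_counts.get(w, 0))
    best = ranked[0] if ranked else word
    return best if field_counts.get(best, 0) > field_counts.get(word, 0) else word

print(suggest("assess"))  # -> "evaluate" if WordNet links them and it is more common
```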
43

BOUDOVÁ, Adéla. "Barevná modifikace Warteggova kresebného testu - typický způsob zpracování u dětí se SPU." Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-136129.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This graduation thesis deals with the typical characteristics of children diagnosed with learning disabilities in the color modification of the Wartegg drawing test. The theoretical part characterizes the basis for the research: it covers projective techniques and their use with children, including the Wartegg drawing test and its modifications; specific learning disabilities, their social aspects, the artistic expression of children with this diagnosis, and children's graphic skills in general; and, finally, colors, their use in psychodiagnostics, and their symbolism. This part was prepared by analyzing the literature, and its aim is to characterize the Wartegg drawing test and its color modification. In the empirical part, selected drawings are analyzed in terms of the processing of the individual fields of the Wartegg drawing test, and the results are processed statistically; the results of the research group are compared with a control group. The aim of the thesis is to determine the typical marks in the processing of the color modification of the Wartegg drawing test by children diagnosed with learning disabilities compared to intact children.
44

Zondo, Raymond Mnyamezeli Mlungisi. "The replacement of the doctrine of pith and marrow by the catnic test in English Patent Law : a historical evaluation." Diss., 2012. http://hdl.handle.net/10500/5697.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This dissertation is a historical evaluation of the movement of the English courts from the doctrine of pith and marrow to the Catnic test in the determination of non-textual infringement of patents. It considers how and why the doctrine was replaced with the Catnic test. It concludes that this movement occurred as a result of the adoption by a group of judges of literalism in the construction of patents, while another group dissented and maintained the correct application of the doctrine. Although the Court of Appeal and the House of Lords initially approved the literalist approach, they, after realising its untenability, adopted the dissenters' approach, but ultimately adopted the Catnic test, in which features of the dissenters' approach were included. The dissertation concludes that the doctrine of pith and marrow, correctly applied, should have been retained, as the Catnic test creates uncertainty and confusion.
Mercantile Law
LL.M.
