Dissertations on the topic "LDH test"
Browse the top 44 dissertations selected for research on the topic "LDH test".
Balášová, Patricie. "Příprava a charakterizace moderních krytů ran." Master's thesis, Vysoké učení technické v Brně. Fakulta chemická, 2021. http://www.nusl.cz/ntk/nusl-449701.
Nelson, Stephanie Anne. "Associations Between Intelligence Test Scores and Test Session Behavior in Children with ADHD, LD, and EBD." ScholarWorks @ UVM, 2008. http://scholarworks.uvm.edu/graddis/159.
Harrysson, Mattias. "Neural probabilistic topic modeling of short and messy text." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189532.
Exploring enormous amounts of user-generated data through topics suggests a new way of finding useful information. The topics are assumed to be "hidden" and must be "uncovered" with statistical methods such as topic modeling. However, user-generated data is generally short and messy: informal chat conversations, heavy slang, and "noise" such as URLs or other forms of pseudo-text. This kind of data is hard to process for most natural language algorithms, topic modeling included. This work set out to find, in a comparative study, the method that objectively yields the better topics from short and messy text. The methods compared were latent Dirichlet allocation (LDA), Re-organized LDA (RO-LDA), a Gaussian Mixture Model (GMM) with distributed representations of words, and a novel method named Neural Probabilistic Topic Modeling (NPTM) based on earlier work. The conclusion that can be drawn is that NPTM tends to give better topics on short and messy text than LDA and RO-LDA, while GMM failed to produce any meaningful results at all. The results are less conclusive because NPTM suffers from long running times, which meant that enough samples could not be obtained for a statistical test.
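LDA, the baseline in the comparison above, can be sketched in a few lines. This is a minimal illustration using scikit-learn, not code from the thesis; the toy "short and messy" corpus, the stop-word handling, and the topic count are assumptions chosen only for the example.

```python
# Minimal LDA sketch (illustrative only, not the thesis implementation).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "lol new phone battery dies fast http://t.co/x",   # short, noisy text
    "battery life on this phone is bad",
    "match tonight, great goal!!",
    "what a goal, best match of the season",
]

# Bag-of-words counts; real short-text pipelines would also strip URLs/slang.
counts = CountVectorizer(stop_words="english").fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)   # each row is a per-document topic mixture
print(doc_topics.shape)              # (4, 2)
```

Each row of `doc_topics` sums to 1, which is exactly the kind of document representation that short-text variants such as RO-LDA or NPTM try to improve on.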
Testa, Luca. "Contribution to the Built-In Self-Test for RF VCOs." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14011/document.
This work deals with the study and realization of Built-In Self-Tests (BIST) for RF VCOs (Voltage Controlled Oscillators). The increasing complexity of RF integrated circuits is creating an obstacle to the correct measurement of the main RF blocks of any transceiver: some nodes are not accessible, the voltage excursion of the signals is getting lower and lower, and high-frequency signals cannot be driven off the die without major degradation. Common test techniques therefore become very expensive and time-consuming. Wafer sort is addressed first. The proposed solution is the implementation of a BIST strategy able to discriminate between faulty and good circuits during the wafer test. The chosen methodology is the structural (fault-oriented) test. A fault-coverage campaign is carried out to find the quantity to monitor on-chip that maximizes the probability of finding all possible physical defects in the VCO. The analysis reveals that fault coverage is maximized if the peak-to-peak output voltage is monitored. The complete on-chip characterization of the VCO is then addressed, for chip validation and process monitoring. The information that needs to be extracted on-chip concerns the amplitude of the signal, the consumption of the VCO, the frequency of oscillation, its conversion gain (voltage-to-frequency) and possibly some information on the phase noise. A silicon demonstrator for wafer-sort purposes is implemented in the ST CMOS 65nm process. It includes a 3.5GHz VCO, an LDO, a temperature- and supply-voltage-independent voltage reference, a peak-to-peak voltage detector and a comparator. The Vpp detector outputs a DC voltage that is compared to a predefined acceptance boundary, and the BIST outputs a logic pass/fail signal. Attention then turns to the proposed architecture for an on-chip frequency meter able to measure the RF frequency with high accuracy.
Behavioral simulations using VHDL-AMS lead to the conclusion that a TDC (Time-to-Digital Converter) is the best solution for this goal, opening the road to the measurement of long-term jitter using the same TDC.
Adkins, Jason Michael. "Politics from the Pulpit: A Critical Test of Elite Cues in American Politics." Kent State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=kent1531927892623716.
Potter, Mark D. "Using Graphic Organizers with Scriptural Text: Ninth-Grade Latter-Day Saint (LDS) Students’ Comprehension of Doctrinal Readings and Concepts." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/1027.
Alsadhan, Majed. "An application of topic modeling algorithms to text analytics in business intelligence." Thesis, Kansas State University, 2014. http://hdl.handle.net/2097/17580.
Department of Computing and Information Sciences
Doina Caragea
William H. Hsu
In this work, we focus on the task of clustering businesses in the state of Kansas based on the content of their websites and their business listing information. Our goal is to cluster the businesses while overcoming the challenges facing current approaches, such as data noise, the low number of clustered businesses, and the lack of an evaluation approach. We propose an LSA-based approach that analyzes the businesses' data, produces business representations in a reduced space, and then clusters the businesses by applying the Bisecting K-Means algorithm. We also apply an existing LDA-based approach to cluster the businesses and compare its results with those of our proposed LSA-based approach. We evaluate the results using a human-expert-based evaluation procedure and visualize the produced clusters using Google Earth and Tableau. According to our evaluation procedure, the LDA-based approach performed slightly better than the LSA-based approach. However, the LDA-based approach had some limitations: a low number of clustered businesses, and the inability to produce a hierarchical tree of the clusters. With the LSA-based approach, we were able to cluster all the businesses and produce a hierarchical tree for the clusters.
Svensson, Karin, and Johan Blad. "Exploring NMF and LDA Topic Models of Swedish News Articles." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-429250.
Giehl, Nina Caprice [Verfasser], and H. G. [Akademischer Betreuer] Wahl. "Interferenz eines homogenen Tests für LDL-Cholesterin durch Lipoprotein-X / Nina Caprice Giehl. Betreuer: H. G. Wahl." Marburg : Philipps-Universität Marburg, 2012. http://d-nb.info/1028072619/34.
Ljungberg, Lucas. "Using unsupervised classification with multiple LDA derived models for text generation based on noisy and sensitive data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255010.
Creating models that generate contextual answers to questions is a hard problem to begin with, and one that becomes even harder when the available data contains both noise and sensitive information. Finding models and methods that can handle these difficulties, so that even problematic data can be used productively, is both important and of great interest. This thesis proposes a model based on a pair of cooperating topic models with separate responsibilities (LDA and GSDMM) to mitigate the problematic properties of the data. The model is tested on a real dataset exhibiting these difficulties as well as on a dataset without them. The goals are to 1) inspect the behavior of both topic models to see whether they can represent the data in such a way that other models can use them as input or output, and 2) understand which of these difficulties can be handled as a result. The results show that the topic models can represent the semantics and meaning of documents well enough to produce well-behaved input for other models, and that this representation can also handle large vocabularies and noise in the text. The results further show that the topic grouping of the response data is well-behaved enough to be used as a target for classification models, such that correct sentences can be generated in response.
Viatkin, Aleksandr. "Development of a Test Bench for Multilevel Cascaded H-Bridge Converter with Self-Balancing Level Doubling Network." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14974/.
Malan, Gunce. "Do Personality Tests have a place in Academic Preparation of Undergraduate Hospitality Students." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4533.
Ponweiser, Martin. "Latent Dirichlet Allocation in R." WU Vienna University of Economics and Business, 2012. http://epub.wu.ac.at/3558/1/main.pdf.
Series: Theses / Institute for Statistics and Mathematics
Pellegrinotti, Idico Luiz 1946. "Analise comparativa das atividades da lactatodesidrogenase (LDH) e creatinafosfoquinase (CPK) no soro e na saliva de individuos treinados em (atletismo, futebol e voleibol) e não treinados submetidos ao teste de Cooper." [s.n.], 1987. http://repositorio.unicamp.br/jspui/handle/REPOSIP/289646.
Master's dissertation, Universidade Estadual de Campinas, Faculdade de Odontologia de Piracicaba
Abstract: The behaviour of LDH and CPK in the saliva and serum of trained and untrained persons submitted to the Cooper test was studied in this work. 37 male subjects were distributed in two groups. Group I: 14 untrained subjects. Group II: 23 trained subjects, distributed in 3 subgroups: II1, 06 trained in athletics; II2, 08 trained in soccer; II3, 09 trained in volleyball. The activities of LDH and CPK were determined in both groups at three times: A, after a rest period; B, 1 minute after the Cooper test; and C, 3 hours after the same test. The maximum VO2 was also measured. The correlation between the LDH activities obtained in saliva and blood allows us to conclude that the enzymatic activities in saliva can be considered an indicator of the same activities in blood. LDH activity also proved to be an acceptable indicator of the specificity of the type of training and of anaerobic glycolysis, while CPK activity appeared to be a good indicator of the organism's adaptation to physical training and of the degree of effort performed.
Master's degree
Physiology
Master in Oral and Dental Biology and Pathology
Dwyer, Eleanor A. "Price, Perceived Value and Customer Satisfaction: A Text-Based Econometric Analysis of Yelp! Reviews." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/scripps_theses/715.
Riley, Owen G. "Termediator-II: Identification of Interdisciplinary Term Ambiguity Through Hierarchical Cluster Analysis." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4030.
Shokat, Imran. "Computational Analyses of Scientific Publications Using Raw and Manually Curated Data with Applications to Text Visualization." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78995.
Rarivomanana, Jens A. "Système CADOC : génération fonctionnelle de test pour les circuits complexes." Phd thesis, Grenoble INPG, 1985. http://tel.archives-ouvertes.fr/tel-00319028.
Malan, Rencia. "Optimalisering van leerbekwaamhede by graad nege-leerders 'n vergelyking van enkele vakdidaktiese meetinstrumente /." Diss., Pretoria : [s.n.], 2001. http://upetd.up.ac.za/thesis/available/etd-09192003-131325/.
Aguirre, Castillo José. "Optimisation of the bottom stirring praxis in a LD-LBE converter : Investigations and tests on phosphorous removal, nitrogen as stirring gas, and slopping." Thesis, Uppsala universitet, Oorganisk kemi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-265159.
Ore-based steel production begins with iron ore being charged into a blast furnace together with coke, lime and additives. Out comes hot metal with high carbon and sulphur contents. The hot metal is transported to the steel plant in so-called torpedo cars. In some steel plants, e.g. SSAB Special Steels in Oxelösund, the hot metal is desulphurised in the torpedo car; in other plants desulphurisation is done in separate ladles. The sulphur is removed using, among other things, calcium carbide, which binds to the sulphur. The low-sulphur hot metal must then have its carbon removed to become steel. This is done in an LD (Linz-Donawitz) converter. The LD converter is charged with liquid hot metal that has a carbon content of 4.5 percent and a temperature of around 1350 degrees. The hot metal is cooled by adding about 20 percent scrap. An oxygen lance is then lowered into the converter above the melt and the refining starts. The lance blows oxygen at supersonic speed, oxidising some of the iron as well as the carbon, silicon, manganese, phosphorus and other impurities in the hot metal. Carbon leaves the converter as carbon monoxide gas. Other oxidised impurities, together with iron oxide, form a so-called slag that floats on top of the melt. So-called slag formers are also added to improve the uptake of impurities into the slag. The process lasts about 17 minutes and depends strongly on the slag that forms. During the process, the melt is stirred by gases flushed through the bottom of the converter; the stirring evens out the composition and temperature of the melt. When carbon no longer needs to be removed, the process is stopped. The steel temperature is then about 1700 degrees and the carbon content is close to 0.05 percent. The steel is then transferred to a ladle to separate it from the slag. It is further refined in various processes where the composition is adjusted to meet customer requirements, and is then cast into strands for transport to rolling mills or customers.
This study concerns bottom stirring during the LD process at SSAB Special Steels' steel plant in Oxelösund. Stirring takes place through eight porous plugs in the bottom of the converter, which blow argon or nitrogen. The gas flow through the plugs is adjusted via a valve system, and during the blow the stirring follows pre-set programs. The primary function of bottom stirring is to relieve the oxygen lance: without it, the lance must blow "harder" on the steel to remove carbon. This relief is why the process is also called LD-LBE, where LBE stands for Lance Bubbling Equilibrium. Bottom stirring is believed to have a positive effect on the removal of phosphorus from the steel. It is known from earlier work that temperature and slag composition are the largest factors affecting phosphorus removal: phosphorus is taken up more easily by the slag at low temperatures and at higher lime contents. Different stirring programs were tested and better phosphorus removal was achieved. Bottom stirring proved to have positive effects that are theoretically linked to lime dissolution, and two possible explanatory mechanisms were found. The study also investigated the use of nitrogen as stirring gas instead of argon, since nitrogen is economically advantageous compared to argon. Nitrogen is dissolved in the hot metal charged into the converter, and the nitrogen gas leaves the steel during, and with the help of, the carbon removal. It proved safe to use nitrogen from the start up to the middle of the oxygen blow on nitrogen-sensitive steel grades, after which argon was used; nitrogen used late in the blow was shown to give higher nitrogen contents. Slopping is a rapid volume increase of the slag that occurs when gas formed during refining is trapped in the slag and makes it "boil over". Slopping results in economic losses, since the slag leaving the converter during slopping is rich in iron. The possible influence of bottom stirring on slopping was studied; it turned out that slopping cannot be avoided by optimising the bottom stirring alone.
Westin, Elin M. "Welds in the lean duplex stainless steel LDX 2101 : effect of microstructure and weld oxide on corrosion properties." Licentiate thesis, KTH, Materials Science and Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9299.
Duplex stainless steels are a very attractive alternative to austenitic grades due to their higher strength and good corrosion performance. The austenitic grades can often be welded autogenously, while the duplex grades normally require addition of filler metal. This is to counteract segregation of important alloying elements and to give sufficient austenite formation to prevent precipitation of chromium nitrides that could have a negative effect on impact toughness and pitting resistance. The corrosion performance of the recently developed lean duplex stainless steel LDX 2101 is higher than that of 304 and can reach the level of 316. This thesis summarises pitting resistance tests performed on laser and gas tungsten arc (GTA) welded LDX 2101. It is shown here that this material can be autogenously welded, but additions of filler metal, nitrogen in the shielding gas and the use of hybrid methods increase the austenite formation and the pitting resistance by further suppressing formation of chromium nitride precipitates in the weld metal. If the weld metal austenite formation is sufficient, the chromium nitride precipitates in the heat-affected zone (HAZ) could cause local pitting; however, this was not seen in this work. Instead, pitting occurred 1–3 mm from the fusion line, in the parent metal rather than in the high temperature HAZ (HTHAZ). This is suggested here to be controlled by the heat tint, and the effect of residual weld oxides on the pitting resistance is studied. The composition and thickness of the weld oxide formed on LDX 2101 and 2304 were determined using X-ray photoelectron spectroscopy (XPS). The heat tint on these lean duplex grades proved to contain significantly more manganese than what has been reported for standard austenitic stainless steels in the 300 series. A new approach to heat tint formation is consequently presented.
Evaporation of material from the weld metal and subsequent deposition on the weld oxide are suggested to contribute to weld oxide formation. This is supported by element loss in LDX 2101 weld metal, and nitrogen additions to the GTA shielding gas further increase the evaporation.
Vecchi, Federica. "Analisi automatica della corporate reputation attraverso il topic modeling." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8384/.
Rossi, Espagnet Alberto. "Techno-Economic Assessment of Thermal Energy Storage integration into Low Temperature District Heating Networks." Thesis, KTH, Energiteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191485.
Thermal energy storage (TES) is a technology with the potential to increase efficiency and flexibility in the coming fourth generation of district heating, low temperature district heating (LTDH). The study aims to provide a comparative assessment of TES systems based on both latent and sensible heat. The results show that sensible heat storage is preferable to latent heat storage when coupled with LTDH: in systems below 5 MWh of storage capacity, the price per stored kWh remains 15% higher than for latent heat storage, although the latent systems require only half the volume. From a system perspective, introducing TES into the network increases flexibility, which reduces heat production costs at lower loads. In the case study, savings of five hundred euros per year are reached with this operating strategy, against an investment of 2000 euros for the purchase of a water tank. The results can also be extended to a situation where heat production is replaced by renewable, intermittent energy sources; this brings higher profits and lower fuel use, which would mean lower emissions. The study can be seen as a step towards a more efficient DH system through the integration of TES, which will play a significant role in future smart energy systems.
Di, Fiore Silvia. "La dimensione discorsiva della Politica di Coesione. Confronto fra Content Analysis e Topic Modeling." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17284/.
Raposo, Carlos Olympio Lima. "Estudo experimental de compactação e expansão de uma escória de aciaria LD para uso em pavimentação." Universidade Federal do Espírito Santo, 2005. http://repositorio.ufes.br/handle/10/6184.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The steel slag or BOF (Basic Oxygen Furnace) slag is a by-product generated in integrated steel plants. The use of this material in bases and sub-bases of pavements may present technical, economical and environmental advantages compared to natural aggregates. However, problems such as expansibility and the lack of technical criteria for its acceptance limit the use of steel slags in pavements. The expansibility of steel slags is mainly generated by the hydration of free calcium and magnesium oxides (CaO and MgO). The purpose of this study is to evaluate the compaction and expansibility of a steel slag using laboratory tests, thereby contributing to the definition of technical criteria for the expansibility evaluation of this material for use in bases and sub-bases of pavements. The steel slag of this study originates from an integrated steel plant located in Vitória, Espírito Santo, Brazil. The laboratory compaction tests were conducted using standard and modified effort, and three methods were used for the expansibility tests: PTM-130/78, JIS 5015/92 and ASTM D 4792/00. The compaction tests of the steel slag did not yield an optimum water content, showing the same characteristic as granular materials. Statistical analysis of the compaction tests did not show significant differences between the two procedures (with and without reuse of material), between the two compaction efforts (standard and modified) or between the two samples (with and without treatment for expansion reduction). The statistical analysis of the expansibility tests using method PTM-130/78 showed that the compaction water content was not statistically significant in the expansion results, while the influences of temperature and of compaction effort on the expansion results were statistically significant. A technical criterion for the acceptance of lots of steel slag is also proposed here, using the PTM-130/78 test method.
The criterion includes a sampling procedure, a statistically significant methodology to calculate the minimum number of specimens, and a maximum limit of 3% expansion using the PTM-130/78 test method.
Wei, Zhihua. "The research on chinese text multi-label classification." Thesis, Lyon 2, 2010. http://www.theses.fr/2010LYO20025/document.
The thesis focuses on text classification, a rapidly expanding field with many current and potential applications. Its main contributions concern two points. First, the specificities of encoding and automatically processing the Chinese language: words may consist of one, two or three characters; there is no typographic separation between words; and a large number of word orders are possible within a sentence, all of which leads to difficult ambiguity problems. Encoding by "n-grams" (sequences of n = 1, 2 or 3 characters) is particularly well suited to Chinese, because it is fast and requires neither the prior step of recognising words with a dictionary nor their segmentation. Second, multi-label classification, i.e. where each individual can be assigned to one or several classes. In the case of texts, one looks for classes corresponding to topics, and a given text may be attached to one or several topics. This multi-label setting is more general: a patient may suffer from several pathologies; a company may be active in several industrial or service sectors. The thesis analyses these problems and proposes solutions, first for single-label classifiers and then for multi-label ones. Among the difficulties are the definition of the variables characterising the texts, their large number, the handling of sparse matrices (many zeros in the matrix crossing texts and descriptors), and the relatively poor performance of the usual multi-class classifiers.
Text classification is an important research area in information science with real practical value. As the content handled by text classification becomes more complex and diverse and the classification targets multiply, developing effective text classification techniques that meet practical needs has become a challenging task, out of which research on multi-label classification has emerged. Building on an analysis of a large number of single-label and multi-label text classification algorithms, this thesis addresses the high dimensionality of text features, data sparseness, and the high complexity but low accuracy of multi-label classification, attempting to solve these problems from different angles using rough set theory. The main contributions are as follows. For the curse of dimensionality that arises when n-grams are used as features of Chinese text, a two-step feature selection method is proposed, combining the removal of rare within-class features with between-class feature selection; extensive experiments on a large-scale Chinese corpus examine the choice of n, the feature weighting scheme and feature correlation, yielding several useful conclusions. To address the low efficiency and high cost of classification with high-dimensional text representations, a multi-label classification algorithm based on the LDA model is proposed, which uses the topics extracted by LDA as text features to build an efficient classifier; under the PT3 multi-label transformation, this algorithm performs well on both Chinese and English datasets, comparable to the best-known multi-label classification methods. To remedy the arbitrariness of the existing smoothing strategies for the LDA model, a smoothing strategy based on tolerance rough sets is proposed: tolerance classes of words are first constructed over the global vocabulary, and smoothing values are then assigned to the unseen words of each document class according to the word frequencies in the tolerance classes. Extensive experiments on Chinese and English, balanced and unbalanced corpora show that this smoothing method significantly improves the classification performance of the LDA model, especially on unbalanced corpora. For the high complexity and low accuracy of multi-label classification, a composite multi-label text classification framework based on variable-precision rough sets is proposed: the framework partitions the text feature space so that the multi-label problem decomposes into several binary single-label problems and several multi-label problems with fewer labels. When an unknown text falls into the lower approximation region of a class, a simple single-label classifier decides its class directly; when it falls into a boundary region, the multi-label classifier of that region is used. Experiments show that this framework considerably improves both classification accuracy and algorithmic efficiency. The thesis also designs and implements a web search result visualisation system based on multi-label classification (MLWC), which directly takes the results returned by a search engine and classifies them in real time with an improved Naïve Bayes multi-label algorithm, allowing users to quickly locate the texts of interest among the search results.
Nymark, Marianne Kristine. "Taxonomy of the Rufous-naped lark (Mirafra africana) complex based on song analysis." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-435322.
Uys, J. W. "A framework for exploiting electronic documentation in support of innovation processes." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/1449.
ENGLISH ABSTRACT: The crucial role of innovation in creating sustainable competitive advantage is widely recognised in industry today. Likewise, the importance of having the required information accessible to the right employees at the right time is well-appreciated. More specifically, the dependency of effective, efficient innovation processes on the availability of information has been pointed out in literature. A great challenge is countering the effects of the information overload phenomenon in organisations in order for employees to find the information appropriate to their needs without having to wade through excessively large quantities of information to do so. The initial stages of the innovation process, which are characterised by free association, semi-formal activities, conceptualisation, and experimentation, have already been identified as a key focus area for improving the effectiveness of the entire innovation process. The dependency on information during these early stages of the innovation process is especially high. Any organisation requires a strategy for innovation, a number of well-defined, implemented processes and measures to be able to innovate in an effective and efficient manner and to drive its innovation endeavours. In addition, the organisation requires certain enablers to support its innovation efforts which include certain core competencies, technologies and knowledge. Most importantly for this research, enablers are required to more effectively manage and utilise innovation-related information. Information residing inside and outside the boundaries of the organisation is required to feed the innovation process. The specific sources of such information are numerous. Such information may further be structured or unstructured in nature. However, an ever-increasing ratio of available innovation-related information is of the unstructured type. Examples include the textual content of reports, books, e-mail messages and web pages. 
This research explores the innovation landscape and typical sources of innovation-related information. In addition, it explores the landscape of text analytical approaches and techniques in search of ways to more effectively and efficiently deal with unstructured, textual information. A framework that can be used to provide a unified, dynamic view of an organisation's innovation-related information, both structured and unstructured, is presented. Once implemented, this framework will constitute an innovation-focused knowledge base that will organise and make accessible such innovation-related information to the stakeholders of the innovation process. Two novel, complementary text analytical techniques, Latent Dirichlet Allocation and the Concept-Topic Model, were identified for application with the framework. The potential value of these techniques as part of the information systems that would embody the framework is illustrated. The resulting knowledge base would cause a quantum leap in the accessibility of information and may significantly improve the way innovation is done and managed in the target organisation.
AFRIKAANSE OPSOMMING: The importance of innovation for establishing a sustainable competitive advantage is now widely recognised in many sectors of industry. Likewise, the importance of making relevant information accessible to employees at the appropriate time is well understood today. The dependency of effective, efficient innovation processes on the availability of information is consistently emphasised in the research literature. A great current challenge is to counter the causes and impact of the information-overload phenomenon in organisations, so that employees can find information that meets their needs without having to sift through excessively large quantities of information in the process. The initial stages of the innovation process, characterised by free association, semi-formal activities, conceptualisation and experimentation, have already been identified as key areas for improving the effectiveness of the innovation process as a whole. The dependency of this part of the innovation process on information is particularly high. To innovate in an efficient and optimal way, every organisation requires a strategy for innovation as well as a number of well-defined, deployed processes and measurement criteria to drive its innovation activities. In addition, organisations require certain innovation-support mechanisms, which include particular key competencies, technologies and knowledge. Central to this research, organisations also require support mechanisms that enable them to manage and use innovation-related information more effectively. Information, housed both inside and outside the boundaries of the organisation, is needed to feed the innovation process. The sources of such information are numerous, and this information may be structured or unstructured in nature.
An increasing proportion of innovation-related information is, however, of the unstructured type, for example the information contained in the textual content of reports, books, e-mail messages and web pages. In this research, the innovation landscape and typical sources of innovation-related information are explored. Furthermore, the landscape of text-analytical approaches and techniques is examined in order to find ways of dealing more effectively and optimally with unstructured, textual information. A framework that can be applied to create a unified, dynamic representation of an organisation's innovation-related information, both structured and unstructured, is proposed. Once implemented, this framework will organise the organisation's innovation-related information and make it more accessible to the participants in the innovation process. The application of two novel, complementary text-analytical techniques to complement the framework is reported. Furthermore, the potential value of these techniques as part of the information systems that realise the framework is pointed out and illustrated.
Le, Thi Khuyen. "Sparse precision matrix estimation in high dimension and application to medical imaging : hypothesis testing on some particular graphical models of GLASSO." Thesis, Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0129.
Our research exploits two main characteristics of the GLASSO model: sparsity and monotonicity. Based on the sparsity of this model, we propose to adapt linear discriminant analysis (LDA) to high dimension by using a sparse precision matrix estimated on the whole population. To improve the performance of this method, we propose to reduce the dimension of the data by selecting the most discriminant connected components. This method relies on a block-diagonal form of the precision matrix; each block corresponds to a connected component of the GLASSO model. Inspired by the factorial discriminant analysis method, we define a discriminant capacity for each component, and the selection is then restricted to the variables within the components whose discriminant capacities are largest. Our adapted LDA and connected-component selection methods are applied to real data from PET brain imaging for classifying certain patient groups, such as those with fibromyalgia, depression, or Alzheimer's disease. In addition, based on the monotonicity of the GLASSO model, we propose a significance test for the connected components on the intersection as well as the union of GLASSO models. These tests allow us to determine the sparsest estimated model that contains all the components of the true model; the test statistics converge to an exponential distribution for reasonable numbers of observations and variables.
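The block-diagonal selection step described in this abstract can be sketched on toy data. This is a hedged illustration, not the thesis code: it uses scikit-learn's GraphicalLasso, reads the connected components of the estimated precision matrix, and, for simplicity, keeps the largest component instead of ranking components by discriminant capacity.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.covariance import GraphicalLasso
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))    # toy data: 200 observations, 10 variables
y = rng.integers(0, 2, size=200)  # two toy classes

# Sparse precision matrix estimated on the pooled sample (GLASSO).
precision = GraphicalLasso(alpha=0.5).fit(X).precision_

# Connected components of the graph whose edges are the nonzero
# off-diagonal entries of the precision matrix.
adjacency = csr_matrix(np.abs(precision) > 1e-8)
n_comp, labels = connected_components(adjacency, directed=False)

# Keep the variables of one component (here simply the largest) and
# fit LDA on that reduced set of variables.
largest = np.argmax(np.bincount(labels))
keep = np.flatnonzero(labels == largest)
lda = LinearDiscriminantAnalysis().fit(X[:, keep], y)
```

On real data the component to keep would be chosen by the discriminant capacity defined in the thesis, not by size.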
Eriksson, Olle. "Studies on Premenstrual Dysphoria." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5812.
Scheu, Julia. "Ut pictura philosophia." Doctoral thesis, Humboldt-Universität zu Berlin, Kultur-, Sozial- und Bildungswissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17801.
The study deals with the pictorial examination, in Italian printmaking of the late 16th and 17th centuries, of self-referential topics relating to the genesis, the fundamentals, and the aims of painting. For the first time, research focuses on the pictorial treatment of abstract art-theoretical content as shown in the selected and compared examples, which are extraordinary in their iconographical density: the Lamento della pittura by Federico Zuccari, the Liceo della pittura by Pietro Testa, the Genio di Salvator Rosa by Salvator Rosa, and the Scuola del Disegno by Carlo Maratta. Besides reconstructing the history of their origins, the research deals with the relationship of image and text, problems of iconography, the contemporary publishing situation, the target audience of these prints, and finally the motivation for such complex visual reflections on painting. The essential similarity of these art-theoretical prints, all of which arose within the context of the Roman art academy, is the ambition to present painting as a kind of meta-science that is in some sense superior to all other early modern sciences. By means of an extensive re-evaluation of the unique iconography of every single sheet, it became possible to show that the comparison between painting and philosophy, as the origin of the entire spectrum of the sciences, attained a completely new dimension within the pictorial art theory of the 17th century. This novel comparison opened a wider range and diversity for the visual definition of the artists' self-conception than the traditional comparison between painting and poetry, as it emerged from Horace's dictum "Ut pictura poesis". Accordingly, the study addresses the question of the particular reflexive capability of images, their medial autonomy, and their potential primacy over language.
"Monitoring for Reliable and Secure Power Management Integrated Circuits via Built-In Self-Test." Master's thesis, 2019. http://hdl.handle.net/2286/R.I.54959.
Master's thesis, Electrical Engineering, 2019.
Wu, Jian-Sheng, and 吳建生. "Prediction of stock price trend from news articles: using text mining and LDA algorithm." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/8a8agg.
National Kaohsiung University of Applied Sciences
Master's Program, Graduate Institute of Information Management
Academic year 103 (2014)
Most past predictions of the rise and fall of stocks either focus on keywords or rely on technical and fundamental analysis, rather than studying how a stock is affected by specific topics appearing in relevant news reports. This paper investigates weekly stock prices in the foods, semiconductor, and computer-peripherals categories on cnYES, and extracts the topics of news reports by Latent Dirichlet Allocation (LDA) and text mining. By forming new keywords from the news topics, we obtain a basis for analysing and reasoning about those topics. We use articles about foods, semiconductors, and computer peripherals dated from September 2014 to February 2015 as training data and establish a topic-model set covering the various subjects. We then compute the expected probability of each topic appearing in the news reports, so as to obtain a predictive value every other day. We use the Receiver Operating Characteristic curve to evaluate the predicted results and take the better-performing results as the lookup table and threshold for prediction. When a new article enters the topic-model set, we calculate the probabilities of the relevant topics and the expected value, use them as a predictive value for a table lookup, and take the closest entry in the table as the prediction. In the end, the results show that the stocks of the food category perform best. Owing to the flood of information, the other categories may contain wrong or outdated information that cannot serve as a new topic for determining the chance of a rise or fall in the stock; therefore, the semiconductor and computer-peripherals categories do not perform well.
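As a hedged sketch of the pipeline this abstract describes, the following fits an LDA topic model on a few stand-in news snippets (invented here, not the cnYES data) and reads off each document's topic distribution, the kind of quantity that could feed the lookup-table prediction step.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for the three news categories in the abstract.
news = [
    "food company raises prices on dairy products",
    "semiconductor fab expands wafer capacity",
    "computer peripherals maker ships new keyboards",
    "chip demand lifts semiconductor revenue",
    "snack food sales grow in convenience stores",
    "printer and peripherals orders decline this quarter",
]
counts = CountVectorizer(stop_words="english").fit_transform(news)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

# Each row is one document's probability mass over the 3 learned topics;
# these per-document topic weights would serve as the predictive values.
doc_topics = lda.transform(counts)
```

The topic count (3) and corpus are illustrative assumptions; the thesis works with far larger news collections.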
Rosiello, Fernandina. "Relatório de estágio nas Edições Piaget Lda." Master's thesis, 2015. http://hdl.handle.net/10362/16153.
Yang, Ting-Hsuan, and 楊庭瑄. "Applying Techniques of Text Mining on Trading Investment Strategy:an LDA Approach to Distinguish the Topics." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/bmm33k.
National Tsing Hua University
Department of Quantitative Finance
Academic year 105 (2016)
Sentiment analysis has attracted heated discussion in recent years and can be applied in a wide variety of fields, for example the detection of network-security threats, the prediction of presidential elections, and recommendation systems on shopping websites. This thesis applies sentiment analysis to a trading investment strategy, using the articles of the Federal Reserve to predict stock returns. Moreover, the thesis uses the Latent Dirichlet Allocation topic model to investigate the latent topics in the Federal Reserve's articles, with the goal of distinguishing the topics that influence stock returns the most. Finally, the research aims to frame a profitable trading strategy based on these results. The thesis is inspired by Tetlock (2007) and Tetlock, Saar-Tsechansky, and MacSkassy (2008). First, I use the Latent Dirichlet Allocation topic model to classify words according to different topics. Second, I eliminate the paragraphs that are irrelevant to finance in order to assess the financial sentiment precisely and apply it to the trading strategy. Last but not least, I add derivatives to the trading strategy to hedge losses from wrong sentiment predictions, and then examine the performance of the strategy after this modification.
JHENG, YU-JIE, and 鄭宇傑. "A Comparative Study of Automatic Text Labeling Using Von Neumann Kernel and LDA Topic Model." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/74450264834068611898.
National Taipei University
Graduate Institute of Information Management
Academic year 103 (2014)
There are tools and techniques capable of grouping vast numbers of documents into cohesive clusters based on relatedness or similarity metrics between them. The resulting clusters of documents need to be properly labeled to facilitate a fast and holistic comprehension of the main themes or topics they carry. Previous systems have employed various theoretical or empirical approaches to label clusters of documents automatically. Our study applies Latent Dirichlet Allocation (LDA) to obtain the most likely keywords for the topics in the document clusters. The obtained keywords are then composed into key phrases that serve as the representative labels of the clusters. The appropriateness of the labels is evaluated using the evaluative framework proposed by Treeratpituk. We find that the LDA-based automatic labeling system generates proper cluster labels. We also compare the effectiveness of the LDA-based labeling system with our home-grown kernel-based system; in most cases in the experiment, the LDA-based system generated better cluster labels than the kernel-based one.
Zaplatílková, Lucie. "Vztah fyzické zdatnosti a studijního prospěchu žáků ZŠ." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-415576.
Koštial, Martin. "Získavanie a analýza dát pre oblasť crowdfundingu." Master's thesis, 2019. http://www.nusl.cz/ntk/nusl-428891.
HSIANG, CHUANG KAI, and 莊凱翔. "The prediction of trend toward stock price by text mining and sentiment analysis on social media: Using SVM and LDA Algorithm." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/mqp258.
National Kaohsiung University of Applied Sciences
Master's Program, Graduate Institute of Information Management
Academic year 106 (2017)
In recent years, text mining has been widely applied. This paper uses text mining to explore social-media content and a classification algorithm to predict future stock trends. Latent Dirichlet Allocation, sentiment analysis, and other text-mining methods are used to analyse social-media posts collected from the Internet. In addition, this study uses technical indicators, including Williams %R, the Psychological Line, and On-Balance Volume, among others, to predict stock prices on the Taiwan stock market. The topics of the social-media posts are established through LDA, and sentiment analysis of the posts yields further features. A support vector machine is then trained to measure the accuracy of prediction from social-media content, and the influence of the sentiment features on the accuracy is compared with that of the topic vectors.
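The three technical indicators named in the abstract can be computed as follows on toy daily data with pandas. The window length and the exact indicator variants are illustrative assumptions, not taken from the thesis.

```python
import pandas as pd

df = pd.DataFrame({
    "high":   [10, 11, 12, 11, 13],
    "low":    [9, 9, 10, 10, 11],
    "close":  [9.5, 10.5, 11.5, 10.5, 12.5],
    "volume": [100, 120, 90, 110, 130],
})
n = 3  # lookback window (illustrative)

# Williams %R: position of the close within the n-day high/low range.
hh = df["high"].rolling(n).max()
ll = df["low"].rolling(n).min()
willr = (hh - df["close"]) / (hh - ll) * -100

# Psychological Line: share of up days within the window, in percent.
up = (df["close"].diff() > 0).astype(int)
psy = up.rolling(n).sum() / n * 100

# On-Balance Volume: cumulative volume signed by the price change.
sign = df["close"].diff().apply(lambda d: 1 if d > 0 else (-1 if d < 0 else 0))
obv = (sign * df["volume"]).cumsum()
print(obv.iloc[-1])  # prints 230
```

Such indicator series, alongside the topic and sentiment features, would form the input vectors for the support vector machine.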
Correia, Acácio Filipe Pereira Pinto. "Towards Preemptive Text Edition using Topic Matching on Corpora." Master's thesis, 2016. http://hdl.handle.net/10400.6/6368.
Nowadays, scientific research is only fully valued when it results in the publication of articles in renowned international journals in the respective field of knowledge. This perspective reflects the importance of having studies validated by peers. Validation implies a detailed analysis of the study, including the quality of the writing and the presence of novel contributions, among other aspects. For these reasons, once a document is published, other researchers have a guarantee of the quality of the study and can therefore use the knowledge it generates in their own work. The publication of these documents creates a cycle of information exchange that accelerates the development of new techniques, theories and technologies, producing added value for society in general. Despite all these advantages, the detailed verification of the content of a document submitted for publication demands additional effort and work from the authors. They must ensure the quality of the manuscript, since submitting a flawed document conveys an unprofessional image of the authors and may even result in the rejection of its publication in that journal or conference proceedings. The aim of this work is to develop an algorithm to help authors write this type of document, proposing suggestions for improvements that take their specific context into account. The general idea for solving the problem is to extract the topic of the document being written and to create suggestions by comparing its content with that of scientific documents previously published in the same field. With this idea and the context presented above in mind, a study of techniques from the field of Natural Language Processing (NLP) was carried out.
NLP provides tools for creating models capable of representing a document and the topics associated with it. The main concepts include n-grams and topic modeling. To conclude the study, previous work on scientific articles was analysed, examining their structure and main contents, and also covering some characteristics common to high-quality articles and tools developed to assist in writing them. The developed algorithm combines a set of tools and a collection of documents, together with the logic that connects all the components, implemented during this master's project. The document collection consists of full articles from several fields, including Computer Science, Physics and Mathematics, among others. Before any document analysis, topics were extracted from the collection. In this way, by extracting the topics of the document under analysis, it is possible to select the most similar documents in the collection, which are then used to create suggestions. Through a set of tools for syntactic analysis, synonym lookup and morphological realisation, the algorithm is able to suggest substituting words with those more commonly used in the field. The tests carried out showed that, in some cases, the algorithm can provide useful suggestions that bring the terms used in the document closer to the terms most used in the state of the art of a given scientific area. This is evidence that using the developed algorithm can improve the quality of scientific writing, since the documents tend to approximate those already published. Although the results presented do not reflect a large improvement in the document, they should be considered a low estimate of the real value of the algorithm.
This is explained by the presence of numerous errors resulting from the conversion of the PDF documents to text, present both in the document collection and in the tests. The main contributions of this work include the study carried out, the design and implementation of the algorithm, and the text editor developed as a proof of concept. The analysis of context specificity, derived from the tests performed across several fields of knowledge, and the extensive collection of documents, compiled entirely during this master's project, are also contributions of this work.
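A toy sketch of the word-substitution idea in the abstract above: given a corpus of already-published text, a word is replaced by whichever synonym occurs most often in that corpus. The corpus and synonym set are invented for illustration; the thesis's actual tools for syntactic analysis, synonym lookup and morphological realisation are not reproduced here.

```python
from collections import Counter

# Stand-in corpus of published abstracts, tokenized naively.
corpus = (
    "we evaluate the classifier on the test set "
    "we evaluate the model and report accuracy "
    "the model is evaluated on held out data"
).split()
freq = Counter(corpus)

def suggest(word, synonyms):
    """Return the most corpus-frequent alternative, or the word itself."""
    candidates = [word] + list(synonyms)
    return max(candidates, key=lambda w: freq[w])

print(suggest("assess", {"judge"} | {"evaluate"}))  # prints evaluate
```

In the real system the candidate set would come from a synonym dictionary and the chosen form would still need morphological adjustment to fit the sentence.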
BOUDOVÁ, Adéla. "Barevná modifikace Warteggova kresebného testu - typický způsob zpracování u dětí se SPU." Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-136129.
Zondo, Raymond Mnyamezeli Mlungisi. "The replacement of the doctrine of pith and marrow by the catnic test in English Patent Law : a historical evaluation." Diss., 2012. http://hdl.handle.net/10500/5697.
Mercantile Law
LL.M.