To see the other types of publications on this topic, follow the link: Extraction efficiency.

Dissertations / Theses on the topic 'Extraction efficiency'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Extraction efficiency.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Wackersreuther, Bianca. "Efficient Knowledge Extraction from Structured Data." Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-138079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

García-Martín, Eva. "Extraction and Energy Efficient Processing of Streaming Data." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15532.

Full text
Abstract:
The interest in machine learning algorithms is increasing, in parallel with the advancements in hardware and software required to mine large-scale datasets. Machine learning algorithms account for a significant amount of the energy consumed in data centers, which impacts global energy consumption. However, machine learning algorithms are optimized towards predictive performance and scalability. Algorithms with low energy consumption are necessary for embedded systems and other resource-constrained devices, and desirable for platforms that require many computations, such as data centers. Data stream mining investigates how to process potentially infinite streams of data without the need to store all the data. This ability is particularly useful for companies that are generating data at a high rate, such as social networks. This thesis investigates algorithms in the data stream mining domain from an energy efficiency perspective. The thesis comprises two parts. The first part explores how to extract and analyze data from Twitter, with a pilot study that investigates a correlation between hashtags and followers. The second and main part investigates how energy is consumed and optimized in an online learning algorithm, suitable for data stream mining tasks. The second part of the thesis focuses on analyzing, understanding, and reformulating the Very Fast Decision Tree (VFDT) algorithm, the original Hoeffding tree algorithm, into an energy efficient version. It presents three key contributions. First, it shows how energy varies in the VFDT from a high-level view by tuning different parameters. Second, it presents a methodology to identify energy bottlenecks in machine learning algorithms, by portraying the functions of the VFDT that consume the largest amount of energy. Third, it introduces dynamic parameter adaptation for Hoeffding trees, a method to dynamically adapt the parameters of Hoeffding trees to reduce their energy consumption. The results show an average energy reduction of 23% on the VFDT algorithm.
Scalable resource-efficient systems for big data analytics
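The split decisions at the heart of the VFDT rest on the Hoeffding bound. For orientation, here is a minimal sketch of that decision rule in Python; it is not the thesis's implementation, and the parameter values and example numbers are purely illustrative:

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    # With probability 1 - delta, the observed mean of n i.i.d. samples of a
    # variable with the given range lies within epsilon of the true mean.
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(g_best: float, g_second: float, n: int,
                 value_range: float = 1.0, delta: float = 1e-7,
                 tie_threshold: float = 0.05) -> bool:
    # Split when the best attribute's heuristic score (e.g. information gain)
    # beats the runner-up by more than epsilon, or when the race is a tie.
    eps = hoeffding_bound(value_range, delta, n)
    return (g_best - g_second) > eps or eps < tie_threshold

# After 1000 examples, a 0.12 gain advantage is enough to trigger a split here.
print(should_split(g_best=0.42, g_second=0.30, n=1000))
```

Tuning delta, the tie threshold, and how often this check runs are exactly the kinds of parameters whose energy impact the thesis studies.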
APA, Harvard, Vancouver, ISO, and other styles
3

Morsey, Mohamed. "Efficient Extraction and Query Benchmarking of Wikipedia Data." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-130593.

Full text
Abstract:
Knowledge bases are playing an increasingly important role in integrating information between systems and over the Web. Today, most knowledge bases cover only specific domains, they are created by relatively small groups of knowledge engineers, and it is very cost-intensive to keep them up-to-date as domains change. In parallel, Wikipedia has grown into one of the central knowledge sources of mankind and is maintained by thousands of contributors. The DBpedia (http://dbpedia.org) project makes use of this large collaboratively edited knowledge source by extracting structured content from it, interlinking it with other knowledge bases, and making the result publicly available. DBpedia had and has a great effect on the Web of Data and became a crystallization point for it. Furthermore, many companies and researchers use DBpedia and its public services to improve their applications and research approaches. However, the DBpedia release process is heavy-weight and the releases are sometimes based on several months old data. Hence, a strategy to keep DBpedia in constant synchronization with Wikipedia is needed. In this thesis we propose the DBpedia Live framework, which reads a continuous stream of updated Wikipedia articles and processes it. DBpedia Live processes that stream on-the-fly to obtain RDF data and updates the DBpedia knowledge base with the newly extracted data. DBpedia Live also publishes the newly added/deleted facts in files, in order to enable synchronization between our DBpedia endpoint and other DBpedia mirrors. Moreover, the new DBpedia Live framework incorporates several significant features, e.g. abstract extraction, ontology changes, and changeset publication. Basically, knowledge bases, including DBpedia, are stored in triplestores in order to facilitate accessing and querying their respective data. Furthermore, the triplestores constitute the backbone of increasingly many Data Web applications. It is thus evident that the performance of those stores is mission critical for individual projects as well as for data integration on the Data Web in general. Consequently, it is of central importance during the implementation of any of these applications to have a clear picture of the weaknesses and strengths of current triplestore implementations. We introduce a generic SPARQL benchmark creation procedure, which we apply to the DBpedia knowledge base. Previous approaches often compared relational databases and triplestores and thus settled on measuring performance against a relational database which had been converted to RDF by using SQL-like queries. In contrast to those approaches, our benchmark is based on queries that were actually issued by humans and applications against existing RDF data not resembling a relational schema. Our generic procedure for benchmark creation is based on query-log mining, clustering and SPARQL feature analysis. We argue that a pure SPARQL benchmark is more useful to compare existing triplestores and provide results for the popular triplestore implementations Virtuoso, Sesame, Apache Jena-TDB, and BigOWLIM. The subsequent comparison of our results with other benchmark results indicates that the performance of triplestores is far less homogeneous than suggested by previous benchmarks. Further, one of the crucial tasks when creating and maintaining knowledge bases is validating their facts and maintaining the quality of their inherent data.
This task includes several subtasks, and in this thesis we address two of those major subtasks, namely fact validation and provenance, and data quality. The subtask of fact validation and provenance aims at providing sources for facts in order to ensure the correctness and traceability of the provided knowledge. This subtask is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents, and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming as the experts have to carry out several search processes and must often read several documents. We present DeFacto (Deep Fact Validation), an algorithm for validating facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of webpages as well as useful additional information, including a score for the confidence DeFacto has in the correctness of the input fact. The subtask of data quality maintenance, on the other hand, aims at evaluating and continuously improving the quality of the data in knowledge bases. We present a methodology for assessing the quality of knowledge bases' data, which comprises a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises the evaluation of a large number of individual resources, according to the quality problem taxonomy, via crowdsourcing. This process is accompanied by a tool in which a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia.
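Since DBpedia's public SPARQL endpoint is central to this work, here is a small illustrative query sketch using the third-party SPARQLWrapper package. The query itself is a hypothetical example for demonstration (it assumes the endpoint's predefined dbo: prefix) and is not drawn from the thesis's benchmark:

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# Query DBpedia's public endpoint for a handful of scientists and birth dates.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?person ?birthDate WHERE {
        ?person a dbo:Scientist ;
                dbo:birthDate ?birthDate .
    } LIMIT 5
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["person"]["value"], row["birthDate"]["value"])
```

Query logs of exactly this kind of human- and application-issued SPARQL are what the thesis mines and clusters to build its benchmark.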
APA, Harvard, Vancouver, ISO, and other styles
4

Gordon, Ross John. "Improved mass transport efficiency in copper solvent extraction." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/5673.

Full text
Abstract:
This thesis considers methods which can be employed to increase the mass of copper transferred into and out of the organic phase during the load and strip stages of commercial solvent extraction processes. Conventional 5-alkylsalicylaldoxime reagents transfer 1 mol of divalent copper per 2 mol of ligand in a neutral complex of the type [Cu(L-H)2] via a pH-swing process. New triacidic ligands have been designed which triple the molar transport of copper to form [Cu3(L-3H)2]. Until recently copper recovery by solvent extraction has been confined to oxidic ores which are leached with sulfuric acid. New leaching technologies generate high-tenor copper sulfate feed streams from sulfidic ores. The conventional 5-alkylsalicylaldoxime reagents do not work effectively in conjunction with these leach processes as they do not consume the acid which is generated on loading the oxime. To address this problem, ditopic zwitterionic ligands have been designed which can transfer both the metal cation and its attendant anion. These new metal salt reagents are diacidic and therefore not only transfer metal salts but also increase the molar transport relative to the conventional reagents. Equilibrium modifiers are often added to improve the mass transport efficiency of conventional solvent extraction processes. The nature of their interaction with the species in solution is poorly understood. This thesis investigates their interaction with the free ligands and copper complexes to gain an understanding of their mode of action, in order to rationalise the design of future modifiers and optimise recovery efficiencies. Increased molar transport is addressed in Chapter 2. The diacidic ligand 5-methylsalicylaldehyde-pivaloylhydrazide (L2) and its dinuclear copper complex [Cu2(L2-2H)2] were synthesised and characterised to gain an understanding of their speciation in solution. X-ray structural analysis of [Cu2(L2-2H)2] confirmed that the phenolate oxygen atoms bridge the copper centres rather than the amidato oxygen atoms of the hydrazone. Variable-temperature magnetic susceptibility data confirm that the copper centres are antiferromagnetically coupled, as expected for the Cu-O-Cu angle (99.6(2)°). An understanding of the coordination geometry of the dinuclear systems led to the design of triacidic ligands. A series of 3-hydrazono- and 3-hydroxyanil-5-alkylsalicylic acids were synthesised. The prototype ligand 5-methyl-3-octanoylhydrazonosalicylic acid (L6) was demonstrated to triple molar transport and increase mass transport 2.5-fold. Solvent extraction results indicate that copper is sequentially loaded as pH is increased. The plateaux observed in loading curves suggest the formation of stable mono-, di-, and tri-nuclear copper complexes within the pH ranges 1.75 - 2.75, 3.25 - 4.0 and > 4.25 respectively. The triacidic ligands were also demonstrated to double the molar transport of the conventional salicylaldoximes when used in 1:1 blends, by formation of a ternary complex. Chapter 3 describes the incorporation of two tertiary amine groups into diacidic salicylaldehydehydrazone ligands to form dinucleating metal salt extractants. Piperidinomethyl, piperazinomethyl and dihexylamino groups were incorporated into various positions of the ligand, including the 3- and/or 5- positions of the salicylaldehyde, or incorporated into the hydrazone.
Solvent extraction results obtained for 3,5-bis((dihexylamino)methyl)salicylaldehyde-octanoic hydrazone (L20) are consistent with transfer of 1 mol of copper sulfate per mol of ligand in the organic phase between pH 2.0 and 2.5. This result is indicative of the formation of [Cu2(L20)2(SO4)2]. Conventional salicylaldoximes are "strong" copper extractants which require concentrated acid electrolyte to efficiently strip the copper from the organic phase. However, as the use of concentrated acid affects the quality of the copper cathodes, oxygen-containing equilibrium modifiers are often added. These facilitate copper stripping without adversely affecting the loading. The effect of 2-ethylhexanol (2-EH) and trioctylphosphine oxide (TOPO) on the extractive ability of 5-t-octylsalicylaldoxime (19) in n-heptane is reported. Both are found to decrease copper extraction more under stripping conditions than under loading conditions. 2-EH shows little effect at pH greater than 2.5. TOPO does not significantly affect copper loading at pH greater than 3.0. Evidence for the formation of the adduct [Cu(19-H)2(TOPO)] was obtained from UV/Vis, IR, EPR and sonic spray mass spectrometry.
APA, Harvard, Vancouver, ISO, and other styles
5

Buss, Ian. "Enhancing the Extraction Efficiency of Light-Emitting Diodes." Thesis, University of Bristol, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Alkaabi, Salem Hamdan. "Efficient corner extraction and matching for image registration." Thesis, University of Kent, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

CARDOSO, EDUARDO TEIXEIRA. "EFFICIENT METHODS FOR INFORMATION EXTRACTION IN NEWS WEBPAGES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28984@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO / CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
We tackle the task of news webpage segmentation, specifically identifying the news title, publication date and story body. While there are very good results in the literature, most of them rely on webpage rendering, which is a very time-consuming step. We focus on scenarios with a high volume of documents, where a short execution time is a must. The chosen approach extends our previous work in the area, combining structural properties with hints of visual presentation styles, computed with a faster method than regular rendering, and machine learning algorithms. In our experiments, we paid special attention to some aspects that are often overlooked in the literature, such as processing time and the generalization of the extraction results to unseen domains. Our approach has been shown to be about an order of magnitude faster than an equivalent full-rendering alternative while retaining a good quality of extraction.
APA, Harvard, Vancouver, ISO, and other styles
8

Kihlman, Jonas. "On the resource efficiency of kraft lignin extraction." Licentiate thesis, Karlstads universitet, Institutionen för ingenjörs- och kemivetenskaper (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-81473.

Full text
Abstract:
Lignin is regarded as a promising raw material for the production of biobased products, such as chemicals, materials and fuels, and will most probably be a key component in future lignocellulosic biorefineries. This thesis examines the lignin extraction process in a kraft pulp mill, the technologies that are available for this purpose, and the impact made on the mill. Several different kraft lignin extraction processes and technologies are currently available and are basically linear: chemicals are brought in from outside the mill and introduced into the process and the mill balance. Depending on their origin, the addition of these chemicals will affect the mill to a lesser or greater degree, both economically and environmentally. A conceivable way of reducing the impact made on the mill's balance would be the in-house production of the chemicals used, sulphuric acid and CO2, which takes a more sustainable, circular approach. The results obtained show that utilisation of existing process streams in the mill as a source of chemicals could be a way of not only reducing these impacts but also making lignin extraction more sustainable. Internal production of sulphuric acid is possible and could generate a substantial amount to replace the fresh sulphuric acid needed for the lignin extraction process; CO2 is available in large quantities in the mill and could be captured and used for lignin extraction.
APA, Harvard, Vancouver, ISO, and other styles
9

Yean, Su Jin. "Factors influencing the efficiency of arsenic extraction by phosphate." Texas A&M University, 2004. http://hdl.handle.net/1969.1/2638.

Full text
Abstract:
Extraction with sodium phosphate has been used as a method of accessing arsenic in soils. Arsenic extraction efficiency by phosphate from rice-paddy soils of Bangladesh usually has been low and highly variable between soils. The major objectives of this study were to examine the relationships between phosphate-extractable arsenic and soil iron-oxide composition and to investigate the experimental factors which might influence arsenic-extraction efficiency from rice-paddy soils of Bangladesh by phosphate. Statistical analysis of approximately 500 surface soils from Bangladesh indicated that phosphate-extractable arsenic was well correlated with total soil arsenic (r² = 0.832) and oxalate-extractable arsenic (r² = 0.825), though extraction efficiency varied widely (5 - 54 % of the total soil arsenic). The thanas with the lowest arsenic contents generally also had the soils with the lowest arsenic-extraction efficiencies. Quantity of phosphate-extractable arsenic was weakly correlated with the soil iron-oxide content, but extraction efficiency (i.e., the proportion of phosphate-extractable arsenic to total soil arsenic) was not correlated with any iron-oxide parameter. Arsenic extraction was strongly influenced by reaction variables such as sample grinding, phosphate concentration, principal counterion, reaction pH, and reaction time. The extraction efficiency was impacted by the influence of these individual factors on reaction kinetics and accessibility of arsenic adsorption sites for ligand exchange by phosphate. A portion of the arsenic was readily exchanged during the first few hours of extraction, followed by a much slower subsequent extraction. These results indicate that some of the arsenic is easily exchanged, but for a substantial portion of the arsenic, either the reaction kinetics is very slow or the sites are not accessible for reaction with phosphate. Extraction by phosphate is a useful procedure for the assessment of readily ligand-exchanged arsenic.
APA, Harvard, Vancouver, ISO, and other styles
10

Parthepan, Vijayeandra. "Efficient Schema Extraction from a Collection of XML Documents." TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1061.

Full text
Abstract:
The eXtensible Markup Language (XML) has become the standard format for data exchange on the Internet, providing interoperability between different business applications. Such wide use results in large volumes of heterogeneous XML data, i.e., XML documents conforming to different schemas. Although schemas are important in many business applications, they are often missing in XML documents. In this thesis, we present a suite of algorithms that are effective in extracting schema information from a large collection of XML documents. We propose using the cost of NFA simulation to compute the Minimum Description Length (MDL) in order to rank the inferred schemas. We also study how frequencies of the sample inputs can improve the precision of the schema extraction. Furthermore, we propose an evaluation framework to quantify the quality of the extracted schema. Experimental studies are conducted on various data sets to demonstrate the efficiency and efficacy of our approach.
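To make the task concrete, here is a toy sketch of the first step such schema extraction needs: collecting the content models actually observed in a document collection. The generalization into regular expressions/NFAs and the MDL-based ranking the thesis describes are omitted, and the sample documents are invented:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def collect_content_models(xml_docs):
    """For each element name, record the sequences of child-element names
    observed across a collection of XML documents."""
    models = defaultdict(set)
    for doc in xml_docs:
        root = ET.fromstring(doc)
        for elem in root.iter():
            models[elem.tag].add(tuple(child.tag for child in elem))
    return models

docs = ["<book><title/><author/><author/></book>",
        "<book><title/><author/></book>"]
for tag, sequences in sorted(collect_content_models(docs).items()):
    print(tag, sorted(sequences))
# A generalizer would then infer e.g. book -> (title, author+) from these
# samples, and an MDL-style criterion would rank competing generalizations.
```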
APA, Harvard, Vancouver, ISO, and other styles
11

Li, Tianyu. "Efficient extraction of ontologies from domain specific text corpora." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/39686.

Full text
Abstract:
There is a huge body of domain-specific knowledge embedded in free-text repositories such as engineering documents, instruction manuals, medical references and legal files. Extracting ontological relationships (e.g., ISA and HASA) from this kind of corpus can improve users' queries and navigation through the corpus, as well as benefit applications built for these domains. Current methods to extract ontological relationships from text data usually fail to capture many meaningful relationships because they concentrate on single-word terms or very short phrases. This is particularly problematic in a smaller corpus, where it is harder to find statistically meaningful relationships. We propose a novel pattern-based algorithm that finds ontological relationships between complex concepts by exploiting parsing information to extract concepts consisting of multi-word and nested phrases. Our procedure is iterative: we tailor the constrained sequential pattern mining framework to discover new patterns. We compare our algorithm with previous representative ontology extraction algorithms on four real data sets and achieve consistently and significantly better results.
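For contrast with the multi-word, parse-based patterns the thesis mines, here is a toy single-pattern ISA extractor over raw text, using the classic "X such as Y" lexico-syntactic pattern. The regex and the example sentence are illustrative only; this is the baseline style of approach the thesis improves on:

```python
import re

# Hearst-style pattern "X such as Y1, Y2 and Y3" yields ISA(Yi, X).
PATTERN = re.compile(r"([A-Za-z ]+?)\s+such as\s+([A-Za-z ,]+)")

def extract_isa(sentence):
    pairs = []
    for m in PATTERN.finditer(sentence):
        hypernym = m.group(1).split()[-1]           # head noun of the left phrase
        hyponyms = re.split(r",|\band\b", m.group(2))
        pairs += [(h.strip(), hypernym) for h in hyponyms if h.strip()]
    return pairs

print(extract_isa("The corpus mentions solvents such as heptane, kerosene and toluene."))
# [('heptane', 'solvents'), ('kerosene', 'solvents'), ('toluene', 'solvents')]
```

Its weakness is visible immediately: it only captures single-word heads, which is precisely the limitation on multi-word and nested phrases that motivates the thesis.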
APA, Harvard, Vancouver, ISO, and other styles
12

Jia, Fei, Jeerwan Chawhuaymak, Mark Riley, Werner Zimmt, and Kimberly Ogden. "Efficient extraction method to collect sugar from sweet sorghum." BioMed Central, 2013. http://hdl.handle.net/10150/610172.

Full text
Abstract:
BACKGROUND: Sweet sorghum is a domesticated grass containing a sugar-rich juice that can be readily utilized for ethanol production. Most of the sugar is stored inside the cells of the stalk tissue and can be difficult to release, a necessary step before conventional fermentation. While this crop holds much promise as an arid-land sugar source for biofuel production, a number of challenges must be overcome. One lies in the inherent labile nature of the sugars in the stalks, leading to a short usable storage time. Also, collection of sugars from the sweet sorghum stalks is usually accomplished by mechanical squeezing, but generally does not collect all of the available sugars.
RESULTS: In this paper, we present two methods that address these challenges for utilization of sweet sorghum for biofuel production. The first method demonstrates a means to store sweet sorghum stalks in the field under semi-arid conditions. The second provides an efficient water extraction method that can collect as much of the available sugar as feasible. Operating parameters investigated include temperature, stalk size, and solid-liquid ratio, which impact both the rate of sugar release and the maximal amount recovered, with a goal of low water use. The most desirable conditions include 30 °C and a 0.6 ratio of solid to liquid (w/w), which collects 90 % of the available sugar. Variations in extraction methods did not alter the efficiency of the eventual ethanol fermentation.
CONCLUSIONS: The water extraction method has the potential to be used for sugar extraction from both fresh sweet sorghum stalks and dried ones. When combined with current sugar extraction methods, the overall ethanol production efficiency would increase compared to current field practices.
APA, Harvard, Vancouver, ISO, and other styles
13

Bozorgmehr, Pouya. "An efficient online feature extraction algorithm for neural networks." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p1470604.

Full text
Abstract:
Thesis (M.S.)--University of California, San Diego, 2009. Title from first page of PDF file (viewed January 13, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 61-63).
APA, Harvard, Vancouver, ISO, and other styles
14

Tan, Chiu Chiang. "Secure and efficient data extraction for ubiquitous computing applications." W&M ScholarWorks, 2010. https://scholarworks.wm.edu/etd/1539623571.

Full text
Abstract:
Ubiquitous computing creates a world where computers have blended seamlessly into our physical environment. In this world, a "computer" is no longer a monitor-and-keyboard setup, but everyday objects such as our clothing and furniture. Unlike current computer systems, most ubiquitous computing systems are built using small, embedded devices with limited computational, storage and communication abilities. A common requirement for many ubiquitous computing applications is to utilize the data from these small devices to perform more complex tasks. For critical applications such as healthcare or medical-related applications, there is a need to ensure that only authorized users have timely access to the data found in the small device. In this dissertation, we study the problem of how to securely and efficiently extract data from small devices. Our research considers two categories of small devices that are commonly used in ubiquitous computing: battery-powered sensors and battery-free RFID tags. Sensors are more powerful devices equipped with storage and sensing capabilities that are limited by battery power, whereas tags are less powerful devices with limited functionality, but have the advantage of being operable without battery power. We also consider two types of data access patterns, local and remote access. In local data access, the application will query the tag or the sensor directly for the data, while in remote access, the data is already aggregated at a remote location and the application will query the remote location for the necessary information. The difference between local and remote access is that in local access, the tag or sensor only needs to authenticate the application before releasing the data, but in remote access, the small device may have to perform additional processing to ensure that the data remains secure after being collected. In this dissertation, we present secure and efficient local data access solutions for a single RFID tag, multiple RFID tags, and a single sensor, and remote data access solutions for both RFID tags and sensors.
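The authentication step described for local access has a generic flavor that can be sketched with Python's standard library: a keyed challenge-response exchange between a sensor and an application. This is not the dissertation's protocol (real battery-free tags often cannot afford full HMAC-SHA256), and the pre-shared key setup is assumed:

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(16)          # pre-shared between sensor and application

def sensor_challenge() -> bytes:
    # Sensor issues a fresh random nonce so responses cannot be replayed.
    return secrets.token_bytes(16)

def app_response(key: bytes, challenge: bytes) -> bytes:
    # Application proves knowledge of the key by MACing the challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def sensor_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

challenge = sensor_challenge()
response = app_response(KEY, challenge)
print("access granted" if sensor_verify(KEY, challenge, response) else "denied")
```

The dissertation's contribution lies in making this kind of check, and the post-collection protections needed for remote access, cheap enough for such constrained hardware.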
APA, Harvard, Vancouver, ISO, and other styles
15

Yan, Shu. "Efficient numerical methods for capacitance extraction based on boundary element method." Texas A&M University, 2005. http://hdl.handle.net/1969.1/3230.

Full text
Abstract:
Fast and accurate solvers for capacitance extraction are needed by the VLSI industry in order to achieve good design quality in feasible time. With the development of technology, this demand is increasing dramatically. Three-dimensional capacitance extraction algorithms are desired due to their high accuracy. However, the present 3D algorithms are slow and thus their application is limited. In this dissertation, we present several novel techniques to significantly speed up capacitance extraction algorithms based on boundary element methods (BEM) and to compute the capacitance extraction in the presence of floating dummy conductors. We propose the PHiCap algorithm, which is based on a hierarchical refinement algorithm and the wavelet transform. Unlike traditional algorithms which result in dense linear systems, PHiCap converts the coefficient matrix in capacitance extraction problems to a sparse linear system. PHiCap solves the sparse linear system iteratively, with much faster convergence, using an efficient preconditioning technique. We also propose a variant of PHiCap in which the capacitances are solved for directly from a very small linear system. This small system is derived from the original large linear system by reordering the wavelet basis functions and computing an approximate LU factorization. We named the algorithm RedCap. To our knowledge, RedCap is the first capacitance extraction algorithm based on BEM that uses a direct method to solve a reduced linear system. In the presence of floating dummy conductors, the equivalent capacitances among regular conductors are required. For floating dummy conductors, the potential is unknown and the total charge is zero. We embed these requirements into the extraction linear system. Thus, the equivalent capacitance matrix is solved directly. The number of system solves needed is equal to the number of regular conductors. Based on a sensitivity analysis, we propose the selective coefficient enhancement method for increasing the accuracy of selected coupling or self-capacitances with only a small increase in the overall computation time. This method is desirable for applications, such as crosstalk and signal integrity analysis, where the coupling capacitances between some conductors need high accuracy. We also propose the variable order multipole method, which enhances the overall accuracy without raising the overall multipole expansion order. Finally, we apply the multigrid method to capacitance extraction to solve the linear system faster. We present experimental results to show that the techniques are significantly more efficient in comparison to existing techniques.
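The computational pattern behind PHiCap's solver stage — a preconditioned iterative Krylov solve on a sparse system — can be sketched generically with SciPy. The system below is a stand-in, not a BEM matrix, and PHiCap's wavelet sparsification and hierarchical refinement are not reproduced:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in sparse system A q = p (tridiagonal, purely illustrative; a real
# BEM capacitance matrix is dense until sparsified, e.g. by a wavelet
# transform as in PHiCap).
n = 200
A = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")
p = np.ones(n)                              # unit-potential right-hand side

ilu = spla.spilu(A)                         # incomplete LU preconditioner
M = spla.LinearOperator((n, n), ilu.solve)  # apply M^-1 via ILU back-solves
q, info = spla.gmres(A, p, M=M)             # preconditioned Krylov solve
print("converged" if info == 0 else f"gmres info={info}",
      "| total charge:", q.sum())
```

A good preconditioner is what turns the iteration count from hundreds into a handful, which is the point of the efficient preconditioning technique the abstract mentions.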
APA, Harvard, Vancouver, ISO, and other styles
16

Solis, Montero Andres. "Efficient Feature Extraction for Shape Analysis, Object Detection and Tracking." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34830.

Full text
Abstract:
During the course of this thesis, two scenarios are considered. In the first one, we contribute to feature extraction algorithms. In the second one, we use features to improve object detection solutions and localization. The two scenarios give rise to four thesis sub-goals. First, we present a new shape skeleton pruning algorithm based on contour approximation and the integer medial axis. The algorithm effectively removes unwanted branches, conserves the connectivity of the skeleton and respects the topological properties of the shape. The algorithm is robust to significant boundary noise and to rigid shape transformations. It is fast and easy to implement. While shape-based solutions via boundary and skeleton analysis are viable solutions to object detection, keypoint features are important for textured object detection. Therefore, we present a keypoint feature-based planar object detection framework for vision-based localization. We demonstrate that our framework is robust against illumination changes, perspective distortion, motion blur, and occlusions. We increase robustness of the localization scheme in cluttered environments and decrease false detection of targets. We present an off-line target evaluation strategy and a scheme to improve pose. Third, we extend planar object detection to a real-time approach for 3D object detection using a mobile and uncalibrated camera. We develop our algorithm based on two novel naive Bayes classifiers for viewpoint and feature matching that improve performance and decrease memory usage. Our algorithm exploits the specific structure of various binary descriptors in order to boost feature matching by conserving descriptor properties. Our novel naive classifiers require a database with a small memory footprint because we only store efficiently encoded features. We improve the feature-indexing scheme to speed up the matching process, creating a highly efficient database for objects. Finally, we present a model-free long-term tracking algorithm based on the Kernelized Correlation Filter. The proposed solution improves the correlation tracker based on precision, success, accuracy and robustness while increasing frame rates. We integrate an adjustable Gaussian window and sparse features for robust scale estimation, creating a better separation of the target and the background. Furthermore, we include fast descriptors and the Fourier spectrum packed format to boost performance while decreasing the memory footprint. We compare our algorithm with state-of-the-art techniques to validate the results.
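The correlation-filter core that KCF-style trackers build on has a one-line closed form in the Fourier domain. A minimal linear (MOSSE-like) sketch in NumPy follows; the kernelized, multi-channel, scale-adaptive machinery of the thesis is omitted, and the random patch is a stand-in for real image features:

```python
import numpy as np

def train(patch, y, lam=1e-4):
    # Closed-form filter in the Fourier domain: H* = (Y . conj(X)) / (X . conj(X) + lam)
    X, Y = np.fft.fft2(patch), np.fft.fft2(y)
    return Y * np.conj(X) / (X * np.conj(X) + lam)

def respond(filt, patch):
    # Correlation response; its peak locates the target in the new patch.
    return np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))

size = 64
yy, xx = np.mgrid[:size, :size]
# Desired response: a Gaussian peaked at the target center (32, 32).
y = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / (2 * 3.0 ** 2))
patch = np.random.default_rng(1).normal(size=(size, size))

filt = train(patch, y)
resp = respond(filt, patch)
print("peak at", np.unravel_index(resp.argmax(), resp.shape))  # near (32, 32)
```

Because training and detection are elementwise operations between FFTs, such trackers run at the high frame rates the abstract emphasizes.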
APA, Harvard, Vancouver, ISO, and other styles
17

Etchells, Terence Anthony. "Rule extraction from neural networks : a practical and efficient approach." Thesis, Liverpool John Moores University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Svolos, Andrew. "Space and time efficient data structures in texture feature extraction." Thesis, University College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299379.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ho, John C. 1980. "Improving the external extraction efficiency of organic light emitting devices." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28396.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 54-55).
Over the last decade Organic Light Emitting Device (OLED) technology has matured, progressing to the point where state-of-the-art OLEDs can demonstrate external extraction efficiencies that surpass those of fluorescent lights. Additionally, OLEDs have the benefits over conventional display and lighting technologies of large viewing angles and mechanical flexibility. However, in order to become a commercially viable, widely adopted technology, OLEDs must not only match the long-term stability of competing technologies, but must demonstrate a distinct advantage in efficiency. This thesis presents various strategies for fabricating nanopatterned structures that can be integrated into OLEDs with the aim of improving the external extraction efficiency. Soft nanolithography, colloidal deposition, and preparation of metallic nanoparticle films are among the fabrication techniques investigated for potential applications in enhancing OLED performance.
by John C. Ho. M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
20

Kamon, Mattan 1969. "Efficient techniques for inductance extraction of complex 3-D geometries." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Raisglid, Margaret Ellen. "Factors affecting the selectivity and efficiency of solid-phase extraction." Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/282246.

Full text
Abstract:
The modified surface of solid phase extraction sorbents is studied with respect to the impact on the isolation and purification of analytes. Interactions at the interface are characterized by quantifying recoveries of a broad range of analytes, on a variety of surfaces, and under various extraction conditions. Bonded phases of varying hydrocarbon chain length are studied. A hydrophobic surface (e.g., C18) favors the retention of small polar compounds, while a more polar surface (C2) favors the elution of larger hydrophobic compounds. A compromise phase (C8) improves overall recoveries, while analyte recoveries were optimized by extraction onto stacked and layered phases. Analytes are retained by different mechanisms and under different solvent conditions. Selective elution of analytes is achieved by judiciously choosing the elution solvent. Data obtained from comparing the time requirements for drying various phases are consistent with previously developed models of the bonded silica surface. The impact of the presence of water on the elution of analytes is also studied. Experiments are presented where increasing concentrations of organic solvent are added to the sample matrix. Recoveries for polar compounds dropped as the matrix became more energetically favorable. Recoveries improved for hydrophobic species as the formation of agglomerations was disrupted. The impact of sample loading rates on analyte recoveries is studied. No significant differences in recoveries of a broad range of non-ionizable analytes are observed for loading rates ranging from 8 to 30 mL per minute on a 13 mm diameter x 15 mm height sorbent bed. The impact of the porous nature of the extraction sorbent on analyte recoveries, under different conditions of temperature and solvent contact time, is studied. A dependence on the diffusion of analytes into and out of the pores is observed. Experiments are devised to characterize the role of particulates in the sample matrix during solid phase extraction. Parameters studied include size of particles in the matrix, in the sorbent bed, porosity of the frit retaining the sorbent, and utility of a depth filter. Samples laden with particulates are spiked with trace analytes and show no reduction in recoveries resulting from the presence of particulate matter.
APA, Harvard, Vancouver, ISO, and other styles
22

Chia, Mark P. C. "Efficient critical area extraction for photolithographically defined patterns on ICs." Thesis, University of Edinburgh, 2002. http://hdl.handle.net/1842/13371.

Full text
Abstract:
The IC industry is developing at a phenomenal rate, where smaller and denser chips are being manufactured. The yield of the fabrication process is one of the key factors that determine the cost of a chip. The pattern transferred onto silicon is not a perfect representation of the mask layout, and for an SRAM cell this results in a difference of 3 % between the average number of faults calculated from the mask layout and the aerial image. This thesis investigates methods that are capable of better estimating the yield of an IC during the design phase, which can efficiently and accurately estimate the critical area (CA) without the need to directly calculate the aerial image. The initial attempt generates an equivalent set of parallel lines from the mask layout, which is then used to estimate the CA after pattern transfer. To achieve this, EYE, Depict and WorkBench were integrated with in-house software. Benchmarking on appropriate layouts resulted in estimates within 0.5 - 2.5 % of the aerial image compared with 1.5 - 3.5 % for the mask layout. However, for layouts which did not lend themselves to representation by equivalent parallel lines, this method resulted in estimates that were not as accurate as those obtained using the mask layout. The second approach categorises CA curves into different groups based on physical characteristics of the layout. By identifying which group a curve belongs to, the appropriate mapping can be made to estimate the pattern transfer process. However, due to the large number of track combinations, it proved too difficult to reliably classify layouts into an appropriate group. Another method proposed determines a track length and position using a combination of AND and OR operations with shifting algorithms. The limitation of this approach was that it was not robust and only proved to work with certain layout types. The fourth method used a one-dimensional algorithm to categorise layouts. The estimated CA was within 0.2 % of the aerial image, as compared to the mask layout CA of 2.2 %. The disadvantage of this method is that it can only classify parallel tracks. The next approach built upon the above method and can categorise a layout in two dimensions, not being limited to parallel tracks. A variety of designs were used as benchmarks, and for these layouts this method resulted in estimates that were within 0 - 10.7 % of the aerial image compared with 0.5 - 13.4 % for the mask layout.
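The critical-area quantity being estimated has a compact form for the simplest geometry, two parallel tracks: a circular defect of diameter x causes a short once x exceeds the track spacing s. A toy sketch under the commonly assumed 1/x³ defect-size distribution, with all parameters illustrative and no connection to the thesis's benchmark layouts:

```python
import numpy as np

def critical_area(x, length, s):
    # Critical area for shorts between two parallel tracks of the given
    # length and spacing s: a defect of diameter x bridges them when x > s,
    # sweeping a critical band of width (x - s) (end effects ignored).
    return length * np.maximum(x - s, 0.0)

def average_faults(length, s, x0, x_max, d0, n=100_000):
    # Weight CA(x) by the normalized defect size distribution 2*x0^2 / x^3
    # (defined for x >= x0) and integrate; d0 is the overall defect density.
    x = np.linspace(x0, x_max, n)
    f = critical_area(x, length, s) * (2.0 * x0**2 / x**3)
    return d0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoid rule

print(average_faults(length=1000.0, s=0.2, x0=0.05, x_max=5.0, d0=1e-4))
```

The thesis's point is that s after pattern transfer differs from the drawn spacing, which is why CA computed from the mask layout misestimates the fault count relative to the aerial image.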
APA, Harvard, Vancouver, ISO, and other styles
23

Liu, Pailing. "Enhancing TK rubber extraction efficiency with fungus and enzyme treatments." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1515162731667997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Vučković, Jelena (Scherer, Axel, advisor). "Photonic crystal structures for efficient localization or extraction of light." Diss., Pasadena, Calif. : California Institute of Technology, 2002. http://resolver.caltech.edu/CaltechETD:etd-08252004-130544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Wackersreuther, Bianca [Verfasser], and Christian [Akademischer Betreuer] Böhm. "Efficient Knowledge Extraction from Structured Data / Bianca Wackersreuther. Betreuer: Christian Böhm." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2011. http://d-nb.info/1018847189/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

El-Sadi, Haifa. "Efficiency of PAHs removal from clayey soil using supercritical fluid extraction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0007/MQ43546.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Calic, Janko. "Highly efficient low-level feature extraction for video representation and retrieval." Thesis, Queen Mary, University of London, 2004. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1812.

Full text
Abstract:
Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured in an intelligent way, relying on the content and the rich semantics involved. Current Content Based Video Indexing and Retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics. This work focuses on the issues of efficiency and scalability in video indexing and retrieval to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed. It is based on the prediction information extracted directly from the compressed domain features and the robust scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. Derived from the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed. Results demonstrate the efficiency and robustness of the temporal analysis algorithm that runs in real time maintaining the high precision and recall of the detection task. Adaptive key-frame extraction and summarisation achieve a good overview of the visual content, while the colour quantisation algorithm efficiently creates hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking the video clips with a limited lexicon of related keywords.
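As a simple point of comparison for the temporal analysis described above, here is a pixel-domain histogram-difference shot detector using OpenCV; the thesis instead works directly on compressed-domain prediction information, and both "clip.mp4" and the 0.5 threshold are hypothetical:

```python
import cv2

cap = cv2.VideoCapture("clip.mp4")     # hypothetical input file
prev_hist, frame_idx = None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Coarse 8x8x8 BGR colour histogram, normalized for comparability.
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is not None:
        # Low correlation between consecutive frames suggests a cut.
        if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < 0.5:
            print("possible shot boundary at frame", frame_idx)
    prev_hist, frame_idx = hist, frame_idx + 1
cap.release()
```

Decoding every frame to pixels is exactly the cost the compressed-domain approach avoids, which is where its real-time performance comes from.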
APA, Harvard, Vancouver, ISO, and other styles
28

Zhu, Zhenhai 1970. "Efficient techniques for wideband impedance extraction of complex 3-D geometries." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Yu, Li Ph D. Massachusetts Institute of Technology. "Efficient IC statistical modeling and extraction using a Bayesian inference framework." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99786.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 147-156).
Variability modeling and extraction in advanced process technologies is a key challenge to ensure robust circuit performance as well as high manufacturing yield. In this thesis, we present an efficient framework for device and circuit variability modeling and extraction by combining an ultra-compact transistor model, called the MIT virtual source (MVS) model, and a Bayesian extraction method. Based on statistical formulations extended from the MVS model, we propose algorithms for three applications that greatly reduce the time and cost required for measurement of on-chip test structures and characterization of library cells. We start with a novel DC and transient parameter extraction methodology for the MVS model and achieve a quantitative match with industry standard models for output characteristics of MOS transistor devices. We develop a physically based statistical MVS model extension and a corresponding statistical extraction technique based on the backward propagation of variance (BPV). The resulting statistical MVS model is validated using Monte Carlo simulations, and the statistical distributions of several figures of merit for logic and memory cells are compared with those of a 40-nm CMOS industrial design kit. A critical problem in design for manufacturability (DFM) is to build statistically valid prediction models of circuit performance based on a small number of measurements taken from a mixture of on-chip test structures. Towards this goal, we propose a technique named physical subspace projection to transfer a mixture of measurements into a unique probability space spanned by MVS parameters. We search over MVS parameter combinations to find those with the maximum probability by extending the expectation-maximization (EM) algorithm and iteratively solve the maximum a posteriori (MAP) estimation problem. Finally, we develop a process shift calibration technique to estimate circuit performance by combining SPICE simulation and very few new measurements. We further develop a parameter extraction algorithm to accurately extract all current-voltage (I-V) parameters given limited and incomplete I-V measurements, applicable to early technology evaluation and statistical parameter extraction. An important step in this method is the use of MAP estimation, where past measurements of transistors from various technologies are used to learn a prior distribution and its uncertainty matrix for the parameters of the target technology. We then utilize Bayesian inference to facilitate extraction and posterior estimates for the target technologies using a very small set of additional measurements. Finally, we develop a novel flow to enable computationally efficient statistical characterization of delay and slew in standard cell libraries. We first propose a novel ultra-compact, analytical model for gate timing characterization. Next, instead of exploiting the sparsity of the regression coefficients of the process space with a reduced process sample size, we exploit correlations between different cell variables (design and input conditions) by a Bayesian learning algorithm to estimate the parameters of the aforementioned timing model, using past library characterizations along with a very small set of additional simulations.
by Li Yu. Ph. D.
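The MAP step this framework builds on has a convenient closed form when both the likelihood and the prior are Gaussian. A minimal numerical sketch follows, with a toy linear model and a made-up prior standing in for the priors learned from past technologies; it is not the MVS model itself:

```python
import numpy as np

# MAP estimation for y = X @ theta + noise with a Gaussian prior on theta:
# maximize  log p(y | theta) + log p(theta).
rng = np.random.default_rng(0)
theta_true = np.array([1.2, -0.5, 0.8])
X = rng.normal(size=(5, 3))                  # very few new measurements
y = X @ theta_true + 0.01 * rng.normal(size=5)

mu0 = np.array([1.0, -0.4, 0.7])             # prior mean (from "past technologies")
Sigma0_inv = np.eye(3) / 0.1**2              # inverse prior covariance (tight prior)
noise_var = 0.01**2

# Closed-form MAP estimate for Gaussian likelihood + Gaussian prior.
A = X.T @ X / noise_var + Sigma0_inv
b = X.T @ y / noise_var + Sigma0_inv @ mu0
theta_map = np.linalg.solve(A, b)
print(theta_map)                             # close to theta_true
```

The prior term is what lets a handful of new measurements suffice: where the data are uninformative, the estimate falls back on what previous technologies looked like.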
APA, Harvard, Vancouver, ISO, and other styles
30

Fuchs, Cornelius. "Increasing the light extraction efficiency of monochrome organic light-emitting diodes." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-186848.

Full text
Abstract:
Organic light-emitting diodes (OLEDs) are an attractive new light source for display and lighting applications. In general, the light extraction from OLEDs is limited due to the high refractive index of the active emitter material and the thin-film geometry. The high refractive index causes the trapping of a significant portion of the emitted light due to total internal reflection (TIR). Due to the thin-film layout, the light emission is enhanced for resonant modes of the coherent optical microcavity, in particular for light affected by TIR. In this work two approaches are investigated in detail in order to increase the light extraction efficiency of OLEDs. In a first approach, the implementation of a low refractive index material next to the opaque metallic back-reflector is discussed. This modifies the dispersion relation of the non-radiative surface plasmon polariton (SPP) mode at the metal / dielectric interface, causing a shift of the SPP's dispersion relation. Thereby, the phase space into which power can be efficiently dissipated by the emitter is reduced. For the SPP this power would have been lost to the cavity, such that in total the outcoupling efficiency is increased. In experiments, an increased external quantum efficiency (EQE) is observed for an emitter exhibiting isotropic orientation of the sources (Ir(ppy)3, +19 %), as well as for an emitter which shows preferential horizontal orientation (Ir(ppy)2(acac), +18 %), compared to an optimized device which uses standard material. This corresponds very well to the enhancement of the outcoupling efficiencies of the corresponding microcavities (+23 % and +19 %, respectively) when reducing the refractive index of the hole transport layer by 15 %. Optical simulations indicate that the approach is generally applicable to a wide range of device architectures. These in particular include OLEDs with emitters showing a perfectly horizontal alignment of their transition dipole moments. Furthermore, the approach is suitable for white OLEDs. Bragg scattering was investigated as a second option to increase the light extraction from OLEDs. The method requires a periodically structured surface. For the bottom-emitting OLEDs, this is achieved by direct laser interference patterning (DLIP) of the transparent electrode. Additionally, top-emitting devices were fabricated onto periodically corrugated photoresist layers.
Using a periodic line pattern with a lattice constant of 0.71 μm, the EQE of the bottom-emitting devices was enhanced by 27 % compared to an optimized planar reference. For the bottom-emitting layout, increasing the lattice constant leads to lower EQEs. The increased EQE is attributed to the superposition of the radiative cavity resonances by Bragg-scattered intensities of trapped modes. The intensities depend on the lattice constants as well as the height of the periodic surface perturbation. For top-emitting OLEDs comprising a lattice constant of 1.0 μm, the EQE was increased by 13 %. Reducing the lattice constant (0.6 μm) decreases the EQE, albeit the luminous efficacy is increased by 13.5 % due to a heavily perturbed emission spectrum. The perturbation is attributed to a coherent interaction of the Bragg-scattered modes due to the strong optical microcavity of the top-emitting OLEDs. Thus, for strong perturbation specific emission patterns can be achieved, but an overall enhanced efficiency is difficult to obtain. To investigate the observed results theoretically, a detailed simulation approach is outlined. The simulation method is carefully evaluated using reference data from literature. Using the simulation approach, the emission patterns as well as the efficiencies of the devices can be estimated. The emission spectra reproduced from simulation are in good agreement with the experiment. Furthermore, for the bottom-emitting layout, a strong interaction can be found from simulations for lattice constants below 0.5 μm. For top-emitting OLEDs, the weak interaction regime seems to be more likely to result in an overall enhanced emission. This requires, in contrast to conventional opinion, very shallow perturbations or lattice constants which exceed the peak wavelength of the emission spectrum. However, with the established simulation approach, a-priori propositions on the emission spectrum or on particularly beneficial device layouts become feasible.
APA, Harvard, Vancouver, ISO, and other styles
31

Sreevalson, Nair Jaya. "Modular processing of two-dimensional significance map for efficient feature extraction." Thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-07012002-111746.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hamaali, Arkan Latif. "A Study of Selecting an Efficient Procedure for Intermittent Electrochemical Chloride Extraction." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for konstruksjonsteknikk, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11563.

Full text
Abstract:
In this study, eight specimens were prepared in the laboratory: four with a w/c ratio of 0.4 and four with a w/c ratio of 0.5. All specimens were intentionally contaminated with 3% chlorides (by cement weight) during mixing in order to study the effect of various parameters (current density, intermittent current periods, and w/c ratio) on the total efficiency of the ECE treatment. A titanium net immersed in a calcium hydroxide electrolyte solution was used in this work, and the current densities were 0.7 and 1.0 A/m2 of steel surface. Two different current-on intervals (12 and 5 days) were used as well, while the current-off interval was 2 days in all treatments. The plexiglass spacers, placed between the anode net and the concrete surface to reduce acidification of the concrete surface, resulted in lower ECE efficiency in the concrete directly above them. Keeping the concrete surface immersed in the electrolyte solution throughout the treatment period allowed a considerable amount of chlorides in these areas to diffuse out of the concrete. According to the overall chloride measurements, the 12-day current-on treatment was most efficient in the early stage of the intermittent treatment, while the 5-day current-on treatment was most efficient in the advanced stages. It can therefore be beneficial to apply ECE with graded current-on intervals that start long and end short. Because of the low current densities used, the total treatment duration was rather long (90 days), and the ECE treatment was most efficient in the specimens treated at 1.0 A/m2 of steel surface. The difference between the two w/c ratios used in this work was small, so the influence of the w/c ratio on ECE efficiency was difficult to discern. At the end of the treatment, the fraction of extracted chlorides was generally between 80 and 86% in all specimens except two, which had two parameters in common (a current density of 0.7 A/m2 and 5-day current-on intervals). These two specimens gave the worst results, with only about 66% of the chlorides extracted at the end of the treatment.
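To make the intermittent schedules concrete, here is a small arithmetic sketch comparing the total charge passed per square metre of steel. The current densities and intervals come from the abstract; the assumption that the cycles repeat back-to-back over the full 90 days is mine.

    # Hedged sketch: total charge per m^2 of steel over 90 days of
    # intermittent ECE for the two current densities and two cycles above.

    def total_charge(j_a_per_m2, days_on, days_off, total_days=90):
        """Charge density in C/m^2, assuming the on/off cycle repeats for the
        whole treatment and current flows only during on-intervals."""
        cycle = days_on + days_off
        full_cycles = total_days // cycle
        remainder_on = min(total_days % cycle, days_on)
        on_days = full_cycles * days_on + remainder_on
        return j_a_per_m2 * on_days * 24 * 3600

    for j in (0.7, 1.0):
        for on, off in ((12, 2), (5, 2)):
            q = total_charge(j, on, off)
            print(f"{j} A/m^2, {on} days on / {off} off: {q/1e6:.2f} MC/m^2")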
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Quan. "Efficient numerical modeling of random surface roughness for interconnect internal impedance extraction." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/HKUTO/record/B3955708X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Quan (陳全). "Efficient numerical modeling of random surface roughness for interconnect internal impedance extraction." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B3955708X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Chu, Jiangtao. "Microdialysis Sampling of Macro Molecules : Fluid Characteristics, Extraction Efficiency and Enhanced Performance." Doctoral thesis, Uppsala universitet, Mikrosystemteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-261068.

Full text
Abstract:
In this thesis, the fluid characteristics and sampling efficiency of high molecular weight cut-off microdialysis are presented, with the aim of improving the understanding of microdialysis sampling mechanisms and its performance regarding the extraction efficiency of biological fluids and biomarkers. Microdialysis is a well-established clinical sampling tool for monitoring small biomarkers such as lactate and glucose. In recent years, interest has risen in using high molecular weight cut-off microdialysis to sample macromolecules such as neuropeptides, cytokines and proteins. However, with the increase of the membrane pore size, high molecular weight cut-off microdialysis exhibits drawbacks such as unstable catheter performance, imbalanced fluid recovery, and low and unstable molecule extraction efficiency. Still, the fluid characteristics of high molecular weight cut-off microdialysis have rarely been studied, and the clinical and in vitro molecule sampling efficiencies reported in recent studies vary widely and are difficult to compare. Therefore, three aspects of high molecular weight cut-off microdialysis have been explored in this thesis. First, the fluid characteristics of large-pore microdialysis have been investigated theoretically and experimentally. The results suggest that the experimental fluid recovery is consistent with its theoretical formula. Second, the macromolecule transport behaviour has been visualized and semi-quantified using an in vitro test system and fluorescence imaging. Third, two in vitro tests were performed to mimic in vivo cerebrospinal fluid sampling under pressurization, using native and differently surface-modified catheters. As a result, individual protein/peptide extraction efficiencies were obtained using targeted mass spectrometry analysis. In summary, a theory of the fluid characteristics of high molecular weight cut-off microdialysis has been built and verified; macromolecular transport in microdialysis catheters has been visualized; in vivo biomolecule sampling has been simulated by well-defined in vitro studies; individual biomolecular extraction efficiencies have been shown; and different surface modifications of the microdialysis catheter have been investigated. It was found that improved sampling performance can be achieved, in terms of balanced fluid recovery and controlled protein extraction efficiency.
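For readers unfamiliar with the two efficiency measures discussed here, a minimal sketch of the textbook definitions follows; the numerical values are invented for illustration and are not data from the thesis.

    # Standard microdialysis quantities (textbook definitions, not the
    # author's code); all example values are assumptions.

    def fluid_recovery(v_out_ul, v_in_ul):
        """Volumetric fluid recovery: dialysate volume out / perfusate volume in."""
        return v_out_ul / v_in_ul

    def relative_recovery(c_dialysate, c_sample):
        """Extraction efficiency of an analyte: dialysate conc. / sample conc."""
        return c_dialysate / c_sample

    print(f"fluid recovery:   {fluid_recovery(9.2, 10.0):.0%}")
    print(f"protein recovery: {relative_recovery(0.8, 4.0):.0%}")  # e.g. ng/uL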
APA, Harvard, Vancouver, ISO, and other styles
36

Marvi, Hossein. "Efficient feature extraction based on two-dimensional cepstrum analysis for speech recognition." Thesis, University of Surrey, 2004. http://epubs.surrey.ac.uk/843940/.

Full text
Abstract:
Solving speech recognition problems requires an adequate feature extraction technique to transform the raw speech signal into a set of feature vectors that preserve most of the information in the speech signal. The features should ideally be compact, distinct and well representative of the speech signal. If the feature vectors do not represent the important content of the speech, the system will perform poorly regardless of the pattern recognition techniques applied. Many different feature extraction representations of the speech signal have been suggested and tried for speech recognition. The most popular features currently in use are Mel-frequency cepstral coefficients (MFCC) and perceptual linear prediction (PLP), which are based on one-dimensional cepstrum analysis. The two-dimensional cepstrum (TDC) is an alternative time-frequency representation of a speech signal which can preserve both the instantaneous and transitional information of the speech signal. The principal aim of this thesis is the study of two-dimensional cepstrum analysis as a feature extraction technique for speech recognition. A novel feature extraction technique, the two-dimensional root cepstrum (TDRC), is also introduced. It has the advantage of an adjustable γ parameter which can be used to optimise the feature extraction process, reducing the dimensions of the feature matrix and simplifying computation. In addition, the Mel-TDRC is proposed as a modification of the original TDRC to improve accuracy. It is shown that both the TDC and the TDRC outperform the conventional cepstrum. To preserve both magnitude and phase details of the speech signal simultaneously in a feature matrix, the Hartley transform (HT) is suggested as a substitute for the Fourier transform (FT) in two-dimensional cepstrum analysis. Experimental results demonstrate the enhanced capability of the HT in two-dimensional root cepstral analysis to improve recognition accuracy. An experimental comparative study of nine feature extraction methods based on cepstral analysis is also carried out.
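A minimal sketch of a TDC-style feature matrix, following the standard definition of the two-dimensional cepstrum, is given below. This is an illustration rather than the author's implementation; the frame sizes and the retained low-quefrency block are arbitrary choices.

    import numpy as np

    # Illustrative two-dimensional cepstrum features: frame the signal, take
    # log-magnitude spectra, then apply a 2-D inverse DFT over the (time,
    # frequency) plane and keep a small low-quefrency block as features.

    def tdc_features(x, frame_len=256, hop=128, keep=(8, 8)):
        n_frames = 1 + (len(x) - frame_len) // hop
        window = np.hanning(frame_len)
        frames = np.stack([x[i*hop:i*hop+frame_len] * window
                           for i in range(n_frames)])
        log_spec = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-10)
        tdc = np.real(np.fft.ifft2(log_spec))   # 2-D cepstrum
        return tdc[:keep[0], :keep[1]]          # compact feature matrix

    rng = np.random.default_rng(0)
    features = tdc_features(rng.standard_normal(4096))
    print(features.shape)  # (8, 8)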
APA, Harvard, Vancouver, ISO, and other styles
37

Quintana, Ashwell Nicolas Efrain. "Essays on optimal extraction of groundwater in Western Kansas." Diss., Kansas State University, 2017. http://hdl.handle.net/2097/38153.

Full text
Abstract:
Doctor of Philosophy, Department of Agricultural Economics; advisors: Jeffrey M. Peterson and Nathan P. Hendricks. The two studies presented in this dissertation examine incentives for groundwater extraction and their resulting effect on aquifer depletion. Both studies apply dynamic optimization methods in the context of irrigated agriculture in arid and semi-arid regions such as western Kansas. The first study examines the effects of capital subsidies aimed at increasing irrigation application efficiency. The second study examines the effects of changing incentives posed by changes in climatic patterns and by technical progress in the form of increasing crop water productivity. Both studies have significant policy and groundwater management implications. Subsidies for the adoption of (more) efficient irrigation technologies are commonly proposed and enacted with the goal of achieving water conservation. These subsidies are more politically feasible than water taxes or water use restrictions. The reasoning behind this type of policy is that increased application efficiency makes it possible to sustain a given level of crop production per acre with lower levels of groundwater pumping, all else equal. Previous literature argues that adoption of more efficient irrigation systems may not reduce groundwater extraction. Rewarding the acquisition of more efficient, capital-intensive irrigation equipment affects the incentives farmers have to pump groundwater. For instance, a farmer may choose to produce more valuable and water-intensive crops or to expand the irrigated acreage after adopting the more efficient irrigation system. Hence, the actual impact of the policy on overall groundwater extraction and related aquifer depletion is unclear. The first chapter examines the effects of such irrigation technology subsidies using a model of inter-temporal common-pool groundwater use with substitutable technology and declining well yields from groundwater stocks, where pumping cost and stock externalities arise from the common property problem. An optimal control analytical model is developed and simulated with parameters from Sheridan County, Kansas, a representative region overlying the Ogallala Aquifer. The study contrasts competitive and optimal allocations and accounts for the effects of endogenous and time-varying irrigation capital on water use and groundwater stock. The analysis is the first to account for the labor savings from improved irrigation technologies. The results show that in the absence of policy intervention, the competitive solution yields an early period of underinvestment in efficiency-improving irrigation technology relative to the socially efficient solution, followed by a period of overinvestment. This suggests a potential role for irrigation capital subsidies to improve welfare over certain ranges of the state variables. In contrast to previous work, the findings are evidence that significant returns may be achieved from irrigation capital subsidies. Finally, a policy scenario is simulated in which an irrigation technology subsidy is implemented, to explore whether such a program can capture significant portions of the potential welfare gain. Results indicate that the technology subsidy can improve welfare, but it captures a relatively small portion of the potential gains.

The second chapter presents a dynamic model of groundwater extraction for irrigation in which climate change and technical progress are included as exogenous state variables, in addition to the usual state variable of the stock of groundwater. The key contributions of this study are (i) an intuitive description of the conditions under which groundwater extraction can be non-monotonic, (ii) a numerical demonstration that extraction is non-monotonic in an important region overlying the Ogallala Aquifer, and (iii) predicted gains from management that are substantially larger after accounting for climate and technical change. Intuitively, optimal extraction is increasing in early periods when the marginal benefits of extraction rise sufficiently fast, due to climate and technical change, compared to the increase in the marginal cost of extraction. In contrast, most previous studies include the stock of groundwater as the only state variable and, consequently, recommend a monotonically decreasing extraction path. In this study, the numerical simulations for a region in Kansas overlying the Ogallala Aquifer indicate that optimal groundwater extraction peaks 23 years in the future and that the gains from management are large (29.5%). Consistent with previous literature, the predicted gains from management are relatively small (6.1%) when ignoring climate and technical change. The realized gains from management are not substantially impacted by incorrect assumptions about climate and technical change when formulating the optimal plan.
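To illustrate the kind of dynamic optimization both chapters build on, here is a toy finite-horizon dynamic program for groundwater extraction. Every functional form and parameter is an assumption of this sketch, not the dissertation's calibration.

    import numpy as np

    # Toy DP: state = aquifer stock s, control = extraction w. Pumping cost
    # rises as the stock falls; a technical-progress factor shifts marginal
    # benefits up over time, which can make optimal extraction non-monotonic.

    T, beta = 60, 0.96
    s_grid = np.linspace(0.0, 100.0, 201)
    w_grid = np.linspace(0.0, 5.0, 51)

    def payoff(w, s, t):
        benefit = (1.015 ** t) * (10.0 * w - 0.5 * w**2)  # growing benefit
        cost = (2.0 + 0.05 * (100.0 - s)) * w             # depth-dependent cost
        return benefit - cost

    V = np.zeros(len(s_grid))                  # terminal value
    for t in reversed(range(T)):
        V_new = np.empty_like(V)
        for i, s in enumerate(s_grid):
            w = w_grid[w_grid <= s]            # cannot pump more than the stock
            cont = np.interp(s - w, s_grid, V)
            V_new[i] = np.max(payoff(w, s, t) + beta * cont)
        V = V_new
    print(f"value of a full aquifer over {T} years: {V[-1]:.1f}")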
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Peng. "Optimizing Simultaneous-Isomerization-and-Reactive-Extraction (SIRE) Followed by Back-Extraction (BE) Process for Efficient Fermentation of Ketose Sugars to Products." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1524617555286546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wildi, Marc. "Signal extraction : efficient estimation, 'unit-root'-tests and early detection of turning-points /." [St. Gallen], 2002. http://aleph.unisg.ch/hsgscan/hm00131369.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Sharif, Bakhtiar Alireza. "An efficient CMOS RF power extraction circuit for long-range passive RFID tags." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/33712.

Full text
Abstract:
Effective matching and efficient power conversion play key roles in long-range power telemetry. This thesis discusses challenges and suggests solutions for long-range power telemetry with an emphasis on radio-frequency identification (RFID) applications. As a proof of concept, a radio-frequency (RF) power harvesting system in a 0.13-µm CMOS technology was designed, fabricated, and successfully tested. The RF power harvesting system must maintain matching over the wide operating frequency range of passive RFID tags mandated by EPCglobal. In this work, we first analyze the series-inductor matching network and show that there is a trade-off between bandwidth and efficiency. We then derive some guidelines for matching circuit design for RFID tags. To solve the matching problem over a wide frequency range, an adaptive matching system is proposed. At startup, this system turns on while the rest of the chip is still inactive and automatically tunes the matching network to achieve its maximum output voltage. Then the rest of the chip wakes up and functions as normal. A new CMOS rectifier stage is also proposed. This stage is capable of efficient operation even at very low input powers. In addition, this rectifier stage can be cascaded to reach higher output voltages without significantly compromising the overall efficiency. The combination of low-power performance and cascadability makes this rectifier suitable for long-range RFID tags. The test setup and measurement results are discussed in a separate chapter. The measurement results show a 50% rectifier efficiency at 4-µW input power. To the best of our knowledge, to date, this is the highest efficiency reported for rectifiers operating at such a low input power. Also, compared to the output voltage at the nominal center frequency of the input matching network, the system shows less than a 6% drop in output voltage over the entire 55-MHz bandwidth of the system, which verifies the effectiveness of the adaptive matching.
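The bandwidth-versus-efficiency trade-off of a single-section match can be sketched with a textbook rule of thumb; the impedance levels below are assumptions, not the thesis's measured values.

    import math

    # Back-of-the-envelope L-section matching sketch (textbook rule of thumb,
    # not the thesis's analysis): a higher impedance transformation ratio
    # forces a higher loaded Q and therefore a narrower fractional bandwidth.

    def l_match_q(r_high, r_low):
        """Loaded Q needed to transform r_low up to r_high with one L-section."""
        return math.sqrt(r_high / r_low - 1.0)

    f0 = 915e6          # UHF RFID center frequency (EPCglobal band)
    r_rectifier = 5e3   # assumed rectifier input resistance
    r_antenna = 50.0    # assumed antenna resistance

    q = l_match_q(r_rectifier, r_antenna)
    print(f"Q ~ {q:.1f}, fractional BW ~ {1/q:.1%}, BW ~ {f0/q/1e6:.1f} MHz")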
APA, Harvard, Vancouver, ISO, and other styles
41

Sellers, Andrew. "OXPath : a scalable, memory-efficient formalism for data extraction from modern web applications." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555325.

Full text
Abstract:
The evolution of the web has outpaced itself: the growing wealth of information and the increasing sophistication of interfaces necessitate automated processing. Web automation and extraction technologies have been overwhelmed by this very growth. To address this trend, we identify four key requirements of web extraction: (1) interact with sophisticated web application interfaces, (2) precisely capture the relevant data for most web extraction tasks, (3) scale with the number of visited pages, and (4) readily embed into existing web technologies. This dissertation discusses OXPATH, an extension of XPath for interacting with web applications and for extracting information thus revealed. It addresses all the above requirements. OXPATH's page-at-a-time evaluation guarantees memory use independent of the number of visited pages, yet remains polynomial in time. We validate experimentally the theoretical complexity and demonstrate that its evaluation is dominated by technical aspects such as the page rendering of the underlying browser. We also present OXPATH host languages, including OxLatin. OxLatin extends the well-known Pig Latin language and can run on a standard Hadoop cluster. The OxLatin language facilitates distributed expression evaluation in a cloud computing paradigm, providing support for common web extraction scenarios that include expression composition, aggregation, and integration. OxLatin adds support for continuations within its programs, which increases its efficiency by eliminating unneeded page fetches. Our experiments confirm the scalability of OXPATH and OxLatin. We further show that OXPATH outperforms existing commercial and academic data extraction tools by a wide margin. OXPATH is available under an open source license. We also discuss applications and ongoing tool development that establish OXPATH as a data extraction tool that advances the state of the art.
APA, Harvard, Vancouver, ISO, and other styles
42

Yeh, Mei-Ting (葉美廷). "Efficient Extraction for Chicken Skin Collagen." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/01290063476740168776.

Full text
Abstract:
Master's thesis, National Chung Hsing University, Department of Food Science and Biotechnology, academic year 100 (2011–12). Earlier, most of the raw material for collagen extraction came from bovine and porcine sources, but because of epidemic outbreaks and religious considerations, consumers hesitate to use products made from bovine and porcine tissue. Much research has been done to find better substitutes for the raw material, such as marine animals. Marine animals are an ideal material for collagen extraction, but impurities in the extract may raise allergy concerns among consumers. To avoid the above-mentioned concerns, this research chose chicken skin, which is abundant in type I collagen, as the material. Chicken is the world's largest poultry product, but in some countries, such as the United States, chicken skin is treated as waste. If the high-value collagen could be extracted, it would increase the value of this by-product. This study proposes an efficient extraction process for chicken skin collagen. There are three steps: first, the raw material is simply minced and ethanol is added for oil extraction; next, an alkaline treatment removes the unnecessary proteins; and finally an acid-and-enzyme mixture is used to extract the collagen. In the oil extraction step, using 1:20 (v/w) of 20% ethanol at 25°C for 3 h extracts about 65% of the fat. Then 1:20 (v/w) of 0.2 M sodium hydroxide is added to the defatted skin for a 1.5-h alkaline treatment; this condition minimizes collagen loss. After the purification process, the material is soaked in 1:10 (v/w) of 0.5 M acetate solution with 0.1% pepsin for 48 h at 25°C, yielding the collagen solution. This method gives a 6% collagen yield, 85.71% recovery, and 58.63% purity. SDS-PAGE confirms that the collagen extracted from chicken skin by this method is mostly type I collagen. Compared with previous research, the processing time has been shortened to 52.5 h; this result, together with the chosen raw material, may enhance market competitiveness.
APA, Harvard, Vancouver, ISO, and other styles
43

Tsung, Cheng-Sheng (宗成聖). "Enhanced LED Light Extraction Efficiency Using Ag Nanoparticles." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/24734360218786542999.

Full text
Abstract:
Master's thesis, National Chung Hsing University, Department of Materials Science and Engineering, academic year 101 (2012–13). Energy efficiency and economic benefit have become prominent issues. In this thesis, a surface-plasmon-enhanced LED was successfully fabricated, improving the external quantum efficiency of the device by increasing its light extraction efficiency. It was found that the light extraction efficiency of a lateral-conducting blue LED can be enhanced by using surface plasmon polaritons (SPPs) and localized surface plasmons (LSPs). With a silver nanoparticle layer deposited on the p-GaN layer, the metallic thin film acts as a grating structure that increases the out-coupling efficiency during SPP excitation. Subsequently, electron-beam-evaporated 200-nm-thick indium tin oxide (ITO), serving as an ohmic contact layer with a rough surface, was deposited on the silver nanoparticle layer using a thermal annealing process. Comparing the optical performance with that of a conventional LED without silver nanoparticles confirms the superior performance of the surface-plasmon-enhanced LED. It exhibited more than 1.1 times the output power of the conventional LED at 350 mA and current-voltage characteristics as good as those of the conventional LED. Notably, the transmittance of silver nanoparticles on glass substrates is higher than 80%, indicating that the increase in output power is not due to reflection of light by the silver nanoparticle layer.
APA, Harvard, Vancouver, ISO, and other styles
44

Morsey, Mohamed. "Efficient Extraction and Query Benchmarking of Wikipedia Data." Doctoral thesis, 2013. https://ul.qucosa.de/id/qucosa%3A12247.

Full text
Abstract:
Knowledge bases are playing an increasingly important role for integrating information between systems and over the Web. Today, most knowledge bases cover only specific domains, they are created by relatively small groups of knowledge engineers, and it is very cost-intensive to keep them up-to-date as domains change. In parallel, Wikipedia has grown into one of the central knowledge sources of mankind and is maintained by thousands of contributors. The DBpedia (http://dbpedia.org) project makes use of this large collaboratively edited knowledge source by extracting structured content from it, interlinking it with other knowledge bases, and making the result publicly available. DBpedia had and has a great effect on the Web of Data and became a crystallization point for it. Furthermore, many companies and researchers use DBpedia and its public services to improve their applications and research approaches. However, the DBpedia release process is heavyweight and the releases are sometimes based on several months old data. Hence, a strategy to keep DBpedia always in synchronization with Wikipedia is highly desirable. In this thesis we propose the DBpedia Live framework, which reads a continuous stream of updated Wikipedia articles and processes it. DBpedia Live processes that stream on-the-fly to obtain RDF data and updates the DBpedia knowledge base with the newly extracted data. DBpedia Live also publishes the newly added/deleted facts in files, in order to enable synchronization between our DBpedia endpoint and other DBpedia mirrors. Moreover, the new DBpedia Live framework incorporates several significant features, e.g. abstract extraction, ontology changes, and changeset publication. Basically, knowledge bases, including DBpedia, are stored in triplestores in order to facilitate accessing and querying their respective data. Furthermore, triplestores constitute the backbone of increasingly many Data Web applications. It is thus evident that the performance of those stores is mission critical for individual projects as well as for data integration on the Data Web in general. Consequently, it is of central importance during the implementation of any of these applications to have a clear picture of the weaknesses and strengths of current triplestore implementations. We introduce a generic SPARQL benchmark creation procedure, which we apply to the DBpedia knowledge base. Previous approaches often compared relational databases and triplestores and, thus, settled on measuring performance against a relational database which had been converted to RDF by using SQL-like queries. In contrast to those approaches, our benchmark is based on queries that were actually issued by humans and applications against existing RDF data not resembling a relational schema. Our generic procedure for benchmark creation is based on query-log mining, clustering and SPARQL feature analysis. We argue that a pure SPARQL benchmark is more useful to compare existing triplestores and provide results for the popular triplestore implementations Virtuoso, Sesame, Apache Jena-TDB, and BigOWLIM. The subsequent comparison of our results with other benchmark results indicates that the performance of triplestores is by far less homogeneous than suggested by previous benchmarks. Further, one of the crucial tasks when creating and maintaining knowledge bases is validating their facts and maintaining the quality of their inherent data.
This task includes several subtasks, and in this thesis we address two of the major ones, namely fact validation and provenance, and data quality. The fact validation and provenance subtask aims at providing sources for facts in order to ensure the correctness and traceability of the provided knowledge. This subtask is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents, and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming, as the experts have to carry out several search processes and must often read several documents. We present DeFacto (Deep Fact Validation), an algorithm for validating facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of webpages as well as useful additional information, including a score for the confidence DeFacto has in the correctness of the input fact. The data quality maintenance subtask aims at evaluating and continuously improving the quality of the knowledge bases' data. We present a methodology for assessing the quality of knowledge bases' data, which comprises a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises the evaluation of a large number of individual resources, according to the quality problem taxonomy, via crowdsourcing. This process is accompanied by a tool wherein a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia.
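In the spirit of the query benchmarking described above, here is a minimal sketch that times one SPARQL query against the public DBpedia endpoint using the SPARQLWrapper library. The query and the timing are illustrative; this is not the benchmark framework itself.

    import time
    from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

    # Time a single SPARQL query against the live DBpedia endpoint.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?city ?pop WHERE {
            ?city a dbo:City ;
                  dbo:populationTotal ?pop .
            FILTER (?pop > 5000000)
        } LIMIT 10
    """)

    t0 = time.perf_counter()
    results = sparql.query().convert()
    elapsed = time.perf_counter() - t0
    rows = results["results"]["bindings"]
    print(f"{len(rows)} rows in {elapsed * 1000:.0f} ms")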
APA, Harvard, Vancouver, ISO, and other styles
45

Sharp, Stephen R. "Investigation of factors influencing chloride extraction efficiency during electrochemical chloride extraction from reinforcing concrete /." 2005. http://wwwlib.umi.com/dissertations/fullcit/3169692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Yung-Ming (黃永銘). "FDTD Simulation for Extraction Efficiency of Light Emitting Devices." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/82613205735150220485.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Photonics and Optoelectronics, academic year 95 (2006–07). In this thesis, we use the three-dimensional finite-difference time-domain (FDTD) method to simulate the extraction efficiency of light-emitting devices and improve that efficiency by applying photonic crystal structures. Light in light-emitting devices such as organic light-emitting diodes (OLEDs) or light-emitting diodes (LEDs) is confined when total internal reflection occurs due to the refractive index difference between layers. We simulate the behaviour of light in these devices and evaluate the extraction efficiency of each based on FDTD. The advantages of FDTD are its easy derivation and its convenience for complex structure design. However, some modification is essential for the metal cathode and the inner microstructures in OLEDs. We use the Drude model to approximate the behaviour of light in the metal and evaluate the energy penetrating to air using a near-to-far-field transformation and geometric optics. Parallel programming and alternating-direction-implicit differencing are also used to improve calculation efficiency, since FDTD simulation requires a huge amount of computational resources. To enhance the extraction efficiency of OLEDs, we modify the thickness of the OLED layers and insert grating structures between the layers; light is scattered to air by the grating structure. Finally, we apply photonic crystal structures to LEDs with different radius-to-period ratios and thicknesses. By manipulating the photonic bandgap, light is coupled to air, resulting in high extraction efficiency.
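The core update scheme behind FDTD can be shown in one dimension. The sketch below is a vacuum toy model of the Yee updates; it is my illustration, not the thesis's three-dimensional code with the Drude model and near-to-far-field transformation.

    import numpy as np

    # Minimal 1-D FDTD (Yee scheme) in vacuum with PEC boundaries and a
    # soft Gaussian source; fields are on staggered half-cell grids.

    n_cells, n_steps = 400, 800
    ez = np.zeros(n_cells)       # electric field
    hy = np.zeros(n_cells - 1)   # magnetic field, offset half a cell
    S = 0.5                      # Courant number (dt normalized to dx/c)

    for step in range(n_steps):
        hy += S * np.diff(ez)                         # update H from curl of E
        ez[1:-1] += S * np.diff(hy)                   # update E from curl of H
        ez[50] += np.exp(-((step - 60) / 20.0) ** 2)  # soft Gaussian source

    print(f"peak |Ez| after {n_steps} steps: {np.abs(ez).max():.3f}")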
APA, Harvard, Vancouver, ISO, and other styles
47

Chiang, Chih-Yin (江志胤). "Simulation of the light extraction efficiency of GaN LEDs." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/54357227828513943658.

Full text
Abstract:
Master's thesis, Southern Taiwan University of Science and Technology, Department of Electro-Optical Engineering, academic year 105 (2016–17). Light-emitting diodes (LEDs) have the advantages of high luminous efficiency, long life, environmental friendliness and power saving, and have opened a new era for lighting. The light extraction efficiency of an LED is crucial to its luminous efficiency. This thesis focuses on improving the light extraction efficiency of LEDs. We used the optical simulation software ASAP (Advanced Systems Analysis Program) to build models of several GaN LEDs, and employed the Monte Carlo ray tracing method to simulate the LED light paths. First, for flip-chip GaN LEDs, we study LEDs with different chip shapes, including square and triangular, and with different structures, including a sapphire substrate, a sapphire substrate with flip chip, and a GaN substrate with flip chip. In addition, we add hemispherical microstructures to these three structures to improve their light extraction efficiency. Finally, we propose a novel method, the source regeneration method, to study the behaviour of rays inside the LED in order to better understand light extraction from LEDs.
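As a sanity check on why extraction efficiency matters for GaN chips, here is a small Monte Carlo sketch of the escape-cone fraction at a single GaN/air interface. It is an idealized toy model with assumed refractive indices, not the ASAP simulation used in the thesis.

    import numpy as np

    # Monte Carlo estimate of the escape-cone fraction for an isotropic
    # source below one GaN/air interface (Fresnel losses ignored).

    rng = np.random.default_rng(1)
    n_gan, n_air = 2.45, 1.0                            # assumed indices
    cos_theta_c = np.sqrt(1.0 - (n_air / n_gan) ** 2)   # cos(critical angle)

    n_rays = 1_000_000
    cos_theta = rng.uniform(-1.0, 1.0, n_rays)   # isotropic ray directions
    escapes = cos_theta > cos_theta_c            # upward and inside the cone

    print(f"critical angle: {np.degrees(np.arccos(cos_theta_c)):.1f} deg")
    print(f"escape fraction (one surface): {escapes.mean():.2%}")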
APA, Harvard, Vancouver, ISO, and other styles
48

Meng, Hsin-Wei (孟欣薇). "Increasing GaN Blue LED Light Extraction Efficiency With TRIZ." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/09219717890272221911.

Full text
Abstract:
Master's thesis, Yuan Ze University, Department of Industrial Engineering and Management, academic year 96 (2007–08). As energy saving and global warming have become topics of concern around the world, the light-emitting diode (LED), with its benefits of environmental friendliness, tiny volume, long service life and low energy consumption, appears to be the new light source of the 21st century. Since the mid-1990s, the GaN blue LED has attracted great interest because of its potential for optoelectronic applications. One of the critical problems of a conventional GaN blue LED chip is increasing its light efficiency. With respect to LED technology, the variety of materials and the manufacturing variations are the key points. In this study, the contradiction matrix and the inventive principles of the TRIZ innovation process are employed. Contradictions caused by transition actions were formulated and resolved with the guidance of inventive principles that suggested conceptual guidelines for each technical contradiction. More than four useful ideas were developed with the aid of the inventive principles. Many of the ideas have proven to be excellent performers for light extraction, with up to a 46 percent increase on average, and it was found that brightness improvement and heat dissipation could be addressed simultaneously. For the innovation process to produce GaN blue LEDs, the target is to raise the light extraction efficiency.
APA, Harvard, Vancouver, ISO, and other styles
49

Dong, To Bao (蘇寶同). "Enhancing Light Extraction Efficiency in Organic Light-Emitting Diodes." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/wzb4kp.

Full text
Abstract:
Doctoral dissertation, National Chung Cheng University, Department of Physics, academic year 106 (2017–18). Organic light-emitting diodes (OLEDs), based on the principle of electroluminescence, have been considered a promising candidate for future flat panel displays and solid-state lighting. OLEDs have various advantages, including energy saving, ultra-thin form factor, large viewing angles, low operating voltage, rapid response time, and flexibility. However, there are still some drawbacks in device performance, such as lifetime (especially of blue organic films), cost of the manufacturing process, moisture sensitivity, and low light extraction efficiency due to light losses into the substrate mode, the waveguide mode and surface plasmon polaritons (SPPs). This thesis has focused on enhancing the light extraction from OLEDs using new, simple, cost-effective methods, which can be classified into two major approaches. (1) External extraction techniques (EETs), in which micro/nanostructures are incorporated on the air-side surface of the glass substrate. In our study, we fabricated convex microlens arrays (MLAs) or microporous polymer thin films (MPFs) and attached these films to the rear of the glass substrate of an OLED. The MLA structure was obtained by molding from breath figures of polystyrene (PS) molds. MPFs were fabricated by blending polymers with starch particles, spin-coating the blended polymer onto a polyethylene terephthalate (PET) substrate, and then removing the starch particles through an acid hydrolysis process. The fabrication methods of the MLAs and MPFs demonstrate facile, low-cost, large-area applicability. These structures scattered the photons emitted inside the OLED device at the glass-air interface and reduced the light trapped in the substrate mode. As a result, the light extraction efficiency of the OLED was enhanced about 1.5-fold. The color quality of white-light OLEDs can also be improved by attaching the MPFs. (2) Internal extraction techniques (IETs), which are applied inside the device stack and can recover the light lost to organic/ITO waveguide modes and surface plasmon polaritons. Corrugated OLEDs were fabricated on patterned substrates or patterned hole injection layers. Island or network structures were fabricated by a self-assembly technique based on the deliquescence of cesium chloride (CsCl) salt. These structures were pattern-transferred to the tungsten trioxide (WO3) hole injection layer by the lift-off method or to the glass substrate by reactive ion etching (RIE). The corrugated structures allowed Bragg scattering to extract the light trapped in the waveguide mode. Moreover, the quasi-periodic corrugation at the metal cathode provided an additional in-plane wave vector to fulfill the matching condition, so the light lost to SPPs is recovered by transforming the SPPs into free-space radiation. Therefore, the light output of the corrugated OLEDs was significantly improved; the maximum enhancement factors of the light extraction efficiency were 1.83- and 2.25-fold for the patterned WO3 hole injection layer and the patterned glass substrate, respectively. The light trapped in waveguide modes can also be reduced by replacing the high-refractive-index ITO anode with highly conductive, low-index transparent polymer anodes. OLEDs fabricated on a polymer anode displayed performance and lifetime superior to those of conventional OLEDs with an ITO anode.
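The SPP outcoupling condition mentioned above can be made concrete with a short calculation of the grating period that folds the SPP wavevector back into the light cone. The permittivities below are rough literature values, not the thesis's data.

    import numpy as np

    # First-order Bragg condition for radiating an SPP at an Ag/organic
    # interface: k_spp - m*2*pi/period = k0*sin(theta). For emission at
    # theta = 0 this gives period = m * lam / n_spp.

    lam = 520e-9          # vacuum wavelength (green OLED emission, assumed)
    eps_metal = -11.7     # approximate Re(eps) of Ag near 520 nm
    eps_org = 1.75 ** 2   # assumed organic-layer permittivity

    n_spp = np.sqrt(eps_metal * eps_org / (eps_metal + eps_org))
    for m in (1, 2):
        period = m * lam / n_spp   # grating period for normal emission
        print(f"m={m}: period ~ {period * 1e9:.0f} nm (n_spp = {n_spp:.2f})")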
APA, Harvard, Vancouver, ISO, and other styles
50

Jhan, Jia-Jhen (詹家甄). "Study on Light Extraction Efficiency of Light-Emitting Thyristor." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/bd8uba.

Full text
Abstract:
Master's thesis, National Chung Hsing University, Graduate Institute of Precision Engineering, academic year 106 (2017–18). In this research, a distributed Bragg reflector (DBR) and gallium phosphide (GaP) were applied to the light-emitting thyristor, thereby enhancing the characteristics of the conventional light-emitting thyristor. In order to investigate the effects of the DBR and GaP on the photoelectric properties, different kinds of devices were fabricated: one is a light-emitting thyristor with a DBR, another has a GaP epilayer, and the third integrates both the GaP and DBR structures. Notably, all of the device structures exhibit the characteristic S-shaped curve, which means they were successfully fabricated as thyristors. The GaP spreads the current in the thyristor uniformly and thus improves the photoelectric characteristics of the light-emitting thyristor. In addition, the GaP epilayer prevents the current crowding effect and thus mitigates heat accumulation in the light-emitting thyristor. Also, to enhance the output power, a light-emitting thyristor with a DBR structure was utilized in this study. The DBR reflects the emitted light back toward the surface and prevents the light from being absorbed by the GaAs substrate; the results show that this improves the light extraction efficiency. Comparing the thyristor with both DBR and GaP structures to the devices with the DBR only and with the GaP only, the output power increases by 47.34% and 68.84%, the EQE increases by 47.19% and 30.31%, and the LEE increases by 47.17% and 30.25%, respectively. According to the aforementioned results, the light-emitting thyristor with both DBR and GaP structures enhances current spreading, output power, and EQE.
APA, Harvard, Vancouver, ISO, and other styles