
Dissertations / Theses on the topic 'Singular-Value Decomposition (SVD)'


Consult the top 39 dissertations / theses for your research on the topic 'Singular-Value Decomposition (SVD).'


1

Ek, Christoffer. "Singular Value Decomposition." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-21481.

Full text
Abstract:
Digital information transmission is a growing field. Emails, videos and other media are transmitted around the world on a daily basis. Along with the growth of digital devices there is, in some cases, a great interest in keeping this information secure. In the field of signal processing a general concept is antenna transmission; the free space between an antenna transmitter and a receiver is an example of a system. In a rough environment, such as a room with reflections and independent electrical devices, there will be a lot of distortion in the system, and the transmitted signal may be distorted by the system characteristics and noise. System identification is another well-known concept in signal processing. This thesis focuses on system identification of unknown systems in a rough environment. It introduces mathematical tools from the field of linear algebra and applies them in signal processing. Mainly, the thesis focuses on a specific matrix factorization called the Singular Value Decomposition (SVD), which is used here to solve complicated matrix inverses and to identify systems. The thesis was formed and accomplished in collaboration with Combitech AB, whose expertise in the field of signal processing was of great help when putting the algorithm into practice. Using the well-known programming environment LabView, the mathematical tools were synchronized with the instruments that were used to generate the systems and signals.
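
The abstract does not include code, but the core trick it describes, using the SVD to form a stable pseudoinverse when identifying an unknown system, can be sketched in a few lines of NumPy (a minimal illustration; function and variable names are hypothetical):

```python
import numpy as np

def identify_system(X, y, rcond=1e-10):
    """Estimate coefficients h of y = X h + noise via a truncated-SVD
    pseudoinverse, so near-zero singular values cannot amplify the noise."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_inv = np.zeros_like(s)
    keep = s > rcond * s[0]                 # drop near-zero singular values
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ y))       # h = V S^+ U^T y

# toy usage: recover a 3-tap system from noisy measurements
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -0.5, 0.25]) + 0.01 * rng.standard_normal(100)
print(identify_system(X, y))                # close to [1.0, -0.5, 0.25]
```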
2

Jolly, Vineet Kumar. "Activity Recognition using Singular Value Decomposition." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/35219.

Full text
Abstract:
A wearable device that accurately records a user's daily activities is of substantial value. It can be used to enhance medical monitoring by maintaining a diary that lists what a person was doing and for how long. The design of a wearable system to record context such as activity recognition is influenced by a combination of variables. A flexible yet systematic approach for building a software classification environment according to a set of variables is described. The integral part of the software design is the use of a unique robust classifier that uses principal component analysis (PCA) through singular value decomposition (SVD) to perform real-time activity recognition. The thesis describes the different facets of the SVD-based approach and how the classifier inputs can be modified to better differentiate between activities. This thesis presents the design and implementation of a classification environment used to perform activity detection for a wearable e-textile system.
Master of Science
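
As a hedged sketch of the kind of PCA-through-SVD classifier this abstract describes (the thesis's actual feature pipeline is not reproduced here; all names are hypothetical), one common pattern is to learn a low-rank subspace per activity and classify new sensor windows by reconstruction error:

```python
import numpy as np

def fit_subspace(windows, k=3):
    """Learn a rank-k PCA subspace for one activity class.

    windows: (n_samples, n_features) matrix of training feature vectors.
    Returns the class mean and the top-k right singular vectors.
    """
    mean = windows.mean(axis=0)
    _, _, Vt = np.linalg.svd(windows - mean, full_matrices=False)
    return mean, Vt[:k]

def classify(x, models):
    """Assign x to the class whose subspace reconstructs it best.

    models: dict mapping activity label -> (mean, basis) from fit_subspace.
    """
    def residual(label):
        mean, basis = models[label]
        proj = basis.T @ (basis @ (x - mean))    # project onto the subspace
        return np.linalg.norm((x - mean) - proj)
    return min(models, key=residual)
```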
3

Renkjumnong, Wasuta. "SVD and PCA in Image Processing." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/math_theses/31.

Full text
Abstract:
The Singular Value Decomposition is one of the most useful matrix factorizations in applied linear algebra, and the Principal Component Analysis has been called one of the most valuable results of applied linear algebra. How and why principal component analysis is intimately related to the technique of singular value decomposition is shown. Their properties and applications are described. The assumptions behind these techniques, as well as possible extensions to overcome their limitations, are considered. This understanding leads to real-world applications, in particular image processing of neurons. Noise reduction and edge detection of neuron images are investigated.
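
The PCA-SVD relationship the abstract refers to can be made concrete in a few lines: the eigenvalues of the sample covariance matrix equal the squared singular values of the centered data matrix divided by n - 1. A minimal, self-contained NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 5))
Xc = X - X.mean(axis=0)                      # center the data

# PCA the classical way: eigendecomposition of the covariance matrix
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]      # sorted in descending order

# PCA via the SVD of the centered data matrix
s = np.linalg.svd(Xc, compute_uv=False)
print(np.allclose(eigvals, s**2 / (len(Xc) - 1)))   # True: identical spectra
```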
4

Haque, S. M. Rafizul. "Singular Value Decomposition and Discrete Cosine Transform based Image Watermarking." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5269.

Full text
Abstract:
Rapid evolution of digital technology has improved the ease of access to digital information, enabling reliable, faster and more efficient storage, transfer and processing of digital data. It also has the consequence of making the illegal production and redistribution of digital media easy and undetectable. Hence, the risk of copyright violation of multimedia data has increased due to the enormous growth of computer networks, which provide fast and error-free transmission of any unauthorized duplicate, and possibly manipulated, copy of multimedia information. One possible solution is to embed a secondary signal or pattern into the image that is not perceivable and is mixed so well with the original digital data that it is inseparable and remains unaffected by any kind of multimedia signal processing. This embedded secondary information is a digital watermark: in general, a visible or invisible identification code that may contain information about the intended recipient, the lawful owner or author of the original data, its copyright, etc., in the form of textual data or an image. In order to be effective for copyright protection, digital watermarks must be robust, i.e., difficult to remove from the object in which they are embedded, despite a variety of possible attacks. Several types of watermarking algorithms have been developed so far, each of which has its own advantages and limitations. Among these, Singular Value Decomposition (SVD) based watermarking algorithms have recently attracted researchers due to their simplicity and some attractive mathematical properties of the SVD. Here a number of pure and hybrid SVD-based watermarking schemes have been investigated, and finally an RST-invariant modified SVD and Discrete Cosine Transform (DCT) based algorithm has been developed. A preprocessing step before watermark extraction has been proposed which makes the algorithm resilient to geometric attack, i.e., RST attack. The performance of this watermarking scheme has been analyzed by evaluating the robustness of the algorithm against geometric attacks, including rotation, scaling and translation (RST), and some other attacks. Experimental results have been compared with an existing algorithm and seem promising.
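
For orientation, the classic pure-SVD embedding scheme of the kind this thesis surveys (not its final RST-invariant SVD-DCT algorithm) perturbs the cover image's singular values with the watermark; a hedged sketch, assuming square grayscale arrays of equal shape:

```python
import numpy as np

def embed(cover, watermark, alpha=0.05):
    """Embed a watermark by perturbing the cover's singular values
    (the classic pure-SVD scheme; the thesis combines SVD with DCT).
    Assumes cover and watermark are equal-sized square float arrays."""
    U, s, Vt = np.linalg.svd(cover, full_matrices=False)
    Uw, sw, Vwt = np.linalg.svd(np.diag(s) + alpha * watermark,
                                full_matrices=False)
    marked = U @ np.diag(sw) @ Vt
    return marked, (Uw, Vwt, s)             # keys needed for extraction

def extract(marked, keys, alpha=0.05):
    """Recover the watermark from a (possibly attacked) marked image."""
    Uw, Vwt, s = keys
    sm = np.linalg.svd(marked, compute_uv=False)
    D = Uw @ np.diag(sm) @ Vwt              # undo the second SVD
    return (D - np.diag(s)) / alpha
```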
5

Kaufman, Jason R. "Digital video watermarking using singular value decomposition and two-dimensional principal component analysis." Ohio : Ohio University, 2006. http://www.ohiolink.edu/etd/view.cgi?ohiou1141855950.

Full text
6

Brown, Michael J. "SINGULAR VALUE DECOMPOSITION AND 2D PRINCIPAL COMPONENT ANALYSIS OF IRIS-BIOMETRICS FOR AUTOMATIC HUMAN IDENTIFICATION." Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1149187904.

Full text
7

Chu, Yue. "SVD-BAYES: A SINGULAR VALUE DECOMPOSITION-BASED APPROACH UNDER BAYESIAN FRAMEWORK FOR INDIRECT ESTIMATION OF AGE-SPECIFIC FERTILITY AND MORTALITY." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1609638415015896.

Full text
8

Campbell, Kathlleen. "Extension of Kendall's tau Using Rank-Adapted SVD to Identify Correlation and Factions Among Rankers and Equivalence Classes Among Ranked Elements." Diss., Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/284578.

Full text
Abstract:
Statistics
Ph.D.
The practice of ranking objects, events, and people to determine relevance, importance, or competitive edge is ancient. Recently, the use of rankings has permeated into daily usage, especially in the fields of business and education. When determining the association among those creating the ranks (herein called sources), the traditional assumption is that all sources compare a list of the same items (herein called elements). In the twenty-first century, it is rare that any two sources choose identical elements to rank. Adding to this difficulty, the number of credible sources creating and releasing rankings is increasing. In the statistical literature, there is no current methodology that adequately assesses the association among multiple sources. We introduce rank-adapted singular value decomposition (R-A SVD), a new method that uses Kendall's tau as the underlying correlation method. We begin with P, a matrix of data ranks, and factor its covariance matrix K as K = cov(P) = V D^2 V^T. Here, V is an orthonormal basis for the rows that is useful in identifying when, and specifically which, sources agree on the rank order, and D is a diagonal matrix of eigenvalues in decreasing order. By analogy with the singular value decomposition (SVD), we define U^* = P V D^(-1). The largest eigenvalue is used to assess the overall association among the sources and is a conservative unbiased method comparable to Kendall's W. Anderson's test (1963) determines whether this association is significant and identifies the a significantly large eigenvalues of D. When one or more eigenvalues are significant, there is evidence that the association among the sources is significant; focusing on the a corresponding vectors of V identifies specifically which sources agree. When more than one set of sources is in agreement with one another, but not necessarily with another set, each agreeing group is considered a faction, and the a significant vectors of V provide insight into these factions. Using the a significant vectors of U^* provides different but equally important results. In many cases, the elements being ranked can be subdivided into equivalence classes, defined as subpopulations of ranked elements that are similar to one another but dissimilar from other classes. When these classes exist, U^* provides insight as to how many classes there are and which elements belong in each class. In summary, the R-A SVD method gives the user the ability to assess whether there is any underlying association among multiple rank sources. It then identifies when sources agree and allows for more useful and careful interpretation when analyzing rank data.
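
A hedged sketch of the construction described above, under the assumption that a Kendall's-tau correlation matrix stands in for cov(P) (the abstract names tau as the underlying correlation method; the dissertation's exact construction may differ):

```python
import numpy as np
from scipy.stats import kendalltau

def ra_svd(P):
    """Rank-adapted SVD following the abstract's recipe.

    P: (n_elements, n_sources) matrix of ranks, one column per source.
    Builds a Kendall's-tau matrix K, factors K = V D^2 V^T, and forms
    U* = P V D^(-1).  Returns (U*, diag of D, V).
    """
    n = P.shape[1]
    K = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            tau, _ = kendalltau(P[:, i], P[:, j])
            K[i, j] = K[j, i] = tau
    d2, V = np.linalg.eigh(K)                 # ascending eigenvalues
    d2, V = d2[::-1], V[:, ::-1]              # reorder to decreasing
    d = np.sqrt(np.maximum(d2, 1e-12))        # guard tiny negatives
    U_star = P @ V @ np.diag(1.0 / d)
    return U_star, d, V
```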
Temple University--Theses
9

Idrees, Zunera, and Eliza Hashemiaghjekandi. "Image Compression by Using Haar Wavelet Transform and Singular Value Decomposition." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-11467.

Full text
Abstract:
The rise in digital technology has also raised the use of digital images. Digital images require much storage space, and compression techniques are used to compress the data so that it takes up less storage space. In this regard wavelets play an important role. In this thesis, we studied the Haar wavelet system, which is a complete orthonormal system in L2(R). This system consists of the father wavelet φ and the mother wavelet ψ. The Haar wavelet transformation is an example of multiresolution analysis. Our purpose is to use the Haar wavelet basis to compress image data. The method of averaging and differencing is used to construct the Haar wavelet basis, and we have shown that the averaging and differencing method is an application of the Haar wavelet transform. After discussing compression using the Haar wavelet transform, we used another compression method based on the singular value decomposition. We used the mathematical software MATLAB to compress the image data by using the Haar wavelet transformation and the singular value decomposition.
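
The averaging-and-differencing construction mentioned in the abstract is easy to sketch; a minimal illustration (names hypothetical), with a thresholding step standing in for the compression:

```python
import numpy as np

def haar_step(v):
    """One level of averaging and differencing: pairwise averages keep a
    coarse copy of the signal, pairwise half-differences keep the detail."""
    pairs = v.reshape(-1, 2)
    averages = pairs.mean(axis=1)
    details = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return np.concatenate([averages, details])

def haar_compress(v, levels, threshold):
    """Transform, then zero out small detail coefficients to compress."""
    out = v.astype(float).copy()
    n = len(out)
    for _ in range(levels):
        out[:n] = haar_step(out[:n])
        n //= 2
    out[np.abs(out) < threshold] = 0.0        # discard negligible detail
    return out

print(haar_compress(np.array([9, 7, 3, 5, 6, 10, 2, 6]), levels=3, threshold=0.4))
```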
10

Gunyan, Scott Nathan. "An Examination into the Statistics of the Singular Vectors for the Multi-User MIMO Wireless Channel." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd539.pdf.

Full text
11

Koski, Antti E. "Rapid frequency estimation." Worcester, Mass. : Worcester Polytechnic Institute, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-032806-165036/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: DSS; ECM; SVD; Singular Value Decomposition; rapid frequency estimation; frequency estimation. Includes bibliographical references (leaves 174-177).
12

Odedo, Victor. "High resolution time reversal (TR) imaging based on spatio-temporal windows." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/high-resolution-time-reversal-tr-imaging-based-on-spatiotemporal-windows(f0589f73-901f-4de2-9886-7045b7f6cfd4).html.

Full text
Abstract:
Through-the-wall imaging (TWI) is crucial for various applications such as law enforcement, rescue missions and defense. TWI methods aim to provide detailed information about spaces that cannot be seen directly. Current state-of-the-art TWI systems utilise ultra-wideband (UWB) signals to simultaneously achieve wall penetration and high resolution. These TWI systems transmit signals and mathematically back-project the reflected signals received in order to image the scenario of interest. However, such systems are diffraction-limited and encounter problems due to multipath signals in the presence of multiple scatterers. Time reversal (TR) methods have become popular for remote sensing because they can take advantage of multipath signals to achieve superresolution (resolution that beats the diffraction limit). The Decomposition Of the Time-Reversal Operator (DORT, in its French acronym) and MUltiple SIgnal Classification (MUSIC) methods are both TR techniques which involve taking the Singular Value Decomposition (SVD) of the Multistatic Data Matrix (MDM), which contains the signals received from the target(s) to be located. The DORT and MUSIC imaging methods have generated a lot of interest due to their robustness and ability to locate multiple targets. However, these TR-based methods encounter problems when the targets are behind an obstruction, particularly when the properties of the obstruction are unknown, as is often the case in TWI applications. This dissertation introduces a novel total sub-MDM algorithm that uses the highly acclaimed MUSIC method to image targets hidden behind an obstruction and achieve superresolution. The algorithm utilises spatio-temporal windows to divide the full MDM into sub-MDMs; the summation of all images obtained from each sub-MDM gives a clearer image of a scenario than can be obtained using the full MDM. Furthermore, we propose a total sub-differential MDM algorithm that uses the MUSIC method to obtain images of moving targets that are hidden behind an obstructing material.
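
The total sub-MDM algorithm itself is specific to this dissertation, but its MUSIC building block is standard. A hedged free-space sketch (no wall model; function and variable names hypothetical): take the SVD of the MDM, treat the trailing singular vectors as the noise subspace, and image with the classical pseudospectrum:

```python
import numpy as np

def music_image(K, array_pts, grid_pts, wavelength, n_targets):
    """MUSIC imaging from a multistatic data matrix (MDM) K.

    K: (n_antennas, n_antennas) complex MDM at one frequency.
    array_pts: (n_antennas, 2) antenna positions; grid_pts: test points.
    Grid points whose free-space steering vector is orthogonal to the
    noise subspace produce sharp peaks in the returned image.
    """
    U, _, _ = np.linalg.svd(K)
    U_noise = U[:, n_targets:]                      # noise subspace
    k = 2 * np.pi / wavelength
    image = np.empty(len(grid_pts))
    for i, p in enumerate(grid_pts):
        d = np.linalg.norm(array_pts - p, axis=1)   # distances to antennas
        g = np.exp(1j * k * d) / d                  # free-space Green's fn
        g /= np.linalg.norm(g)
        image[i] = 1.0 / np.linalg.norm(U_noise.conj().T @ g)
    return image                                    # peaks at target locations
```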
13

Brundin, Michelle, Peter Morris, Gustav Åhlman, and Emil Rosén. "Implementation av webbsida för rekommendationssystem med användaruppbyggd databas." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-175489.

Full text
Abstract:
The goal of this project was to create a web-based, crowd-sourced, correlational database, that easily allowed users to submit objects and receive correlated objects as results. The webservice was created in the web development languages of HTML, CSS, PHP and Javscript, with MySQL to handle the database. Simultaneous development was kept in check with the aid of the source code management system GIT. Upon completion, the service contained several HTML-views, the ability to add and rate objects, a per-object dedicated page with information parsed from Wikipedia.org, and a view with objects ranked in accordance to the preferences specific to the current user. Roughly a month after the beginning of development, the website was publicly launched and promoted in order to collect data, and improvements were added to the website as needed. Two weeks after the public launch, the collected data was measured and analyzed. The algorithm proved effective and scalable, especially with the introduction of tags and simultaneous computation of object features.
14

Svoboda, Pavel. "Vyhledávání osob ve fotografii." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236716.

Full text
Abstract:
Face recognition within an image belongs to the field of computer vision, which provides the methods and algorithms for its implementation; some of them are described in this work. The whole process is split into three main phases: detection, alignment of the detected faces, and finally recognition. For every phase, the algorithms that are applied to the given problem, and that are still being developed today, are mentioned. The implementation is built on three main algorithms: AdaBoost to obtain the classifier for detection, a method of aligning the face by principal features, and the method of Eigenfaces for recognition. Besides the algorithms already mentioned, neural networks for detection, the Active Shape Models (ASM) algorithm for alignment, and the Active Appearance Model (AAM) for recognition are described theoretically. Finally, tables of data produced by the implemented system evaluate the implementation.
15

Kučaidze, Artiom. "Tinklalapio navigavimo asociacijų analizės ir prognozavimo modelis." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20090908_201802-74260.

Full text
Abstract:
In this work, drawing on information foraging theory, we develop a model for analyzing and predicting the navigation associations (the "scent") of a web site. The goal of the model is to simulate potential web-page users and their information foraging paths given a specific information need. The model is built by combining the LSA and SVD algorithms with correlation-coefficient calculations: the LSA algorithm is used to create semantic spaces, and the correlation values are used statistically. Together they make it possible to analyze the semantic similarity of words. The work identifies the main problems site visitors can face when forming navigation associations: competition between links, misleading links, and unfamiliar links. We demonstrate how the model recognizes and analyzes these problems.
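
A minimal sketch of the LSA ingredient described above, assuming a simple term-document count matrix (names hypothetical): project words into a k-dimensional semantic space via the SVD and compare them by cosine similarity:

```python
import numpy as np

def lsa_similarity(term_doc, term_index, w1, w2, k=2):
    """Cosine similarity of two words in a k-dimensional LSA semantic space.

    term_doc: (n_terms, n_docs) count matrix; rows become word vectors
    after projecting onto the top-k singular directions.
    term_index: dict mapping a word to its row in term_doc.
    """
    U, s, _ = np.linalg.svd(term_doc, full_matrices=False)
    space = U[:, :k] * s[:k]                   # word coordinates in LSA space
    a, b = space[term_index[w1]], space[term_index[w2]]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```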
16

Samadi, Afshin. "Large Scale Solar Power Integration in Distribution Grids : PV Modelling, Voltage Support and Aggregation Studies." Doctoral thesis, KTH, Elektriska energisystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154602.

Full text
Abstract:
Long-term support schemes for photovoltaic (PV) system installation have led to large numbers of PV systems being accommodated within load pockets in distribution grids. High penetrations of PV systems can cause new technical challenges, such as voltage rise due to reverse power flow during light-load, high-generation conditions, so new strategies are required to address them. Moreover, due to these changes in distribution grids, a different response behavior of the distribution grid on the transmission side can be expected; a new equivalent model of distribution grids with high penetration of PV systems is therefore needed for future power system studies. The thesis contributions lie in three parts. The first part of the thesis deals with PV modelling. A non-proprietary PV model of a three-phase, single-stage PV system is developed in PSCAD/EMTDC and PowerFactory. Three different reactive power regulation strategies are incorporated into the models, and their behavior is investigated in both simulation platforms using a distribution system with PV systems. In the second part of the thesis, the voltage rise problem is remedied by use of reactive power. On the other hand, with large numbers of PV systems in grids, unnecessary reactive power consumption by PV systems first increases total line losses, and second may jeopardize the stability of the network in the case of contingencies in the conventional power plants that supply reactive power. The thesis therefore investigates and develops novel schemes to reduce reactive power flows while keeping voltage within designated limits, via three different approaches: (i) decentralized voltage control to pre-defined set-points; (ii) a coordinated active power dependent (APD) voltage regulation Q(P) using local signals; and (iii) a multi-objective coordinated droop-based voltage (DBV) regulation Q(V) using local signals (a sketch of a droop characteristic follows below). In the third part of the thesis, gray-box load modeling is used to develop a new static equivalent model of a complex distribution grid with large numbers of PV systems embedded with voltage support schemes. In the proposed model, variations of voltage at the connection point drive variations of the model's active and reactive power. This model can simply be integrated into load-flow programs and replace the complex distribution grid, while still keeping the overall accuracy high. In conclusion, the thesis results demonstrate: i) rms-based simulations in PowerFactory provide results quite similar to those of time-domain instantaneous-value simulations in PSCAD; ii) decentralized voltage control to specific set-points through the PV systems in the distribution grid is fundamentally impossible due to the high level of voltage control interaction and directionality among the PV systems; iii) the proposed APD method can regulate the voltage under the steady-state voltage limit and consumes less total reactive power than the standard characteristic Cosφ(P) proposed by the German Grid Codes; iv) the proposed optimized DBV method can directly address voltage and successfully regulate it to the upper steady-state voltage limit with minimum reactive power consumption as well as line losses; v) it is beneficial to treat PV systems as a separate entity when deriving equivalents of distribution grids with a high density of PV systems.
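
As a hedged illustration of the droop-based Q(V) idea in approach (iii) above (breakpoints and limits below are illustrative placeholders, not the thesis's tuned parameters):

```python
def q_droop(v_pu, v_dead=(0.98, 1.02), v_lim=(0.94, 1.06), q_max=0.44):
    """Piecewise-linear Q(V) droop: zero reactive power inside the dead
    band, ramping linearly to +/- q_max (p.u.) at the voltage limits.
    All breakpoint values here are illustrative assumptions."""
    if v_pu <= v_dead[0]:       # undervoltage: inject reactive power
        return min(q_max, q_max * (v_dead[0] - v_pu) / (v_dead[0] - v_lim[0]))
    if v_pu >= v_dead[1]:       # overvoltage: absorb reactive power
        return -min(q_max, q_max * (v_pu - v_dead[1]) / (v_lim[1] - v_dead[1]))
    return 0.0
```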

The Doctoral Degrees issued upon completion of the programme are issued by Comillas Pontifical University, Delft University of Technology and KTH Royal Institute of Technology. The invested degrees are official in Spain, the Netherlands and Sweden, respectively. QC 20141028

17

Golub, Frank. "An Estimation Technique for Spin Echo Electron Paramagnetic Resonance." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1372095953.

Full text
18

Akkari, Samy. "Contrôle d'un système multi-terminal HVDC (MTDC) et étude des interactions entre les réseaux AC et le réseau MTDC." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC069/document.

Full text
Abstract:
HVDC transmission systems are widely used worldwide, mostly in the form of back-to-back and point-to-point HVDC links, using either thyristor-based LCCs or IGBT-based VSCs. With the recent deployment of the INELFE HVDC link between France and Spain, and the commissioning in China of a three-terminal HVDC transmission system using Modular Multilevel Converters (MMCs), a modular design of voltage source converter, the focus of the scientific community has shifted to the analysis and control of MMC-based HVDC transmission systems. In this thesis, average value models of both a standard 2-level VSC and an MMC are proposed, and the most interesting difference between the two converter technologies, the control of the stored energy in the MMC, is emphasised and explained. These models are then linearised, expressed in state-space form and validated by comparing their behaviour to that of more detailed models under EMT programs. These state-space representations are then used in the modelling of HVDC transmission systems, either point-to-point or Multi-Terminal HVDC (MTDC). A modal analysis is performed on an HVDC link for both 2-level VSCs and MMCs; the modes of the two systems are specified and compared, and the independent control of the DC voltage and the DC current in the case of an MMC is illustrated. This analysis is extended to a 5-terminal HVDC system in order to perform a stability analysis, understand the origin of the system dynamics and identify the dominant DC voltage mode that dictates the DC voltage response time. Using the Singular Value Decomposition method on the MTDC system, a proper design of the voltage-droop gains of the controllers is then achieved so that system operation is ensured within physical constraints, such as the maximum DC voltage deviation and the maximum admissible current in the power electronics. Finally, a supplementary droop, the frequency-droop control, is proposed so that MTDC systems also participate in the frequency regulation of the onshore grids. However, this controller interacts with the voltage-droop controller; this interaction is mathematically quantified and a corrected frequency-droop gain is proposed. The control is then illustrated with an application to the physical converters of the Twenties project mock-up.
19

Jalboub, Mohamed K. "Investigation of the application of UPFC controllers for weak bus systems subjected to fault conditions. An investigation of the behaviour of a UPFC controller: the voltage stability and power transfer capability of the network and the effect of the position of unsymmetrical fault conditions." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5699.

Full text
Abstract:
In order to identify the weakest bus in a power system, so that the Unified Power Flow Controller (UPFC) could be connected to it, an investigation of static and dynamic voltage stability is presented. Two stability indices, one static and one dynamic, are proposed in the thesis, and Multi-Input Multi-Output (MIMO) analysis is used for the dynamic stability analysis. Results based on the Western Systems Coordinating Council (WSCC) 3-machine, 9-bus test system and the IEEE 14-bus Reliability Test System (RTS) show that these indices detect, with a good degree of accuracy, the weakest bus, the weakest line and the voltage stability margin of the test system before it suffers voltage collapse. Recently, Flexible Alternating Current Transmission Systems (FACTS) have become significant due to the need to strengthen existing power systems. The UPFC has been identified in the literature as the most comprehensive and complex FACTS device that has emerged for the control and optimization of power flow in AC transmission systems. Significant research has been done on the UPFC. However, the extent of the capability of a UPFC connected to the weakest bus to maintain power flows under fault conditions, not only in the line where it is installed but also in adjacent parallel lines, remains to be studied. In the literature, it has normally been assumed that the UPFC is disconnected during a fault period. This investigation shows that fault conditions can affect the UPFC significantly, even when they occur at buses far away in the power system; this forms the main contribution presented in this thesis. The impact of the UPFC in minimizing the disturbances in voltages, currents and power flows under fault conditions is investigated. The WSCC 3-machine, 9-bus test system is used to investigate the effect of unsymmetrical fault type and position on the operation of the UPFC controller in accordance with G59 protection, stability and regulation requirements. Results show that it is necessary to disconnect the UPFC controller from the power system during unsymmetrical fault conditions.
Libyan Government
20

Reninger, Pierre-Alexandre. "Méthodologie d'analyse de levés électromagnétiques aéroportés en domaine temporel pour la caractérisation géologique et hydrogéologique." Phd thesis, Université d'Orléans, 2012. http://tel.archives-ouvertes.fr/tel-00802341.

Full text
Abstract:
This doctoral thesis addresses several methodological aspects of the analysis of airborne time-domain electromagnetic (TDEM) surveys for detailed geological and hydrogeological interpretation. The work is based on a survey carried out in the Courtenay area (north-east of the Centre region of France), characterized by a karstified chalk plateau (the Trois Fontaines karst) covered by weathering clays and alluvium. First, a method for filtering TDEM data using the Singular Value Decomposition (SVD) was developed. Rigorously adapting this technique to TDEM measurements made it possible to separate the noise, which could then be mapped, from the "geological signal", greatly reducing the processing time. The method also proved effective for quickly obtaining preliminary geological information about the area. Next, a cross-analysis between the resistivity model obtained by inverting the filtered data and the available boreholes was carried out, improving the geological and hydrogeological knowledge of the area. An undulation feature separating two chalk deposits and the subsurface fault network could be imaged, providing a geological framework for the Trois Fontaines karst. Finally, a new method combining borehole information with the slopes of the EM resistivity model yielded a model of the top of the chalk of unprecedented accuracy. Together, this work provides a solid framework for future geo-environmental studies using airborne TDEM data, even in anthropized areas.
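
The SVD filtering idea described above, separating a spatially coherent "geological signal" from incoherent noise, can be sketched as truncated-SVD denoising of the sounding matrix (a minimal illustration; the thesis's adaptation to TDEM data is more involved):

```python
import numpy as np

def svd_filter(data, rank):
    """Split a data matrix into a low-rank 'signal' part and a residual.

    data: (n_soundings, n_time_gates) matrix of TDEM decays.  Keeping the
    first few singular components retains spatially coherent geology;
    the residual collects incoherent noise that can then be mapped.
    """
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    signal = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return signal, data - signal
```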
21

Jalboub, Mohamed. "Investigation of the application of UPFC controllers for weak bus systems subjected to fault conditions : an investigation of the behaviour of a UPFC controller : the voltage stability and power transfer capability of the network and the effect of the position of unsymmetrical fault conditions." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5699.

Full text
Abstract:
In order to identify the weakest bus in a power system, so that the Unified Power Flow Controller (UPFC) could be connected to it, an investigation of static and dynamic voltage stability is presented. Two stability indices, one static and one dynamic, are proposed in the thesis, and Multi-Input Multi-Output (MIMO) analysis is used for the dynamic stability analysis. Results based on the Western Systems Coordinating Council (WSCC) 3-machine, 9-bus test system and the IEEE 14-bus Reliability Test System (RTS) show that these indices detect, with a good degree of accuracy, the weakest bus, the weakest line and the voltage stability margin of the test system before it suffers voltage collapse. Recently, Flexible Alternating Current Transmission Systems (FACTS) have become significant due to the need to strengthen existing power systems. The UPFC has been identified in the literature as the most comprehensive and complex FACTS device that has emerged for the control and optimization of power flow in AC transmission systems. Significant research has been done on the UPFC. However, the extent of the capability of a UPFC connected to the weakest bus to maintain power flows under fault conditions, not only in the line where it is installed but also in adjacent parallel lines, remains to be studied. In the literature, it has normally been assumed that the UPFC is disconnected during a fault period. This investigation shows that fault conditions can affect the UPFC significantly, even when they occur at buses far away in the power system; this forms the main contribution presented in this thesis. The impact of the UPFC in minimizing the disturbances in voltages, currents and power flows under fault conditions is investigated. The WSCC 3-machine, 9-bus test system is used to investigate the effect of unsymmetrical fault type and position on the operation of the UPFC controller in accordance with G59 protection, stability and regulation requirements. Results show that it is necessary to disconnect the UPFC controller from the power system during unsymmetrical fault conditions.
22

Belica, Michal. "Metody sumarizace dokumentů na webu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236386.

Full text
Abstract:
The work deals with automatic summarization of documents in HTML format, with Czech chosen as the language of the web documents. The project is focused on algorithms of text summarization, and also covers document preprocessing for summarization and the conversion of text into a representation suitable for summarization algorithms. General text mining is briefly discussed, but the project is mainly focused on automatic document summarization. Two simple summarization algorithms are introduced; the main attention is then paid to an advanced algorithm that uses latent semantic analysis. The result of the work is the design and implementation of a summarization module for the Python language. The final part of the work contains an evaluation of the summaries generated by the implemented summarization methods and the author's subjective comparison of them.
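
One plausible reading of the LSA-based summarizer mentioned above is the classic Gong and Liu selection rule; a hedged sketch (the thesis's implementation may well differ):

```python
import numpy as np

def lsa_summarize(term_sentence, n_pick):
    """Gong & Liu-style LSA summarization: for each of the top singular
    directions ('topics'), pick the sentence with the largest component.

    term_sentence: (n_terms, n_sentences) weighted term-sentence matrix.
    Returns up to n_pick sentence indices, in document order.
    """
    _, _, Vt = np.linalg.svd(term_sentence, full_matrices=False)
    picked = []
    for row in Vt:                         # topics in decreasing importance
        idx = int(np.argmax(np.abs(row)))
        if idx not in picked:
            picked.append(idx)
        if len(picked) == n_pick:
            break
    return sorted(picked)
```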
23

Ayvazyan, Vigen. "Etude de champs de température séparables avec une double décomposition en valeurs singulières : quelques applications à la caractérisation des propriétés thermophysiques des matérieux et au contrôle non destructif." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14671/document.

Full text
Abstract:
Infrared thermography is a widely used method for the characterization of thermophysical properties of materials. The advent of laser diodes, which are handy, inexpensive, and available with a broad spectrum of characteristics, extends the metrological possibilities of infrared cameras and provides a combination of new powerful tools for thermal characterization and non-destructive evaluation. However, this new dynamic has also brought numerous difficulties that must be overcome, such as the processing of large volumes of noisy data and the low sensitivity of such data to the estimated parameters. This requires revisiting the existing methods of signal processing and adopting new sophisticated mathematical tools for data compression and the processing of relevant information. The new strategies consist in using orthogonal transforms of the signal as prior data-compression tools, which allow noise reduction and control over it. Correlation analysis, based on the local correlation study between partial derivatives of the experimental signal, completes these new strategies. A theoretical analogy in Fourier space has been developed in order to better understand the "physical" meaning of the modal approaches. The response to an instantaneous point source of heat has been revisited both numerically and experimentally. By using separable temperature fields, a new inversion technique based on a double singular value decomposition of the experimental signal has been introduced. In comparison with previous methods, it takes into account two- or three-dimensional heat diffusion and therefore offers a better exploitation of the spatial content of infrared images. Numerical and experimental examples have allowed us to validate, in a first approach, this new estimation method for the characterization of longitudinal thermal diffusivities. Non-destructive testing applications based on the new technique have also been introduced. An old issue, which consists in determining the initial temperature field from noisy data, has been approached in a new light. The need to know the thermal diffusivities of the orthotropic medium, and to take into account heat transfer that is often three-dimensional, makes this complex to handle. The implementation of the double singular value decomposition allowed us to achieve interesting results given the simplicity of the method. Indeed, modal approaches are statistical methods based on processing large volumes of data, and are accordingly more robust with respect to measurement noise, as was observed.
24

Yang, Shun-Lin, and 楊順霖. "Development of TFT-LCD Automatic mura Inspection Using Singular Value Decomposition (SVD)." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/qbc62y.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Automation Technology
Academic year 95 (2006/07)
A novel TFT-LCD defect inspection algorithm is proposed for automatic detection of macro defects (mura) based on a background reconstruction concept. Efficient and accurate surface defect detection on TFT-LCD panels has become crucial to the success of LCD panel manufacturing. Detecting mura defects in an LCD panel can be difficult due to the non-uniform brightness of the background and the slight difference in brightness between the defect region and the background. As such, a singular value decomposition (SVD) based background reconstruction algorithm is developed to establish the background image without mixing in mura defects. To extract mura defects, an adaptive threshold strategy is then employed to separate defects from the background image. Experimental tests on real mura defects verified that the proposed algorithm has a superior capability for detecting them. The method of detecting mura defects in this work is divided into three stages. In the first stage, SVD is used to separate the inspected image into two singular vector matrices and a diagonal matrix, called the singular value matrix, that contains the image's energy. In the second stage, we reconstruct the background image without mura defects using the largest singular value. In the third stage, we segment the defect image using the maximum entropy threshold method, with the aim of minimizing the overkill rate. We adopted SEMI's mura quantification index, SEMU, to measure mura severity and used it to eliminate ghost defects. At the same time, some noise has to be filtered away from the result when the background image is reconstructed with the SVD method; to overcome this problem, this work proposes a modification of the SVD singular vectors and experiments on natural mura defect inspection. Experimental results show that mura defects can be detected successfully and efficiently.
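
A minimal sketch of the three-stage idea, with a simple k-sigma rule standing in for the maximum-entropy threshold and without the SEMU ghost-defect filtering (all names hypothetical):

```python
import numpy as np

def detect_mura(image, bg_rank=1):
    """Background-reconstruction mura detection in the spirit of this thesis.

    A low-rank (largest singular value) reconstruction approximates the
    smooth panel background; the residual highlights candidate mura.
    """
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    background = (U[:, :bg_rank] * s[:bg_rank]) @ Vt[:bg_rank]
    residual = image - background
    thresh = residual.mean() + 3.0 * residual.std()   # k-sigma stand-in
    return residual > thresh                          # boolean defect mask
```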
25

Mcneill, Daniel Kyle. "Evolutionary and Iterative Training of Recurrent Neural Networks via the Singular Value Decomposition." Doctoral thesis, 2021. http://hdl.handle.net/11589/216712.

Full text
Abstract:
This work examines the use of the singular value decomposition (SVD) from linear algebra as a tool for the analysis of neural networks, as well as its use to speed up or even limit learning (to prevent over-fitting or maintain stability, for example) and as the basis for iterative and evolutionary learning algorithms. What we present here are methods of taking the inherent structure of the transformation into account, even while using evolutionary methods, by means of the singular value decomposition. Of course, preserving some structure of the transformations is not completely new, whether this means preserving sparseness or some type of invariance, as in the shift invariance of a convolutional layer. The methods we present allow us to train recurrent neural networks for a variety of problems with changes through time, including price prediction, predictive maintenance and model identification, and automatic control. Our method does not rely on back propagation and can be used in either supervised or unsupervised settings. Further, our models can be easily initialized by using either domain knowledge or (linear) least squares to "pre-program" the model and begin optimization in an area of the solution space likely to yield results. Finally, given a neural network previously trained in one domain, our models and methods allow reuse and quick retraining for a similar domain, by preserving the inherent structure of the transformation at the heart of the neural network.
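
As a hedged toy instance of learning in "SVD space" as described above (the thesis's evolutionary operators are certainly richer than this), one can mutate only the singular values of a weight matrix while freezing its singular vectors:

```python
import numpy as np

def mutate_in_svd_space(W, sigma_step=0.05, rng=None):
    """Perturb only the singular values of a weight matrix, keeping its
    singular vectors (and hence the transformation's structure) fixed.
    Clipping at zero keeps the factorization valid; capping the largest
    value would additionally bound the spectral norm for stability."""
    rng = rng or np.random.default_rng()
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_new = np.clip(s + sigma_step * rng.standard_normal(len(s)), 0.0, None)
    return U @ np.diag(s_new) @ Vt
```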
26

Chiang, Tse-Yu, and 江則佑. "Abbe-SVD: Compact Abbe’s Kernel Generation for Microlithography Aerial Image Simulation using Singular-Value Decomposition Method." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/17771551777073303525.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
Academic year 96 (2007/08)
At the present day, the key and critical part of industrial IC manufacture is optical lithography, which duplicates the design patterns on the mask onto the wafer by light exposure. However, when the mask patterns become so small that they approach the light's wavelength, the image quality and resolution on the wafer degrade owing to diffraction. Therefore, resolution enhancement techniques with remarkable skills and algorithms are proposed, and they need to be verified by simulations or experimental results. The most direct and accurate simulation is the imaging of patterns on the wafer: accurate imaging simulation can show, by computer, the exposed and unexposed regions after photolithography. Existing commercial and academic OPC simulators, which compute in the frequency domain with Abbe's method applied to a partially coherent light source, take several days for computation with hundreds of computers working together at the same time. Hence, we propose to generate a compact Abbe's kernel for microlithography aerial image simulation using the singular value decomposition method. The advantages of this approach are as follows. First, since not all of Abbe's kernels have a critical effect on the aerial image, we can eliminate them with the SVD to generate a compact kernel set, speeding up simulation while keeping a user-specified accuracy. Second, with an advanced concentric-circle discretization of the source, equivalent kernels with higher precision are produced. Finally, we can use the compact Abbe's kernel to build a look-up table (LUT) to further speed up simulation. In this thesis, we introduce basic knowledge of optical lithography in chapter 1 and coherent imaging in optics with analytical solutions in chapter 2. The partially coherent light concept, advanced illumination apertures and Abbe's method are introduced in chapter 3, where our Abbe-SVD algorithm and the advanced source discretization are also derived. Experimental results and comparisons are shown in chapter 4, and the conclusion is given in chapter 5.
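
The kernel-compaction step can be sketched as a truncated SVD over the stack of per-source-point kernels, keeping just enough orthogonal kernels to capture a target energy fraction (a hedged illustration; the thesis's discretization and selection criteria may differ):

```python
import numpy as np

def compact_kernels(kernels, energy=0.99):
    """Compress a stack of Abbe kernels into a few orthogonal ones.

    kernels: (n_source_points, n_pixels) stack, one flattened kernel per
    discretized source point.  The SVD yields orthogonal kernels ranked
    by contribution; keep just enough to capture `energy` of the total.
    """
    U, s, Vt = np.linalg.svd(kernels, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(frac, energy)) + 1
    return s[:k, None] * Vt[:k], k          # k weighted orthogonal kernels
```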
27

Chiang, Tse-Yu. "Abbe-SVD: Compact Abbe's Kernel Generation for Microlithography Aerial Image Simulation using Singular-Value Decomposition Method." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2306200804053500.

Full text
28

Sukkari, Dalal. "High Performance Polar Decomposition on Manycore Systems and its application to Symmetric Eigensolvers and the Singular Value Decomposition." Diss., 2019. http://hdl.handle.net/10754/652466.

Full text
Abstract:
The Polar Decomposition (PD) of a dense matrix is an important operation in linear algebra, as well as a building block for solving the Symmetric Eigenvalue Problem (SEP) and computing the Singular Value Decomposition (SVD). It can be directly calculated through the SVD itself, or iteratively using the QR Dynamically-Weighted Halley (QDWH) algorithm. The former is difficult to parallelize due to the preponderant number of memory-bound operations during the bidiagonal reduction. The latter is an iterative method, which performs more floating-point operations than the SVD approach but exposes more parallelism at the same time. Looking at the roadmap of hardware technology scaling, algorithms performing floating-point operations on locally cached data should be favored over those requiring expensive horizontal data movement. In this context, this thesis investigates new high-performance algorithmic designs of the QDWH algorithm, originally introduced by Nakatsukasa et al. [1, 2], to compute the PD. Our algorithmic contributions include mixed precision techniques, task-based formulations, and parallel asynchronous executions. Moreover, by making the PD competitive, its application to the SEP and the SVD becomes practical. In particular, we introduce for the first time new algorithms for partial SVD decomposition using QDWH. By the same token, we extend QDWH to support partial eigendecomposition for the SEP. We present new high-performance implementations of the QDWH-based algorithms relying on fine-grained computations, which allows exploiting the sparsity of the underlying data structure. To demonstrate performance efficiency, portability and scalability, we conduct benchmarking campaigns on some of the latest shared- and distributed-memory systems. Our QDWH-based algorithm implementations outperform the state-of-the-art numerical libraries by up to 2.8x and 12x on shared and distributed memory, respectively. The task-based QDWH has been integrated into the Chameleon library (https://gitlab.inria.fr/solverstack/chameleon) for support on shared-memory systems with hardware accelerators. It is also currently being used by astronomers from the Subaru telescope located at the summit of Mauna Kea, Hawaii, USA. The distributed-memory software library of QDWH and its SVD extension are freely available under the modified-BSD license at https://github.com/ecrc/qdwh.git and https://github.com/ecrc/ksvd.git, respectively. Both software libraries have been integrated into the Cray Scientific numerical library LibSci v17.11.1 and v19.02.1.
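
For reference, the non-iterative SVD route to the polar decomposition mentioned above fits in a few lines (QDWH replaces this with a dynamically weighted Halley iteration that exposes more parallelism):

```python
import numpy as np

def polar_via_svd(A):
    """Polar decomposition A = U_p @ H computed directly from the SVD.

    U_p has orthonormal columns (the closest orthogonal matrix to A);
    H = V S V^T is symmetric positive semidefinite.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_p = U @ Vt
    H = Vt.T @ np.diag(s) @ Vt
    return U_p, H
```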
29

Hinga, Mark Brandon. "Using parallel computation to apply the singular value decomposition (SVD) in solving for large Earth gravity fields based on satellite data." Thesis, 2004. http://hdl.handle.net/2152/1190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Hinga, Mark Brandon Tapley Byron D. "Using parallel computation to apply the singular value decomposition (SVD) in solving for large Earth gravity fields based on satellite data." 2004. http://wwwlib.umi.com/cr/utexas/fullcit?p3143269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Atemnkeng, Tabi Rosy Christy. "Estimation of Longevity Risk and Mortality Modelling." Master's thesis, 2022. http://hdl.handle.net/10362/135573.

Full text
Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management
Previous mortality models failed to account for improvements in human mortality rates; as a result, human life expectancy was generally underestimated. Declining mortality and increasing life expectancy (longevity) profoundly alter the population age distribution, and this demographic transition has drawn considerable attention from pension and annuity providers. Concerns have been expressed about the implications of increased life expectancy for government spending on old-age support. The goal of this paper is to lay out a framework for measuring, understanding, and analyzing longevity risk, with a focus on defined pension plans. Lee and Carter proposed a widely used mortality forecasting model in 1992. The study examines how well the Lee-Carter model performs for the female and male populations of the selected country (France) from 1816 to 2018. The Singular Value Decomposition (SVD) method is used to estimate the parameters of the LC model. The resulting mortality table is then used to assess future improvements in mortality and life expectancy, taking mortality assumptions into account, to see whether pension funds and annuity providers are exposed to longevity risk. Mortality assumptions are predicted death rates based on a mortality table; the two types considered are mortality at birth and mortality in old age. Longevity risk must be managed effectively by pension and annuity providers. To mitigate this risk, pension providers must factor in future improvements in mortality and life expectancy, as mortality rates tend to decrease over time. The findings show that failing to account for future improvements in mortality results in an expected provision shortfall. Protection mechanisms and policy recommendations for managing longevity risk can help mitigate the financial impact of an unexpected increase in longevity.
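As a hedged illustration of the SVD estimation step of the Lee-Carter model mentioned above, the following NumPy sketch fits log m(x,t) = a_x + b_x k_t on a synthetic age-by-year matrix; the data, sizes and constants are invented, not the French dataset used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
ages, years = 20, 60
log_m = (-8 + 0.08 * np.arange(ages)[:, None]      # age effect
         - 0.02 * np.arange(years)[None, :]        # mortality improvement
         + 0.05 * rng.standard_normal((ages, years)))

a_x = log_m.mean(axis=1)                  # average age pattern
M = log_m - a_x[:, None]                  # centered matrix
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# First singular triplet with the usual constraints sum(b_x) = 1, sum(k_t) = 0.
b_x = U[:, 0] / U[:, 0].sum()
k_t = s[0] * Vt[0] * U[:, 0].sum()
k_t -= k_t.mean()                         # holds up to rounding after centering

explained = s[0] ** 2 / (s ** 2).sum()
print(f"first component explains {explained:.1%} of the variance")
```

The product b_x k_t is exactly the leading rank-one term of the SVD, so the first singular triplet carries the shared mortality trend that the model extrapolates.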
APA, Harvard, Vancouver, ISO, and other styles
32

Šikorský, Tomáš. "Studium chirálních vlastností supramolekulárních komplexů." Master's thesis, 2011. http://www.nusl.cz/ntk/nusl-296381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Suresh, V. "Image Structures For Steganalysis And Encryption." Thesis, 2010. http://etd.iisc.ernet.in/handle/2005/2273.

Full text
Abstract:
In this work we study two aspects of image security: improper usage and illegal access of images. In the first part we present our results on steganalysis, protection against improper usage of images. In the second part we present our results on image encryption, protection against illegal access of images. Steganography is the collective name for methodologies that allow the creation of invisible, and hence secret, channels for information transfer. Steganalysis, the counter to steganography, is a collection of approaches that attempt to detect and quantify the presence of hidden messages in cover media. First we present our studies on stego-images using features developed for data stream classification, with the aim of making qualitative assessments of the effect of steganography on the lower-order bit planes (LSBs) of images. These features are effective in classifying different data streams. Using them, we study the randomness properties of image and stego-image LSB streams and observe that data stream analysis techniques are inadequate for steganalysis purposes. This motivates steganalytic techniques that go beyond LSB properties, and we then present our steganalytic approach, which takes such properties into account. In one such approach, we perform steganalysis by quantifying the effect of perturbations caused by mild image processing operations (zoom-in/out, rotation, distortions) on stego-images. We show that this approach works both in detecting and in estimating the presence of stego-content for a particularly difficult steganographic technique known as LSB matching steganography. Next, we present our image encryption techniques. Encryption approaches used for text data are usually unsuited to encrypting images (and multimedia objects in general). The reasons are that, unlike text, the volume to be encrypted can be huge for images, leading to increased computational requirements, and that text-style encryption renders images incompressible, resulting in poor use of bandwidth. These issues are overcome by designing image encryption approaches that obfuscate the image by intelligently reordering the pixels, or that encrypt only parts of a given image to render it imperceptible. The obfuscated or partially encrypted image is still amenable to compression. Efficient image encryption schemes ensure that the obfuscation is not compromised by the inherent correlations present in the image, and that the unencrypted portions of the image do not provide information about the encrypted parts. In this work we present two approaches for efficient image encryption. First, we utilize the correlation-preserving properties of Hilbert space-filling curves to reorder images in such a way that the image is obfuscated perceptually, without compromising the compressibility of the output image. We show experimentally that this approach leads to both perceptual security and perceptual encryption. We then show that the space-filling-curve approach also enables more efficient partial encryption, wherein only the salient parts of the image are encrypted, thereby reducing the encryption load. In our second approach, we show that the Singular Value Decomposition (SVD) of images is useful for image encryption by way of mismatching the unitary matrices resulting from the decomposition, and that the images resulting from the mismatching operations are perceptually secure.
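The unitary-mismatching idea in the second encryption approach can be sketched in a few lines of NumPy. This is our simplified reading with random stand-in images; the thesis's actual scheme may differ. Each image is rebuilt with the other image's left singular vectors, so neither output is perceptually meaningful on its own, and the true unitary factors act as the key material.

```python
import numpy as np

rng = np.random.default_rng(3)
img1 = rng.integers(0, 256, (128, 128)).astype(float)   # stand-in images
img2 = rng.integers(0, 256, (128, 128)).astype(float)

U1, s1, V1t = np.linalg.svd(img1)
U2, s2, V2t = np.linalg.svd(img2)

cipher1 = (U2 * s1) @ V1t        # img1's content under img2's basis
cipher2 = (U1 * s2) @ V2t

# Decryption with the keyed unitary factors: undo U2, then reapply U1.
restored1 = U1 @ (U2.T @ cipher1)
print(np.allclose(restored1, img1))
```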
APA, Harvard, Vancouver, ISO, and other styles
34

Carrelli, David John. "Utilising Local Model Neural Network Jacobian Information in Neurocontrol." Thesis, 2006. http://hdl.handle.net/10539/1815.

Full text
Abstract:
Student Number : 8315331 - MSc dissertation - School of Electrical and Information Engineering - Faculty of Engineering and the Built Environment
In this dissertation an efficient algorithm to calculate the derivative of the network output with respect to its inputs is derived for axis-orthogonal Local Model (LMN) and Radial Basis Function (RBF) networks. A new recursive Singular Value Decomposition (SVD) adaptation algorithm, which attempts to circumvent many of the problems found in existing recursive adaptation algorithms, is also derived. Code listings and simulations are presented to demonstrate how the algorithms may be used in on-line adaptive neurocontrol systems. Specifically, the control techniques known as series inverse neural control and instantaneous linearization are highlighted. The presented material illustrates how the approach enhances the flexibility of LMN networks, making them suitable for use in both direct and indirect adaptive control methods. By incorporating this ability into LMN networks, an important characteristic of Multi Layer Perceptron (MLP) networks is obtained whilst retaining the desirable properties of the RBF and LMN approach.
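As a hedged sketch of the kind of Jacobian information involved (for a plain Gaussian RBF network, not the dissertation's exact axis-orthogonal LMN formulation), the following NumPy snippet computes the analytic derivative of the network output with respect to its inputs and verifies it against finite differences; all centers, widths and weights are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
C = rng.standard_normal((10, 3))    # 10 Gaussian centers in R^3
w = rng.standard_normal(10)         # output weights
sigma = 0.8

def rbf(x):
    phi = np.exp(-np.sum((x - C) ** 2, axis=1) / (2 * sigma ** 2))
    return w @ phi

def rbf_jacobian(x):
    # d/dx of sum_i w_i * exp(-||x - c_i||^2 / (2 sigma^2))
    phi = np.exp(-np.sum((x - C) ** 2, axis=1) / (2 * sigma ** 2))
    return (w * phi) @ (C - x) / sigma ** 2   # shape (3,)

x = rng.standard_normal(3)
eps = 1e-6
fd = np.array([(rbf(x + eps * e) - rbf(x - eps * e)) / (2 * eps)
               for e in np.eye(3)])
print(np.allclose(rbf_jacobian(x), fd, atol=1e-6))
```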
APA, Harvard, Vancouver, ISO, and other styles
35

Joshi, Champa. "Understanding Spatio-Temporal Variability and Associated Physical Controls of Near-Surface Soil Moisture in Different Hydro-Climates." Thesis, 2013. http://hdl.handle.net/1969.1/149547.

Full text
Abstract:
Near-surface soil moisture is a key state variable of the hydrologic cycle and plays a significant role in the global water and energy balance by affecting several hydrological, ecological, meteorological, geomorphological, and other natural processes in the land-atmosphere continuum. The presence of soil moisture in the root zone is vital for the crop and plant life cycle. Soil moisture distribution is highly non-linear across time and space. Various geophysical factors (e.g., soil properties, topography, vegetation, and weather/climate) and their interactions control the spatio-temporal evolution of soil moisture at various scales, and understanding these interactions is crucial for characterizing the soil moisture dynamics occurring in the vadose zone. This dissertation focuses on understanding the spatio-temporal variability of near-surface soil moisture and the associated physical controls across varying measurement supports (point scale and passive microwave airborne/satellite remote sensing footprint scale), spatial extents (field, watershed, and regional scale), and changing hydro-climates. Various analysis techniques (e.g., time stability, geostatistics, Empirical Orthogonal Function, and Singular Value Decomposition) have been employed to characterize near-surface soil moisture variability and the role of the contributing physical controls across space and time. The findings of this study can be helpful in several areas of hydrological research and application, such as validation/calibration and downscaling of remote sensing data products, planning and designing effective soil moisture monitoring networks and field campaigns, improving the performance of soil moisture retrieval algorithms, flood/drought prediction, climate forecast modeling, and agricultural management practices.
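As a brief illustration of the Empirical Orthogonal Function technique named above (our synthetic example, not the study's data), EOF analysis amounts to an SVD of the space-time anomaly matrix: the left singular vectors give spatial patterns and the right singular vectors their temporal amplitudes.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sites, n_days = 40, 200
pattern = np.sin(np.linspace(0, np.pi, n_sites))     # one dominant spatial mode
signal = np.outer(pattern, np.sin(np.linspace(0, 8, n_days)))
sm = signal + 0.1 * rng.standard_normal((n_sites, n_days))  # stand-in field

anom = sm - sm.mean(axis=1, keepdims=True)           # remove site means
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

var_frac = s ** 2 / np.sum(s ** 2)
eof1, pc1 = U[:, 0], s[0] * Vt[0]                    # leading spatial/temporal mode
print(f"EOF1 explains {var_frac[0]:.1%} of the anomaly variance")
```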
APA, Harvard, Vancouver, ISO, and other styles
36

Τρικοίλης, Ιωάννης. "Εύρεση γεωμετρικών χαρακτηριστικών ερυθρών αιμοσφαιρίων από εικόνες σκεδασμένου φωτός." Thesis, 2010. http://nemertes.lis.upatras.gr/jspui/handle/10889/3696.

Full text
Abstract:
In this thesis we study and implement methods for estimating the geometrical features of human red blood cells from simulated light-scattering images produced by a He-Ne laser beam at 632.8 nm. The first chapter introduces the properties and characteristics of red blood cells, describes various erythrocyte abnormalities, and reviews the detection methods used to date. The second chapter presents the properties of electromagnetic radiation, describes the scattering phenomenon, and states the direct problem of EM scattering from human erythrocytes. The third chapter consists of two parts. The first part analyses the theory of artificial neural networks and describes radial basis function (RBF) neural networks; it then gives the theoretical and mathematical background of the feature extraction methods used, namely the Singular Value Decomposition (SVD) algorithm, the Angular Radial Transform (ART) and Gabor filters. The second part describes the solution of the inverse scattering problem: the SVD image compression algorithm, the ART shape descriptor and a Gabor-filter texture descriptor are applied to extract the geometrical characteristics, and an RBF neural network is used to classify the erythrocytes. In the fourth and final chapter the methods are tested and evaluated, and the experimental results and conclusions drawn in the course of this thesis are summarised.
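One plausible reading of the SVD-based feature extraction step, sketched below with synthetic data (the thesis's actual pipeline may differ), is to use the leading normalised singular values of a scattering image as a compact feature vector that would then feed the RBF classifier.

```python
import numpy as np

rng = np.random.default_rng(6)

def svd_features(image, k=10):
    # Leading singular values, normalised, as a compact descriptor.
    s = np.linalg.svd(image, compute_uv=False)
    return s[:k] / s.sum()

img = np.abs(rng.standard_normal((64, 64)))   # stand-in scattering image
feat = svd_features(img)                      # would feed the RBF classifier
print(feat.shape, feat[:3])
```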
APA, Harvard, Vancouver, ISO, and other styles
37

LANTERI, ALESSANDRO. "Novel methods for Intrinsic dimension estimation and manifold learning." Doctoral thesis, 2016. http://hdl.handle.net/11573/905425.

Full text
Abstract:
One of the most challenging problems in modern science is how to deal with the huge amount of data that today's technologies provide, and several difficulties may arise. For instance, the number of samples may be too large, and the stream of incoming data may be faster than the algorithms needed to process it. Another common problem is that as the data dimension grows, so does the volume of the space, leading to a sparsification of the available data. This can cause problems in statistical analysis, since the amount of data needed to support a conclusion often grows exponentially with the dimension. This problem is commonly referred to as the Curse of Dimensionality, and it is one of the reasons why high-dimensional data cannot be analyzed efficiently with traditional methods. Classical methods for dimensionality reduction, like principal component analysis and factor analysis, may fail due to a nonlinear structure of the data, and in recent years several methods for nonlinear dimensionality reduction have been proposed. A general way to model a high-dimensional data set is to represent the observations as noisy samples drawn from a probability distribution mu in D-dimensional real coordinate space. It has been observed that the essential support of mu can often be well approximated by low-dimensional sets, which can be assumed to be low-dimensional manifolds embedded in the ambient dimension D. A manifold is a topological space which globally may not be Euclidean, but which in a small neighborhood of each point behaves like a Euclidean space. In this setting we call the dimension of the manifold the intrinsic dimension, which is usually much lower than the ambient dimension D. Roughly speaking, the intrinsic dimension of a data set can be described as the minimum number of variables needed to represent the data without significant loss of information. In this work we propose different methods aimed at estimating the intrinsic dimension. The first method models the neighbors of each point as stochastic processes, in such a way that a closed-form likelihood function can be written. This leads to a closed-form maximum likelihood estimator (MLE) for the intrinsic dimension, with all the good features that an MLE can have. The second method is based on a multiscale singular value decomposition (MSVD) of the data: it performs singular value decomposition (SVD) on neighborhoods of increasing size and estimates the intrinsic dimension by studying the behavior of the singular values as the radius of the neighborhood increases. We also introduce an algorithm to estimate the model parameters when the data are assumed to be sampled around an unknown number of planes with different intrinsic dimensions, embedded in a high-dimensional space. This kind of model has many applications in computer vision and pattern recognition, where the data can be described by multiple linear structures or need to be clustered into groups that can be represented by low-dimensional hyperplanes. The algorithm relies on both MSVD and spectral clustering, and it is able to estimate the number of planes, their dimensions, and their arrangement in the ambient space. Finally, we propose a novel method for manifold reconstruction based on a multiscale approach, which approximates the manifold from coarse to fine scales with increasing precision. The basic idea is to produce, at a generic scale j, a piecewise linear approximation of the manifold using a collection of low-dimensional planes, and to use those planes to create clusters for the data. At scale j + 1, each cluster is independently approximated by another collection of low-dimensional planes, and the process is iterated until the desired precision is achieved. This algorithm is fast because it is highly parallelizable, and its computational time is independent of the sample size. Moreover, the method automatically constructs a tree structure for the data, a feature that can be particularly useful in applications which require an a priori tree data structure. The aim of the methods proposed in this work is to provide algorithms to learn and estimate the underlying structure of high-dimensional datasets.
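As a hedged sketch of the multiscale SVD idea (our code, with synthetic data and a crude largest-gap heuristic, not the thesis's estimator), one can sample a noisy low-dimensional plane, run local SVDs at growing radii around a point, and count the singular values that scale with the radius rather than stalling at the noise floor:

```python
import numpy as np

# A noisy 2-d plane embedded in R^10; sizes, radii and the gap
# heuristic are invented for illustration.
rng = np.random.default_rng(7)
D, d, n = 10, 2, 5000
basis = np.linalg.qr(rng.standard_normal((D, d)))[0]
X = rng.uniform(-1, 1, (n, d)) @ basis.T + 0.01 * rng.standard_normal((n, D))

x0 = X[0]
dist = np.linalg.norm(X - x0, axis=1)
for r in (0.2, 0.3, 0.5):
    nbrs = X[dist < r]
    s = np.linalg.svd(nbrs - nbrs.mean(axis=0), compute_uv=False)
    s /= np.sqrt(len(nbrs))                    # per-point singular values
    est = int(np.argmax(s[:-1] / s[1:])) + 1   # largest spectral gap
    print(f"r={r}: {len(nbrs)} neighbours, estimated dimension {est}")
```

At small radii the noise dominates and the gap is unreliable; as the radius grows, exactly d singular values grow linearly with r, which is the signature the multiscale analysis exploits.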
APA, Harvard, Vancouver, ISO, and other styles
38

Ghous, Hamid. "Building a robust clinical diagnosis support system for childhood cancer using data mining methods." Thesis, 2016. http://hdl.handle.net/10453/90061.

Full text
Abstract:
University of Technology Sydney. Faculty of Engineering and Information Technology.
Progress in understanding core pathways and processes of cancer requires thorough analysis of many coding and noncoding regions of the genome. Data mining and knowledge discovery have been applied to datasets across many industries, including bioinformatics. However, data mining faces a major challenge in its application to bioinformatics: the diversity and dimensionality of biomedical data. The term 'big data' was applied to the clinical domain by Yoo et al. (2014), specifically referring to single nucleotide polymorphism (SNP) and gene expression data. This research thesis focuses on three different types of data: gene annotations, gene expression and single nucleotide polymorphisms. Genetic association studies have led to the discovery of single genetic variants associated with common diseases. However, complex diseases are not caused by a single gene acting alone but are the result of complex linear and non-linear interactions evident across different types of microarray data; in this scenario, a single gene can have a small effect on a disease but cannot be its major cause. For this reason there is a critical need for new approaches that take into account the linear and non-linear gene-gene and patient-patient interactions that can eventually help in the diagnosis and prognosis of complex diseases. Several computational methods have been developed to deal with gene annotations, gene expression and SNP data of complex diseases. However, analysing every gene expression and SNP profile, and finding gene-to-gene relationships, is computationally infeasible because of the high dimensionality of the data. In addition, many computational methods have problems with scaling to large datasets and with overfitting. Therefore, there is growing interest in applying data mining and machine learning approaches to understand different types of microarray data. Cancer is the disease that kills the most children in Australia (Torre et al., 2015). Within this thesis, the focus is on childhood Acute Lymphoblastic Leukaemia (ALL), the most common childhood malignancy, accounting for 24% of all new cancers occurring in children within Australia (Coates et al., 2001). According to the American Cancer Society (2016), a total of 6,590 cases of ALL were diagnosed across all age groups in the USA, with 1,430 deaths expected in 2016. The project uses different data mining and visualisation methods applied to different types of biological data: gene annotations, gene expression and SNPs. This thesis focuses on three main issues in genomic and transcriptomic data studies: (i) proposing, implementing and evaluating a novel framework to find functional relationships between genes from gene-annotation data; (ii) identifying an optimal dimensionality reduction method to classify between relapsed and non-relapsed ALL patients using gene expression; and (iii) proposing, implementing and evaluating a novel feature selection approach to identify related metabolic pathways in ALL. This thesis proposes, implements and validates an efficient framework to find functional relationships between genes based on gene-annotation data. The framework is built on a binary matrix and a proximity matrix, where the binary matrix contains information relating genes to their functionality, while the proximity matrix shows the similarity between different features.
The framework retrieves gene functionality information from Gene Ontology (GO), a publicly available database, and visualises the functionally related genes using singular value decomposition (SVD). From a simple list of gene annotations, this thesis retrieves the features (i.e., Gene Ontology terms) related to each gene and calculates a similarity measure based on the distance between terms in the GO hierarchy; these distances, derived from the hierarchical structure of Gene Ontology, are converted into similarity measures. In this framework, two different similarity measures are applied: (i) a hop-based similarity measure, where the distance is calculated from the number of links between two terms; and (ii) an information-content similarity measure, where the similarity between terms is based on the probability of GO terms in the gene dataset. The framework also identifies which of these two similarity measures performs better at identifying functional relationships between genes. The singular value decomposition method is used for visualisation, with the advantage that multiple types of relationships can be visualised simultaneously (gene-to-gene, term-to-term and gene-to-term). In this thesis a novel framework is also developed for visualising patient-to-patient relationships using gene expression values. The framework builds on the random forest feature selection method to filter gene expression values and then applies different linear and non-linear machine learning methods to them. The methods used in this framework are Principal Component Analysis (PCA), Kernel Principal Component Analysis (kPCA), Local Linear Embedding (LLE), Stochastic Neighbour Embedding (SNE) and Diffusion Maps. The framework compares these machine learning methods by tuning different parameters to find the optimal method among them. The area under the curve (AUC) is used to rank the results, and an SVM is used to classify between relapsed and non-relapsed patients. The final section of the thesis proposes, implements and validates a framework to find active metabolic pathways in ALL using single nucleotide polymorphism (SNP) profiles. The framework is based on the random forest feature selection method. A dataset of ALL patients and healthy controls is assembled, and random forest is then applied with different parameters to find highly ranked SNPs. The credibility of the model is assessed based on the error rate of the confusion matrix and kappa values. The selected high-ranked SNPs are used to retrieve metabolic pathways related to ALL from the KEGG metabolic pathways database. The methodologies and approaches presented in this thesis emphasise the critical role that different types of microarray data play in understanding complex diseases like ALL. The availability of flexible frameworks for disease diagnosis and prognosis, as proposed in this thesis, will play an important role in understanding the genetic basis of common complex diseases. This thesis contributes to knowledge in two ways: (i) by providing novel data mining and visualisation frameworks to handle biological data; and (ii) by providing novel visualisations for microarray data to increase understanding of disease.
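To illustrate the SVD co-visualisation idea on a toy scale (the binary matrix below is an arbitrary stand-in, not real annotation data), genes and GO terms can be projected into the same two-dimensional plane, so that functionally related genes land near the terms they share:

```python
import numpy as np

# Tiny stand-in binary gene-by-GO-term annotation matrix.
B = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)   # 4 genes x 5 GO terms

U, s, Vt = np.linalg.svd(B, full_matrices=False)
gene_coords = U[:, :2] * s[:2]      # genes in the 2-D latent plane
term_coords = Vt[:2].T * s[:2]      # GO terms in the same plane

for g, xy in enumerate(gene_coords):
    print(f"gene {g}: ({xy[0]:+.2f}, {xy[1]:+.2f})")
```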
APA, Harvard, Vancouver, ISO, and other styles
39

Pani, Jagdeep. "Provable Methods for Non-negative Matrix Factorization." Thesis, 2016. http://hdl.handle.net/2005/2739.

Full text
Abstract:
Nonnegative matrix factorization (NMF) is an important data-analysis problem which concerns factoring a given d x n matrix A with nonnegative entries into matrices B and C, where B and C are d x k and k x n with nonnegative entries. It has numerous applications including object recognition, topic modelling, hyperspectral imaging, music transcription, etc. In general, NMF is intractable, and several heuristics exist to solve it. Recently there has been interest in investigating conditions under which NMF can be tractably recovered. We note that existing attempts make unrealistic assumptions, and the associated algorithms often tend not to be scalable. In this thesis, we make three major contributions. First, we formulate a model of NMF with assumptions which are natural and a substantial weakening of separability. Unlike requiring a bound on the error in each column of (A - BC), as was done in much of the previous work, our assumptions are about aggregate errors; namely, the spectral norm of (A - BC), i.e. ||A - BC||_2, should be low. This is a much weaker error assumption, and the associated B, C are much more resilient than in existing models. Second, we describe a robust polynomial-time SVD-based algorithm, UTSVD, with realistic provable error guarantees, which can handle higher levels of noise than previous algorithms. Indeed, we show experimentally that existing NMF models, which are based on separability assumptions, degrade much faster than UTSVD in the presence of noise. Furthermore, when the data has dominant features, UTSVD significantly outperforms existing models; on real-life datasets we again see a similar outperformance of UTSVD on clustering tasks. Finally, under a weaker model, we prove a robust version of uniqueness of NMF, where again the word "robust" refers to realistic error bounds.
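Since UTSVD itself is the thesis's contribution and is not reproduced here, the hedged baseline below shows the standard Lee-Seung multiplicative updates for A close to BC with nonnegative factors, tracking the spectral-norm error ||A - BC||_2 that the model above bounds; the sizes and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
d, n, k = 60, 80, 5
A = rng.random((d, k)) @ rng.random((k, n))      # exactly rank-k, nonnegative

B = rng.random((d, k))
C = rng.random((k, n))
eps = 1e-12                                      # guards divisions by zero
for _ in range(500):
    # Classic multiplicative updates for the Frobenius objective;
    # nonnegativity is preserved automatically.
    C *= (B.T @ A) / (B.T @ B @ C + eps)
    B *= (A @ C.T) / (B @ C @ C.T + eps)

print("spectral error ||A - BC||_2:", np.linalg.norm(A - B @ C, 2))
```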
APA, Harvard, Vancouver, ISO, and other styles