To see other types of publications on this topic, follow the link: Z Algorithm.

Theses on the topic "Z Algorithm"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the top 50 theses for your research on the topic "Z Algorithm".

Next to every source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Cardelino, Carlos Antonio. « A parallel algorithm for solving a complex function f(z) = 0 ». Thesis, Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/8147.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
2

Kluska, Martin. « Získávání znalostí z procesních logů ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399172.

Texte intégral
Résumé :
This Master's thesis describes knowledge discovery from process logs using process mining algorithms. The chosen algorithms, which aim to create a process model based on event log analysis, are described in detail. The goal is to design components able to import the process and run simulations. Results from these components can be used for short-term planning.
Styles APA, Harvard, Vancouver, ISO, etc.
3

Šrutová, Martina. « Diagnostika ventrikulárních tachykardií z elektrokardiografického záznamu ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218715.

Texte intégral
Résumé :
The aim of this thesis is the diagnosis of ventricular tachycardias, fibrillations and flutters from the electrocardiogram. These heart rhythm disturbances rank among the life-threatening arrhythmias. This work presents its own method of automatic detection, designed for an ECG Holter monitoring system. The proposed algorithm is based on detection in the spectral domain, supported by detection in the time domain. The results show the discrimination of the arrhythmias from normal sinus rhythm and from noise. The method is tested on ECG records from the AHA Database (American Heart Association) and from the MIT-BIH Malignant Ventricular Arrhythmia Database.
Styles APA, Harvard, Vancouver, ISO, etc.
4

Malengret, Jean-Claude. « A 3-phase Z-source inverter driven by a novel hybrid switching algorithm ». Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/14698.

Texte intégral
Résumé :
A 3-phase Z-source inverter has been researched, designed, simulated, built and tested. The purpose of the inverter is to deliver 3-phase 400 VAC from a DC supply that can vary over a range of 20 to 70 Vdc. This is done with a Z-source inverter topology, which is a single-conversion method with no additional DC to DC boost converter. A novel DSP control algorithm allows the inverter to: run Space Vector Pulse Width Modulation (SV-PWM) for maximum DC bus voltage utilization while boosting the DC bus during zero space vector states using shoot-through; transition seamlessly between modulation control and combined modulation / shoot-through control; and optimise efficiency and DC bus utilisation using Hybrid Space Vector Boost Pulse Width Modulation (HSVB PWM), which is unique to this dissertation. Such a system is particularly suited to fuel cell and especially wind turbine applications, where the DC bus voltage varies over a wide range and a DC to DC buck/boost converter would otherwise be needed to regulate the DC bus and maintain a steady 3-phase sinusoidal output. A further application could be a general-purpose 3-phase inverter capable of operating on different standard DC bus voltages (e.g. 24, 36, 48 VDC). The benefits of a Z-source topology for the above purposes are a reduction in high-power semiconductor components (e.g. power MOSFETs), a reduction in switching losses and inherent shoot-through protection. Furthermore, the inverter is more robust in the sense that it is not vulnerable to spurious shoot-through, which could be disastrous in the case of a traditional voltage-fed inverter.
Styles APA, Harvard, Vancouver, ISO, etc.
5

Chmelíková, Lucie. « Bezkontaktní měření tepové frekvence z obličeje ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241972.

Texte intégral
Résumé :
This thesis deals with the study of contactless and noninvasive methods for the estimation of heart rate. The contactless measurement is based on capturing a person's face with a video camera; the heart rate is then estimated from the sequence of images. The theoretical part describes the heart rate and the methods used to estimate it from colour changes in the face. It also contains testing of tracking algorithms. The practical part deals with the user interface of a program for contactless heart rate measurement and its software implementation. The thesis also contains a statistical evaluation of the program's functionality.
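For orientation, the colour-based estimation mentioned above usually reduces to tracking the mean skin colour of the face region frame by frame and finding the dominant frequency in the cardiac band. The snippet below is only a generic sketch of that last step; the green_mean signal and fps are assumed inputs, not code or data from the thesis:

```python
import numpy as np

def estimate_heart_rate(green_mean, fps):
    """Estimate heart rate (beats per minute) from the mean green-channel
    value of a face region sampled once per video frame."""
    signal = green_mean - np.mean(green_mean)          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)             # plausible cardiac band: 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# toy example: a 1.2 Hz (72 bpm) oscillation sampled at 30 fps for 10 s
t = np.arange(0, 10, 1 / 30)
print(estimate_heart_rate(100 + 0.5 * np.sin(2 * np.pi * 1.2 * t), fps=30))
```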
Styles APA, Harvard, Vancouver, ISO, etc.
6

Rybka, Ondřej. « Problém plnění palet a využití jedné z jeho heuristik při rozmístění zboží ve skladu ». Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-76813.

Texte intégral
Résumé :
This work concerns new bounds, heuristics, algorithms and mathematical models for the pallet loading problem (PLP). We describe these computational methods and find out whether they can be used in practice. We maximize the number of boxes placed on rectangular pallets in a particular warehouse using a chosen heuristic. Every box has a rectangular shape with the same length and width and is placed fully on the pallet. A box can be rotated by 90 degrees, and its sides must lie parallel to the sides of the pallet. All instances are set in the model (X, Y, a, b), where X is the length and Y the width of the pallet, and a the length and b the width of the box.
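As a rough illustration of the kind of heuristic the thesis evaluates, the sketch below counts boxes in a simple two-block guillotine layout for an instance (X, Y, a, b); it is a generic textbook-style heuristic, not the specific one chosen in the thesis:

```python
def boxes_per_layer(X, Y, a, b):
    """Two-block guillotine heuristic: fill a main block with boxes in one
    orientation and one leftover strip with boxes in the other orientation."""
    best = 0
    for p, q in ((a, b), (b, a)):                        # orientation of the main block
        if p > X or q > Y:
            continue
        cols, rows = X // p, Y // q
        main = cols * rows
        strip_right = ((X - cols * p) // q) * (Y // p)   # leftover strip along X
        strip_top = (X // q) * ((Y - rows * q) // p)     # leftover strip along Y
        best = max(best, main + strip_right, main + strip_top)
    return best

# e.g. a 1200 x 800 mm pallet and 300 x 200 mm boxes -> 16 boxes per layer
print(boxes_per_layer(1200, 800, 300, 200))
```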
Styles APA, Harvard, Vancouver, ISO, etc.
7

Chlup, Jiří. « Optimalizace expedice zboží z regálového systému logistického skladu ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228828.

Texte intégral
Résumé :
The submitted diploma thesis solves the problem of optimizing goods dispatch from the rack system of a logistics warehouse. It concerns a practical task: optimizing the route through the rack system and evaluating individual orders to create optimal dispatching batches. The existing dispatching system, the use of resources and the creation of the time schedule are described first. A description of general approaches to finding the shortest way through a graph via appointed nodes follows. The chosen solution and its practical implementation are introduced afterwards. Comparative tests against the existing system and their evaluation are presented at the end of the thesis.
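For illustration only, a generic combination of Dijkstra shortest paths with a greedy nearest-pick ordering is sketched below; the graph, weights and the greedy rule are invented for the example and do not reproduce the solution implemented in the thesis:

```python
import heapq
from collections import defaultdict

def dijkstra(graph, source):
    """Shortest distances from source in a weighted graph given as
    {node: [(neighbour, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def greedy_pick_route(graph, depot, picks):
    """Order the pick locations by repeatedly walking to the nearest unvisited one."""
    route, current, remaining, total = [depot], depot, set(picks), 0.0
    while remaining:
        dist = dijkstra(graph, current)
        nxt = min(remaining, key=lambda n: dist.get(n, float("inf")))
        total += dist[nxt]
        route.append(nxt)
        remaining.discard(nxt)
        current = nxt
    return route, total

# tiny aisle graph: A-B-C in one aisle, D reachable from B
g = defaultdict(list)
for u, v, w in [("A", "B", 2), ("B", "C", 2), ("B", "D", 1)]:
    g[u].append((v, w)); g[v].append((u, w))
print(greedy_pick_route(g, "A", ["C", "D"]))
```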
Styles APA, Harvard, Vancouver, ISO, etc.
8

Smékal, Luděk. « Získávání znalostí z textových dat ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412756.

Texte intégral
Résumé :
This MSc thesis deals with so-called data mining, i.e. obtaining data or information from databases that are not directly visible but are accessible by using special algorithms. The thesis mainly aims at classifying documents by a selected method within the scope of a digital library. The selected method is based on sets of items (the "itemsets method"). This method extends the application field of the Apriori algorithm, originally designed for processing transaction databases and generating sets of frequent items.
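A minimal, unoptimised sketch of the Apriori-style frequent itemset search that the "itemsets method" builds on is shown below; the toy document collection is invented, and the classification step on top of the itemsets is not shown:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all itemsets (as frozensets) whose support meets min_support."""
    transactions = [set(t) for t in transactions]
    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)
    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    level = {s for s in items if support(s) >= min_support}
    k = 1
    while level:
        frequent.update({s: support(s) for s in level})
        k += 1
        # candidate k-itemsets: unions of frequent (k-1)-itemsets, then support pruning
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = {c for c in candidates if support(c) >= min_support}
    return frequent

docs = [{"data", "mining", "text"}, {"data", "mining"}, {"text", "library"}, {"data", "text"}]
print(apriori(docs, min_support=0.5))
```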
Styles APA, Harvard, Vancouver, ISO, etc.
9

Semerád, Lukáš. « Generování kryptografického klíče z biometrických vlastností oka ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236038.

Texte intégral
Résumé :
The main topic of the thesis is the creation of formulas for the amount of information entropy in the biometric characteristics of the iris and retina. This field of biometrics has not been studied yet, so the thesis tries to initiate research in this direction. The thesis also discusses the historical context of security and identification based on the biometric characteristics of a human being, with an overlap into the potential usage of iris and retina biometrics. Daugman's algorithm for converting an iris image into a binary code which can be used as a cryptographic key is discussed in detail. An application implementing this conversion is also part of the thesis.
Styles APA, Harvard, Vancouver, ISO, etc.
10

Smékal, Ondřej. « Restaurace obrazových dat z optické koherenční tomografie ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219497.

Texte intégral
Résumé :
Restoration of image data has become an essential part of the processing of medical images obtained by any imaging system, and the same applies to optical coherence tomography (OCT). The aim of this work is, first, to study restoration methods; second, to describe the representation of data from optical coherence tomography and to discuss which deconvolution-based restoration methods could potentially find application in OCT processing; and third, to create a program implementing the OCT data restoration process in the MATLAB environment, followed by a discussion of the effectiveness of the presented solutions.
Styles APA, Harvard, Vancouver, ISO, etc.
11

Funiak, Martin. « Klasifikace testovacích manévrů z letových dat ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-264978.

Texte intégral
Résumé :
A flight data recorder is a device designed for recording flight data from various sensors on board an aircraft. The analysis of flight data plays an important role in the development and testing of avionics. Testing and evaluation of aircraft characteristics are often carried out using test manoeuvres. The data measured during one flight are stored in a single flight record, which may contain several test manoeuvres. The goal of this work is to identify elementary test manoeuvres from the measured flight data. The theoretical part describes flight manoeuvres and the format of the measured flight data. The analytical part covers research on classification based on statistics and probability theory, needed to understand Gaussian mixture models. The thesis presents an implementation in which Gaussian mixture models are used to classify test manoeuvres. The proposed solution was tested on data obtained from a flight simulator and from a real aircraft. Gaussian mixture models turned out to provide a suitable solution for this task. Possible further development of the work is described in the final chapter.
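As a sketch of the classification approach described above, the snippet below fits one Gaussian mixture per manoeuvre class and assigns a segment to the class with the highest likelihood; it uses scikit-learn's GaussianMixture and synthetic features, not the thesis's own implementation or data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(segments_by_label, n_components=3, seed=0):
    """Fit one Gaussian mixture per manoeuvre label on its feature vectors."""
    return {label: GaussianMixture(n_components=n_components, random_state=seed).fit(X)
            for label, X in segments_by_label.items()}

def classify(models, X):
    """Assign the label whose mixture gives the highest average log-likelihood."""
    return max(models, key=lambda label: models[label].score(X))

rng = np.random.default_rng(0)
training = {"level_flight": rng.normal(0.0, 0.2, size=(200, 2)),
            "steep_turn":   rng.normal(3.0, 0.5, size=(200, 2))}
models = train_models(training)
print(classify(models, rng.normal(3.0, 0.5, size=(50, 2))))   # expected: "steep_turn"
```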
Styles APA, Harvard, Vancouver, ISO, etc.
12

Lipa, Matúš. « Interaktivní webové výukové aplikace z oblasti vektorové grafiky ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400884.

Texte intégral
Résumé :
This work deals with the creation of several interactive web applets for teaching vector graphics. It describes the image signal, its discretization, and the vector and bitmap types of image representation. It further describes selected vector curves, their properties, algorithms for their construction and their usage. The principles of rasterization of basic vector objects are explained. Graphical user interfaces for each applet are designed using the Figma tool. The applets are implemented in HTML and JavaScript and placed on web pages used in computer graphics education.
Styles APA, Harvard, Vancouver, ISO, etc.
13

Pešek, Martin. « Získávání znalostí z časoprostorových dat ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237048.

Texte intégral
Résumé :
This thesis deals with knowledge discovery in spatio-temporal data, which is currently a rapidly evolving area of research in information technology. First, it describes the general principles of knowledge discovery; then, after a brief introduction to mining temporal and spatial data, it focuses on an overview and description of existing methods for mining spatio-temporal data. It concentrates, in particular, on moving-object data in the form of trajectories, with an emphasis on methods for trajectory outlier detection. The next part of the thesis deals with the implementation of the trajectory outlier detection algorithm called TOP-EYE. In order to test and validate the algorithm and explore its possible uses, an application for trajectory outlier detection is designed and implemented. The algorithm is experimentally evaluated on two different data sets.
Styles APA, Harvard, Vancouver, ISO, etc.
14

Hlavička, Ladislav. « Dolování asociačních pravidel z datových skladů ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-235501.

Texte intégral
Résumé :
This thesis deals with association rules mining over data warehouses. In the first part the reader will be familiarized with terms like knowledge discovery in databases and data mining. The following part of the work deals with data warehouses. Further the association analysis, the association rules, their types and mining possibilities are described. The architecture of Microsoft SQL Server and its tools for working with data warehouses are presented. The rest of the thesis includes description and analysis of the Star-miner algorithm, design, implementation and testing of the application.
Styles APA, Harvard, Vancouver, ISO, etc.
15

Dofek, Ivan. « Parametrická tvarová optimalizace letounu z aerodynamického hlediska ». Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-234201.

Texte intégral
Résumé :
The work deals with the use of geometric parameterization for the shape description of some parts of an airplane. Geometric parametrization is used to create a parametric airfoil model. This parametric model allows local deformations of the airfoil surface and can easily be applied to generate the geometry of the wing or other parts of the airplane. Some properties of the parametric model were tested in aerodynamic optimization applications. Furthermore, the work deals with the parametric description of propeller blades, their aerodynamic optimization and noise analysis. For the propeller blade, distribution functions of the control parameters were created that can be used in the aerodynamic optimization of the blades. Geometric parameterization is also used to identify the location and other characteristics of noise sources.
Styles APA, Harvard, Vancouver, ISO, etc.
16

Bršlicová, Tereza. « Bezkontaktní detekce fyziologických parametrů z obrazových sekvencí ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221320.

Texte intégral
Résumé :
This thesis deals with the study of contactless and non-invasive methods for estimating heart and respiratory rate. The non-contact measurement involves filming a person with a camera; the values of the physiological parameters are then estimated from the image sequences using suitable approaches. The theoretical part is devoted to a description of the various methods and their implementation. The practical part describes the design and realization of an experiment for contactless detection of heart and respiratory rate. The experiment was carried out on 10 volunteers whose heart and respiratory rates were recorded as a reference with the BIOPAC system. Processing and analysis of the measured data were conducted in the Matlab software environment. Finally, the results of the contactless detection were compared with the reference from the BIOPAC measurement system. The experimental results are statistically evaluated and discussed.
Styles APA, Harvard, Vancouver, ISO, etc.
17

Klimek, Jaroslav. « Řešení diferenčních rovnic a jejich vztah s transformací Z ». Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-233524.

Texte intégral
Résumé :
This dissertation presents the solution of difference equations and focuses on a method of solving difference equations with the aid of eigenvectors. The first part recalls the basic terms from the area of difference equations, such as the dynamics of difference equations and linear difference equations of first and higher order. The second section then recalls systems of difference equations, including the fundamental matrix and the description of the general solution. After that, the method of solving difference equations by variation of constants and the transformation of scalar equations into a system are shown. The second part of the dissertation analyses some known algorithms and methods for the solution of linear difference equations. The Z-transform, its importance and its usage for finding the solution of a difference equation are recalled. Then the discrete analogue of Putzer's algorithm is mentioned, because this algorithm was often used to check the results obtained by the newly described algorithm in further parts of this thesis. Some ways of computing powers of the system matrix are also stated. The next section describes the principle of Weyr's method, which is the starting point for the further development of the theory, including a presentation of the research results obtained by Jiří Čermák in this area. The third part describes the author's own solution of a system of difference equations via eigenvectors, based on the principle of Weyr's method for differential equations. The solution of a system of linear homogeneous difference equations with constant coefficients, including the proof, is presented, and this solution is then extended to nonhomogeneous systems. Following the theory, the influence of the nullity and the multiplicity of roots on the form of the solution is discussed. The last section of this part shows the implementation of the algorithm in Matlab (for the basic, simpler cases) and its application to some difference equations and systems of such equations. The final part of the thesis is more practical and presents the usage of the designed algorithm and theory. Firstly, the algorithm is compared with the Z-transform and the method of variation of constants, and it is illustrated how the same results can be obtained using these three approaches. Then an example of the current response in an RLC circuit is demonstrated: the continuous case is solved, the problem is then transferred to the discrete case and solved with the Z-transform and the method of eigenvectors, and the obtained results are compared with the result of the continuous case.
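For the simplest (diagonalisable) case touched on above, the closed-form solution of x_{k+1} = A x_k can be written through the eigen-decomposition of the system matrix; the sketch below illustrates that textbook case and checks it against plain iteration, without reproducing the Weyr-based algorithm or the Z-transform derivation from the dissertation:

```python
import numpy as np

def solve_difference_system(A, x0, k):
    """Closed-form x_k = A^k x_0 for a diagonalisable system x_{k+1} = A x_k."""
    eigvals, V = np.linalg.eig(A)
    c = np.linalg.solve(V, x0)                 # coordinates of x0 in the eigenbasis
    return (V @ (eigvals ** k * c)).real

A = np.array([[0.0, 1.0], [-0.5, 1.5]])        # eigenvalues 0.5 and 1.0
x0 = np.array([1.0, 0.0])

# compare the closed form with plain iteration
x_iter = x0.copy()
for _ in range(10):
    x_iter = A @ x_iter
print(solve_difference_system(A, x0, 10), x_iter)
```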
Styles APA, Harvard, Vancouver, ISO, etc.
18

Zapletal, Petr. « Dolovací modul systému pro získávání znalostí z dat FIT-Miner ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-236921.

Texte intégral
Résumé :
This master's thesis deals with FIT-Miner, a system for knowledge discovery in databases. The first part of the paper describes the data-mining process, mixture model issues and the FIT-Miner system. The second part deals with the design, implementation and testing of the created module, which performs cluster analysis with the Expectation-Maximization algorithm. The end of the paper focuses on the design of modules using Java Stored Procedures technology.
Styles APA, Harvard, Vancouver, ISO, etc.
19

Klimeš, Filip. « Zpracování obrazových sekvencí sítnice z fundus kamery ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-220975.

Texte intégral
Résumé :
The aim of my diploma thesis was to design a method for analysing retinal sequences that evaluates the quality of individual frames. In the theoretical part I also deal with the properties of retinal sequences and with the registration of images from a fundus camera. In the practical part, the image quality assessment method is implemented, tested on real retinal sequences and its success rate is evaluated. The thesis also evaluates the influence of this method on the registration of retinal images.
Styles APA, Harvard, Vancouver, ISO, etc.
20

Havlů, Michal. « Algoritmus automatického výběru vhodného typu zařízení z databáze výměníků tepla ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228730.

Texte intégral
Résumé :
The thesis is devoted to the development of a database algorithm for the selection (or narrowing of the selection) of a suitable type of heat exchanger for a given industrial application. The database forms part of a multipurpose calculation system containing three individual modules: (i) a module for the selection (or narrowing of the selection) of the type of heat exchanger for a given application, (ii) a module for thermal-hydraulic design or rating of a heat exchanger, (iii) a module for the calculation of investment and operating costs. The thesis describes the details of the method for selecting a suitable heat exchanger type for a given application and presents and discusses the individual selection criteria, which influence the values in the tables of priorities for the given equipment. These tables are an unavoidable part of the selection algorithm. Details of the software implementation of the selection algorithm are also presented in the thesis. The description of the behaviour of individual types of heat exchangers forms an important part of the thesis. The practical application of the developed selection algorithm is demonstrated on several industrial examples.
Styles APA, Harvard, Vancouver, ISO, etc.
21

Wagner, Andrea. « Locating a semi-obnoxious facility in the special case of Manhattan distances ». Springer, 2019. http://dx.doi.org/10.1007/s00186-019-00671-z.

Texte intégral
Résumé :
The aim of this work is to locate a semi-obnoxious facility, i.e. to minimize the distances to a given set of customers in order to save transportation costs on the one hand and to avoid undesirable interactions with other facilities within the region by maximizing the distances to the corresponding facilities on the other hand. Hence, the goal is to satisfy economic and environmental issues simultaneously. Due to the contradicting character of these goals, we obtain a non-convex objective function. We assume that distances can be measured by rectilinear distances and exploit the structure of this norm to obtain a very efficient dual pair of algorithms.
Styles APA, Harvard, Vancouver, ISO, etc.
22

Alzaabi, Mohamed Abdulla Hasan Saif. « New cryptanalysis and modelling for wireless networking ». Thesis, University of Hertfordshire, 2015. http://hdl.handle.net/2299/17115.

Texte intégral
Résumé :
High data rates and the interoperability of vendor devices have made WiMAX highly desirable for use worldwide. WiMAX is based on the IEEE 802.16 standard. The IEEE 802.16a, b, c & d versions were updated within three years of the first launch of WiMAX. However, during those early years reports were published that highlighted the security weaknesses of the standard. These weaknesses prompted the IEEE to issue a new version, 802.16e, to tackle the security issues. Despite this security enhancement, WiMAX remains vulnerable. This research project looks at the vulnerability of WiMAX 802.16e Subscriber Station/Mobile Station authentication at initial entry and proposes approaches to the prevention of Denial of Service (DoS) attacks at this point, in order to secure the Media Access Control (MAC) layer from such threats. A new protocol has been designed and developed to provide confidentiality, authentication and integrity to WiMAX users. This new protocol is integrated with the Z algorithm (an algorithm described later in this paper) to provide: confidentiality of management messages; a message authentication code; and an ID providing message integrity and user authentication. A simulation package was also required, to prove that a linear load of DoS attack would disable or exhaust the capacity of the base station of a WiMAX network, as well as to provide other simulation functions. The freely available simulation tool NIST (NIST IPSec (Internet Protocol Security) and IKE (Internet Key Exchange) Simulation) is oriented towards fixed network communications (NIST, 2003). There are no other relevant simulation tools; hence part of this research project is to develop a new tool to simulate WiMAX security vulnerabilities and test the new protocol.
Styles APA, Harvard, Vancouver, ISO, etc.
23

Šabatka, Ondřej. « Reprezentace textu a její vliv na kategorizaci ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237263.

Texte intégral
Résumé :
The thesis deals with machine processing of textual data. In the theoretical part, issues related to natural language processing are described and different ways of pre-processing and representation of text are also introduced. The thesis also focuses on the usage of N-grams as features for document representation and describes some algorithms used for their extraction. The next part includes an outline of classification methods used. In the practical part, an application for pre-processing and creation of different textual data representations is suggested and implemented. Within the experiments made, the influence of these representations on accuracy of classification algorithms is analysed.
Styles APA, Harvard, Vancouver, ISO, etc.
24

Salama, Mohamed Ahmed Said. « Automatic test data generation from formal specification using genetic algorithms and case based reasoning ». Thesis, University of the West of England, Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252562.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
25

Stluka, Jakub. « Generování dat pomocí modulu LM Reverse-Miner ». Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-150066.

Texte intégral
Résumé :
In recent years, great attention has been paid to evolutionary algorithms, and they have been utilized in a wide range of industries, including the data mining field, which nowadays is a highly demanded service for many commercial institutions. Both of these topics are combined in this work. The main subject of the thesis is the testing of the new Reverse-Miner module, which can generate data with hidden properties using evolutionary algorithms while also using other modules of the LISp-Miner system, commonly used for data mining purposes. The main goal is to generate two databases with the module in such a way that they meet explicitly set requirements. Other goals are also set within the thesis, namely understanding the domain necessary for the subsequent modeling. The result of the practical part of the thesis is represented not only by the two successfully generated databases, but also by a description of the steps, methods and techniques used. General recommendations for data preparation with the Reverse-Miner module are then summarized, based on the modeling experience. The conclusion further analyses the technical means used for the generation and provides several suggestions for possible future extensions.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Kučera, Tomáš. « Algoritmy výpočtu polohy, rychlosti a času z GNSS signálů ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220272.

Texte intégral
Résumé :
This master thesis describes the principles of Global Navigation Satellite Systems (GNSS), specifically the GPS, GLONASS and Galileo systems. The thesis analyzes the structure of the individual GNSS subsystems and introduces their properties. An algorithm for calculating the position is designed in the interactive programming environment MATLAB for the processing of sampled GPS and GLONASS signals. The position is calculated by a distance measurement method which searches for the intersection of spherical surfaces. The calculation is designed for four satellites; when more satellites are detected, the calculation is repeated for all possible combinations. From these positions the combination with the lowest DOP (Dilution of Precision) factor is determined, and the calculation of the position is repeated for the best constellation of satellites. A graphical user interface for entering the input data and parameters and for displaying the calculated values is created. Finally, the position calculated from the measured data is displayed at the selected location on an online map portal.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Lam, Chi-ming, et 藍志明. « Algorithms to determine tame and wild coordinates of Z[x,y] ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29470560.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
28

Yang, Xile. « Automatic software test data generation from Z specifications using evolutionary algorithms ». Thesis, University of South Wales, 1998. https://pure.southwales.ac.uk/en/studentthesis/automatic-software-test-data-generation-from-z-specifications-using-evolutionary-algorithms(fd661850-9e09-4d28-a857-d551612ccc09).html.

Texte intégral
Résumé :
Test data sets have been automatically generated for both numerical and string data types to test the functionality of simple procedures and a good-sized UNIX filing system from their Z specifications. Different structural properties of software systems are covered, such as arithmetic expressions, existential and universal quantifiers, set comprehension, union, intersection and difference, etc. A CASE tool, ZTEST, has been implemented to automatically generate test data sets. Test cases can be derived automatically from the functionality of the Z specifications. The test data sets generated from the test cases check the behaviour of the software systems for both valid and invalid inputs. Test cases are generated for the four boundary values and an intermediate value of the input search domain. For integer input variables, high-quality test data sets can be generated on the boundary of the search domain and on each side of the boundary for both valid and invalid tests. Adaptive methods such as Genetic Algorithms and Simulated Annealing are used to generate test data sets from the test cases. GA is chosen as the default test data generator of ZTEST. Direct assignment is used where possible to make the ZTEST system more efficient. Z is a formal language that can be used to precisely describe the functionality of computer systems. Therefore, the test data generation method can be used widely for test data generation of software systems. It will be very useful for systems developed from Z specifications.
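The idea of steering generated inputs towards a boundary value can be illustrated with a toy mutation-only evolutionary search, sketched below; the precondition (0 <= x <= 100) and the fitness function are invented for the example and are not taken from ZTEST:

```python
import random

# Toy Z-style precondition: valid inputs satisfy 0 <= x <= 100.
# Target test case: the upper boundary value x == 100.
def fitness(x, target=100):
    return -abs(x - target)          # maximal (0) exactly on the boundary

def evolutionary_search(pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    population = [rng.randint(-1000, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                 # keep the fitter half
        children = [rng.choice(parents) + rng.randint(-10, 10)  # mutate a parent
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolutionary_search())         # should converge to (or very near) 100
```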
Styles APA, Harvard, Vancouver, ISO, etc.
29

Rivière, Romain. « Algorithmes de graphes pour l'analyse des séquences et structures génomiques ». Paris 11, 2005. http://www.theses.fr/2005PA112157.

Texte intégral
Résumé :
Mon travail de thèse porte sur le développement de méthodes pour l'étude des motifs de structures biologiques. La première partie de ce travail concerne l'étude des motifs d'ADN. L'ADN est modélisé par un mot sur l'alphabet A, C, G, T. Nous nous plaçons dans le cadre du modèle de séquence mélangée (modèle dit du "shuffling") dans lequel les nombres d'occurrences des facteurs de taille k sont fixés. Je propose un algorithme de génération aléatoire uniforme de séquences mélangées dans lesquelles apparaissent un certain nombre d'occurrences de motifs choisi à priori. D'un point de vue algorithmique, cela fait intervenir différente problèmes dont je montre qu'ils sont NP-complets. D'un point de vue biologique, ces séquences permettent d'estimer les Z-scores de motifs sachant que d'autres sont présents, ce qui est particulièrement important lors de la recherche de motifs correspondant à des signaux secondaires. Je propose le logiciel SMACK qui est capable de générer des séquences mélangées sous contraintes de motifs et d'estimer les Z-scores de tous les motifs d'une taille donnée dans ces modèles. La deuxième partie concerne l'étude des motifs d'ARN. L'ARN est modélisé par un graphe mixte, de degré borné, contenant un chemin hamiltonien connu. Je propose de modéliser un motif d'ARN par un sous-graphe induit connexe. Dans un premier temps, je développe un algorithme efficace d'énumération des motifs d'une molécule d'ARN. Puis, je propose plusieurs modèles de coloration des graphes représentant l'ARN, afin d'obtenir des représentations plus ou moins fines de celui-ci. Pour chacun des ces modèles, on introduit un étiquetage canonique des motifs d'ARN, ce qui nous permet de compter les occurrences des motife simplement par comparaisons de séquences. L'étape suivante est de comparer ces occurrences avec celles obtenues dans des modèles d'ARN aléatoire. Je traite du cas d'un modèle de graphe hamiltonien et du cas d'un modèle de structure secondaire utilisant le logiciel GenRGenS. Cette méthodologie est appliquée sur un ARN 23S, constituant de la grande sous-unité du ribosome de l'Haloarcula marismortui, ce qui permet d'en présenter des motifs que l'on pense pertinents
This thesis aims to develop methods for the study of motifs in biological structures. In the first part of this work, I deal with DNA motifs. DNA can be represented by a word on the alphabet A, C, G, T. We work in the shuffling model, in which the numbers of occurrences of the factors of size k are fixed. I develop an algorithm for the uniform random generation of shuffled sequences in which some motifs are chosen to appear a priori. From an algorithmic point of view, this problem consists of several subproblems which I show to be NP-complete. From a biological point of view, those random sequences allow one to estimate Z-scores for motifs when one knows that others are present. This is very important when searching for motifs corresponding to secondary signals. I develop the SMACK software, which generates sequences uniformly at random in the constrained shuffling model and estimates the Z-scores of all the motifs of a given length. The second part of my work is about motifs in RNA. The RNA molecule is represented by a mixed graph with bounded degree and a known Hamiltonian path. I define an RNA motif to be a connected induced subgraph and I develop an efficient algorithm to enumerate them. Then, I consider several coloration models for RNA graphs in order to obtain more or less precision in the model. For all of these models, I present canonical labels for the RNA motifs, which allows occurrences of the motifs to be counted simply by comparing words. The next step is to compare those occurrences with those obtained with models of random RNA. In particular, I deal with the case of a Hamiltonian graph model and with the case of a secondary structure model based on the use of the GenRGenS software. This methodology is applied to a 23S RNA, which is a part of the large ribosomal subunit of Haloarcula marismortui. This allows us to present motifs that could be meaningful.
Styles APA, Harvard, Vancouver, ISO, etc.
30

Hlosta, Martin. « Modul pro shlukovou analýzu systému pro dolování z dat ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237158.

Texte intégral
Résumé :
This thesis deals with the design and implementation of a cluster analysis module for the data mining system DataMiner, currently being developed at FIT BUT. So far, the system has lacked a cluster analysis module, so the main objective of the thesis was to extend the system with such a module. Pavel Riedl worked on the module together with me; we created a common part for all the algorithms so that the system can easily be extended with other clustering algorithms. In the second part, I extended the clustering module by adding three density-based clustering algorithms: DBSCAN, OPTICS and DENCLUE. The algorithms have been implemented and appropriate sample data were chosen to verify their functionality.
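As a quick illustration of the density-based idea behind DBSCAN (dense regions grow into clusters, sparse points become noise), the sketch below runs scikit-learn's DBSCAN on synthetic 2-D data; the module described in the thesis uses its own implementations, not this library call:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# two dense blobs plus a few scattered outliers
points = np.vstack([rng.normal(0, 0.3, size=(50, 2)),
                    rng.normal(5, 0.3, size=(50, 2)),
                    rng.uniform(-2, 7, size=(5, 2))])

labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(points)
print("clusters:", len(set(labels) - {-1}), "noise points:", int(np.sum(labels == -1)))
```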
Styles APA, Harvard, Vancouver, ISO, etc.
31

Vilhena, Vieira Lopes Roberta. « Um algoritmo genético baseado em tipos abstratos de dados e sua especificação em Z ». Universidade Federal de Pernambuco, 2003. https://repositorio.ufpe.br/handle/123456789/1882.

Texte intégral
Résumé :
This work presents a genetic algorithm model based on abstract data types, called GAADT, in which the chromosome is represented by a type stratified into two levels of perception (gene and base), in contrast to other models. The adaptation of the chromosome is tied to the relevance of the information encoded in it. The search strategy of GAADT is highly objective, since the criterion for preserving chromosomes in the next population is a function based on the adaptive dynamics of the population. The explicit presence of the environment in the functioning of GAADT gives this algorithm the ability to handle problems with a high degree of dynamism, as explored in the application to a system for monitoring the vital signs of patients in the intensive care units of a hospital. An outline of a theory of evolutionary processes is developed to describe the convergence of GAADT, independently of the nature of the problem, of the representation adopted for the chromosome, and of the initial population considered. Applying GAADT to a problem requires defining the environment elements specific to the problem at hand, which must satisfy the properties established in the definition of the environment. The proof that the definitions of the environment elements for a given problem satisfy the required properties, and that GAADT instantiated for these elements satisfies the correctness and applicability properties, is carried out with the Z formalism, thus giving GAADT mathematical rigour. A comparative study of the convergence of GAADT and other models is presented. The experiments evaluated in this study indicate that GAADT converges faster. Finally, some relevant remarks about GAADT are made and some interesting questions are suggested for future work.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Condori-Arapa, Cristina. « Antenna elements matching : time-domain analysis ». Thesis, Högskolan i Gävle, Akademin för teknik och miljö, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-7783.

Texte intégral
Résumé :
Time domain analysis in vector network analyzers (VNAs) is a method to represent the frequency response, stated by the S-parameters, in the time domain with apparently high resolution. Among other utilities, the time domain option from Agilent allows microwave devices to be measured over a specific frequency range and down to DC as well, with the two time domain modes: band-pass and low-pass mode. A special feature named gating is of importance as it allows a portion of the time domain representation to be shown in the frequency domain.   This thesis studies the time domain option 010 from Agilent, its uncertainties and its sensitivity. The task is to find the best method to measure the antenna element matching, taking care to reduce the influence of measurement errors on the results.   The Agilent 8753ES is the instrument used in the thesis. A specific matching problem in the antenna electric down-tilt (AEDT), previously designed by Powerwave Technologies, is the task to be solved. This is because it cannot be measured directly with 2-port VNAs: it requires adapters, extra coaxial cables and N-connectors, all of which influence the accuracy. The AEDT connects to the array antenna through cable-board-connectors (CBCs). The AEDT and the CBCs were designed before being put into the antenna system, and their S-parameters do not coincide with the ones measured after these devices were put in the antenna block.   Time domain gating and de-embedding algorithms are two methods proposed in this thesis to measure the S-parameters of the desired antenna element while reducing the influence of measurement errors due to cables, CBCs and other connectors. The aim is to find a method which causes less error and gives high-confidence measurements.   For the time domain analysis, reverse engineering of the time domain option used in the Agilent VNA 8753ES is implemented on a PC for full control of the process. The results using the time domain option are not sufficiently reliable, due to the multiple approximations made in the design. The methodology that Agilent uses to compensate the gating effects is not reliable when the gate is not centered on the analyzed response, and large errors arise due to truncation and masking effects in the frequency response.   The de-embedding method using LRL is implemented in the AEDT measurements, taking away the influence of the CBCs, coaxial cables and N-connector. It is found to have sufficient performance, comparable to the mathematical model. An error analysis of both methods has been done to explain the differences between measurements and design.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Bondarenko, Maxim. « Algoritmy pro detekci anomálií v datech z klinických studií a zdravotnických registrů ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-378147.

Texte intégral
Résumé :
This master's thesis deals with the detection of anomalies in data from clinical trials and medical registries. The purpose of this work is to carry out literature research on the quality of data in clinical trials and to design an algorithm, based on machine learning methods, for detecting anomalous records in real clinical data from current or completed clinical trials or medical registries. The practical part describes the implemented detection algorithm, which consists of several parts: import of data from the information system, preprocessing and transformation of imported data records with variables of different data types into numerical vectors, the use of well-known statistical methods for outlier detection, and evaluation of the quality and accuracy of the algorithm. The result is a vector of parameters containing anomalies, which should make the work of the data manager easier. The algorithm is designed to extend the set of information system (CLADE-IS) functions with automatic monitoring of data quality by detecting anomalous records.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Zdražil, Jan. « Algoritmy pro toky v sítích a jejich softwarová podpora ». Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-114400.

Texte intégral
Résumé :
This thesis deals with the maximum flow problem in networks. The first part describes and explains the basic terms of graph theory, which gives the theoretical base for the following text. The next part is dedicated to algorithms that may be used to solve the maximum flow problem; each described algorithm is accompanied by a brief history, its general notation and a practical example. Another very important part of the thesis covers specific computer science applications such as computer vision and web mining. As an essential part of the thesis, software is developed in the Java programming language which allows the user to compare the implemented algorithms and to solve large network flow problems.
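One of the classical algorithms such a thesis typically covers is Edmonds-Karp, which augments flow along shortest paths found by BFS; the sketch below is a textbook-style version in Python (not the Java software developed in the thesis), with an invented toy network:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly send flow along shortest augmenting paths.
    `capacity` is a dict of dicts: capacity[u][v] = arc capacity."""
    # residual capacities, including reverse arcs initialised to 0
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:            # BFS for an augmenting path
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # find the bottleneck along the path, then update residual capacities
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

capacities = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(capacities, "s", "t"))   # expected: 4
```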
Styles APA, Harvard, Vancouver, ISO, etc.
35

Logatti, Luis Aldomiro. « Linhas e superficies escondidas : taxonomia e proposta de implementação VLSI de um algoritmo Z-buffer ». [s.n.], 1990. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259440.

Texte intégral
Résumé :
Advisor: Leo Pini Magalhães
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica
Abstract: Determining the visibility of objects is fundamental in image synthesis. In the early 1960s the first hidden-line elimination algorithm was developed and, since then, several algorithms have been proposed. This work presents a series of hidden-line and hidden-surface elimination algorithms, as well as two classifications following the most important papers in this area. A proposal for a gate-array implementation of the Z-buffer algorithm is the main subject of the dissertation.
Master's
Master in Electrical Engineering
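The per-pixel depth test that a Z-buffer implements, whether in software or in the gate-array design proposed here, can be summarised in a few lines; the sketch below is a deliberately naive software illustration with invented constant-depth "surfaces", not the hardware architecture of the dissertation:

```python
import numpy as np

def render_z_buffer(width, height, surfaces):
    """Minimal Z-buffer: for every pixel keep the smallest depth seen so far.
    Each surface is (depth_function, color); depth_function(x, y) returns the
    surface depth at that pixel or None when the pixel is not covered."""
    z_buffer = np.full((height, width), np.inf)
    image = np.zeros((height, width), dtype=int)
    for depth_at, color in surfaces:
        for y in range(height):
            for x in range(width):
                z = depth_at(x, y)
                if z is not None and z < z_buffer[y, x]:   # closer than anything drawn yet
                    z_buffer[y, x] = z
                    image[y, x] = color
    return image

# two overlapping axis-aligned squares at constant depth (color 1 is nearer)
near = (lambda x, y: 1.0 if 2 <= x < 6 and 2 <= y < 6 else None, 1)
far  = (lambda x, y: 5.0 if 4 <= x < 8 and 4 <= y < 8 else None, 2)
print(render_z_buffer(10, 10, [far, near]))
```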
Styles APA, Harvard, Vancouver, ISO, etc.
36

Ruan, Manqi. « A precise Higgs mass measurement at the ILC and test beam data analyses with CALICE ». Paris 11, 2008. http://www.theses.fr/2008PA112172.

Texte intégral
Résumé :
En utilisant les outils de Monte Carlo et les données de test en faisceau, la performance d’un détecteur au futur collisionneur linéaire international a été étudiée. La contribution de cette thèse porte sur deux parties; d’une part sur une mesure de précision de la masse du boson Higgs et de la section efficace de la production avec le processus e+e  HZ où le boson Z se désintègre en paire  et d’autre part sur une analyse des données de test en faisceau de la collaboration CALICE (CAlorimeter for Linear Collider Experiment). Pour un Higgs de 120GeV, nous avons obtenu une précision de 38. 4MeV sur la masse de Higgs et de 5% sur la section efficace en choisissant une énergie dans le centre de masse optimale de 230 GeV et avec une luminosité intégrée de 500 fb-1. Ces résultats sont indépendants d’un modèle de Higgs donné puisque aucune information sur la désintégration du Higgs n’a été utilisée dans l’analyse. Si on suppose que le Higgs est celui du modèle standard ou il se désintègre principalement en particules invisibles, la précision peut être améliorée de façon significative (29MeV pour la masse et 4% pour la section efficace). Pour l’analyse des données de test en faisceau, mon travail concerne deux aspects. Premièrement une vérification sur la qualité des données en temps quasi réel et deuxièmement une mesure précise sur la résolution angulaire d’une gerbe électromagnétique dans le calorimètre prototype utilisé dans le test en faisceau. Le but pour la vérification de la qualité des données est de détecter des problèmes éventuels sur l’ensemble du détecteur y compris l’électronique, le système de haute tension et d’acquisition, et de classer des différentes données pour faciliter les analyses offlines. Pour déterminer la résolution angulaire du calorimètre électromagnétique, nous avons développé un algorithme qui est basée uniquement sur le dépôt d’énergie dans différentes cellules produites par le faisceau d’électrons sans utilisant l’information du détecteur de trace devant le calorimètre. Celle-ci est importante pour pouvoir identifier le composant neutre d’un jet. Nos résultats montrent que la dépendance de la résolution angulaire en énergie du faisceau est similaire à celle de la résolution en énergie et peut être décrite par 80 mrad/√(E/GeV)
Utilizing Monte Carlo tools and test-beam data, some basic detector performance properties are studied for the International Linear Collider (ILC). The contributions of this thesis are mainly twofold, first, a study of the Higgs mass and cross section measurements at the ILC (with full simulation to e+e  HZ  H channel and corresponding backgrounds); and second, an analysis of test-beam data of the CAlorimeter for Linear Collider Experiment (CALICE). For a 120GeV Higgs particle, setting the center-of-mass energy to 230GeV and with an integrated luminosity of 500fb-1, a precision of 38. 4MeV is obtained in a model independent analysis for the Higgs boson mass measurement, while the cross section can be measured to 5%; if we make some further assumptions about the Higgs boson’s decay, for example a Standard Model Higgs boson or a Higgs boson with a dominant invisible decay mode, the measurement results can be improved by 25% (achieving a mass measurement precision of 29MeV and a cross section measurement precision of 4%). For the CALICE test-beam data analysis, our work is mainly focused upon two aspects: data quality checks and the track-free ECAL angular resolution measurement. Data quality checks aim to detect strange signals or unexpected phenomena in the test-beam data so that one knows quickly how the overall data taking quality is. They also serve to classify all the data and give useful information for later offline data analyses. The track-free ECAL angular resolution algorithm is designed to precisely measure the direction of a photon, a very important component in determining the direction of neutral components in jets. We found that the angular resolution can be well fitted as a function of the square root of the beam energy (in a similar way as for the energy resolution) with a precision of approximately 80mrad/√(E/GeV)
Styles APA, Harvard, Vancouver, ISO, etc.
37

Derycke, Henri. « Combinatoire dans des stabilisations du modèle du tas de sable sur la grille Z² ». Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0327/document.

Texte intégral
Résumé :
Le modèle du tas de sable est un modèle de diffusion discret et isotrope introduit par les physiciens Bak, Tang et Wiesenfeld comme illustration de la criticalité auto-organisée. Pour tout graphe, souvent supposé fini, Dhar a formalisé de nombreuses propriétés simplifiant son analyse. Cette thèse propose des études de ce modèle sur la grille bidimensionnelle usuelle et certains de ses sous-graphes également infinis que sont les bandes bi-infinies de hauteur finie. Des approximations du comportement de la pile de sable peuvent se rapprocher de certains modèles de bootstrap percolation avec un support de stabilisation rectangulaire. Les lois sur son demi-périmètre peuvent se décrire à l’aide de statistiques sur les permutations. Un sous-produit de ce travail fait apparaître une différence de deux séries génératrices comptant des permutations selon deux statistiques mahoniennes classiques dont est extrait un polynôme à coefficients entiers et surtout positifs. La suite de cette thèse revisite dans le cadre de ces graphes infinis, des structures jusque-là bien définies uniquement dans le cas des graphes finis, notamment la récurrence. Dans le modèle sur une bande de hauteur finie H, l’existence donnée par Járai et Lyons d’automates finis reconnaissant les configurations récurrentes lues colonne par colonne est étendu par une construction explicite d’automates avec un nombre moindre d’états, se rapprochant de la conjecture de Gamlin. Dans une seconde approche, l’étude se concentre sur les configurations sur la grille entière qui sont périodiques dans les deux directions. Le puits, un sommet du graphe garantissant la terminaison de la stabilisation, est placé à l’infini dans une direction de pente rationnelle. Ceci permet à la fois de préserver la bipériodicité et de proposer une forme affaiblie du critère de Dhar caractérisant ainsi par un algorithme effectif les configurations récurrentes. Ces configurations récurrentes bipériodiques sont des candidates naturelles pour être les éléments de sous-groupes finis de l’éventuel groupe du tas de sable sur la grille. Des éléments de construction de cette loi de groupe donnent expérimentalement quelques sous-groupes finis
The sandpile model is a discrete model for the diffusion of grains on a graph, introduced by the physicists Bak, Tang and Wiesenfeld as an illustration of self-organised criticality. For any finite graph, Dhar identified many of its structures, which simplify its analysis. This thesis focuses on the usual square lattice and on those of its subgraphs which are strips of height H; both are infinite graphs. Approximations of the behaviour of the stabilisation of a large stack of grains at the origin of the square lattice lead to a random distribution of grains whose stabilisation is connected to some models of bootstrap percolation in which the vertices modified by the stabilisation form a rectangle. The laws of the half-perimeter of this rectangle are described by statistics on permutations. As a byproduct, the difference between the generating functions, over some permutations, of two classical Mahonian statistics appears to be essentially a polynomial whose coefficients are integers and, above all, positive. This thesis then revisits, in the case of the studied infinite graphs, some structures that are well defined on finite graphs, in particular recurrence. In the model on a horizontal strip of height H, we extend the finite automata recognizing recurrent configurations read column by column, introduced by Járai and Lyons, to new automata with significantly fewer states, and these numbers are closer to a conjecture due to Gamlin. An implementation leads to explicit automata for heights 3 and 4, while up to now only the case 2 had been obtained by hand. In a second approach, we consider the configurations on the two-dimensional square lattice which are periodic in two directions. We propose placing the sink, which ensures that the stabilisation ends, at infinity in a direction of rational slope, which allows us to preserve biperiodicity and a weaker form of the Dhar criterion for recurrent configurations. Hence we obtain an effective algorithm deciding recurrence among the biperiodic and stable configurations. These biperiodic and recurrent configurations are natural candidates for being the elements of finite subgroups of the hypothetical group on configurations of the sandpile model on the square lattice. We discuss some notions allowing the definition of the law of such a group and experimentally provide some finite subgroups.
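The toppling rule underlying the model is easy to state in code; the sketch below stabilises a configuration on a small finite grid whose boundary plays the role of the sink, which is only a simplified stand-in for the infinite-grid setting (sink at infinity, biperiodic configurations) studied in the thesis:

```python
import numpy as np

def stabilize(grid):
    """Topple every cell with 4 or more grains until the configuration is stable.
    Grains falling off the edge of the finite grid are lost (they go to the sink)."""
    grid = grid.copy()
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return grid
        for i, j in unstable:
            grid[i, j] -= 4                                 # one toppling of cell (i, j)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1                       # one grain to each neighbour

start = np.zeros((5, 5), dtype=int)
start[2, 2] = 16                       # a small stack at the centre
print(stabilize(start))
```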
Styles APA, Harvard, Vancouver, ISO, etc.
38

Galand, Fabien. « Construction de codes Z indice p à la puissance k linéaires de bonne distance minimale et schémas de dissimulation fondés sur les codes de recouvrement ». Caen, 2004. http://www.theses.fr/2004CAEN2047.

Texte intégral
Résumé :
This thesis studies two lines of research based on codes, each dealing with a particular parameter. The first line is error correction, where we are interested in the minimum distance of codes. Our objective is to construct codes over Fp with a good minimum distance. To this end we jointly use the Hensel lift and Zpk-linearity. We give the minimum distance, at small lengths, of a generalisation of the Kerdock and Preparata codes, as well as of lifts of quadratic residue codes. Among these codes, we obtain four that equal the best known linear codes. We also give a construction aimed at increasing the cardinality of Zpk-linear codes by adding cosets. This construction leads to an upper bound on the cardinality of Zpk-linear codes. The second line, distinct from the first in its objective but meeting it in the objects studied, is the construction of data-hiding schemes. We relate this problem, which belongs to steganography, to the construction of covering codes. We consider two models of schemes. These models are proved equivalent to covering codes, and we use this equivalence to bring out the structure of the coverings used in previously published work. This equivalence also allows us to deduce upper bounds on the capacity of the schemes, and by giving constructions based on linear coverings we obtain lower bounds.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Cánovas, Solbes Alejandro. « Diseño y Desarrollo de un Sistema de Gestión Inteligente de QoE para Redes HD y Estereoscópicas IPTV ». Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/65074.

Texte intégral
Résumé :
Broadband Internet access connections allow Internet Service Providers (ISPs) to offer several types of services to home customers, such as data, voice over IP (VoIP), Internet protocol television (IPTV) and now 3D Internet protocol television (3D-IPTV). This is why the number of IPTV service providers has increased considerably in recent years. Thanks to the evolution of communication systems, networks and devices, delivering these services is possible, but maximum quality is not always guaranteed. For this reason, one of the main issues to be considered by IPTV service providers is guaranteeing the Quality of Experience (QoE) perceived by the end user. In order to achieve this goal, this PhD thesis proposes an intelligent management system based on inductive prediction methods to guarantee the QoE of the end user. One important aspect in the development of the management system is to include all the parameters that affect the QoE. With this purpose, we will analyze the parameters that affect the degradation of the video stream received by the end user through the IPTV service. At the network level, we will identify the main parameters which affect the Quality of Service (QoS), namely jitter, delay, packet loss and bandwidth. At the user level, these parameters affect the subjective perception of the user when watching the video. We also verify that effects derived from compression, quantization and bitrate affect this perception as well.
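As a rough illustration of the inductive QoE prediction described above, this Python sketch trains a regression model mapping QoS and encoding parameters (jitter, delay, packet loss, bandwidth, bitrate) to a MOS-like score on synthetic data; the feature set, target scale, toy degradation model and choice of regressor are all assumptions for illustration, not the management system developed in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000

# Synthetic QoS / encoding features (units are assumptions for illustration).
X = np.column_stack([
    rng.uniform(0, 50, n),      # jitter [ms]
    rng.uniform(0, 200, n),     # delay [ms]
    rng.uniform(0, 5, n),       # packet loss [%]
    rng.uniform(1, 20, n),      # available bandwidth [Mbit/s]
    rng.uniform(1, 12, n),      # video bitrate [Mbit/s]
])

# Toy ground truth: a MOS-like score (1..5) degraded by loss, jitter and delay,
# and penalised when the bitrate exceeds the available bandwidth.
mos = (5.0
       - 0.5 * X[:, 2]
       - 0.02 * X[:, 0]
       - 0.005 * X[:, 1]
       - 1.5 * np.clip(X[:, 4] - X[:, 3], 0, None) / 12)
y = np.clip(mos + rng.normal(0, 0.2, n), 1, 5)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE on held-out samples:", mean_absolute_error(y_te, model.predict(X_te)))
```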
Cánovas Solbes, A. (2016). Diseño y Desarrollo de un Sistema de Gestión Inteligente de QoE para Redes HD y Estereoscópicas IPTV [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/65074
Styles APA, Harvard, Vancouver, ISO, etc.
40

Diop, Cheikh Abdoulahat. « La structure multimodale de la distribution de probabilité de la réflectivité radar des précipitations ». Toulouse 3, 2012. http://thesesups.ups-tlse.fr/3089/.

Texte intégral
Résumé :
A set of radar data gathered over various sites of the US Nexrad (Next Generation Weather Radar) S-band radar network is used to analyse the probability distribution function (pdf) of the radar reflectivity factor (Z) of precipitation, P(Z). Various storm types are studied and compared: 1) hailstorms at the continental site of Little Rock (Arkansas), 2) peninsular and coastal convection at Miami (Florida), 3) coastal convection and land/sea transition at Brownsville (Texas), 4) tropical maritime convection at Hawaii, 5) midlatitude maritime convection at Eureka (California), 6) snowstorms from winter frontal continental systems at New York City (New York), and 7) high-latitude maritime snowstorms at Middleton Island (Alaska). Each storm type has a specific P(Z) signature with a complex shape. It is shown that P(Z) is a mixture of Gaussian components, each of them attributable to a precipitation type. Using the EM (Expectation-Maximisation) algorithm of Dempster et al. (1977), based on the maximum-likelihood method, four main components are identified in hailstorms: 1) cloud and precipitation of very low intensity, or drizzle, 2) stratiform precipitation, 3) convective precipitation, and 4) hail. Each component is described by the fraction of the area it occupies inside P(Z) and by its two Gaussian parameters, mean and variance. The absence of a hail component in maritime and coastal storms is highlighted. For snowstorms, P(Z) has a more regular shape. The presence of several components in P(Z) is linked to differences in the dynamics and microphysics of each precipitation type. Reconstructing the mixture as a linear combination of the Gaussian components gives a very satisfactory fit of P(Z). An application of this decomposition of P(Z) is then presented. The cloud, rain and hail components are isolated, and each corresponding P(Z) is converted into a probability distribution of rain rate, P(R), whose parameters µ_R and σ_R² are respectively its mean and variance. It is shown on the (µ_R, σ_R²) plane that each precipitation type occupies a specific area, suggesting that the identified components are distinct populations. For example, the location of the points representing snowstorms indicates that snow is statistically different from rain. The coefficient of variation of P(R), CV_R = σ_R/µ_R, is constant for each precipitation type. This result implies that knowing CV_R and measuring only one of the P(R) parameters makes it possible to determine the other and to define the rain-rate probability distribution. The influence of the coefficients a and b of the relation Z = aR^b on P(R) is also discussed.
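A minimal Python sketch of the kind of Gaussian-mixture decomposition of P(Z) described above, using scikit-learn's EM-based GaussianMixture on synthetic reflectivity values; the synthetic sample and the fixed number of components are assumptions for illustration, not the thesis's Nexrad processing chain.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic reflectivity sample (dBZ) drawn from four assumed populations:
# drizzle/cloud, stratiform rain, convective rain, hail.
Z = np.concatenate([
    rng.normal(5, 4, 4000),    # cloud / drizzle
    rng.normal(25, 5, 3000),   # stratiform
    rng.normal(45, 5, 1500),   # convective
    rng.normal(60, 3, 300),    # hail
]).reshape(-1, 1)

# EM fit of a 4-component Gaussian mixture to P(Z).
gmm = GaussianMixture(n_components=4, random_state=0).fit(Z)

for w, mu, var in sorted(zip(gmm.weights_, gmm.means_.ravel(),
                             gmm.covariances_.ravel()), key=lambda t: t[1]):
    print(f"weight={w:.2f}  mean={mu:5.1f} dBZ  variance={var:5.1f}")
```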
Styles APA, Harvard, Vancouver, ISO, etc.
41

Fujdiak, Radek. « Analýza a optimalizace datové komunikace pro telemetrické systémy v energetice ». Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-358408.

Texte intégral
Résumé :
Telemetry system, Optimisation, Sensoric networks, Smart Grid, Internet of Things, Sensors, Information security, Cryptography, Cryptography algorithms, Cryptosystem, Confidentiality, Integrity, Authentication, Data freshness, Non-Repudiation.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Gillis, Bryan. « Group-finding with photometric redshifts : The Photo-z Probability Peaks algorithm ». Thesis, 2010. http://hdl.handle.net/10012/5003.

Texte intégral
Résumé :
We present a galaxy group-finding algorithm, the Photo-z Probability Peaks (P3) algorithm, optimized for locating small galaxy groups using photometric redshift data by searching for peaks in the signal-to-noise ratio of the local overdensity of galaxies in a 3-dimensional grid. This method improves on similar matched-filter methods by reducing background contamination through the use of redshift information, allowing it to accurately detect groups down to a much lower size limit. We present the results of tests of our algorithm on galaxy catalogues from the Millennium Simulation. For typical settings of our algorithm and a photometric redshift accuracy of σ_z = 0.05, it attains a purity of 84% and detects ~83 groups/deg² with an average group size of 5.5 members. With a photometric redshift accuracy of σ_z = 0.02, it attains a purity of 94% and detects ~80 groups/deg² with an average group size of 6.3 members. We also test our algorithm on data available for the COSMOS field and the presently available fields from the CFHTLS-Wide survey, presenting preliminary results of this analysis.
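A heavily simplified Python sketch of the grid-based peak search described above: galaxies are binned on a 3D (RA, Dec, photo-z) grid, a Poisson signal-to-noise ratio of the local overdensity is computed, and local maxima above a threshold are reported. The binning, background estimate and threshold are illustrative assumptions and do not reproduce the actual P3 implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter

def overdensity_peaks(ra, dec, z, bins=(64, 64, 30), smooth=3, snr_min=5.0):
    """Return grid indices of significant peaks in the galaxy overdensity."""
    counts, _ = np.histogramdd(np.column_stack([ra, dec, z]), bins=bins)
    cells = smooth ** 3
    local = uniform_filter(counts, size=smooth) * cells          # counts in a small window
    expected = uniform_filter(counts, size=3 * smooth) * cells   # expectation from larger-scale mean
    snr = (local - expected) / np.sqrt(np.maximum(expected, 1.0))
    is_peak = (snr == maximum_filter(snr, size=smooth)) & (snr > snr_min)
    return np.argwhere(is_peak), snr

# Toy example: a uniform field with one injected "group" of galaxies.
rng = np.random.default_rng(2)
ra = rng.uniform(0, 1, 20000)
dec = rng.uniform(0, 1, 20000)
z = rng.uniform(0, 1, 20000)
ra = np.append(ra, rng.normal(0.5, 0.005, 30))
dec = np.append(dec, rng.normal(0.5, 0.005, 30))
z = np.append(z, rng.normal(0.5, 0.02, 30))

peaks, snr = overdensity_peaks(ra, dec, z)
print(len(peaks), "peak(s) found; strongest SNR =", float(snr.max()))
```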
Styles APA, Harvard, Vancouver, ISO, etc.
43

Przywara, Česlav. « Přibližná extrakce frázové tabulky z velkého paralelního korpusu ». Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-321388.

Texte intégral
Résumé :
The aim of this work is to examine the applicability of an algorithm for approximate frequency counting as an on-the-fly filter in the phrase table extraction process of Statistical Machine Translation systems. Its implementation allows the volume of extracted phrase pairs to be greatly reduced with no significant loss to the ultimate quality of the phrase-based translation model, as measured by the state-of-the-art evaluation measure BLEU. The result of this implementation is a fully working program, called eppex, capable of acting as an alternative to the existing tools for phrase table creation and filtering that are part of the open-source SMT system Moses. A substantial part of this work is devoted to benchmarking both the runtime performance and the quality of the phrase tables produced by the program when confronted with parallel training data comprising 2 billion words.
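The on-the-fly filter described above relies on approximate frequency counting; the sketch below implements generic Manku-Motwani lossy counting over a stream of phrase pairs, which illustrates the family of algorithms involved, though not necessarily the exact variant used by eppex.

```python
import math

class LossyCounter:
    """Manku-Motwani lossy counting: keeps approximate counts of frequent items
    in one pass over a stream, using memory proportional to 1/epsilon."""

    def __init__(self, epsilon=1e-4):
        self.width = math.ceil(1 / epsilon)   # bucket width
        self.n = 0                            # items seen so far
        self.counts = {}                      # item -> (count, max possible undercount)

    def add(self, item):
        self.n += 1
        bucket = math.ceil(self.n / self.width)
        count, delta = self.counts.get(item, (0, bucket - 1))
        self.counts[item] = (count + 1, delta)
        if self.n % self.width == 0:          # prune at bucket boundaries
            self.counts = {k: (c, d) for k, (c, d) in self.counts.items()
                           if c + d > bucket}

    def frequent(self, support):
        """Items whose true relative frequency may reach `support`."""
        threshold = (support - 1 / self.width) * self.n
        return {k: c for k, (c, d) in self.counts.items() if c >= threshold}

# Toy usage: keep only "phrase pairs" whose relative frequency is at least 1%.
stream = (["le chat ||| the cat"] * 500 + ["un chat ||| a cat"] * 50
          + [f"rare-{i} ||| rare-{i}" for i in range(5000)])
lc = LossyCounter(epsilon=0.001)
for pair in stream:
    lc.add(pair)
print(lc.frequent(support=0.01))
```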
Styles APA, Harvard, Vancouver, ISO, etc.
44

Skotnica, Michael. « Balancované a téměř balancované prezentace grup z algoritmického pohledu ». Master's thesis, 2018. http://www.nusl.cz/ntk/nusl-388610.

Texte intégral
Résumé :
In this thesis we study algorithmic aspects of balanced group presentations, which are finite presentations with the same number of generators and relations. The main motivation is that the decidability of some problems, such as the triviality problem, is open for balanced presentations. First, we summarize known results on decision problems for general finite presentations and we exhibit two group properties which are undecidable even for balanced presentations: the property of "being a free group" and the property of "having a finite presentation with 12 generators". We also show reductions of some graph problems to the triviality problem for group presentations, such as determining whether a graph is connected, k-connected, or connected and containing an odd cycle. Then we reduce the problem of deciding whether a graph with the same number of vertices and edges is a cycle to the triviality problem for balanced presentations. On the other hand, there is also a limitation on reductions to balanced presentations: we prove that there is no balanced presentation ⟨a, b | a^{p(m)} b^{q(m)}, a^{r(m)} b^{s(m)}⟩ with p(m), q(m), r(m), s(m) ∈ Z[m] which presents the trivial group if and only if m is odd. In the last part of this thesis, we describe a relation between group presentations and topology. In addition,...
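One elementary observation behind statements of this kind, offered here as a hedged illustration rather than as the thesis's own argument, comes from abelianising a two-generator balanced presentation:

```latex
G \;=\; \langle a,\, b \mid a^{p(m)} b^{q(m)},\; a^{r(m)} b^{s(m)} \rangle
\quad\Longrightarrow\quad
G^{\mathrm{ab}} \;\cong\; \mathbb{Z}^{2} \big/ M\,\mathbb{Z}^{2},
\qquad
M \;=\; \begin{pmatrix} p(m) & q(m) \\ r(m) & s(m) \end{pmatrix}.
```

Since the abelianisation is a quotient of G, such a presentation can only define the trivial group when the quotient above is itself trivial, that is, when |det M| = |p(m)s(m) - q(m)r(m)| = 1; this determinant condition is one elementary constraint that any family of candidate presentations of the trivial group must satisfy.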
Styles APA, Harvard, Vancouver, ISO, etc.
45

Chen, Kuan-Yu, et 陳冠宇. « A Multi-Angle Image Fusion Algorithm for Enhancing the Z-Axis Resolution of Confocal Laser Scanning Microscope ». Thesis, 2013. http://ndltd.ncl.edu.tw/handle/92120298900691948847.

Texte intégral
Résumé :
Doctoral dissertation
National Tsing Hua University
Department of Electrical Engineering
101
The confocal laser scanning microscope (CLSM) is a powerful tool for studying biological specimens three-dimensionally. Compared with other microscopes, CLSM provides images with higher resolution and better contrast. However, the resolution along the Z-axis (the optical axis) is much lower than that along the lateral directions, which may hamper the spatial reliability of the reconstructed three-dimensional volume data of the specimen. One way to increase the resolution along the Z-axis is tilted-view microscopy: by rotating the specimen, image stacks from different observation angles can be acquired with a conventional CLSM, so that information missed from a single direction, due to the poor Z-axial resolution, can be recorded from other directions. Images derived from different observation angles are then combined to reconstruct a single volume dataset with equal lateral and axial resolution. We propose an image fusion algorithm for the multi-angle image stacks acquired by tilted-view microscopy that reconstructs a 3D volume dataset of the specimen with equal lateral and axial resolution. In this algorithm, the image stacks are first deconvolved with a depth-variant deconvolution method to correct the distortions caused by the point spread functions. The deconvolved image stacks are then integrated through a feature-based registration algorithm. Finally, an intensity-based interpolation is applied to predict the information that is not recorded by any of the multi-angle images. As a result, a 3D volume dataset of the specimen with equal lateral and axial resolution is reconstructed, containing both the real points from the multi-angle images and the predicted points not recorded by them.
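A very rough Python sketch of the deconvolve-register-fuse pipeline outlined above, using off-the-shelf Richardson-Lucy deconvolution and translation-only phase-correlation registration as stand-ins; the actual algorithm uses depth-variant deconvolution, feature-based registration of rotated views and intensity-based interpolation, none of which this toy example reproduces.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.restoration import richardson_lucy
from skimage.registration import phase_cross_correlation

def fuse_stacks(stacks, psfs, reference=0):
    """Very simplified multi-view fusion: deconvolve each stack, register it to a
    reference view by translation only, then average the aligned stacks."""
    deconvolved = [richardson_lucy(s, p) for s, p in zip(stacks, psfs)]
    ref = deconvolved[reference]
    aligned = []
    for stack in deconvolved:
        offset, _, _ = phase_cross_correlation(ref, stack)
        aligned.append(shift(stack, offset))
    return np.mean(aligned, axis=0)

# Toy usage with random data standing in for two observation angles of a specimen.
rng = np.random.default_rng(3)
stacks = [rng.random((32, 64, 64)).astype(np.float32) for _ in range(2)]
psf = np.zeros((5, 5, 5))
psf[2, 2, 2] = 1.0                       # identity PSF placeholder
fused = fuse_stacks(stacks, [psf, psf])
print(fused.shape)
```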
Styles APA, Harvard, Vancouver, ISO, etc.
46

Cai, Jhong-Bin, et 蔡忠彬. « Implementation of a Single-Phase Quasi Z-Source Inverter with Indirect Current Control Algorithm for a Reconfigurable PV System ». Thesis, 2014. http://ndltd.ncl.edu.tw/handle/6s77v3.

Texte intégral
Résumé :
Master's thesis
National Kaohsiung University of Applied Sciences
Master's and Doctoral Program, Department of Electrical Engineering
102
A reconfigurable PV system was built based on a single-phase quasi-Z-source inverter and a grid-connected system. A digital signal processor (DSP) is used to implement the indirect current control algorithm of the quasi-Z-source inverter. The DC bus and the output voltage can be effectively regulated through the shoot-through and non-shoot-through modes; a zero-crossing detector circuit with PLL control and power control is then used to connect the inverter in parallel to the grid. In addition, when the grid is in a fault condition, the system can regulate the voltage amplitude and the power angle by switching from grid-connected mode to island mode while employing the indirect current control algorithm. MATLAB/Simulink simulations verify the feasibility of the system model and of the control algorithm. Finally, a prototype quasi-Z-source inverter was built and tested. The results show that the inverter can operate in parallel with the grid and that stand-alone operation can also be achieved through the indirect current control algorithm, with a smooth transition between the two modes.
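For context on the shoot-through regulation mentioned above, this small Python sketch evaluates the textbook quasi-Z-source relations linking the shoot-through duty ratio D0 to the DC-link boost, B = 1/(1 - 2*D0); the numeric values are illustrative assumptions, not parameters of this thesis.

```python
def qzsi_boost_factor(d0):
    """DC-link boost factor of a (quasi-)Z-source inverter under simple boost control."""
    if not 0 <= d0 < 0.5:
        raise ValueError("shoot-through duty ratio must satisfy 0 <= D0 < 0.5")
    return 1.0 / (1.0 - 2.0 * d0)

def required_shoot_through(v_in, v_dc_link):
    """Shoot-through duty ratio D0 needed to boost v_in up to v_dc_link."""
    if v_dc_link <= v_in:
        return 0.0                      # no boosting needed
    return 0.5 * (1.0 - v_in / v_dc_link)

# Illustrative numbers: a PV string sagging to 150 V while the inverter needs a
# 360 V DC link to synthesise a 230 V AC output.
v_in, v_dc = 150.0, 360.0
d0 = required_shoot_through(v_in, v_dc)
print(f"D0 = {d0:.3f}, boost factor B = {qzsi_boost_factor(d0):.2f}")
```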
Styles APA, Harvard, Vancouver, ISO, etc.
47

Antão, Mário Manuel Neto. « The efficiency of bankruptcy predictive models - genetic algorithms approach ». Master's thesis, 2020. http://hdl.handle.net/10362/109389.

Texte intégral
Résumé :
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence
The present dissertation evaluates the contribution of genetic algorithms to improving the performance of bankruptcy prediction models. The state of the art points to the better performance of MDA (Multiple Discriminant Analysis)-based models, which, since 1968, have been the most widely applied in the field of bankruptcy prediction; these models usually resort to ratios commonly used in financial analysis. From a comparative study of (1) logistic regression-based models with forward stepwise feature selection, (2) Altman's Z-Score model (Edward I. Altman, 1983) based on MDA, and (3) logistic regression with genetic algorithms for variable selection, a clear predominance in effectiveness of the latter models can be observed. These new models were developed using 1887 ratios generated a posteriori from 66 known variables drawn from the accounting, financial, operating and macroeconomic analysis of firms. New models are thus presented that are very promising for predicting bankruptcy in the medium to long term, in a context of increasing instability surrounding firms across different countries and sectors.
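A compact, illustrative Python sketch of genetic-algorithm feature selection for a logistic-regression bankruptcy classifier, in the spirit described above; the synthetic data, GA operators and fitness function are assumptions and do not reproduce the dissertation's models or its 1887 ratios.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic stand-in for a table of financial ratios (columns) per firm (rows).
X, y = make_classification(n_samples=1000, n_features=60, n_informative=10,
                           n_redundant=20, random_state=0)

def fitness(mask):
    """Cross-validated AUC of a logistic regression restricted to the selected ratios."""
    if not mask.any():
        return 0.0
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, mask], y, cv=3, scoring="roc_auc").mean()

def evolve(pop_size=20, generations=10, mutation_rate=0.02):
    pop = rng.random((pop_size, X.shape[1])) < 0.2          # random initial subsets
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(X.shape[1]) < mutation_rate  # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best, fitness(best)

mask, auc = evolve()
print(f"selected {mask.sum()} ratios, cross-validated AUC = {auc:.3f}")
```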
Styles APA, Harvard, Vancouver, ISO, etc.
48

Pina, Manuel António de. « Sistema Fotovoltaico Ligado à Rede Usando um Conversor Multinível Quasi-Z do Tipo T ». Master's thesis, 2021. http://hdl.handle.net/10400.26/35682.

Texte intégral
Résumé :
Three-level quasi-Z-source (qZS) inverters based on the T-type topology are especially well suited to grid-connected PV systems. Indeed, they present an important feature for this type of application, namely a buck-boost characteristic, and they are also characterised by high reliability and multilevel operation. However, controllers must be associated with this converter to ensure its best performance. In this context, this thesis proposes a global control system for the three-phase T-type qZS inverter in a grid-connected PV system. A maximum power point tracking (MPPT) algorithm based on a robust integral time-derivative approach is considered. For the output currents, a decoupled current controller associated with a sinusoidal pulse-width modulation (SPWM) modulator is used. The performance of the system is verified and tested through simulation studies, and the results show that it is well adapted to this application.
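For readers unfamiliar with MPPT in general, here is a minimal Python sketch of the classic perturb-and-observe algorithm; note that it is a generic stand-in offered for illustration only, plainly different from the robust integral time-derivative approach the thesis actually proposes.

```python
def perturb_and_observe(read_pv, set_voltage, v_start=300.0, step=2.0, iterations=200):
    """Generic perturb-and-observe MPPT: nudge the operating voltage and keep the
    perturbation direction that increased the extracted PV power."""
    v_ref = v_start
    set_voltage(v_ref)
    v, i = read_pv()
    p_prev = v * i
    direction = 1.0
    for _ in range(iterations):
        v_ref += direction * step
        set_voltage(v_ref)
        v, i = read_pv()
        p = v * i
        if p < p_prev:                # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v_ref

# Toy plant: a fictitious PV string whose power curve peaks near 310 V.
def make_toy_plant():
    state = {"v": 0.0}
    def set_voltage(v):
        state["v"] = v
    def read_pv():
        v = state["v"]
        p = max(0.0, 2500.0 - 2.0 * (v - 310.0) ** 2)   # assumed power curve [W]
        return v, p / max(v, 1e-6)                      # (voltage [V], current [A])
    return read_pv, set_voltage

read_pv, set_voltage = make_toy_plant()
print("tracked voltage:", perturb_and_observe(read_pv, set_voltage))
```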
Styles APA, Harvard, Vancouver, ISO, etc.
49

Hušek, Michal. « Dobývání znalostí z textů při analýze sociálních sítí ». Master's thesis, 2018. http://www.nusl.cz/ntk/nusl-373808.

Texte intégral
Résumé :
Title: Text mining in social network analysis
Author: Bc. Michal Hušek
Department: Department of Theoretical Computer Science and Mathematical Logic
Supervisor: doc. RNDr. Iveta Mrázová, CSc., Department of Theoretical Computer Science and Mathematical Logic
Abstract: Nowadays, social networks represent one of the most important sources of valuable information. This work focuses on mining the data provided by social networks. Multiple data mining techniques are discussed and analysed, namely clustering, neural networks, ranking algorithms and histogram statistics. Most of the mentioned algorithms have been implemented and tested on real-world social network data, and the obtained results have been mutually compared whenever it made sense. For computationally demanding tasks, graphics processing units have been used to speed up calculations on vast amounts of data, e.g. during clustering. The performed tests have confirmed lower time requirements. All the performed analyses are, however, independent of the type of social network involved.
Keywords: data mining, social networks, clustering, neural networks, ranking algorithms, CUDA
Styles APA, Harvard, Vancouver, ISO, etc.
50

Natarajan, Hariharan Meyer-Baese Uwe. « Implementation of chirp z discrete fourier transform on virtex II FPGA ». Diss., 2004. http://etd.lib.fsu.edu/theses/available/etd-04202004-002332/.

Texte intégral
Résumé :
Thesis (M.S.)--Florida State University, 2004.
Advisor: Dr. Uwe Meyer-Baese, Florida State University, College of Engineering, Dept. of Electrical and Computer Engineering. Title and description from dissertation home page (viewed Apr. 18, 2005). Document formatted into pages; contains xi, 94 pages. Includes bibliographical references.
Styles APA, Harvard, Vancouver, ISO, etc.
