To see the other types of publications on this topic, follow the link: ICM algorithm.

Dissertations / Theses on the topic 'ICM algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'ICM algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Frontera, Antonio. "ICD Algorithms in the management of arrhythmias : Pitfalls and advancements." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0324.

Full text
Abstract:
The objective of my research was to investigate the manner in which clinical devices, such as ICDs and pacemakers, detect the most common arrhythmias encountered in clinical practice. Nowadays, specific discrimination algorithms are implemented in current devices. Pitfalls in the management of patients with arrhythmias are not uncommon; most often these involve errors in detection and discrimination, which may promote and/or perpetuate the arrhythmia or trigger inappropriate therapies such as shocks. Indeed, the incorrect discrimination of malignant arrhythmias could have a significant impact on morbidity and mortality. Better management of arrhythmias should consider improvements to the current algorithms of the proprietary ICDs implanted in clinical practice.
APA, Harvard, Vancouver, ISO, and other styles
2

El, Ghouat Mohamed Abdelwafi. "Classification markovienne pyramidale : adaptation de l'algorithme ICM aux images de télédétection." Sherbrooke : Université de Sherbrooke, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

McDonald, Andrew James. "An ice-tracking algorithm applied to the North water polynya." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0004/MQ44918.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ibn, Khedher Hatem. "Optimization and virtualization techniques adapted to networking." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0007/document.

Full text
Abstract:
In this thesis, we designed and implemented a tool that performs optimizations to reduce the number of migrations necessary for a delivery task. We present our work on virtualization in the context of replicating video content servers. The work covers the design of a virtualization architecture together with several algorithms that can reduce overall long-term costs and improve system performance. The thesis is divided into several parts: optimal solutions; greedy (heuristic) solutions, for reasons of scalability; orchestration of services; multi-objective optimization; service planning in complex active networks; and the integration of the algorithms into a real platform. The thesis is supported by models, implementations and simulations that provide results showcasing our work, quantify the importance of evaluating optimization techniques, and analyze the trade-off between reducing operator cost and enhancing the end-user satisfaction index.
APA, Harvard, Vancouver, ISO, and other styles
5

Aydin, Ahmet Tarik. "Orbit selection and EKV guidance for space-based ICBM intercept." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Sep%5FAydin.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pomerleau, François. "Registration algorithm optimized for simultaneous localization and mapping." Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1465.

Full text
Abstract:
Building maps of an unknown environment while keeping track of the current position is a major step toward safe and autonomous robot navigation. Within the last 20 years, Simultaneous Localization And Mapping (SLAM) became a topic of great interest in robotics. The basic idea of this technique is to combine proprioceptive robot motion information with external environmental information to minimize global positioning errors. Because the robot is moving in its environment, exteroceptive data comes from different points of view and must be expressed in the same coordinate system to be combined; this process is called registration. Iterative Closest Point (ICP) is a registration algorithm with very good performance in several 3D model reconstruction applications, and was recently applied to SLAM. However, SLAM has specific needs in terms of real-time performance and robustness compared to 3D model reconstruction, leaving room for optimizations specific to robot mapping. After reviewing existing SLAM approaches, this thesis introduces a new registration variant called Kd-ICP. This registration technique iteratively decreases the error between misaligned point clouds without extracting specific environmental features. Results demonstrate that the new rejection technique used to achieve map registration is more robust to large initial positioning errors. Experiments with simulated and real environments suggest that Kd-ICP is more robust than other ICP variants. Moreover, Kd-ICP is fast enough for real-time applications and is able to deal with sensor occlusions and partially overlapping maps. Achieving fast and robust local map registration opens the door to new opportunities in SLAM: it becomes feasible to minimize the accumulation of robot positioning errors, to fuse local environmental information, to reduce memory usage when the robot revisits the same location, and to evaluate the network constraints needed to minimize global mapping errors.
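The point-to-point core of the ICP loop described in this abstract can be sketched in a few lines of NumPy. This is a generic illustration of the technique, not the thesis's Kd-ICP variant; the brute-force matching and the absence of a rejection step are simplifying assumptions:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match nearest neighbours,
    then solve for the rigid transform with the Kabsch/SVD method."""
    # Brute-force nearest-neighbour correspondences (fine for toy sizes;
    # Kd-ICP uses a kd-tree and rejects bad correspondences instead).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Optimal rotation/translation between the matched sets.
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    """Iteratively align src onto dst; returns the aligned copy of src."""
    for _ in range(iters):
        R, t = icp_step(src, dst)
        src = src @ R.T + t
    return src
```

On exact data with a small initial misalignment this converges to machine precision in a handful of iterations; with noise, occlusion or partial overlap, rejecting bad correspondences, which the abstract highlights, becomes the hard part.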
APA, Harvard, Vancouver, ISO, and other styles
7

Mulligan, Shaun R. "A Comparison of ICA versus genetic algorithm optimized ICA for use in non-invasive muscle tissue EMG." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/13149.

Full text
Abstract:
Includes bibliographical references. The patent developed by Dr. L. John [1] allows for the detection of deep muscle activation through the combination of specially positioned monopolar surface electromyography (sEMG) electrodes and a blind source separation algorithm. This concept was then proved by Morowasi and John [2] in a 12-electrode prototype system around the bicep. The proof of concept showed that it was possible to extract the deep-tissue activity of the brachialis muscle in the upper arm; however, the effect of surface electrode positioning and of the effective number of electrodes on signal quality remained unclear. This research aims to extend that work. Here, a genetic algorithm (GA) is implemented on top of the Fast Independent Component Analysis (FastICA) algorithm to reduce the number of electrodes needed to isolate the activity of all muscles in the upper arm, including deep tissue. The GA selects electrodes based on the amount of significant information they contribute to the ICA solution; in doing so, a reduced electrode set is generated and alternative electrode positions are identified. This allows a near-optimal electrode configuration to be produced for each user. The benefits of this approach are: 1. The generalized electrode array and this algorithm can select a near-optimal electrode arrangement with very minimal understanding of the underlying anatomy. 2. It can correct for small anatomical differences between test subjects and act as a calibration phase for individuals. As with any design there are also disadvantages, such as that the electrode placement needs to be customised specifically for each user, and this process needs to be conducted starting from a higher number of electrodes.
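The electrode-selection loop described here can be illustrated with a toy genetic algorithm over a boolean electrode mask. The per-electrode "information scores" and the penalty weight below are invented stand-ins for the thesis's FastICA-derived fitness (assumptions, not the actual method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-electrode information scores, standing in for the
# amount each channel contributes to the ICA solution (the real fitness
# in the thesis is derived from FastICA itself).
N_ELECTRODES = 12
info = rng.uniform(0.0, 1.0, N_ELECTRODES)

def fitness(mask):
    # Reward captured information, penalise every extra electrode.
    return info[mask].sum() - 0.3 * mask.sum()

def evolve(pop_size=40, gens=60, p_mut=0.05):
    pop = rng.random((pop_size, N_ELECTRODES)) < 0.5
    best = pop[0].copy()
    for _ in range(gens):
        scores = np.array([fitness(m) for m in pop])
        order = np.argsort(scores)[::-1]
        if scores[order[0]] > fitness(best):       # keep the best ever seen
            best = pop[order[0]].copy()
        elite = pop[order[: pop_size // 2]]        # truncation selection
        # Uniform crossover between random elite parents.
        pa = elite[rng.integers(len(elite), size=pop_size)]
        pb = elite[rng.integers(len(elite), size=pop_size)]
        cross = rng.random((pop_size, N_ELECTRODES)) < 0.5
        child = np.where(cross, pa, pb)
        child ^= rng.random((pop_size, N_ELECTRODES)) < p_mut  # bit-flip mutation
        pop = child
    return best

best = evolve()
```

Because each electrode contributes independently here, the optimum is simply "keep every channel whose score exceeds the penalty", which makes the sketch easy to verify; the real EMG fitness is not separable, which is why a GA is needed at all.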
APA, Harvard, Vancouver, ISO, and other styles
8

Lillrank, Dan. "Registration algorithms for matching laser scans in robotics application." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234215.

Full text
Abstract:
In this study, we compare different variations of the Iterative Closest Point (ICP) algorithm for the purpose of matching laser scans generated by an indoor robot. The study focuses mainly on investigating the maximum difference in viewpoint the algorithms can handle, and on whether they can be used for robot-pose estimation by matching laser scan data generated at different positions in a home. The study was carried out at Electrolux, using the robotic vacuum cleaner PUREi9 to gather the dataset used for the comparison. The ICP algorithm and its variations can achieve improved performance by fine-tuning heuristics and correspondences, which often requires substantial manual assistance, with the tuning result varying case by case. This study limits such fine-tuning to standard parameters in order to compare standard implementations, and presents its result more as a guideline toward which version and data format are suitable for our use case. The result confirms the superiority of the Generalized ICP (GICP) version over the other versions compared in this report: GICP performed better at estimating the correct transform for both the translational and the rotational distance between the point clouds. Two data formats were also compared, one aiming to create a dense point cloud and another with a more sparse point cloud. By comparing the results on these two data formats, we also tested the implicit assumption of the ICP algorithm that the point cloud has to be dense for the algorithm to perform well. From the results obtained, we conclude that this implicit assumption does not affect the performance of the algorithms for our usage. Keywords: Iterative Closest Point, ICP
APA, Harvard, Vancouver, ISO, and other styles
9

Ardam, Nagaraju. "Study of ASA Algorithms." Thesis, Linköpings universitet, Elektroniksystem, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70996.

Full text
Abstract:
Hearing aid devices are used to help people with hearing impairment. The number of people that require hearing aid devices is probably constant over the years; however, the number of people that now have access to hearing aid devices is increasing rapidly. Hearing aid devices must be small, consume very little power, and be fairly accurate, even though it is often more important to the user that the hearing aid looks good (is discreet). Once the hearing aid device is prescribed, the user needs to train and adjust the device to compensate for the individual impairment. Within the framework of this project, we are researching hearing aid devices that can be trained by the hearing-impaired person him- or herself. This project is about finding a suitable noise-cancellation algorithm for the hearing aid device. We consider several types of algorithms, such as microphone-array signal processing, Independent Component Analysis (ICA) based on a double microphone, called Blind Source Separation (BSS), and the DRNPE algorithm. We ran these current, sophisticated and robust algorithms against noise backgrounds such as cocktail-party noise, street, public places, train and babble situations to test their efficiency. The BSS algorithm performed well in some situations and gave average results in others, whereas the one-microphone approach gave steady results in all situations; the output is good enough to listen to the targeted audio. The functionality and performance of the proposed algorithm are evaluated with different non-stationary noise backgrounds. From the performance results it can be concluded that, using the proposed algorithm, we are able to reduce the noise to a certain level. SNR, system delay, minimum error and audio perception are the vital parameters considered in evaluating the performance of the algorithms; based on these parameters, an algorithm is suggested for hearing aids. Keywords: Hearing-Aid
APA, Harvard, Vancouver, ISO, and other styles
10

Zamberlan, Pietro. "Quantum software per l'algebra lineare: l'algoritmo HHL e l'IBM quantum experience." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19398/.

Full text
Abstract:
In the coming years, quantum computers with 50-100 qubits will be able to perform tasks beyond the capabilities of today's classical supercomputers, but noise in the quantum logic gates will limit the size of the circuits that can be executed reliably. This kind of technology will make it possible to run new algorithms, or classically known ones, more efficiently than on today's computers; examples include Shor's algorithm for prime factorization and Grover's algorithm for searching an unordered database. This thesis discusses the HHL algorithm (after its proposers Harrow, Hassidim and Lloyd) for solving a linear system, studying the complete algorithm and the subroutines that compose it, both on classical simulators and on real quantum processors made available through the IBM Quantum Experience. For a suitably chosen 2 × 2 matrix, the algorithm returns the correct solution with a high degree of precision on classical simulators (reaching a fidelity of 99%) but with lower accuracy on real qubits (a fidelity of 84%).
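The fidelity figures quoted above compare the state returned by the algorithm with the classically computed solution. A minimal sketch of that comparison for a hypothetical, well-conditioned 2 × 2 Hermitian system (classical solve plus state fidelity; this is not an HHL implementation, and the matrix is an assumption):

```python
import numpy as np

# A hypothetical well-conditioned 2x2 Hermitian system (the thesis's
# actual matrix is not reproduced here).
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, 0.0])

# Classical reference solution, normalised: HHL prepares |x> proportional
# to A^{-1}|b>, so only the direction of x is compared.
x = np.linalg.solve(A, b)
x_ref = x / np.linalg.norm(x)

def fidelity(psi, phi):
    """State fidelity |<psi|phi>|^2 between normalised state vectors."""
    return abs(np.vdot(psi, phi)) ** 2

# A noisy "hardware" state: the reference with a small rotation error.
eps = 0.1
noisy = x_ref + eps * np.array([x_ref[1], -x_ref[0]])
noisy /= np.linalg.norm(noisy)

print(round(fidelity(x_ref, x_ref), 3), round(fidelity(x_ref, noisy), 3))
# → 1.0 0.99
```

A 10% rotation error still leaves a 99% fidelity, which is why fidelity close to 1 on a simulator does not by itself guarantee a usable solution vector on hardware.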
APA, Harvard, Vancouver, ISO, and other styles
11

Shahrazad, Mohammad. "Optimal allocation of FACTS devices in power networks using imperialist competitive algorithm (ICA)." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11445.

Full text
Abstract:
Due to high energy demand and restrictions on the installation of new transmission lines, using Flexible AC Transmission System (FACTS) devices is inevitable. In power system analysis, transferring high-quality power is essential; one of the important factors for efficiency and operation is the maximum power transfer capability. FACTS devices are used for controlling the voltage, stability, power flow and security of transmission lines. However, it is necessary to find the optimal location for these devices in power networks, and many optimization techniques have been deployed to do so. There are several varieties of FACTS devices with different characteristics that are used for different purposes. The imperialist competitive algorithm (ICA) is a recently developed optimization technique that is widely used in power systems. This study presents an approach to find the optimal location and size of FACTS devices in power networks using the imperialist competitive algorithm. ICA is a heuristic algorithm for global optimization searches based on the concept of imperialistic competition and modelled on human socio-political evolution; it can be categorized on the same level as Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) techniques. In this study, the enhancement of the voltage profile, stability, loss reduction and the increase of load-ability were also investigated. To apply FACTS devices to power networks, a MATLAB program was used in which all power network parameters were defined and analysed. The IEEE 30-bus system and the IEEE 68-bus, 16-machine system are used as case studies. All the simulation results, including voltage profile improvement and convergence characteristics, are illustrated.
The results show the advantages of the imperialist competitive algorithm technique over the conventional approaches.
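A minimal sketch of the imperialist competitive metaheuristic itself, reduced to its essentials (assimilation toward imperialists plus random "revolutions") and applied to a stand-in sphere objective rather than the thesis's FACTS power-flow objective; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(x):
    # Stand-in objective (sphere function); the thesis optimises a
    # power-flow quantity such as losses or voltage deviation instead.
    return (x ** 2).sum(axis=-1)

def ica_minimise(dim=2, n_countries=60, n_imp=6, iters=200, beta=1.6):
    pop = rng.uniform(-5, 5, (n_countries, dim))
    for _ in range(iters):
        pop = pop[np.argsort(cost(pop))]             # best countries first
        imps, cols = pop[:n_imp], pop[n_imp:]        # imperialists / colonies
        owners = imps[np.arange(len(cols)) % n_imp]  # round-robin empires
        # Assimilation: colonies move a random fraction towards their
        # imperialist, with deliberate overshoot (beta > 1).
        cols = cols + beta * rng.random((len(cols), 1)) * (owners - cols)
        # Global revolution: reseed a few colonies uniformly at random.
        revolt = rng.random(len(cols)) < 0.05
        cols[revolt] = rng.uniform(-5, 5, (revolt.sum(), dim))
        # Local revolution: jitter a few colonies around their imperialist.
        jit = rng.random(len(cols)) < 0.10
        cols[jit] = owners[jit] + rng.normal(scale=0.1, size=(jit.sum(), dim))
        pop = np.vstack([imps, cols])
    return pop[np.argmin(cost(pop))]

best = ica_minimise()
```

The full algorithm also lets empires compete for each other's colonies and eliminates collapsed empires; that bookkeeping is omitted here for brevity.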
APA, Harvard, Vancouver, ISO, and other styles
12

Sanz, Aceituno Angel Luis. "Control algorithms for energy savings in irregularly occupied buildings." Thesis, Högskolan i Gävle, Avdelningen för bygg- energi- och miljöteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-15155.

Full text
Abstract:
Heating, Ventilation and Air Conditioning (HVAC) systems are nowadays present in almost every new building, and developing or improving control strategies for them is very common, with the aim of achieving greater energy efficiency while requiring fewer input parameters from the user. In this project, new control strategies based on previous theoretical models have been used with a new approach, in order to find a good solution for irregularly occupied spaces. In this new approach, a feed-forward filter with a fixed preheating time, using an algorithm based on an identified model, calculates by how many degrees the room temperature can be decreased and regulates the power of the radiators accordingly. The results of this project show that the chosen model has to be changed, but the idea is interesting: simulations of the reference building give, with a preheating time of 2 hours, around 3 °C of temperature reduction during 18 days and savings of 33% of the heat energy needed for the whole month. Considering that buildings and the residential sector currently account for 40 percent of Sweden's energy consumption, and around 25 percent in other countries such as the USA or Spain, and that irregularly occupied spaces make up roughly 10% of governmental, institutional, academic or public buildings, the potential savings are not negligible. The evaluation of this control strategy, with its mathematical model and its results during the month of January as well as the behavior of the system over the year, was carried out with the help of the IDA program for simulating the reference building and its energy system.
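The feed-forward set-back idea can be illustrated with a toy first-order room model: lower the set-point while the space is unoccupied, restore it a fixed preheating time before occupancy, and compare heat energy against always-on heating. Every parameter below is illustrative, not identified from the reference building:

```python
import numpy as np

# Toy first-order room model: C*dT/dt = P - U*(T - T_OUT).
# All parameters are illustrative, not identified from a real building.
C, U, T_OUT = 2.0e7, 500.0, -5.0        # heat capacity J/K, loss W/K, degC
T_SET, SETBACK = 21.0, 3.0              # occupied set-point and allowed drop
P_MAX, GAIN, DT = 20_000.0, 50_000.0, 60.0  # heater limit W, P-gain W/K, step s

def simulate(occupied, preheat_steps):
    """Thermostat with set-back; the set-point is restored preheat_steps
    before occupancy (the fixed preheating time of the abstract)."""
    T, energy = T_SET, 0.0
    for k in range(len(occupied)):
        soon = any(occupied[k:k + preheat_steps + 1])   # feed-forward lookahead
        target = T_SET if soon else T_SET - SETBACK
        # Saturated proportional controller on the radiators.
        P = min(max(GAIN * (target - T), 0.0), P_MAX)
        T += DT * (P - U * (T - T_OUT)) / C             # explicit Euler step
        energy += P * DT
    return energy

# One winter day in minute steps, occupied 08:00-18:00; 2 h preheat as in the text.
occ = np.zeros(1440, dtype=bool)
occ[8 * 60:18 * 60] = True
e_setback = simulate(occ, preheat_steps=120)
e_always = simulate(np.ones(1440, dtype=bool), preheat_steps=0)
saving = 1.0 - e_setback / e_always
```

With these made-up parameters the single-day saving is a few percent rather than the 33% the abstract reports for the identified building model; the point of the sketch is only the control structure, not the magnitude.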
APA, Harvard, Vancouver, ISO, and other styles
13

Alexander, Jeremy B. "Enhancement of the daytime goes-based aircraft icing potential algorithm using MODIS /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Mar%5FAlexander.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Sothirajah, Shobana. "Clinical Algorithms for Maintaining Asthma Control." Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/3546.

Full text
Abstract:
Rationale: Asthma management aims to achieve optimal control on the minimal effective dose of medication. We assessed the effectiveness of two algorithms to guide ICS dose in well-controlled patients on ICS+LABA in a double-blind study, comparing dose adjustment guided by exhaled nitric oxide (eNO) to a clinical care algorithm (CCA) based on symptoms and lung function. Methods: We randomised non-smoking adult asthmatics on a minimum FP dose of 100 μg daily + LABA to ICS adjustment using eNO or CCA, assessed over 5 visits during 8 months of treatment. Primary endpoints were asthma-free days and asthma-related quality of life (QOL). Analysis was by mixed model regression and generalised estimating equations with log link. Results: 69 subjects were randomised (eNO: 34, CCA: 35) and 58 completed the study. At baseline, mean FEV1 was 94% pred., mean eNO (200 ml/sec) 7.1 ppb, and median ACQ6 score 0.33. Median ICS dose was 500 μg (IQR 100-500) at baseline and 100 μg on both the eNO (IQR 100-200) and CCA arms (IQR 100–100) at the end of the study. There were no significant differences between the eNO and CCA groups in asthma-free days (RR = 0.92, 95% CI 0.8–1.01), AQL (RR for AQL below median = 0.95, 95% CI 0.8–1.1) or exacerbation-free days (HR = 1.03, 95% CI 0.6–1.7). Neither clinic FEV1 (overall mean difference FEV1 % pred. -0.24%, 95% CI -2.2–1.7) nor a.m. PEF (mean difference 1.94 L/min, 95% CI -2.9–6.8) was significantly different. Similar proportions of subjects were treated for ≥1 exacerbation (eNO: 50%, 95% CI 32.1–67.9; CCA: 60%, 95% CI 43.9–76.2). Conclusion: Substantial reductions in ICS doses were achieved in well-controlled asthmatics on ICS+LABA, with no significant differences in outcomes between the eNO- and clinically based algorithms.
APA, Harvard, Vancouver, ISO, and other styles
15

Sothirajah, Shobana. "Clinical Algorithms for Maintaining Asthma Control." University of Sydney, 2008. http://hdl.handle.net/2123/3546.

Full text
Abstract:
Master of Science in Medicine.
Rationale: Asthma management aims to achieve optimal control on the minimal effective dose of medication. We assessed the effectiveness of two algorithms to guide ICS dose in well-controlled patients on ICS+LABA in a double-blind study, comparing dose adjustment guided by exhaled nitric oxide (eNO) to a clinical care algorithm (CCA) based on symptoms and lung function. Methods: We randomised non-smoking adult asthmatics on a minimum FP dose of 100 μg daily + LABA to ICS adjustment using eNO or CCA, assessed over 5 visits during 8 months of treatment. Primary endpoints were asthma-free days and asthma-related quality of life (QOL). Analysis was by mixed model regression and generalised estimating equations with log link. Results: 69 subjects were randomised (eNO: 34, CCA: 35) and 58 completed the study. At baseline, mean FEV1 was 94% pred., mean eNO (200 ml/sec) 7.1 ppb, and median ACQ6 score 0.33. Median ICS dose was 500 μg (IQR 100-500) at baseline and 100 μg on both the eNO (IQR 100-200) and CCA arms (IQR 100–100) at the end of the study. There were no significant differences between the eNO and CCA groups in asthma-free days (RR = 0.92, 95% CI 0.8–1.01), AQL (RR for AQL below median = 0.95, 95% CI 0.8–1.1) or exacerbation-free days (HR = 1.03, 95% CI 0.6–1.7). Neither clinic FEV1 (overall mean difference FEV1 % pred. -0.24%, 95% CI -2.2–1.7) nor a.m. PEF (mean difference 1.94 L/min, 95% CI -2.9–6.8) was significantly different. Similar proportions of subjects were treated for ≥1 exacerbation (eNO: 50%, 95% CI 32.1–67.9; CCA: 60%, 95% CI 43.9–76.2). Conclusion: Substantial reductions in ICS doses were achieved in well-controlled asthmatics on ICS+LABA, with no significant differences in outcomes between the eNO- and clinically based algorithms.
APA, Harvard, Vancouver, ISO, and other styles
16

Wilson, Katherine Jean. "Ice motion dynamics in the 1998 North Water polynya, NOW, season, validation and monitoring using RADARSAT-1 SCW data and the Canadian Ice Service ice tracking algorithm." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ57695.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Lee, Francis. "A parallel implementation of the CIS sea-ice motion tracking algorithm for coarse-grained multicomputers." Carleton University Dissertation (Computer Science), Ottawa, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
18

Tanriverdi, Gunes. "ARMA Model Based Clutter Estimation and Its Effect on Clutter Suppression Algorithms." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614360/index.pdf.

Full text
Abstract:
Radar signal processing techniques aim to suppress clutter to enable target detection, and many clutter-suppression techniques have been developed in the literature to improve detection performance. Among these methods, the most widely known is MTI plus coherent integration, which gives sufficient radar performance in various scenarios. However, when the correlation coefficient of the clutter is small, or when the spectral separation between the target and the clutter is small, classical approaches to clutter suppression fall short. In this study, we consider the ARMA spectral estimation performance in sea clutter modelled by a compound K-distribution, through Monte Carlo simulations. The method is applied under varying conditions of clutter spikiness and autocorrelation sequences (ACS), depending on the radar operation. The performance of clutter suppression using the ARMA spectral estimator, called ARMA-CS in this work, is analyzed under varying ARMA model orders. To compare the clutter suppression of ARMA-CS with that of conventional methods, we use the improvement factor (IF), the ratio between the output Signal-to-Interference Ratio (SIR) and the input SIR, as the performance measure. In all cases, ARMA-CS outperforms conventional clutter suppression methods when the correlation among clutter samples is small or the spectral separation between target and clutter is small.
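The improvement-factor metric used above is straightforward to reproduce for the classical baseline: a two-pulse MTI canceller applied to synthetic correlated clutter. The sketch below uses a simple AR(1) clutter model rather than the compound K-distribution of the study (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic slow-time radar data: a moving target plus correlated clutter.
N, rho = 4096, 0.95          # pulses, pulse-to-pulse clutter correlation
fd = 0.25                    # target Doppler in cycles per pulse
target = 0.5 * np.exp(2j * np.pi * fd * np.arange(N))
clutter = np.empty(N, dtype=complex)
clutter[0] = rng.normal() + 1j * rng.normal()
for k in range(1, N):        # AR(1) clutter: highly correlated, zero-Doppler
    clutter[k] = rho * clutter[k - 1] + np.sqrt(1.0 - rho ** 2) * (
        rng.normal() + 1j * rng.normal())
received = target + clutter  # what the radar actually sees

def sir(sig, interf):
    """Signal-to-interference ratio from average powers."""
    return (np.abs(sig) ** 2).mean() / (np.abs(interf) ** 2).mean()

# Two-pulse MTI canceller y[k] = x[k] - x[k-1]. It is linear, so it is
# applied to each component separately to measure input/output SIR exactly.
t_out = target[1:] - target[:-1]
c_out = clutter[1:] - clutter[:-1]
improvement = sir(t_out, c_out) / sir(target, clutter)  # IF = SIR_out / SIR_in
```

With rho = 0.95 the canceller notches the highly correlated clutter and the improvement factor comes out around an order of magnitude; shrinking rho toward zero (weakly correlated, spiky clutter) collapses the advantage, which is exactly the regime where the abstract reports ARMA-CS winning.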
APA, Harvard, Vancouver, ISO, and other styles
19

Reiser, Fabian [Verfasser]. "Remote Sensing of Antarctic Sea Ice: A Novel Lead Retrieval Algorithm and Large-Scale Spatio-Temporal Variability of Sea Ice Concentration / Fabian Reiser." Trier : Universität Trier, 2020. http://d-nb.info/1230135065/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ibn, Khedher Hatem. "Optimization and virtualization techniques adapted to networking." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0007.

Full text
Abstract:
In this thesis, we designed and implemented a tool that performs optimizations to reduce the number of migrations necessary for a delivery task. We present our work on virtualization in the context of replicating video content servers. The work covers the design of a virtualization architecture together with several algorithms that can reduce overall long-term costs and improve system performance. The thesis is divided into several parts: optimal solutions; greedy (heuristic) solutions, for reasons of scalability; orchestration of services; multi-objective optimization; service planning in complex active networks; and the integration of the algorithms into a real platform. The thesis is supported by models, implementations and simulations that provide results showcasing our work, quantify the importance of evaluating optimization techniques, and analyze the trade-off between reducing operator cost and enhancing the end-user satisfaction index.
APA, Harvard, Vancouver, ISO, and other styles
21

Beitsch, Alexander [Verfasser], and Lars [Akademischer Betreuer] Kaleschke. "Uncertainties of a near 90 GHz sea ice concentration retrieval algorithm / Alexander Beitsch. Betreuer: Lars Kaleschke." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2014. http://d-nb.info/1064077005/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Stecík, Július. "Algoritmy ve správě barev." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220212.

Full text
Abstract:
This thesis briefly discusses the issues of color perception and the effects associated with it. It further describes the color models, and their mathematical definitions, that are used by color management, and briefly analyzes the important elements of an ICC profile. In the second part, two Java applications were designed and programmed. The first evaluates a visible spectrum and graphically demonstrates the procedure for obtaining trichromatic information from this spectrum. The second application analyzes an ICC profile and derives the gamut of the described device.
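The trichromacy computation that the first application demonstrates reduces to integrating the spectrum against the three colour-matching functions. The sketch below uses Gaussian stand-ins for the CIE x̄, ȳ, z̄ curves (an assumption; real code would load the CIE 1931 tables):

```python
import numpy as np

# Wavelength grid (nm) and a flat, equal-energy test spectrum.
lam = np.arange(380.0, 781.0, 5.0)
dlam = 5.0
spectrum = np.ones_like(lam)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# Gaussian stand-ins for the CIE colour-matching functions (shapes and
# weights are rough illustrations, not the standard tables).
xbar = 1.056 * gauss(599.0, 38.0) + 0.362 * gauss(442.0, 16.0)
ybar = 1.014 * gauss(556.0, 47.0)
zbar = 1.839 * gauss(446.0, 21.0)

# Tristimulus values X = sum S(l) * xbar(l) * dl, etc. (rectangle rule).
X = (spectrum * xbar).sum() * dlam
Y = (spectrum * ybar).sum() * dlam
Z = (spectrum * zbar).sum() * dlam

# Chromaticity coordinates, the 2-D colour location usually plotted.
x, y = X / (X + Y + Z), Y / (X + Y + Z)
```

With the real CIE tables an equal-energy spectrum lands at (x, y) ≈ (0.333, 0.333); the Gaussian stand-ins land nearby but not exactly there, which is why production colour-management code ships the tabulated functions.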
APA, Harvard, Vancouver, ISO, and other styles
23

Wilson, Katherine Jean. "Ice motion dynamics in the 1998 North Water polynya (NOW) season; validation and monitoring using RADARSAT-1 SCW data and the Canadian ice Tracking algorithm." Carleton University Dissertation (Geography), Ottawa, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
24

Schmuland, Todd E. "Exploiting Parallel Processing Techniques for Implementation of Wideband MUSIC Algorithm on the IBM Cell Broadband Engine Processor." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1271273869.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Guimarães, A. A. R. "Correspondência entre regiões de imagens por meio do algoritmo iterative closest point (ICP)." Biblioteca Digital de Teses e Dissertações da FEI, 2015. http://sofia.fei.edu.br:8080/pergamumweb/vinculos/000010/000010fb.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Azzam, Noureddine. "Contribution à l'amélioration de la qualité des états de surfaces des prothèses orthopédiques." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4057/document.

Full text
Abstract:
Commonly, knee prostheses are composed of two parts, fixed respectively on the femur and the tibia, and a third one called the intercalary. During the manufacturing process of these components, distortions appear in the roughcast workpiece geometry. Prosthesis manufacturers therefore choose to ensure the nominal thickness of the prosthesis by removing a constant thickness from the roughcast workpiece, an operation that is generally carried out manually. The aim of this thesis is to contribute to the automation of these manual operations by providing a method to adapt the machining toolpaths to the geometrical variations of the target surface. The goal of this research work is to adapt a machining toolpath computed on a nominal model so as to remove a constant thickness from a measured roughcast surface. The proposed method starts with an alignment step of the measured surface onto the nominal toolpath using an ICP algorithm. Subsequently, the nominal toolpath is deformed to remove the desired thickness from the measured rough surface, defined in the presented case by an STL model. Naturally, the discontinuities of this type of model induce the appearance of the STL's facet pattern on the adapted toolpath and thus on the machined workpiece. To limit this problem and improve the quality of the realized surface, a toolpath smoothing method is therefore proposed. To validate the theoretical developments of this work, tests were carried out on a five-axis machine for the roughing of femoral components of a unicompartmental knee prosthesis.
APA, Harvard, Vancouver, ISO, and other styles
27

Fenton, Ronald Christopher. "A Ladar-Based Pose Estimation Algorithm for Determining Relative Motion of a Spacecraft for Autonomous Rendezvous and Dock." DigitalCommons@USU, 2008. https://digitalcommons.usu.edu/etd/69.

Full text
Abstract:
Future autonomous space missions will require autonomous rendezvous and docking operations. The servicing spacecraft must be able to determine the relative 6 degree-of-freedom (6 DOF) motion between the vehicle and the target spacecraft. One method to determine the relative 6 DOF position and attitude is 3D ladar imaging. Ladar sensor systems can capture close-proximity range images of the target spacecraft, producing 3D point-cloud data sets. These sequentially collected point-cloud data sets were then registered with one another using a point-correspondence-less variant of the Iterative Closest Point (ICP) algorithm to determine the relative 6 DOF displacements. Simulation experiments indicated that the mean-squared error (MSE), angular error, mean, and standard deviations for position and orientation estimates did not vary as a function of position and attitude, and met most minimum angular and translational error requirements for rendezvous and docking. Furthermore, the computational times required by this algorithm were comparable to previously reported point-to-point and point-to-plane ICP variants for single iterations once the initialization had already been performed.
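The point-cloud registration underlying such pose estimation can be sketched as a plain point-to-point ICP loop, a generic textbook version with brute-force nearest-neighbour correspondences and a Kabsch rigid-transform solve, not the thesis's correspondence-less variant; all names are illustrative:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, tol=1e-10):
    """Align point cloud src to dst; returns the transformed cloud and MSE history."""
    cur, errors = src.copy(), []
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences (O(N*M))
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
        errors.append(float(((cur - nn) ** 2).mean()))
        if len(errors) > 1 and abs(errors[-2] - errors[-1]) < tol:
            break
    return cur, errors
```

Like all local ICP variants, this converges only from a reasonable initial guess, which is why the thesis treats initialization separately from per-iteration cost.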
APA, Harvard, Vancouver, ISO, and other styles
28

Williamson, Andrew Graham. "Remote sensing of rapidly draining supraglacial lakes on the Greenland Ice Sheet." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/276910.

Full text
Abstract:
Supraglacial lakes in the ablation zone of the Greenland Ice Sheet (GrIS) often drain rapidly (in hours to days) by hydraulically-driven fracture (“hydrofracture”) in the summer. Hydrofracture can deliver large meltwater volumes to the ice-bed interface and open up surface-to-bed connections, thereby routing surface meltwater to the subglacial system, altering basal water pressures and, consequently, the velocity profile of the GrIS. The study of rapidly draining lakes is thus important for developing coupled hydrology and ice-dynamics models, which can help predict the GrIS’s future mass balance. Remote sensing is commonly used to identify the location, timing and magnitude of rapid lake-drainage events for different regions of the GrIS and, with the increased availability of high-quality satellite data, may be able to offer additional insights into the GrIS’s surface hydrology. This study uses new remote-sensing datasets and develops novel analytical techniques to produce improved knowledge of rapidly draining lake behaviour in west Greenland over recent years. While many studies use 250 m MODerate-resolution Imaging Spectroradiometer (MODIS) imagery to monitor intra- and inter-annual changes to lakes on the GrIS, no existing research with MODIS calculates changes to individual and total lake volume using a physically-based method. The first aim of this research is to overcome this shortfall by developing a fully-automated lake area and volume tracking method (“the FAST algorithm”). For this, various methods for automatically calculating lake areas and volumes with MODIS are tested, and the best techniques are incorporated into the FAST algorithm. The FAST algorithm is applied to the land-terminating Paakitsoq and marine-terminating Store Glacier regions of west Greenland to investigate the incidence of rapid lake drainage in summer 2014.
The validation and application of the FAST algorithm show that lake areas and volumes (using a physically-based method) can be calculated accurately using MODIS, that the new algorithm can identify rapidly draining lakes reliably, and that it therefore has the potential to be used widely across the GrIS to generate novel insights into rapidly draining lakes. The controls on rapid lake drainage remain unclear, making it difficult to incorporate lake drainage into models of GrIS hydrology. The second aspect of this study therefore investigates whether various hydrological, morphological, glaciological and surface-mass-balance controls can explain the incidence of rapid lake drainage on the GrIS. These potential controlling factors are examined within an Exploratory Data Analysis statistical technique to elicit statistical similarities and differences between the rapidly and non-rapidly draining lake types. The results show that the lake types are statistically indistinguishable for almost all factors, except lake area. It is impossible, therefore, to elicit an empirically-supported, deterministic method for predicting hydrofracture in models of GrIS hydrology. A frequent problem in remote sensing is the need to trade-off high spatial resolution for low temporal resolution, or vice versa. The final element of this thesis overcomes this problem in the context of monitoring lakes on the GrIS by adapting the FAST algorithm (to become “the FASTER algorithm”) to use with a combined Landsat 8 and Sentinel-2 satellite dataset. The FASTER algorithm is applied to a large, predominantly land-terminating region of west Greenland in summers 2016 and 2017 to track changes to lakes, identify rapidly draining lakes, and ascertain the extra quantity of information that can be generated by using the two satellites simultaneously rather than individually. 
The FASTER algorithm can monitor changes to lakes at both high spatial (10 to 30 m) and temporal (~3 days) resolution, overcoming the limitation of low spatial or temporal resolution associated with previous remote sensing of lakes on the GrIS. The combined dataset identifies many more rapid lake-drainage events than would be possible with Landsat 8 or Sentinel-2 alone, due to their low temporal resolutions, or with MODIS, due to its inferior spatial resolution.
APA, Harvard, Vancouver, ISO, and other styles
29

Elhain, Ahmed M. S. B. "An investigation of the influence of radiographic malpositioning and image processing algorithm selection on ICU/CCU chest radiographs." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/7342.

Full text
Abstract:
Mobile chest radiography remains the most appropriate test for critical-care patients with cardiorespiratory changes, and for patients who have chest tubes and lines, both as a monitoring tool and to detect complications related to their use. However, one of the most frequent issues recognized radiographically in critical-care patients is malposition of chest tubes and lines. This can be related to technical-quality factors that affect their appearance on the chest radiograph. This research considers how the technical quality of ICU/CCU chest radiography can affect the appearance of chest tubes/lines and how that appearance can affect decision making. Results show that the methods used in the chest-phantom experiment to estimate the degree of angulation have a large effect upon the appearance of anatomical structures, but not a particularly large effect upon the apparent changes in position of the central venous catheter and endotracheal tube (CVC, ETT). The study also shows that there was little difference between the two image-processing algorithms, apart from the visualisation of sharp reproduction of the trachea and proximal bronchi, which was significantly better using the standard algorithm than the inverted algorithm. The two methods used to estimate the degree of angulation and the apparent position of the CVC/ETT on 17 mobile chest radiographs provide limited useful information to the image interpreter in estimating the degree of angulation and the degree of malpositioning of the tube and line.
APA, Harvard, Vancouver, ISO, and other styles
30

Martins, Ana Luísa Dine. "Uso do algoritmo ICM adaptativo a descontinuidades para o aumento da resolução de imagens digitais por técnicas de reconstrução por super resolução." Universidade Federal de São Carlos, 2007. https://repositorio.ufscar.br/handle/ufscar/349.

Full text
Abstract:
Super-resolution image reconstruction consists in using a set of low-resolution images of the same scene to generate a high-resolution estimate of the original scene. For that purpose, all the observed low-resolution images need to have sub-pixel displacements among each other; in this way there is more than just the same information replicated in each image, and the uncertainty inherent in the displacements can be used as additional information to increase the spatial resolution. This master's thesis proposes a Bayesian approach to the super-resolution reconstruction problem using Markov Random Fields and the Potts-Strauss model for image characterization. It is therefore possible to incorporate previously known contextual spatial information about the high-resolution image to be estimated. Moreover, a discontinuity-adaptive ICM algorithm was used to estimate the maximum a posteriori (MAP) solution. Using an initial high-resolution estimate constructed from the registration and interpolation of all the observations made it possible to reconstruct an image that respected the initially present discontinuities. The resulting high-resolution image also holds finer detail when compared to the initial estimate.
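The ICM update at the heart of such MAP estimation can be sketched minimally: each site is repeatedly assigned the label that minimises a local energy combining a data term and a Potts smoothness term. This is a plain sweep without the discontinuity-adaptive weighting the dissertation proposes, and the energy terms and parameters are illustrative:

```python
def icm_denoise(obs, labels, beta=1.0, sweeps=5):
    """Iterated Conditional Modes for a Potts-prior MAP estimate.
    obs: 2-D grid (list of lists) of noisy integer labels. At each site the
    label minimising (data mismatch) + beta * (number of disagreeing
    4-neighbours) is chosen; sweeps repeat until no site changes."""
    h, w = len(obs), len(obs[0])
    est = [row[:] for row in obs]
    for _ in range(sweeps):
        changed = False
        for i in range(h):
            for j in range(w):
                nbrs = [est[i + di][j + dj]
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < h and 0 <= j + dj < w]
                def energy(l):
                    data = 0 if l == obs[i][j] else 1       # toy data term
                    prior = sum(1 for n in nbrs if n != l)  # Potts smoothness
                    return data + beta * prior
                best = min(labels, key=energy)
                if best != est[i][j]:
                    est[i][j], changed = best, True
        if not changed:
            break
    return est
```

ICM is greedy and deterministic: it only ever decreases the posterior energy, which is why a good initial estimate (here, the registered and interpolated observations of the thesis) matters so much.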
APA, Harvard, Vancouver, ISO, and other styles
31

Alexander, Jeremy Brandon. "Enhancement of the daytime GOES-based aircraft icing potential algorithm using MODIS." Thesis, Monterey California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/2326.

Full text
Abstract:
Approved for public release; distribution is unlimited.
In this thesis, a fuzzy logic algorithm is developed for the detection of potential aircraft icing conditions using the Moderate-Resolution Imaging Spectroradiometer (MODIS). The fuzzy MODIS algorithm is developed in a manner similar to the cloud mask currently used to process MODIS imagery. The MODIS icing-potential detection algorithm uses thresholds for 8 channels in a series of 12 tests to determine the probability of icing conditions being present within a cloud. The MODIS algorithm results were compared to the results of the GOES icing-potential detection algorithm run on MODIS imagery for 4 cases. When compared to positive icing pilot reports for the cases, the MODIS algorithm identified regions where icing was encountered more effectively than the GOES algorithm. Furthermore, the use of fuzzy thresholds on MODIS rather than the hard thresholds of the GOES algorithm allowed for less restrictive coverage of potential icing conditions, making the MODIS algorithm more reasonable in assessing all cloud regions for icing potential. The results found here are preliminary, as further statistical analysis with a larger validation dataset would be more effective. Algorithm details are provided in the appendix for reference.
Captain, United States Air Force.
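The fuzzy-threshold idea the abstract contrasts with GOES's hard thresholds can be illustrated with a linear ramp membership function and a standard fuzzy aggregation. This is a generic sketch: the channel names, threshold values and the geometric-mean combination rule below are invented placeholders, not the thesis's actual tests:

```python
def ramp_confidence(x, lo, hi):
    """Fuzzy pass/fail: 0 below lo, 1 above hi, linear ramp in between
    (for tests where larger values indicate icing; swap lo/hi to invert)."""
    if hi == lo:
        return 1.0 if x >= hi else 0.0
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def icing_potential(observations, tests):
    """Combine per-channel fuzzy confidences with a geometric mean,
    one standard fuzzy aggregation; the thesis's exact rule may differ."""
    confs = [ramp_confidence(observations[ch], lo, hi) for ch, lo, hi in tests]
    prod = 1.0
    for c in confs:
        prod *= c
    return prod ** (1.0 / len(confs))

# Hypothetical channel thresholds, for illustration only
tests = [("T11um", 235.0, 265.0),   # cloud-top brightness temperature ramp (K)
         ("r0.65um", 0.3, 0.6)]     # visible reflectance ramp
obs = {"T11um": 250.0, "r0.65um": 0.5}
```

Because each test returns a graded confidence instead of a binary pass/fail, a cloud failing one hard threshold by a small margin is not discarded outright, which is the "less restrictive coverage" the abstract describes.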
APA, Harvard, Vancouver, ISO, and other styles
32

Amenta, Valeria Assunta. "Study of an innovative non intrusive load monitoring system for energy emancipation of domestic users: hardware and ICT optimized solutions." Doctoral thesis, Università di Catania, 2017. http://hdl.handle.net/10761/3641.

Full text
Abstract:
Non-intrusive appliance load monitoring (NIALM) is the process of disaggregating a household's total electricity consumption into its contributing appliances. Smart meters are currently being deployed on national scales, providing a platform to collect aggregate household electricity-consumption data. Existing approaches to NIALM require a manual training phase in which either sub-metered appliance data is collected or appliance usage is manually labelled. This training data is used to build models of the household appliances, which are subsequently used to disaggregate the household's electricity data. Because of the requirement for such a training phase, existing approaches do not scale automatically to the national scales of smart-meter data currently being collected. In this thesis an unsupervised disaggregation method is presented which, unlike existing approaches, does not require a manual training phase. A NIALM system reads real-time data from a smart meter, usually positioned at the point on the public electricity network at which the customer is connected, and uses algorithms not only to quantify how much energy is used in the home, but also to determine which main devices are being operated. NIALM algorithms need a complete load signature and complex optimization algorithms to find the right combination of single loads that fits the real electrical measurements. It is practically impossible to obtain the detailed signature of all appliances inside a house or building, and sophisticated optimization algorithms are not suitable for on-line applications. To address this, the following topics are covered. First, a straightforward NIALM algorithm is proposed, based on both a simple load signature (rated active and reactive power) and a heuristic disaggregation algorithm. Second, in real applications this approach alone cannot reach very high performance, which is why the active involvement of users is considered. The users' feedback aims to correct the load signatures, reduce the error of the disaggregation algorithm and increase the active participation of users in energy-saving policies. Third, the NIALM algorithm has been tested numerically, using as input load curves generated randomly but under given constraints. In this way, the causes of inefficiency of the proposed approach are quantitatively analyzed, both separately and in different combinations. The above contributions provide a solution which satisfies the requirements of a NIALM method that is both unsupervised (no manual interaction required during training) and uses only smart-meter data (no installation of additional hardware is required). When combined, the contributions presented in this thesis represent an advancement of the state of the art in the field of non-intrusive appliance load monitoring, and a step towards increasing the efficiency of energy consumption within households.
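The "simple load signature, rated active and reactive power" heuristic can be sketched as matching an aggregate (ΔP, ΔQ) step event to the nearest appliance signature in the P-Q plane. A generic illustration only; the appliance signatures and tolerance below are invented:

```python
def match_event(delta_p, delta_q, signatures, tol=50.0):
    """Match an aggregate step (delta_p watts, delta_q vars) to the closest
    appliance signature in the P-Q plane. A positive match means switch-on,
    a negative one switch-off. Returns (name, state) or None if nothing is
    within `tol` of the observed step."""
    best, best_d = None, tol
    for name, (p, q) in signatures.items():
        for sign, state in ((1, "on"), (-1, "off")):
            d = ((delta_p - sign * p) ** 2 + (delta_q - sign * q) ** 2) ** 0.5
            if d < best_d:
                best, best_d = (name, state), d
    return best

# Hypothetical rated (P, Q) signatures, for illustration only
sigs = {"fridge": (120.0, 60.0), "kettle": (2000.0, 0.0)}
```

Events that match no signature are exactly where the thesis's user feedback loop comes in: the user labels the unknown step, and the signature table is corrected.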
APA, Harvard, Vancouver, ISO, and other styles
33

Jacobs, Rodney A. "Data Structures and Algorithms for Efficient Solution of Simultaneous Linear Equations from 3-D Ice Sheet Models." Fogler Library, University of Maine, 2005. http://www.library.umaine.edu/theses/pdf/JacobsRA2005.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ricci, Francesco. "Un algoritmo per la localizzazione accurata di oggetti in immagini mediante allineamento dei contorni." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
In application scenarios where a given pattern must be located accurately within an image, a pose-refinement step is required in order to increase the precision of the pattern-matching algorithm. In this work a new pose-refinement algorithm (called PR-ICP) was developed, based exclusively on edge points and therefore closely related to the point-registration problem. This type of algorithm offers numerous advantages, making the whole pattern-matching operation effective even in scenarios where classical correlation-based approaches fail. On the other hand, using edge points introduces several issues related to the edge-detection operations that must be performed on the template and on the search image. Compared with classical correlation-based methods, PR-ICP is more general and invariant to variations in light intensity between the template and the object in the search image; moreover, thanks to the scores it provides as output, PR-ICP is flexible, since it can behave differently in each application scenario by setting appropriate parameters.
APA, Harvard, Vancouver, ISO, and other styles
35

Soukup, Jiří. "Metody a algoritmy pro rozpoznávání obličejů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2008. http://www.nusl.cz/ntk/nusl-374588.

Full text
Abstract:
This work describes basic methods of face recognition. The methods PCA, LDA, ICA, trace transform, elastic bunch graph matching, genetic algorithms and neural networks are described. In the practical part, PCA, PCA + RBF neural network and genetic algorithms are implemented. The RBF neural network is used as a classifier, and the genetic algorithm is used for RBF NN training in one case and for selecting eigenvectors from the PCA method in the other. The latter method, PCA + GA, called EPCA, outperforms the other methods tested in this work on the ORL test database.
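The PCA stage of such a pipeline can be sketched as projecting face vectors onto the top principal axes and classifying by nearest neighbour in the reduced space. A generic eigenfaces-style sketch; the thesis's GA-based eigenvector selection and RBF classifier are not reproduced here:

```python
import numpy as np

def pca_fit(X, k):
    """PCA via SVD. X is (n_samples, n_features); returns the sample mean
    and the top-k principal axes as rows."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_project(X, mean, axes):
    """Project samples into the k-dimensional principal subspace."""
    return (X - mean) @ axes.T

def nearest_class(probe, gallery, labels):
    """1-NN classification in the PCA subspace."""
    d = ((gallery - probe) ** 2).sum(axis=1)
    return labels[int(np.argmin(d))]
```

In the EPCA variant the abstract describes, a genetic algorithm would choose *which* eigenvectors to keep rather than simply taking the top k.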
APA, Harvard, Vancouver, ISO, and other styles
36

Hernandes, Fábio. "Implementação do Algoritmo Paralelo para o Problema de Roteamento de Dados no Computador Paralelo IBM-SP2." Universidade de São Paulo, 1999. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-06032018-104542/.

Full text
Abstract:
In this dissertation we present and implement a relaxation method for solving the routing problem in packet-switched communication networks. This problem can be formulated as a multicommodity flow problem with a convex criterion. The algorithm presented here iteratively solves the multiflow problem, decomposing it, in the most independent form possible, into single-commodity flow subproblems. This independence between the calculations allows the subproblems to be solved simultaneously, which enabled a parallel implementation. The results of the parallel algorithm were used to establish a comparison with the sequential algorithm and thus to analyse the speedup. The parallel library used was PVM.
APA, Harvard, Vancouver, ISO, and other styles
37

Tran, Quoc Huy Martin, and Carl Ronström. "Mapping and Visualisation of the Patient Flow from the Emergency Department to the Gastroenterology Department at Södersjukhuset." Thesis, KTH, Medicinteknik och hälsosystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279605.

Full text
Abstract:
The Emergency Department at Södersjukhuset currently suffers from very long waiting times. This is partly due to problems with the visualisation and mapping of patient data and other information that is fundamental to the handling of patients at the Emergency Department. This led to a need to create improvement suggestions for the visualisation of the patient flow between the Emergency Department and the Gastroenterology Department at Södersjukhuset. During the project, a simulated graphical user interface was created with the purpose of mimicking Södersjukhuset's current patient flow. This simulated user interface visualises the patient flow between the Emergency Department and the Gastroenterology Department. Additionally, a patient-symptom estimation algorithm was implemented to estimate the likelihood of a patient being admitted to a department. The results show that there are many possible improvements to Södersjukhuset's current hospital information system, TakeCare, that would facilitate the care coordinators' work and in turn lower the waiting times at the Emergency Department.
APA, Harvard, Vancouver, ISO, and other styles
38

Bokhabrine, Youssef. "Application des techniques de numérisation tridimensionnelle au contrôle de process de pièces de forge." Thesis, Dijon, 2010. http://www.theses.fr/2010DIJOS070.

Full text
Abstract:
The main objective of this PhD project is to conceive a machine-vision system for measuring the diameters of hot cylindrical metallic shells during the forging process. The manuscript opens, in the first chapter, with the state of the art and the limits of the measurement systems for hot metallic shells suggested in the literature. The implemented system, based on two conventional time-of-flight (TOF) laser scanners, is described in the same chapter, and its numerical simulator is presented in chapter two. Series of simulations were performed with the digitizing simulator to determine the optimal positions of the two scanners, free of industrial constraints (time, difficulty of operations). The third part of the manuscript deals with 3D primitive extraction. Two major types of approach have been studied according to the form of the primitive to be extracted (cylinders or spheres): a supervised method and an automatic method. The first approach, based on a region-growing method and active contours, enables complex extruded forms to be extracted, while problems of ergonomics have been solved using the automatic methods developed during the research programme. The proposed methods automatically extract oval or circular cylindrical forms, using the Gauss map associated with ellipse-extraction techniques, and spherical forms, using heuristic approaches such as RANdom SAmple Consensus (RANSAC), a genetic algorithm (GA) and a niche genetic algorithm (NGA). Two varieties of 3D data registration are presented and discussed in chapter 4: registration based on artificial targets and fine registration based on the ICP algorithm. A complete system for the three-dimensional characterization of hot cylindrical metallic shells during the forging process has been implemented and is compared with existing systems in the conclusion in order to identify its performance and limits.
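The RANSAC-style sphere extraction mentioned in the abstract can be sketched as repeatedly fitting a sphere through four random points and keeping the model with the most inliers. A generic sketch, not the thesis's implementation; the iteration count and inlier threshold are illustrative:

```python
import numpy as np

def sphere_from_points(p):
    """Sphere through 4 non-coplanar points: solve x²+y²+z²+Dx+Ey+Fz+G=0."""
    A = np.c_[p, np.ones(4)]
    b = -(p ** 2).sum(axis=1)
    D, E, F, G = np.linalg.solve(A, b)
    center = -0.5 * np.array([D, E, F])
    radius = float(np.sqrt(max(center @ center - G, 0.0)))
    return center, radius

def ransac_sphere(pts, iters=200, thresh=0.01, seed=0):
    """RANSAC: fit spheres to random 4-point samples and keep the model with
    the most inliers (points within `thresh` of the sphere surface)."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 4, replace=False)]
        try:
            c, r = sphere_from_points(sample)
        except np.linalg.LinAlgError:
            continue  # degenerate (coplanar) sample, skip it
        inliers = int((np.abs(np.linalg.norm(pts - c, axis=1) - r) < thresh).sum())
        if inliers > best[2]:
            best = (c, r, inliers)
    return best
```

The same sample-score-keep loop generalises to the cylindrical primitives of the thesis by swapping in a different minimal fit.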
APA, Harvard, Vancouver, ISO, and other styles
39

Chmelíková, Lucie. "Bezkontaktní měření tepové frekvence z obličeje." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241972.

Full text
Abstract:
This thesis deals with the study of contactless and noninvasive methods for estimating heart rate. Contactless measurement is based on capturing a person's face with a video camera; heart-rate values are estimated from the sequences of pictures. The theoretical part describes heart rate and the methods used to estimate it from color changes in the face; it also contains testing of tracking algorithms. The practical part deals with the user interface of a program for contactless measurement of heart rate and with its software solution. The thesis also contains a statistical evaluation of the program's functionality.
APA, Harvard, Vancouver, ISO, and other styles
40

Bonilla, Naranjo Jose Alejandro. "Registro e alinhamento de imagens de profundidade obtidas com digitalizador para o modelamento de objetos com análise experimental do algoritmo ICP." reponame:Repositório Institucional da UnB, 2012. http://repositorio.unb.br/handle/10482/13030.

Full text
Abstract:
Dissertação (mestrado)—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2012.<br>Submitted by Albânia Cézar de Melo (albania@bce.unb.br) on 2013-04-12T14:02:16Z No. of bitstreams: 1 2012_JoseAlejandroBonillaNaranjo.pdf: 2703496 bytes, checksum: 6a74687137ee451b79f64e3929ad36d6 (MD5)<br>Approved for entry into archive by Guimaraes Jacqueline(jacqueline.guimaraes@bce.unb.br) on 2013-05-07T11:51:33Z (GMT) No. of bitstreams: 1 2012_JoseAlejandroBonillaNaranjo.pdf: 2703496 bytes, checksum: 6a74687137ee451b79f64e3929ad36d6 (MD5)<br>Made available in DSpace on 2013-05-07T11:51:33Z (GMT). No. of bitstreams: 1 2012_JoseAlejandroBonillaNaranjo.pdf: 2703496 bytes, checksum: 6a74687137ee451b79f64e3929ad36d6 (MD5)<br>Este trabalho tem o foco na utilização de técnicas de visão computacional para o desenvolvimento de algoritmos de registro, alinhamento e modelamento de objetos em 3-D. O registro de imagens é realizado utilizando duas metodologias no algoritmo ICP (Iterative Closest Point). Na sua etapa de Busca de correspondências a primeira metodologia, chamada força bruta , encontra os pontos correspondentes com base nas distâncias entre os pontos da nuvem de pontos base com relação à nuvem de pontos modelo, e a segunda, chamada kd tree , acelera a busca do ponto mais próximo entre duas nuvens de pontos. Além disso, é construído um algoritmo que utiliza múltiplas imagens de profundidade para realizar o modelamento 3-D de um objeto utilizando o algoritmo ICP. Este algoritmo requer conhecimento do sistema estudado para realizar o alinhamento prévio das nuvens de pontos e, osteriormente, efetuar o registro de cada uma das nuvens de pontos do mesmo objeto em diferentes posições para reconstrui-lo digitalmente. A convergência do algoritmo ICP é determinada utilizando o Erro Quadrático Médio (Root Mean Square - RMS), é um parâmetro que serve como critério de parada e medição da convergência no registro em cada iteração. 
O principal aporte desta pesquisa foi oferecer um uso diferente do algoritmo ICP para o modelamento de objetos. Não obstante, é preciso reconhecer que o método desenvolvido neste trabalho é um pouco primitivo no sentido de que ainda depende, em muitas etapas, da intervenção humana. É o ponto de partida para diversas pesquisas futuras no campo do modelamento de objetos, que, atualmente, é considerado a obra-prima da reconstrução de modelos, já que oferece o nível mais alto de desenvolvimento na invenção de técnicas e algoritmos. ______________________________________________________________________________ ABSTRACT<br>This work focuses on the use of computer vision techniques to develop registration, alignment and 3D object modeling algorithms. The image registration is performed using two methodologies in the "Matching" stage of the ICP (Iterative Closest Point) algorithm. The first methodology, called "brute force", finds the corresponding points based on the distances between the points of the "base" point cloud and the "model" point cloud; the second one, called "kd tree", accelerates the search for the nearest point between two point clouds. Furthermore, an algorithm is built which uses multiple depth images to perform the 3D modeling of an object using the ICP algorithm. This algorithm requires knowledge of the studied system to perform a previous alignment of the point clouds and, subsequently, perform the registration of each of the point clouds of the same object in different positions to digitally reconstruct it.
The convergence of the ICP algorithm is determined using the Root Mean Square (RMS) error; although not a sufficient condition to guarantee convergence, it is a parameter that serves as a stopping criterion and a measure of registration convergence at each iteration. The main contribution of this research is to offer a different use of the ICP algorithm for object modeling. Nevertheless, it must be recognized that the method developed in this work is primitive in the sense that it still depends on human intervention at many stages. Thus, this is the starting point for future research in the field of object modeling, currently considered the "masterpiece" of model reconstruction, given that it offers the highest level of development in the invention of techniques and algorithms.
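The matching-and-alignment loop described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis code: SciPy's `cKDTree` stands in for the "kd tree" matching stage, the rigid transform is recovered with the standard SVD (Kabsch) solution, and all names are our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(base, model):
    """One ICP iteration: kd-tree matching followed by a rigid (SVD/Kabsch)
    alignment of the "base" cloud onto the "model" cloud.

    base: (N, 3) array of points to align; model: (M, 3) reference points.
    Returns the transformed base cloud and the RMS error after alignment.
    """
    # "Matching" stage: for every base point, find its nearest model point.
    tree = cKDTree(model)
    _, idx = tree.query(base)
    matched = model[idx]

    # Best rigid transform (rotation R, translation t) via SVD.
    mu_b, mu_m = base.mean(axis=0), matched.mean(axis=0)
    H = (base - mu_b).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_b

    aligned = base @ R.T + t
    rms = np.sqrt(np.mean(np.sum((aligned - matched) ** 2, axis=1)))
    return aligned, rms
```

Iterating `icp_step` until the RMS error stops decreasing reproduces the stopping criterion the abstract describes.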
APA, Harvard, Vancouver, ISO, and other styles
41

Harman-Clarke, Adam. "Contraintes Topologiques et Ordre dans les Systèmes Modèle pour le Magnétisme Frustré." Thesis, Lyon, École normale supérieure, 2011. http://www.theses.fr/2011ENSL0659.

Full text
Abstract:
Dans cette thèse, l’étude de plusieurs modèles de systèmes magnétiques frustrés a été couverte. Leur racine commune est le modèle de la glace de spin, qui se transforme en modèle de la glace sur réseau kagome (kagome ice) et réseau en damier (square ice) à deux dimensions, et la chaîne d’Ising à une dimension. Ces modèles ont été particulièrement étudiés dans le contexte de transitions de phases avec un ordre magnétique induit par les contraintes du système : en effet, selon la perturbation envisagée, les contraintes topologiques sous-jacentes peuvent provoquer une transition de Kasteleyn dans le kagome ice, ou une transition de type vitreuse dans la square ice, due à l’émergence d’un ordre ferromagnétique dans une chaîne d’Ising induit seulement par des effets de taille fini. Dans tous les cas, une étude détaillée par simulations numériques de type Monte Carlo ont été comparées à des résultats théoriques pour déterminer les propriétés de ces transitions. Les contraintes topologiques du kagome ice ont requis le développement d’un algorithme de vers permettant aux simulations de ne pas quitter l’ensemble des états fondamentaux. Une revue poussée de la thermodynamique et de la réponse de la diffraction de neutrons sur kagome ice sous un champ magnétique planaire arbitraire, nous ont amené à une compréhension plus profonde de la transition de Kasteleyn, et à un modèle numérique capable de prédire les figures de diffraction de neutrons de matériau de kagome ice dans n’importe quelles conditions expérimentales. Sous certaines conditions, ce modèle a révélé des propriétés thermodynamiques quantifiées et devrait fournir un terreau fertile pour de futurs travaux sur les conséquences des contraintes et transitions de phases topologiques. 
Une étude combinée du square ice et de la chaîne d’Ising a mis en lumière l’apparition d’un ordre sur réseau potentiellement découplé de l’ordre ferromagnétique sous-jacent, et particulièrement pertinent pour les réseaux magnétiques artificiels obtenus par lithographie.<br>In this thesis a series of model frustrated magnets have been investigated. Their common parent is the spin ice model, which is transformed into the kagome ice and square ice models in two-dimensions, and an Ising spin chain model in one-dimension. These models have been examined with particular interest in the spin ordering transitions induced by constraints on the system: a topological constraint leads, under appropriate conditions, to the Kasteleyn transition in kagome ice, and a lattice freezing transition is observed in square ice which is due to a ferromagnetic ordering transition in an Ising chain induced solely by finite size effects. In all cases detailed Monte Carlo computational simulations have been carried out and compared with theoretical expressions to determine the characteristics of these transitions. In order to correctly simulate the kagome ice model, a loop update algorithm has been developed which is compatible with the topological constraints in the system and permits the simulation to remain strictly on the groundstate manifold within the appropriate topological sector of the phase space. A thorough survey of the thermodynamic and neutron scattering response of the kagome ice model influenced by an arbitrary in-plane field has led to a deeper understanding of the Kasteleyn transition, and a computational model that can predict neutron scattering patterns for kagome ice materials under any experimental conditions. This model has also been shown to exhibit quantised thermodynamic properties under appropriate conditions and should provide a fertile testing ground for future work on the consequences of topological constraints and topological phase transitions.
A combined investigation into the square ice and Ising chain models has revealed ordering behaviour within the lattice that may be decoupled from the underlying ferromagnetic ordering and is particularly relevant to magnetic nanoarrays.
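The finite-size ferromagnetic ordering in the Ising chain mentioned above can be illustrated with the textbook transfer-matrix result (a standard derivation, not code from the thesis): the zero-field chain has spin-spin correlation ⟨s_i s_{i+r}⟩ = tanh(J/T)^r, hence correlation length ξ(T) = -1/ln tanh(J/T), and below the temperature at which ξ exceeds the chain length N a finite chain looks ferromagnetically ordered purely through its finite size.

```python
import numpy as np

def correlation_length(T, J=1.0):
    # Zero-field 1D Ising chain: <s_i s_{i+r}> = tanh(J/T)**r,
    # so the correlation length is xi = -1 / ln(tanh(J/T)).
    return -1.0 / np.log(np.tanh(J / T))

def crossover_temperature(N, J=1.0):
    # Temperature below which xi exceeds the chain length N: the chain
    # then appears ordered solely through finite-size effects.
    # Solve tanh(J/T) = exp(-1/N) for T.
    return J / np.arctanh(np.exp(-1.0 / N))
```

For a chain of N = 100 spins the crossover sits near T ≈ 0.38 J, showing how the apparent ordering temperature is set by system size rather than a true phase transition.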
APA, Harvard, Vancouver, ISO, and other styles
42

Svoboda, Jan. "Algoritmy přepočtů gamutů ve správě barev." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220594.

Full text
Abstract:
The thesis deals with colors: their representation in digital devices and how to achieve the best color preservation across different devices. In the first part of the work, the knowledge of colors and human vision is briefly summarized. Then color models and color spaces are elaborated, mainly the device-independent ones. The spectrum of colors viewable or printable on a device, the gamut, is different for every device, and there is a need for precise reproduction or recording of color. That is why the system of color management is described further, with particular attention to gamut mapping approaches and algorithms. In the second part of the work, the implementation of two color gamut mapping algorithms (HPMINDE, SCLIP) in MATLAB is described. In the third and last part of the work, the results of the implemented algorithms are presented and discussed. These results are compared to the results of a commonly used color gamut mapping technique (Adobe Photoshop).
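As a hedged illustration of the clipping idea behind SCLIP (our own simplification, not the MATLAB implementation from the thesis), the sketch below maps an out-of-gamut color straight toward the gamut centre, using a circular 2D gamut as a stand-in for a real device gamut boundary; in the real algorithm the clipping happens in CIELAB toward a focal point on the lightness axis.

```python
import numpy as np

def sclip_2d(color, center, radius):
    """Clip a color toward the gamut centre, SCLIP-style, using a circular
    gamut of the given centre and radius as a toy gamut boundary.
    In-gamut colors are returned unchanged; out-of-gamut colors are moved
    along the straight line to the centre until they hit the boundary."""
    color = np.asarray(color, dtype=float)
    center = np.asarray(center, dtype=float)
    d = np.linalg.norm(color - center)
    if d <= radius:
        return color
    return center + (color - center) * (radius / d)
```

Because the mapping only moves points radially, hue (direction from the focal point) is preserved while out-of-gamut colors collapse onto the boundary, which is exactly the trade-off clipping algorithms make versus compression algorithms.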
APA, Harvard, Vancouver, ISO, and other styles
43

Avdiu, Blerta. "Matching Feature Points in 3D World." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Data- och elektroteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23049.

Full text
Abstract:
This thesis work deals with one of the most topical subjects in the Computer Vision field, scene understanding, using matching of 3D feature point images. The objective is to make use of Saab’s latest breakthrough in extraction of 3D feature points to identify the best alignment of at least two 3D feature point images. The thesis gives a theoretical overview of the latest algorithms used for feature detection, description and matching. The work continues with a brief description of the simultaneous localization and mapping (SLAM) technique, ending with a case study on the evaluation of a newly developed software solution for SLAM, called slam6d. Slam6d is a tool that registers point clouds into a common coordinate system; it performs an automatic, high-accuracy registration of laser scans. In the case study, the use of slam6d is extended to registering 3D feature point images extracted from a stereo camera, and the results of the registration are analyzed. We start with the registration of a single 3D feature point image captured from a stationary image sensor and continue with the registration of multiple images following a trail. The conclusion from the case study is that slam6d can register feature point images not extracted from laser scans with high accuracy in the case of a single image, but it introduces some overlapping errors in the case of multiple images following a trail.
APA, Harvard, Vancouver, ISO, and other styles
44

Pospíšil, Petr. "Optimalizace predikce pozice v síti." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217436.

Full text
Abstract:
This work is about position prediction in a network; it is focused on finding the Landmark closest to the Host (the one with the lowest distance vector). The algorithm is based on the GNP system. For the mathematical part of the position prediction in the GNP system simulation, the Simplex Downhill method was selected. The designed algorithm was implemented in Java. In the first step, the Host chooses its continent by measuring distance vectors. In the next step, the nearest part of the continent is selected. Finally, the Host estimates its position and then the closest Landmark. The results of this work are important for the design of the TTP protocol. The verdict is that GNP can be used for TTP, but the Landmarks must be distributed with uniform density.
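The mathematical core of a GNP-style position estimate, minimising the mismatch between measured and predicted distances with the Simplex Downhill method, can be sketched as follows. This is a simplified illustration, not the thesis's Java implementation; SciPy's Nelder-Mead solver is the same simplex method, and the function names are our own.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_position(landmarks, measured, x0=None):
    """Estimate a host's coordinates from measured distances (e.g. RTTs)
    to landmarks with known coordinates, as in GNP.

    landmarks: (L, d) array of landmark coordinates.
    measured:  (L,) array of measured host-to-landmark distances.
    Minimises the squared error between predicted and measured distances
    with the Nelder-Mead (Simplex Downhill) method.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    measured = np.asarray(measured, dtype=float)

    def error(x):
        predicted = np.linalg.norm(landmarks - x, axis=1)
        return np.sum((predicted - measured) ** 2)

    if x0 is None:
        x0 = landmarks.mean(axis=0)  # start the simplex at the centroid
    return minimize(error, x0, method="Nelder-Mead").x
```

Once the Host has coordinates, the closest Landmark is simply the one at the smallest Euclidean distance in the synthetic coordinate space.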
APA, Harvard, Vancouver, ISO, and other styles
45

Šimák, Jan. "Měření vzdáleností mezi stanicemi v IP sítích." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218367.

Full text
Abstract:
This thesis deals with the issue of delay prediction between nodes on the Internet. Accurate delay prediction helps with choosing the nearest Internet neighbor and contributes to the effective usage of network resources. Unnecessary network load is decreased thanks to delay prediction algorithms, since there is no need for many latency measurements. The thesis theoretically covers the three main algorithms based on coordinate systems (GNP, Vivaldi, Lighthouses); the last of these is also the main subject of the thesis. The Lighthouses algorithm is explored in detail both theoretically and in practice. In order to verify the accuracy of the delay prediction of the Lighthouses algorithm, a simulation application was developed. The application is able to compute node coordinates of a synthetic network using the Lighthouses algorithm. A description of the simulation application and an evaluation of the simulation results form the practical part of this thesis.
APA, Harvard, Vancouver, ISO, and other styles
46

Bokhari, Saniyah S. "Parallel Solution of the Subset-sum Problem: An Empirical Study." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1305898281.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Tibell, Rasmus. "Training a Multilayer Perceptron to predict the final selling price of an apartment in co-operative housing society sold in Stockholm city with features stemming from open data." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-159754.

Full text
Abstract:
The need for a robust model for predicting the value of condominiums and houses is becoming more apparent as further evidence of systematic errors in existing models is presented. Traditional valuation methods fail to produce good predictions of condominium sales prices, and systematic patterns in the errors linked to, for example, the repeat sales methodology and the hedonic pricing model have been pointed out by papers referenced in this thesis. This inability can lead to monetary problems for individuals and, in the worst case, economic crises for whole societies. In this master's thesis we present how a predictive model constructed from a multilayer perceptron can predict the price of a condominium in the centre of Stockholm using objective data from publicly available sources. The value produced by the model is enriched with a predictive interval using the Inductive Conformal Prediction algorithm to give a clear view of the quality of the prediction. In addition, the Multilayer Perceptron is compared with the commonly used Support Vector Regression algorithm to underline the hallmark of neural networks: the handling of a broad spectrum of features. The features used to construct the Multilayer Perceptron model are gathered from multiple "Open Data" sources and include data such as: 5,990 apartment sales prices from 2011-2013, interest rates for condominium loans from two major banks, national election results from 2010, geographic information and nineteen local features. Several well-known techniques for improving the performance of Multilayer Perceptrons are applied and evaluated. A Genetic Algorithm is deployed to facilitate the process of determining appropriate parameters used by the backpropagation algorithm.
Finally, we conclude that the model created as a Multilayer Perceptron using backpropagation can produce good predictions and outperforms the results from the Support Vector Regression models and the studies in the referenced papers.<br>Behovet av en robust modell för att förutsäga värdet på bostadsrättslägenheter och hus blir allt mer uppenbart allteftersom ytterligare bevis på systematiska fel i befintliga modeller läggs fram. I artiklar refererade i denna avhandling påvisas systematiska fel i de estimat som görs av metoder som bygger på priser från repetitiv försäljning och hedoniska prismodeller. Detta tillkortakommande kan leda till monetära problem för individer och i värsta fall ekonomisk kris för hela samhällen. I detta examensarbete påvisar vi att en prediktiv modell konstruerad utifrån en “Multilayer Perceptron” kan estimera priset på en bostadsrättslägenhet i centrala Stockholm baserad på allmänt tillgängliga data (“Öppen Data”). Modellens resultat har utökats med ett prediktivt intervall beräknat utifrån “Inductive Conformal Prediction”-algoritmen som ger en klar bild över estimatets tillförlitlighet. Utöver detta jämförs “Multilayer Perceptron”-algoritmen med en annan vanlig algoritm för maskinlärande, den så kallade “Support Vector Regression”, för att påvisa neurala nätverks kvalité och förmåga att hantera dataset med många variabler. De variabler som används för att konstruera “Multilayer Perceptron”-modellen är sammanställda utifrån allmänt tillgängliga öppna datakällor och innehåller information så som: priser från 5990 sålda lägenheter under perioden 2011-2013, ränteläget för bostadsrättslån från två av de stora bankerna, valresultat från riksdagsvalet 2010, geografisk information och nitton lokala särdrag. Ett flertal välkända förbättringar för “Multilayer Perceptron”-algoritmen har applicerats och evaluerats. En genetisk algoritm har använts för att stödja processen att hitta lämpliga parametrar till “Backpropagation”-algoritmen.
I detta arbete drar vi slutsatsen att en modell konstruerad utifrån ett neuralt nätverk av typen “Multilayer Perceptron”, tränad med “backpropagation”, kan producera goda förutsägelser och därmed utklassar de resultat som levereras av Support Vector Regression-modellen och de studier som refererats i denna avhandling.
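The predictive interval from the Inductive Conformal Prediction algorithm mentioned in the abstract can be sketched as follows. This is a minimal regression version under the usual exchangeability assumption, with our own function names, not the thesis code.

```python
import numpy as np

def conformal_interval(calib_errors, y_pred, confidence=0.95):
    """Inductive Conformal Prediction interval around a point prediction.

    calib_errors: absolute residuals |y - y_hat| on a held-out calibration
    set (the "inductive" part: the model is trained once on a separate
    proper training set). Returns (lo, hi) such that, under
    exchangeability, the true value lies inside with roughly the
    requested confidence.
    """
    scores = np.sort(np.abs(np.asarray(calib_errors, dtype=float)))
    n = len(scores)
    # Conformal quantile: the ceil((n + 1) * confidence)-th smallest score.
    k = int(np.ceil((n + 1) * confidence)) - 1
    q = scores[min(k, n - 1)]
    return y_pred - q, y_pred + q
```

The width of the interval is data-driven: a model with small calibration residuals yields tight intervals, which is what makes the interval a useful quality indicator for each price prediction.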
APA, Harvard, Vancouver, ISO, and other styles
48

SALCEDO, Javier. "DESIGN AND CHARACTERIZATION OF NOVELDEVICES FOR NEW GENERATION OF ELECTROSTATICDISCHARGE (ESD) PROTECTION STRUCTURES." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2812.

Full text
Abstract:
The technology evolution and complexity of new circuit applications involve emerging reliability problems and even more sensitivity of integrated circuits (ICs) to electrostatic discharge (ESD)-induced damage. Regardless of the aggressive evolution in downscaling and subsequent improvement in applications' performance, ICs should still comply with minimum standards of ESD robustness in order to be commercially viable. Although the topic of ESD has received attention industry-wide, the design of robust protection structures and circuits remains challenging because ESD failure mechanisms continue to become more acute and design windows less flexible. The sensitivity of smaller devices, along with a limited understanding of the ESD phenomena and the resulting empirical approach to solving the problem, have yielded time-consuming, costly and unpredictable design procedures. As turnaround design cycles in new technologies continue to decrease, the traditional trial-and-error design strategy is no longer acceptable, and better analysis capabilities and a systematic design approach are essential to accomplish the increasingly difficult task of adequate ESD protection-circuit design. This dissertation presents a comprehensive design methodology for implementing custom on-chip ESD protection structures in different commercial technologies. First, the ESD topic in the semiconductor industry is reviewed, as well as ESD standards and commonly used schemes to provide ESD protection in ICs. The general ESD protection approaches are illustrated and discussed using different types of protection components and the concept of the ESD design window. The problem of implementing and assessing ESD protection structures is addressed next, starting from the general discussion of two design methods. The first ESD design method follows an experimental approach, in which design requirements are obtained via fabrication, testing and failure analysis.
The second method consists of the technology computer aided design (TCAD)-assisted ESD protection design. This method incorporates numerical simulations in different stages of the ESD design process, and thus results in a more predictable and systematic ESD development strategy. Physical models considered in the device simulation are discussed and subsequently utilized in different ESD designs along this study. The implementation of new custom ESD protection devices and a further integration strategy based on the concept of the high-holding, low-voltage-trigger, silicon controlled rectifier (SCR) (HH-LVTSCR) is demonstrated for implementing ESD solutions in commercial low-voltage digital and mixed-signal applications developed using complementary metal oxide semiconductor (CMOS) and bipolar CMOS (BiCMOS) technologies. This ESD protection concept proposed in this study is also successfully incorporated for implementing a tailored ESD protection solution for an emerging CMOS-based embedded MicroElectroMechanical (MEMS) sensor system-on-a-chip (SoC) technology. Circuit applications that are required to operate at relatively large input/output (I/O) voltage, above/below the VDD/VSS core circuit power supply, introduce further complications in the development and integration of ESD protection solutions. In these applications, the I/O operating voltage can extend over one order of magnitude larger than the safe operating voltage established in advanced technologies, while the IC is also required to comply with stringent ESD robustness requirements. A practical TCAD methodology based on a process- and device- simulation is demonstrated for assessment of the device physics, and subsequent design and implementation of custom P1N1-P2N2 and coupled P1N1-P2N2//N2P3-N3P1 silicon controlled rectifier (SCR)-type devices for ESD protection in different circuit applications, including those applications operating at I/O voltage considerably above/below the VDD/VSS. 
Results from the TCAD simulations are compared with measurements and used for developing technology- and circuit-adapted protection structures, capable of blocking large voltages and providing versatile dual-polarity symmetric/asymmetric S-type current-voltage characteristics for high ESD protection. The design guidelines introduced in this dissertation are used to optimize and extend the ESD protection capability in existing CMOS/BiCMOS technologies, by implementing smaller and more robust single- or dual-polarity ESD protection structures within the flexibility provided in the specific fabrication process. The ESD design methodologies and characteristics of the developed protection devices are demonstrated via ESD measurements obtained from fabricated stand-alone devices and on-chip ESD protection structures. The superior ESD protection performance of the devices developed in this study is also successfully verified in IC applications where the standard ESD protection approaches are not suitable to meet stringent area constraints and performance requirements.<br>Ph.D.<br>School of Electrical Engineering and Computer Science<br>Engineering and Computer Science<br>Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
49

Muharish, Essa Yahya M. "PACKET FILTER APPROACH TO DETECT DENIAL OF SERVICE ATTACKS." CSUSB ScholarWorks, 2016. https://scholarworks.lib.csusb.edu/etd/342.

Full text
Abstract:
Denial of service (DoS) attacks are a common threat to many online services. These attacks aim to overwhelm the availability of an online service with massive traffic from multiple sources. By spoofing legitimate users, an attacker floods a target system with a high quantity of packets or connections to exhaust its network resources, bandwidth, equipment, or servers. Packet filtering methods are the best-known way to prevent these attacks by identifying and blocking the spoofed attack traffic before it reaches its target. In this project, the extent of the DoS attack problem and attempts to prevent it are explored. The attack categories and existing countermeasures based on preventing, detecting, and responding are reviewed. Finally, neural network learning algorithms and statistical analysis are utilized in the design of our proposed packet filtering system.
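A toy sketch of the statistical side of such a packet filter (our own simplification, not the system proposed in the project): flag a source as a potential flooder when its packet rate within a sliding time window exceeds a threshold.

```python
from collections import defaultdict

class RateFilter:
    """Per-source packet-rate filter: a source sending more than
    max_packets within the sliding window (in seconds) is rejected as a
    potential DoS flooder. Real systems would combine this with learned
    traffic models rather than a fixed threshold."""

    def __init__(self, max_packets, window):
        self.max_packets = max_packets
        self.window = window
        self.history = defaultdict(list)  # source -> packet timestamps

    def allow(self, src, now):
        # Drop timestamps that have fallen out of the sliding window,
        # record the new packet, then apply the rate threshold.
        times = [t for t in self.history[src] if now - t < self.window]
        times.append(now)
        self.history[src] = times
        return len(times) <= self.max_packets
```

Keyed on source address, the filter only throttles the offending senders; legitimate sources below the threshold pass unaffected, and a flooding source is re-admitted once its packets age out of the window.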
APA, Harvard, Vancouver, ISO, and other styles
50

Becher, Mike. "Entwicklung des Kommunikationsteilsystems für ein objektorientiertes, verteiltes Betriebssystem." Master's thesis, Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199801481.

Full text
Abstract:
The topic of this thesis is the development of a communication subsystem for the experimental system CHEOPS, enabling inter-object communication between objects on the same or on different systems. The starting points are an available implementation of an Ethernet driver for the WD80x3 card family under MS-DOS, a required means of communication with UNIX processes, and the protocol families usable there. The thesis covers the analysis and design of the Ethernet driver and of the Internet protocol family for CHEOPS, as well as their implementation, resulting in a minimal base system. Furthermore, a first draft of a network interface, to be further developed and completed later, is proposed and supported by an example implementation.
APA, Harvard, Vancouver, ISO, and other styles
