
Dissertations / Theses on the topic 'All detection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'All detection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Guedes, Magno Edgar da Silva. "Vision based obstacle detection for all-terrain robots." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/3650.

Full text
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Electrical and Computer Engineering. This dissertation presents a solution to the problem of obstacle detection in all-terrain environments, with particular interest for mobile robots equipped with a stereo vision sensor. Despite the advantages of vision over other kinds of sensors, such as low cost, light weight and a reduced energy footprint, its usage still presents a series of challenges. These include the difficulty of dealing with the considerable amount of generated data, and the robustness required to manage high levels of noise. Such problems can be diminished by making hard assumptions, such as considering that the terrain in front of the robot is planar. Although this saves considerable computation, such simplifications are not necessarily acceptable in more complex environments, where the terrain may be considerably uneven. This dissertation proposes to extend a well-known obstacle detector so that it relaxes the aforementioned planar-terrain assumption, thus rendering it more adequate for unstructured environments. The proposed extensions involve: (1) the introduction of a visual saliency mechanism to focus the detection on regions most likely to contain obstacles; (2) voting filters to diminish sensitivity to noise; and (3) the fusion of the detector with a complementary method to create a hybrid, and thus more robust, solution. Experimental results obtained with demanding all-terrain images show that, with the proposed extensions, an increase in robustness and computational efficiency over the original algorithm is observed.
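The voting-filter idea in extension (2) can be illustrated with a minimal sketch: accumulate binary obstacle masks over consecutive frames and keep only the pixels detected in most of them. The array shapes and the 3-of-5 vote threshold are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def voting_filter(masks: list, min_votes: int = 3) -> np.ndarray:
    """Keep only obstacle pixels flagged in at least `min_votes` of the
    given binary detection masks, suppressing one-off noisy detections."""
    votes = np.sum(np.stack(masks).astype(np.uint8), axis=0)
    return votes >= min_votes

# Toy usage: five noisy 4x4 obstacle masks from consecutive frames.
rng = np.random.default_rng(0)
frames = [rng.random((4, 4)) > 0.5 for _ in range(5)]
stable_obstacles = voting_filter(frames, min_votes=3)
print(stable_obstacles.astype(int))
```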
APA, Harvard, Vancouver, ISO, and other styles
2

Alves, Nelson Miguel Rosa. "Vision based trail detection for all-terrain robots." Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/5015.

Full text
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Electrical and Computer Engineering. This dissertation proposes a trail detection model based on the observation that trails are salient structures in the robot's visual field. Due to the complexity of natural environments, a direct application of traditional visual saliency models is not robust enough to predict the location of trails. As in other detection tasks, robustness can be increased by modulating the saliency computation with implicit knowledge about the visual features (e.g., colour) that best represent the object to be found. This dissertation proposes using the object's global structure instead, a more stable and predictable feature in the case of natural trails. This new implicit-knowledge component is specified in terms of active perception rules, which control the behaviour of simple agents that act jointly to compute the saliency map of the input image. To accumulate historical information about the trail's location, a motion-compensated dynamic neural field is used. Experimental results on a large data set show that the model achieves a 91% success rate at 20 Hz. The model proves robust in situations where other detectors would fail, such as when the trail does not emerge from the bottom of the image or is considerably interrupted.
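As a rough illustration of the temporal-accumulation stage, the sketch below implements a minimal one-dimensional dynamic neural field that integrates noisy per-frame trail-position evidence into a stable peak. The kernel, gains, and the omission of motion compensation are simplifying assumptions, not details taken from the thesis.

```python
import numpy as np

def dnf_step(u, evidence, dt=0.1, tau=1.0, h=-0.5):
    """One Euler step of a 1-D dynamic neural field:
    tau * du/dt = -u + h + evidence + lateral interaction."""
    x = np.arange(u.size)
    d = np.abs(x[:, None] - x[None, :])
    kernel = 1.5 * np.exp(-d**2 / 18.0) - 0.5   # local excitation, global inhibition
    fu = 1.0 / (1.0 + np.exp(-u))               # sigmoid firing rate
    lateral = kernel @ fu / u.size
    return u + dt / tau * (-u + h + evidence + lateral)

u = np.zeros(64)
rng = np.random.default_rng(1)
for _ in range(200):   # noisy per-frame evidence peaked at position 40
    evidence = np.exp(-(np.arange(64) - 40.0)**2 / 8.0) + 0.3 * rng.standard_normal(64)
    u = dnf_step(u, evidence)
print("estimated trail position:", int(np.argmax(u)))
```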
APA, Harvard, Vancouver, ISO, and other styles
3

Saengudomlert, Poompat 1973. "Analysis and detection of jamming attacks in an all-optical network." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47508.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 123-124).
APA, Harvard, Vancouver, ISO, and other styles
4

POCHET, AXELLE DANY JULIETTE. "MODELING OF GEOBODIES: AI FOR SEISMIC FAULT DETECTION AND ALL-QUADRILATERAL MESH GENERATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=35861@1.

Full text
Abstract:
Safe oil exploration requires good numerical modeling of the subsurface geobodies, which includes, among other steps, seismic interpretation and mesh generation. This thesis presents a study in these two areas. The first study is a contribution to seismic data interpretation, examining the possibilities of automatic seismic fault detection using deep learning methods. In particular, we use Convolutional Neural Networks (CNNs) directly on seismic amplitude maps, with the particularity of using synthetic data for training with the final goal of classifying real data. In the second study, we propose a new two-dimensional all-quadrilateral meshing algorithm for geomechanical domains, based on an innovative quadtree approach: we define new subdivision patterns to efficiently adapt the mesh to any input geometry. The resulting meshes are suitable for Finite Element Method (FEM) simulations.
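To make the CNN-on-amplitude-maps idea concrete, here is a minimal patch-classifier sketch in PyTorch. The patch size, layer widths and binary fault/no-fault output are illustrative assumptions, not the architecture used in the thesis.

```python
import torch
import torch.nn as nn

class FaultPatchCNN(nn.Module):
    """Tiny CNN that labels a seismic-amplitude patch as fault / no-fault."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8, 2)  # assumes 32x32 input patches

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FaultPatchCNN()
synthetic_batch = torch.randn(4, 1, 32, 32)  # stand-in for synthetic training patches
logits = model(synthetic_batch)
print(logits.shape)                          # torch.Size([4, 2])
```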
APA, Harvard, Vancouver, ISO, and other styles
5

Zhu, Zhe. "Continuous change detection and classification of land cover using all available Landsat data." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12901.

Full text
Abstract:
Thesis (Ph.D.)--Boston University. Land cover mapping and monitoring have been widely recognized as important for understanding global change and, in particular, human contributions. This research emphasizes the use of the time domain for mapping land cover and changes in land cover using satellite images. Unlike most prior methods that compare pairs or sets of images to identify change, this research compares observations with model predictions. Moreover, instead of classifying satellite images directly, it uses coefficients from time series models as inputs for land cover mapping. The methods developed are capable of detecting many kinds of land cover change as they occur and providing land cover maps for any given time at high temporal frequency. One key processing step for the satellite images is the elimination of "noisy" observations due to clouds, cloud shadows, and snow. I developed a new algorithm called Fmask that processes each Landsat scene individually using an object-based method. For a globally distributed set of reference data, the overall cloud detection accuracy is 96%. A second step further improves cloud detection by using temporal information. The first application of the new methods based on time series analysis found change in forests in an area in Georgia and South Carolina. After the difference between observed and predicted reflectance exceeds a threshold three consecutive times, a site is identified as forest disturbance. Accuracy assessment reveals that both the producer's and user's accuracies are higher than 95% in the spatial domain and approximately 94% in the temporal domain. The second application of this new approach extends the algorithm to include identification of a wide variety of land cover changes as well as land cover mapping. In this approach, the entire archive of Landsat imagery is analyzed to produce a comprehensive land cover history of the Boston region. The results are accurate for detecting change, with a producer's accuracy of 98% and a user's accuracy of 86% in the spatial domain and a temporal accuracy of 80%. Overall, this research demonstrates the great potential of time series analysis of satellite images for monitoring land cover change.
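The disturbance rule described above (flag a site once observed-minus-predicted reflectance exceeds a threshold three consecutive times) can be sketched directly; the harmonic predictor and the threshold value below are simplified stand-ins for the thesis's full time-series model.

```python
import numpy as np

def flag_disturbance(observed, predicted, threshold, k=3):
    """Return the index at which |observed - predicted| has exceeded
    `threshold` for `k` consecutive observations, or None."""
    run = 0
    for i, (o, p) in enumerate(zip(observed, predicted)):
        run = run + 1 if abs(o - p) > threshold else 0
        if run == k:
            return i - k + 1  # start of the anomalous run
    return None

t = np.arange(40, dtype=float)
predicted = 0.3 + 0.05 * np.sin(2 * np.pi * t / 23.0)  # e.g. a seasonal harmonic model
observed = predicted.copy()
observed[25:] += 0.2                                    # simulated forest disturbance
print("disturbance starts at observation", flag_disturbance(observed, predicted, 0.1))
```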
APA, Harvard, Vancouver, ISO, and other styles
6

Parsons, Earl Ryan. "All-Optical Clock Recovery, Photonic Balancing, and Saturated Asymmetric Filtering For Fiber Optic Communication Systems." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/194287.

Full text
Abstract:
In this dissertation I investigated a multi-channel and multi-bit-rate all-optical clock recovery device. This device, a birefringent Fabry-Perot resonator, had previously been demonstrated to simultaneously recover the clock signal from 10 wavelength channels operating at 10 Gb/s and one channel at 40 Gb/s. Similar to clock signals recovered from a conventional Fabry-Perot resonator, the clock signal from the birefringent resonator suffers from a bit pattern effect. I investigated this bit pattern effect for birefringent resonators numerically and experimentally and found that it is less prominent than for clock signals from a conventional Fabry-Perot resonator. I also demonstrated photonic balancing, which is an all-optical alternative to electrical balanced detection for phase-shift-keyed signals. An RZ-DPSK data signal was demodulated using a delay interferometer. The two logically opposite outputs from the delay interferometer then counter-propagated in a saturated SOA. This process created a differential signal which used all the signal power present in two consecutive symbols. I showed that this scheme could provide an optical alternative to electrical balanced detection by reducing the required OSNR by 3 dB. I also show how this method can provide amplitude regeneration to a signal after modulation format conversion. In this case an RZ-DPSK signal was converted to an amplitude-modulated signal by the delay interferometer. The resulting amplitude-modulated signal is degraded by both the amplitude noise and the phase noise of the original signal. The two logically opposite outputs from the delay interferometer again counter-propagated in a saturated SOA. Through limiting amplification and noise modulation this scheme provided amplitude regeneration and improved the Q-factor of the demodulated signal by 3.5 dB. Finally I investigated how SPM provided by the SOA can reduce the in-band noise of a communication signal. The marks, which represented data, experienced a spectral shift due to SPM while the spaces, which consisted of noise, did not. A bandpass filter placed after the SOA then selected the signal and filtered out what was originally in-band noise. The receiver sensitivity was improved by 3 dB.
APA, Harvard, Vancouver, ISO, and other styles
7

Shams, Zalia. "Automated Assessment of Student-written Tests Based on Defect-detection Capability." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52024.

Full text
Abstract:
Software testing is important, but judging whether a set of software tests is effective is difficult. This problem also appears in the classroom as educators more frequently include software testing activities in programming assignments. The most common measures used to assess student-written software tests are coverage criteria—tracking how much of the student’s code (in terms of statements, or branches) is exercised by the corresponding tests. However, coverage criteria have limitations and sometimes overestimate the true quality of the tests. This dissertation investigates alternative measures of test quality based on how many defects the tests can detect either from code written by other students—all-pairs execution—or from artificially injected changes—mutation analysis. We also investigate a new potential measure called checked code coverage that calculates coverage from the dynamic backward slices of test oracles, i.e. all statements that contribute to the checked result of any test. Adoption of these alternative approaches in automated classroom grading systems require overcoming a number of technical challenges. This research addresses these challenges and experimentally compares different methods in terms of how well they predict defect-detection capabilities of student-written tests when run against over 36,500 known, authentic, human-written errors. For data collection, we use CS2 assignments and evaluate students’ tests with 10 different measures—all-pairs execution, mutation testing with four different sets of mutation operators, checked code coverage, and four coverage criteria. Experimental results encompassing 1,971,073 test runs show that all-pairs execution is the most accurate predictor of the underlying defect-detection capability of a test suite. The second best predictor is mutation analysis with the statement deletion operator. Further, no strong correlation was found between defect-detection capability and coverage measures.<br>Ph. D.
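A bare-bones rendition of all-pairs execution, run every student's test suite against every other student's solution and score each suite by how many of those solutions it flags as defective, might look like the sketch below. The callable-based interfaces are illustrative assumptions, not the grading infrastructure used in the dissertation.

```python
def all_pairs_score(suites, solutions):
    """suites: dict name -> list of test callables (each takes a solution, returns bool pass).
    solutions: dict name -> solution object. A suite's score is the number of *other*
    students' solutions in which at least one of its tests fails (a defect detected)."""
    scores = {}
    for tester, tests in suites.items():
        detected = 0
        for author, solution in solutions.items():
            if author == tester:
                continue  # don't count a student's own code
            if any(not test(solution) for test in tests):
                detected += 1
        scores[tester] = detected
    return scores

# Toy example: solutions are add-functions, one of them buggy.
solutions = {"alice": lambda a, b: a + b, "bob": lambda a, b: a - b}
suites = {
    "alice": [lambda f: f(2, 2) == 4],
    "bob":   [lambda f: f(0, 0) == 0],   # too weak to expose the bug
}
print(all_pairs_score(suites, solutions))  # {'alice': 1, 'bob': 0}
```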
APA, Harvard, Vancouver, ISO, and other styles
8

Loren, Eric Justin. "All optical injection and detection of ballistic charge and spin currents in gallium arsinide, germanium, and silicon." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/2742.

Full text
Abstract:
Charge transport and spin transport (spintronics) over nanometer spatial scales are topics of fundamental scientific and technological interest. If the potential of nano-devices and spintronics is to be realized, ways must be developed to inject and control ballistic charge and spin currents, as well as to measure their motion. Here, using novel polarization- and phase-sensitive optical pump-probe techniques, we not only inject ballistic charge and spin currents in GaAs, Ge, and Si but also follow the subsequent carrier motion with < 1 nm spatial and 200 fs temporal resolution. Unlike most free-space measurements, the spatial resolution of these techniques is not limited by diffraction, and therefore they provide a unique platform for studying ballistic transport in semiconductors and semiconductor structures. The injection process relies on quantum interference between absorption pathways associated with two-photon absorption of a fundamental optical field and one-photon absorption of the corresponding second harmonic. By utilizing the phase, polarization, photon energy, and intensity of the optical fields we can control the type of current injection (spin current or charge current) as well as its direction and magnitude. In GaAs we present the first time-resolved measurements of charge and spin currents injected by this process and also show the ballistic direct and inverse Spin Hall Effect. These techniques are extended to the more technologically relevant group IV semiconductors Si and Ge. The charge currents injected in these materials show similar qualitative behavior. The electrons and holes are injected with oppositely directed average ballistic velocities; they move apart and return to a common position on sub-picosecond time scales. The spin currents, however, are very different. The spin-up and spin-down carrier profiles move apart and remain apart until their spin profiles decay. In GaAs the profiles decay on picosecond time scales; in Ge, however, they decay on femtosecond time scales, since the electrons quickly scatter to the side valleys. Unlike in GaAs and Ge, the spin-orbit coupling in Si is much too small to produce measurable spin currents.
APA, Harvard, Vancouver, ISO, and other styles
9

Reyes, Gomez Juan Pablo. "Astronomical image processing from large all-sky photometric surveys for the detection and measurement of type Ia supernovae." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0144.

Full text
Abstract:
This thesis presents several contributions to the software developed for the LSST telescope, with the purpose of contributing to the detection of type Ia supernovae. Our objective is to use the existing LSST code and algorithms in order to create a pipeline dedicated to type Ia supernova detection. Since detecting supernovae requires a special type of processing, we use a technique known as optimal image subtraction, which implies the construction of coadditions. Afterwards, we study the behavior of the different objects through time and build light curves that represent their life cycle in terms of the light intensity of each detection over several nights. Lastly, in order to analyze a very large number of candidates, we employ machine learning algorithms to identify which curves are most likely to be type Ia supernovae. Our first contribution concerns the development of adapted and automated coaddition tasks for building high signal-to-noise reference and science images. The next contribution is related to the addition of measurements and the study of residuals in difference image analysis, including selection with adapted thresholding and labeling based on quantitative residual values to identify bad detections, artifacts and genuinely significant fluxes. We also contribute an algorithm to select and generate the candidate light curves, through the selection of objects with recurrent detections through time and in the different bandpasses. Finally, we apply a machine learning classification approach to find type Ia supernovae by means of a random forest classifier, based strictly on geometrical features present in the light curves. These results enabled the identification of simulated and real type Ia supernovae among the candidates with high precision.
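As a schematic of the final classification stage, the snippet below trains a random forest on geometrical light-curve features. The three feature names and the synthetic data are invented placeholders, not the feature set engineered in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 200
# Hypothetical geometrical features per candidate light curve:
# [rise time, fall time, peak amplitude]
X = rng.random((n, 3))
y = (X[:, 2] > 0.6).astype(int)  # toy label: "SN Ia-like" if the peak is bright

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```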
APA, Harvard, Vancouver, ISO, and other styles
10

Wißmeyer, Georg [Verfasser], Vasilis [Akademischer Betreuer] Ntziachristos, Vasilis [Gutachter] Ntziachristos, Axel [Gutachter] Haase, and Christian [Gutachter] Jirauschek. "All-optical Ultrasound Detection for Optoacoustic Imaging / Georg Wißmeyer ; Gutachter: Vasilis Ntziachristos, Axel Haase, Christian Jirauschek ; Betreuer: Vasilis Ntziachristos." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1205462961/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Cassidy, Hannah. "It's all in the detail : examining verbal differences between children's true and false reports using cognitive lie detection techniques." Thesis, University of Portsmouth, 2016. https://researchportal.port.ac.uk/portal/en/theses/its-all-in-the-detail(926a1eda-8424-4200-849f-7373c8264aa6).html.

Full text
Abstract:
Police interviewers require a new investigative interviewing tool to facilitate the discrimination between children’s true and false reports. This thesis investigated whether cognitive lie detection techniques could fill this gap in current practice. Chapter 1 introduces the cognitive lie detection paradigm, highlighting the lack of research within the child deception literature and the paradigm’s potential as a means for detecting deceit in children. Chapter 2 explores imposing cognitive load through the use of gaze maintenance to exaggerate differences between child truth-tellers and child lie-tellers. In Experiment 1, maintaining gaze (either with the interviewer’s face or a teddy bear’s face) resulted in truth-tellers providing significantly more detailed reports than lie-tellers. This finding was not apparent for the control condition where children were given no gaze instruction. In Experiment 2, this exaggerated difference between the accounts of the truth- and lie-tellers facilitated deception detection when the children were instructed to look at interviewer’s face, but not at the teddy bear’s face. Poor discrimination for the latter group was discussed with regard to the gaze behaviour of the children being regarded as ‘fishy’ by the evaluators. Chapter 3 investigates whether playing an example of a detailed free recall provided by a peer (referred to as another child’s model statement, AMS) elicits longer statements that contain more cues to deceit in an eyewitness context than when no model statement is used. Both child truth-tellers and child lie-tellers provided more details and more new information following AMS. However, truth-teller accuracy decreased. In Chapter 4, interview clips from Chapter 3 were judged by adult evaluators who found it difficult to differentiate between children’s true and false reports. This could be a consequence of quantity of detail not being a reliable indicator of veracity for this sample of interviews. Chapter 5 tests the use of children’s practice interviews as their own model statements (OMS) compared to AMS and having no model statement (NMS). Only AMS encouraged children to include more details and more new information in their post-model statement true and false reports. Further research is required to understand the socio-cognitive mechanisms that create this behavioural difference. Chapter 6 describes a field study that presented the cognitive lie detection techniques investigated in the previous chapters to police officers who interview child witnesses regularly. Of all the techniques, OMS was considered to be the most viable option, although police officers suggested that all of the interview techniques would require adaptation for use in the real world. The practitioners provided an insightful look at the current child-interviewing context in the UK, which provides a basic framework that could be considered when designing child deception detection strategies in the future. Finally, Chapter 7 summarises the main findings of this doctoral thesis, discusses their theoretical and practical implications, and puts forward ideas for future research.
APA, Harvard, Vancouver, ISO, and other styles
12

Xie, Zhihua. "Fiber-integrated nano-optical antennas and axicons as ultra-compact all-fiber platforms for luminescence detection and imaging down to single nano-emitters." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2046/document.

Full text
Abstract:
This thesis is devoted to the development of ultra-compact, plug-and-play and low-cost single-mode optical fiber systems for in-fiber luminescence collection. First, a self-aligned fiber axicon (AXIGRIN) is proposed, providing the first resolved far-field infrared fluorescence imaging of PbS quantum dots. Then, all-fiber near-field imaging of single PbS quantum dots is achieved with a double-resonance bowtie nano-aperture antenna (BNA) integrated on a fiber tip, with nanometer resolution. Finally, the concept of a nano-optical horn antenna is proposed for coupling X-ray-excited luminescence directly and efficiently into an optical fiber, with the purpose of generating the first generation of fiber X-ray sensors and dosimeters.
APA, Harvard, Vancouver, ISO, and other styles
13

Barron Jimenez, Rodolfo. "Application of an all-solid-state diode-laser-based sensor for carbon monoxide detection by optical absorption in the 4.4 – 4.8 µm spectral region." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1509.

Full text
Abstract:
An all-solid-state continuous-wave (cw) laser system for mid-infrared absorption measurements of the carbon monoxide (CO) molecule has been developed and demonstrated. The single-mode, tunable output of an external-cavity diode laser (ECDL) is difference-frequency mixed (DFM) with the output of a 550-mW diode-pumped cw Nd:YAG laser in a periodically poled lithium niobate (PPLN) crystal to produce tunable cw radiation in the mid-infrared. The wavelength of the 860-nm ECDL can be coarse-tuned between 860.78 and 872.82 nm, allowing the sensor to be operated in the 4.4 – 4.8 µm region. Results from single-pass mid-IR direct absorption experiments for CO concentration measurements are discussed. CO measurements were performed on CO/CO2/N2 mixtures in a room-temperature gas cell, which allowed the evaluation of the sensor operation and data reduction procedures. Field testing was performed at two locations: in the exhaust of a well-stirred reactor (WSR) at Wright-Patterson Air Force Base and in the exhaust of a gas turbine at Honeywell Engines and Systems. Field tests demonstrated the feasibility of the sensor for operation in harsh combustion environments, although much improvement in the sensor design and operation was required. Experiments in near-adiabatic CO2-doped hydrogen/air flames were performed, featuring two-line thermometry in the 4.8 µm spectral region. The sensor concentration measurement uncertainty was estimated at 2% for gas cell testing. CO concentration measurements agreed within 15% of conventional extractive sampling at the WSR, and for the flame experiments the repeatability of the peak absorption gives a system uncertainty of 10%. The noise-equivalent CO detection limit for these experiments was estimated at 2 ppm per meter for combustion gas at 1000 K, assuming an SNR of 1.
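Direct-absorption concentration measurements of this kind ultimately rest on the Beer-Lambert law, I = I₀·exp(−σ·n·L); the sketch below inverts it for a mole fraction, with cross-section, path length and number density values chosen purely for illustration, not taken from the dissertation.

```python
import numpy as np

def mole_fraction(I_transmitted, I_incident, sigma, n_total, path_length):
    """Invert Beer-Lambert, I = I0 * exp(-sigma * n_CO * L), for the CO mole fraction.
    sigma: absorption cross-section [cm^2], n_total: total number density [cm^-3],
    path_length: absorption path [cm]."""
    absorbance = -np.log(I_transmitted / I_incident)
    n_co = absorbance / (sigma * path_length)
    return n_co / n_total

# Illustrative numbers only:
x_co = mole_fraction(I_transmitted=0.95, I_incident=1.0,
                     sigma=1e-18, n_total=2.5e19, path_length=100.0)
print(f"CO mole fraction ~ {x_co:.2e}")  # ~2e-5, i.e. ~20 ppm
```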
APA, Harvard, Vancouver, ISO, and other styles
14

Firla, Marcin. "Automatic signal processing for wind turbine condition monitoring. Time-frequency cropping, kinematic association, and all-sideband demodulation." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT006/document.

Full text
Abstract:
This thesis proposes three signal-processing methods oriented towards condition monitoring and diagnosis. The proposed techniques are particularly suited to vibration-based condition monitoring of rotating machinery working under highly non-stationary operating conditions, such as wind turbines, but they are not limited to such usage. All the proposed methods are automatic, data-driven algorithms. The first technique enables the selection of the most stationary part of a signal by cropping the time-frequency representation of the signal. The second method is an algorithm for associating spectral patterns, harmonic and sideband series, with characteristic frequencies arising from the kinematics of the system under inspection. This method features a unique approach dedicated to rolling-element bearings that overcomes the difficulties caused by the slippage phenomenon. The third technique is an all-sideband demodulation algorithm. It features a multi-rate filter and proposes health indicators to facilitate the evaluation of the condition of the investigated system. In this thesis the proposed methods are validated on both simulated and real-world signals. The presented results show good performance for all the methods.
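A minimal version of the kinematic-association step, matching spectral peaks to the harmonic series of a characteristic fault frequency within a tolerance that absorbs bearing slippage, could look like this. The 2% tolerance, the peak list and the assumed fault frequency are illustrative assumptions, not values from the thesis.

```python
def associate_harmonics(peak_freqs, char_freq, n_harmonics=5, rel_tol=0.02):
    """For each harmonic k*char_freq, return the spectral peak (if any) lying
    within rel_tol of it; slippage makes exact matches unrealistic for bearings."""
    matches = {}
    for k in range(1, n_harmonics + 1):
        target = k * char_freq
        candidates = [f for f in peak_freqs if abs(f - target) <= rel_tol * target]
        matches[k] = min(candidates, key=lambda f: abs(f - target)) if candidates else None
    return matches

# Peaks from a hypothetical envelope spectrum (Hz); fault frequency assumed at 87 Hz.
peaks = [50.0, 86.1, 173.5, 260.2, 350.0, 433.9]
print(associate_harmonics(peaks, char_freq=87.0))
```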
APA, Harvard, Vancouver, ISO, and other styles
15

Chi, Ying. "Calculating posterior probabilities for EM induction landmine detection using MCMC with thermodynamic integration /." Full text available from ProQuest UM Digital Dissertations, 2005. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?RQT=305&querySyntax=PQ&searchInterface=1&moreOptState=CLOSED&TS=1184862704&h_pubtitle=&h_pmid=&clientId=22256&JSEnabled=1&SQ=chi%2C+ying&DBId=21651&date=ALL&onDate=&beforeDate=&afterDate=&fromDate=&toDate=&TITLE=&author=&SCH=&subject=&LA=any&MTYPE=all&sortby=REVERSE_CHRON&x=0&y=0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Curnow, David. "Validation of the Barkemeyer-Callon-Jones malingering detection scale: The ability of a scale to differentiate simulating malingerers from controls and prior litigants from those with no litigation experience within a sample of men who have all suffered chronic low back pain." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 1998. https://ro.ecu.edu.au/theses/1446.

Full text
Abstract:
Chronic low back pain costs the community, and several authors have suggested that individuals often attempt to exaggerate chronic low back pain. Currently no reliable and valid scale for assessing malingering in chronic pain populations exists, and there is a large difference of opinion on the ability of experts using clinical judgement to detect malingering. The current study seeks to provide a validation of the Barkemeyer-Callon-Jones Malingering Detection Scale (MDS), which has purported to be able to identify individuals attempting to malinger neurological conditions and pain. A simulation design was used, as in previous research, because it is difficult to identify actual malingerers in a known-groups design. Thirty-two men with chronic low back pain were divided into two groups of sixteen. One group was asked to simulate malingering for the purpose of gaining increased compensation, while the other group was asked to be as honest as possible. The hypotheses tested were whether the responses to the MDS can: discriminate between simulating malingerers and controls; show an increased focus on severity rather than description of pain by simulating malingerers; show a relationship between malingering scores and reported pain levels; and show that prior litigation contributes to either MDS scores or reported pain levels. Significance was assessed using chi-square, t-tests, bivariate correlation and two ANOVAs. While the MDS was able to discriminate to a significant level between participants asked to malinger and those being honest, methodological issues suggest that levels of pre-assessment injury contribute to malingering scores and that the conscious intent that separates malingering from psychological disorders (abnormal illness behaviour) is context-bound. Litigation had no effect on reported pain level or MDS scores.
APA, Harvard, Vancouver, ISO, and other styles
17

Lawton, Christopher David. "The development of a position sensitive gamma-ray detector." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

MOBILIO, MARCO. "Software Architectures For Embedded Systems Supporting Assisted Living." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/151641.

Full text
Abstract:
In the coming decades, the population of more developed countries is set to become slightly smaller, but much older. This results in a growing need for supports (human or technological) that enable the older population to perform daily activities, and it has originated an increasing interest in Ambient Assisted Living (AAL), which encompasses technological solutions supporting elderly people in their daily life at home. The structure of an AAL system generally includes a layer in charge of acquiring data from the field, and a layer in charge of realising the application logic. For example, a fall detection system acquires both accelerometer and acoustic data from the field, and exploits them to detect falls by relying on a machine learning technique. Usually, AAL systems are implemented as vertical solutions in which there is often no clear separation between the two main layers. This raises several issues, including at least poor reuse of the system components, since their responsibilities overlap, and poor support for software evolution, mostly because data is strongly coupled with its source, so that changing the source requires modifying the application logic too. To promote reusability and evolution, an AAL system should keep issues related to acquisition accurately separated from those related to reasoning. It follows that data, once acquired, should be completely decoupled from its source. This allows the physical characteristics of the sources of information to change without affecting the application logic layer. Moreover, the acquisition layer should be structured so that the basic acquisition mechanisms (triggering sources at specified frequencies, and distributing the acquired data) are kept separate from the part of the software that interacts with the specific source (i.e., the software driver). This allows the basic mechanisms to be reused and drivers to be programmed only for the needed sensors. If a new or different sensor is required, it suffices to add or change the sensor driver and to properly configure the basic mechanisms so that the change can actually be implemented. The aim of this work is to propose a novel approach to the design of the acquisition layer that overcomes the limitations of traditional solutions. The approach consists of two different sets of architectural abstractions: Time Driven Sensor Hub (TDSH), a set of architectural abstractions for developing timed acquisition systems that are easily configurable for what concerns both the types of sensors needed and their acquisition frequencies; and Subjective sPaces Architecture for Contextualising hEterogeneous Sources (SPACES), a set of architectural abstractions aimed at representing sensor measurements independently of the sensors' characteristics. Such abstractions can reduce the effort of data fusion and interpretation; moreover, they enforce both the reuse of existing infrastructure and the openness of the sensing layer by providing a common framework for representing sensor readings. The final result of this work consists of two concrete designs and implementations that reify the TDSH and SPACES models. A test scenario has been considered to contextualise the usefulness of the proposed approaches and to test the actual correctness of each component. The example scenario is built upon the case of fall detection, an application case studied in order to be aware of the peculiarities of the chosen domain. The example system is based on the proposed sets of architectural abstractions and exploits an accelerometer and a linear microphone array to perform fall detection.
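A toy rendering of the mechanism/driver separation argued for above: a hub triggers interchangeable drivers at configured frequencies and forwards source-agnostic readings to subscribers. All class and method names are invented for illustration; they are not the actual TDSH API.

```python
import time
from typing import Callable

class SensorDriver:
    """Only this class knows how to talk to a concrete sensor."""
    def __init__(self, name: str, read_fn: Callable[[], float]):
        self.name, self.read_fn = name, read_fn

class TimedHub:
    """Generic mechanism: polls each driver at its own period and
    forwards (source-name, value) pairs to subscribers."""
    def __init__(self):
        self.entries = []       # [driver, period_s, next_due]
        self.subscribers = []

    def add(self, driver: SensorDriver, period_s: float):
        self.entries.append([driver, period_s, 0.0])

    def run(self, duration_s: float):
        start = time.monotonic()
        while time.monotonic() - start < duration_s:
            now = time.monotonic()
            for entry in self.entries:
                driver, period, due = entry
                if now >= due:
                    value = driver.read_fn()
                    for cb in self.subscribers:
                        cb(driver.name, value)
                    entry[2] = now + period
            time.sleep(0.01)

hub = TimedHub()
hub.add(SensorDriver("accelerometer", lambda: 9.81), period_s=0.1)
hub.add(SensorDriver("microphone", lambda: 0.02), period_s=0.5)
hub.subscribers.append(lambda name, v: print(f"{name}: {v}"))
hub.run(duration_s=1.0)
```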
APA, Harvard, Vancouver, ISO, and other styles
19

Abdo, Ali [Verfasser]. "Fault Detection Schemes for Switched Systems / Ali Abdo." Aachen : Shaker, 2013. http://d-nb.info/1050343271/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Wanotayaroj, Chaowaroj. "Search for a Scalar Partner of the Top Quark in the Jets+ETMiss Final State with the ATLAS detector." Thesis, University of Oregon, 2017. http://hdl.handle.net/1794/22275.

Full text
Abstract:
This dissertation presents searches for direct pair production of a scalar partner of the top quark in events with only jets and missing transverse momentum in proton-proton collisions recorded during LHC Run 1 and Run 2 with the ATLAS detector. In the supersymmetry scenario, the partner is called the top squark, or stop. The stop (t̃) is assumed to decay via t̃ → t χ̃⁰₁, t̃ → b χ̃±₁ → b W(*) χ̃⁰₁, or t̃ → b W χ̃⁰₁, where χ̃⁰₁ (χ̃±₁) denotes the lightest neutralino (chargino). Exclusion limits are reported in terms of the stop and neutralino masses. The LHC Run 1 analysis uses an integrated luminosity of 20.1 fb⁻¹ at √s = 8 TeV to exclude top squark masses in the range 270-645 GeV for χ̃⁰₁ masses below 30 GeV, assuming a 100% t̃ → t χ̃⁰₁ branching ratio. For a branching ratio of 50% to either t̃ → t χ̃⁰₁ or t̃ → b χ̃±₁, and assuming m(χ̃±₁) = 2 m(χ̃⁰₁), stop masses in the range 250-550 GeV are excluded for χ̃⁰₁ masses below 60 GeV. The LHC Run 2 analysis uses an integrated luminosity of 13.3 fb⁻¹ at √s = 13 TeV. Assuming a 100% t̃ → t χ̃⁰₁ branching ratio, stop masses in the range 310-820 GeV are excluded for χ̃⁰₁ masses below 160 GeV. For the scenario with m(t̃) ≈ m(t) + m(χ̃⁰₁), the search excludes stop masses between 23 and 380 GeV. Additionally, scenarios where stops are produced indirectly through gluino decay but have a very low transverse-momentum signature, due to a very small mass difference between the stop and the lightest neutralino, have been considered. The result is interpreted as an upper limit on the cross section in terms of the gluino and stop masses; this excludes all models considered with gluino masses up to 1600 GeV and neutralino masses below 560 GeV at 95% CL. Finally, the analysis strategy from the LHC Run 1 search is applied in the broader scope of supersymmetry called the phenomenological MSSM (pMSSM). This dissertation presents a summary of the results related to the stop search.
APA, Harvard, Vancouver, ISO, and other styles
21

D'Elia, Gianluca <1980&gt. "Fault detection in rotating machines by vibration signal processing techniques." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/952/1/Tesi_Delia_Gianluca.pdf.

Full text
Abstract:
Machines with moving parts give rise to vibrations and consequently noise. The setup and status of each machine yield a peculiar vibration signature. Therefore, a change in the vibration signature, due to a change in the machine state, can be used to detect incipient defects before they become critical. This is the goal of condition monitoring, in which the information obtained from a machine's signature is used to detect faults at an early stage. There is a large number of signal processing techniques that can be used to extract interesting information from a measured vibration signal. This study seeks to detect rotating machine defects using a range of techniques including synchronous time averaging, Hilbert transform-based demodulation, continuous wavelet transform, Wigner-Ville distribution and the spectral correlation density function. The detection and diagnostic capability of these techniques are discussed and compared on the basis of experimental results concerning gear tooth faults, i.e. fatigue cracks at the tooth root and tooth spalls of different sizes, as well as assembly faults in a diesel engine. Moreover, the sensitivity to fault severity is assessed by applying these signal processing techniques to gear tooth faults of different sizes.
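Of the techniques listed, Hilbert-transform-based demodulation is the simplest to sketch: extract the amplitude envelope of a vibration signal and look for the fault-related modulation frequency in the envelope spectrum. The synthetic signal below is a stand-in, not the gear-test data used in the thesis.

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000                        # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
# Synthetic faulty-gear signal: 2 kHz resonance amplitude-modulated at 50 Hz.
x = (1 + 0.5 * np.cos(2 * np.pi * 50 * t)) * np.sin(2 * np.pi * 2000 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

envelope = np.abs(hilbert(x))      # demodulated amplitude envelope
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope frequency:", freqs[np.argmax(spectrum)], "Hz")  # ~50 Hz
```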
APA, Harvard, Vancouver, ISO, and other styles
22

TABRIZI, ZARRINGHABAEI ALI AKBAR. "Development of new fault detection methods for rotating machines (roller bearings)." Doctoral thesis, Politecnico di Torino, 2015. http://hdl.handle.net/11583/2598388.

Full text
Abstract:
Early fault diagnosis of roller bearings is extremely important for rotating machines, especially for high-speed, automatic and precise machines. Many research efforts have been focused on fault diagnosis and detection of roller bearings, since they constitute one of the most important elements of rotating machinery. In this study a combined method is proposed for early damage detection of roller bearings. The wavelet packet transform (WPT) is applied to the collected data for denoising, and the resulting clean data are broken down into elementary components called intrinsic mode functions (IMFs) using the ensemble empirical mode decomposition (EEMD) method. The normalized energies of the first three IMFs are used as input for a support vector machine (SVM) to recognize whether signals originate from healthy or faulty bearings. Then, since there is no robust guide for determining the amplitude of the added noise in the EEMD technique, a new performance-improved EEMD (PIEEMD) is proposed to determine the appropriate value of the added noise. A novel feature extraction method is also proposed for detecting small defects using the Teager-Kaiser energy operator (TKEO). TKEO is applied to the IMFs obtained to create new feature vectors as input data for a one-class SVM. The results of applying the method to acceleration signals collected from an experimental bearing test rig demonstrated that the method can be successfully used for early damage detection of roller bearings. Most of the diagnostic methods developed up to now can be applied only in the case of stationary working conditions (constant speed and load). However, bearings often work in time-varying conditions, as in wind turbine supporting bearings, mining excavator bearings, vehicles, robots and all processes with run-up and run-down transients. Damage identification for bearings working under non-stationary operating conditions, especially for early/small defects, requires the use of appropriate techniques, generally different from those used in the stationary case, in order to extract fault-sensitive features that are at the same time insensitive to operational condition variations. Some methods have been proposed for damage detection of bearings working under time-varying speed conditions. However, their application might increase the instrumentation cost because they require a phase reference signal. Furthermore, other methods, such as order tracking, can be applied only when the speed variation is limited. In this study, a novel combined method based on cointegration is proposed for the development of fault features that are sensitive to the presence of defects while at the same time insensitive to changes in the operational conditions. It does not require any additional measurements and can identify defects even for considerable speed variations. The signals acquired during run-up conditions are decomposed into IMFs using the performance-improved EEMD method. Then, the cointegration method is applied to the intrinsic mode functions to extract stationary residuals. The feature vectors are created by applying the Teager-Kaiser energy operator to the obtained stationary residuals. Finally, the feature vectors of the healthy bearing signals are used to construct a separating hyperplane using a one-class support vector machine. Eventually the proposed method was applied to vibration signals measured on an experimental bearing test rig. The results verified that the method can successfully distinguish between healthy and faulty bearings even if the shaft speed changes dramatically.
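The TKEO-plus-one-class-SVM stage can be sketched compactly: the discrete Teager-Kaiser operator ψ[n] = x[n]² − x[n−1]·x[n+1] turns each signal (IMF or stationary residual) into an instantaneous-energy track whose statistics feed a novelty detector trained on healthy data only. The choice of features below (mean and standard deviation of ψ) and the synthetic signals are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def tkeo(x: np.ndarray) -> np.ndarray:
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def features(signal: np.ndarray) -> np.ndarray:
    psi = tkeo(signal)
    return np.array([psi.mean(), psi.std()])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
healthy = [np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size)
           for _ in range(20)]
faulty = np.sin(2 * np.pi * 30 * t) + 0.8 * np.sin(2 * np.pi * 120 * t) ** 8  # impacts

model = OneClassSVM(nu=0.1, gamma="scale").fit(np.array([features(s) for s in healthy]))
print("healthy sample:", model.predict(features(healthy[0]).reshape(1, -1)))  # expected [ 1]
print("faulty sample: ", model.predict(features(faulty).reshape(1, -1)))      # expected [-1]
```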
APA, Harvard, Vancouver, ISO, and other styles
23

Schönwald, Arne. "Investigation of all-flavour neutrino fluxes with the IceCube detector using the cascade signature." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17514.

Full text
Abstract:
This thesis presents a search for the diffuse astrophysical neutrino flux in 335 days of IceCube data. IceCube is a 1 km³ neutrino detector located at the South Pole, consisting of 86 strings, each equipped with 60 Digital Optical Photomultipliers (DOMs), frozen into the ice. The detector was still under construction when the data used in this analysis were taken, so only 59 strings were available (IC59). The analysis presented here is sensitive to all three neutrino flavors. Neutrinos interacting with nuclei in the ice produce charged particles which emit Cherenkov light. This light is recorded by the DOMs and used for the event reconstruction. These neutrino events must be extracted from the huge background of atmospheric muons, which are 10⁸ times more common than neutrino events at trigger level. Finally, atmospheric and astrophysical neutrinos need to be distinguished statistically, based on the reconstructed neutrino energies. To obtain a robust prediction of atmospheric muon events at the final level of the event selection, a huge simulation sample of atmospheric muons was produced. This analysis was the first to achieve a livetime of more than one year of simulated atmospheric muon events with E ≥ 10 TeV. A first analysis counting the number of events with an energy E > 38 TeV found 8 events with energies between 39 TeV and 67 TeV for a background prediction of 3.6 ± 0.3 events. This excess was further investigated with a maximum likelihood fit with an energy threshold of 10 TeV. No astrophysical neutrino flux was required to describe the excess in the data. Instead, it was absorbed by a higher normalization of the atmospheric neutrino flux. If no constraints from independent measurements or models of the atmospheric neutrino flux are applied, a 90% upper limit on the all-flavor astrophysical neutrino flux of E²Φ = 1.7·10⁻⁸ GeV s⁻¹ sr⁻¹ cm⁻² in the energy range 20 TeV ≤ E ≤ 3.0 PeV can be derived. This upper limit is considerably lower than earlier IceCube limits, and lower than the astrophysical neutrino flux discovered later. However, the atmospheric flux obtained in the same fit is considerably higher than model predictions based on recent measurements. If the atmospheric flux is constrained to the range of these model predictions, the upper limit is E²Φ = 3.2·10⁻⁸ GeV s⁻¹ sr⁻¹ cm⁻², which is compatible with the astrophysical neutrino flux finally detected by IceCube using two years of data from the completed IceCube detector.
APA, Harvard, Vancouver, ISO, and other styles
25

Fouquet, Kevin. "Detection of life habits evolution of frail people in a smart dwelling." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG075.

Full text
Abstract:
To face the increase in the number of frail people due to global population ageing, innovative solutions are being explored to ensure a satisfying quality of care for people staying at home. The scientific field of Ambient Assisted Living (AAL) exploits smart-home technologies to ease ageing at home and to offer satisfying health and living conditions. In particular, numerous existing works use wearable sensors to monitor vital signs: temperature, heart rate, blood pressure, etc. These sensors offer relevant information for assessing an individual's health status. However, some health troubles, such as physical decline or cognitive impairment, first trigger behavioral changes, which only later cause alterations of the vital signs. This situation is particularly complex for medical staff, as observing vital signs alone leads to late and difficult diagnosis of the underlying disease; since these diseases typically affect the elderly, the problem is particularly critical. This thesis therefore proposes an approach for monitoring the behavior of a smart-home inhabitant, where behavior refers to the way the inhabitant carries out everyday activities. The objective is to detect behavioral deviations and inform medical staff in order to support their prognosis and diagnosis. The methodology builds on recent work in activity recognition, which makes it possible to identify the activity the inhabitant is carrying out from the sensor events triggered in the home. Human behavior is extremely rich; an extensive literature review, covering both the medical and the AAL fields, was conducted to identify the health troubles and symptoms of interest to medical staff and the way they affect patient behavior. Two behavioral features were identified as particularly relevant due to their wide coverage: the ordering of activities and their duration. Moreover, the behavior of an individual may be affected in two different ways: behavioral anomalies, i.e., abrupt behavioral changes due to an accident or a sudden disease, and long-term deviations, i.e., slow and progressive changes of behavior mainly caused by degenerative troubles. The work presented in this thesis aims at detecting these two types of behavioral deviations with respect to the two identified features. It focuses on a single smart-home inhabitant and considers binary information only; in this way, any sensor type can be used, including those most respectful of privacy. The contributions of this thesis fall into three parts. In order to detect behavioral deviations, a model-based approach is proposed. The first contribution is therefore a Stochastic Timed Automaton (STA) model that represents the usual life habits of the inhabitant, learned during a training phase. In a second step, this model is exploited to detect anomalies in the inhabitant's behavior during a monitoring phase. Lastly, the model is used to detect long-term deviations through data forecasting, in order to reveal potential degenerative troubles. For each of these contributions, two case studies are proposed: the first is based on artificial data generated from a real smart home in order to test challenging scenarios, while the second assesses the relevance of the proposed approach on a real case. Finally, as the data handled in this work are particularly sensitive, a reflection on the potential negative ethical impacts, a method to evaluate their seriousness, and considerations for decreasing their severity are proposed in an appendix.
APA, Harvard, Vancouver, ISO, and other styles
26

Oliva, Alessandro. "Presidi di Sicurezza per il Contrasto alla Pandemia mediante Integrazione e Ingegnerizzazione di Soluzioni di Visione Artificiale." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The COronaVIrus Disease 2019 (COVID-19) pandemic, still ongoing with the spread of new variants, creates the need for safeguards that protect people's safety by means of computer-vision solutions. These solutions can be integrated with workplace-safety systems in order to optimize the use of the resources available to the organizations performing such monitoring, such as companies or public bodies. The proposed work develops solutions for detecting correctly worn face masks, counting the people inside a room as they pass through a door, detecting the safety equipment worn by a worker, and classifying the correct execution of a maintenance activity (in this case, on a bicycle). The specifications of these solutions must maximize the exploitation of the available hardware, allowing deployment on resource-constrained platforms while maintaining the right trade-off between performance and model quality. Several architectures based on YOLO, SSD and Transformers were tested for the various solutions; rather than being implemented with different frameworks, as often happens, they were redesigned to share a common software base in order to save resources. The models were also made accessible through a dedicated web application, allowing users to interface with them through acquisition devices or video files.
APA, Harvard, Vancouver, ISO, and other styles
27

Grandi, Luca. "Studio di un'infrastruttura di supporto al controllo e alla gestione della circolazione dei treni nelle stazioni." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6837/.

Full text
Abstract:
The railway system has always played an important role in our country, both for passenger and for freight transport: it is therefore essential for trade and tourism. Unlike roads, where vehicles circulate "on sight", a railway requires train-separation systems that are independent of vehicle visibility, since braking distances are usually much greater than the visibility distance itself. For this reason, signalling and safety systems play a leading role. Over time, substantial investments have been made, leading to the adoption of new technologies that have enabled the design of safety-critical systems containing hardware and software components. The main characteristic of such systems is the property of not causing harm to human life or the environment: this property is commonly associated with the English term "safety", to distinguish it from the meaning of "protection against violations of system integrity" that the term "security" usually carries. The economic and technological development of the last two decades has inevitably made these systems even more sophisticated and therefore complex, while at the same time demanding ever stronger and more articulated requirements and guarantees of correct operation. This is precisely why research focuses on what is called the dependability of computing systems, towards which a large part of research and design effort and resources is directed. The thesis work that follows was carried out in collaboration with two large companies operating nationwide: RFI (Reti Ferroviarie Italiane) and Sirti. We initially interacted with RFI to enter the railway environment and assimilate its vocabulary and needs. Within RFI, an internship was carried out in which we dealt with the "off-line process" concerning the safe management of a station; this activity must be performed by RFI before a new station is put into service. For this purpose, we used the data-preparation programs made available by Sirti. Subsequently, we deepened the topic of safety by interfacing with Sirti, one of the companies that supply computerized safety-critical systems for station control. In collaboration with them, we explored their system, discovering their implementation choices and how they achieved their safety objectives. Finally, we worked on adding a new functionality to the system, to increase its reliability and safety, and on the issues related to the deployment of the component that implements it.
APA, Harvard, Vancouver, ISO, and other styles
28

GANDINO, EDOARDO. "Diagnostics of machines and structures: dynamic identification and damage detection." Doctoral thesis, Politecnico di Torino, 2013. http://hdl.handle.net/11583/2506356.

Full text
Abstract:
This research work deals with damage detection in engineering machines and structures. This topic, developed in particular for bearing diagnostics in the first part of the work, is strictly related to dynamic identification when structures are considered. Subspace-based methods are therefore investigated in the second part of the work, with particular attention to nonlinear system identification. Changes in operational and environmental conditions for structures (such as air temperature, temperature gradients, humidity, wind, etc.) or machines (such as oil temperature, loads, rotating regimes, etc.) are known to have considerable effects on signal features and, consequently, on the reliability of diagnostics. Useful tools for eliminating this influence are provided by a Principal Component Analysis (PCA)-based method for damage detection. Just as many published works have applied PCA-based diagnostics to structures, this research work considers a bearing diagnostic application. After a detailed description of the test rig, the large amount of data acquired on several differently damaged bearings is investigated. The results give an overview of how the PCA-based method for damage detection can be applied to a complicated real-life machine. In general cases of real structures, the application of efficient identification techniques is crucial for correctly exploiting the capabilities of the PCA-based method for damage detection. Moreover, in many cases damage causes a structure that initially behaves in a predominantly linear manner to exhibit a nonlinear response: the application of nonlinear system identification methods to the feature-extraction process can thus also serve as a direct detection of damage. For these reasons, a detailed study of nonlinear subspace-based identification methods is presented in the second part of this work. Since the classical data-driven subspace method can in some cases be affected by memory limitations, two alternative techniques are developed and demonstrated on numerical and experimental applications. Moreover, a modal counterpart of the nonlinear subspace identification method is introduced, to extend its relevance to large, realistic engineering structures. In a conclusive application, two of the main sources of non-stationary dynamics, namely time variability and the presence of nonlinearity, are analysed through the analytical and experimental study of a time-varying-inertia pendulum, whose equation of motion is nonlinear due to its large swinging amplitudes.
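For readers unfamiliar with the technique, the following minimal sketch (Python; a generic illustration of the PCA-residual idea under assumed feature names, not the implementation used in the thesis) shows how the retained principal components can absorb environmental and operational variation, leaving a residual that serves as a damage index:

    import numpy as np

    def fit_pca(features_healthy, n_components=3):
        # Center the healthy-condition features and extract the principal
        # directions, which absorb environmental/operational variation.
        mean = features_healthy.mean(axis=0)
        _, _, vt = np.linalg.svd(features_healthy - mean, full_matrices=False)
        return mean, vt[:n_components]

    def damage_index(feature_vector, mean, components):
        # Norm of the residual after projecting out the retained subspace:
        # a large residual flags a deviation not explained by environment.
        centered = feature_vector - mean
        residual = centered - components.T @ (components @ centered)
        return np.linalg.norm(residual)

    # Hypothetical usage: set a detection threshold from healthy data only.
    rng = np.random.default_rng(0)
    healthy = rng.normal(size=(200, 8))  # 200 baseline feature vectors
    mean, comps = fit_pca(healthy)
    threshold = np.percentile([damage_index(x, mean, comps) for x in healthy], 99)

A measurement whose damage index exceeds such a threshold would then be flagged for inspection.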
APA, Harvard, Vancouver, ISO, and other styles
29

Fracchia, Silvia. "Search for third-generation squarks in all-hadronic final states at the LHC with the ATLAS detector." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/400577.

Full text
Abstract:
This thesis is mainly devoted to the search for the lightest bottom squark, performed in final states with large missing transverse momentum and two b-jets. Under the assumption of R-parity conservation, a simplified signal model is considered, consisting of the direct production of a pair of bottom squarks, each decaying exclusively into a b-quark and the lightest neutralino. The search for the bottom squark in Run 2 is presented, using the data from pp collisions collected by the ATLAS experiment in 2015 at $\sqrt{s} = 13$ TeV, corresponding to an integrated luminosity of 3.2 fb$^{-1}$. Different signal regions are defined in this search, optimized in order to be sensitive to a broad range of signal models with different sbottom and neutralino masses. The Standard Model background processes contributing to the targeted final states are considered; the dominant backgrounds are constrained in dedicated control regions by means of a profile likelihood fit. The observed data are found to be in agreement with the SM predictions. The results are interpreted in terms of model-independent 95% confidence-level upper limits on the visible cross section. Values in the range between 3.38 fb and 1.23 fb are found to be excluded for the different selections. Exclusion limits at 95% confidence level are finally placed on the sbottom-neutralino mass plane. Sbottom masses up to 800 GeV are excluded for neutralino masses below 360 GeV (840 GeV for neutralino masses below 100 GeV). Mass differences above 100 GeV between the sbottom and the neutralino are excluded up to sbottom masses of 500 GeV. The exclusion limits obtained in this thesis significantly extend the results obtained from the Run 1 search. The results were published by the ATLAS Collaboration, leading to one journal article and two public notes for conferences. Prospects for future searches are also given, showing that in the coming years, with more data delivered by the LHC, it will be possible to verify or exclude the existence of the bottom squark beyond the TeV scale. Similar considerations hold for top squark searches. Altogether, the next years of the LHC will be crucial for saying a final word on natural SUSY.
APA, Harvard, Vancouver, ISO, and other styles
30

De, Feudis Mary. "Diamonds : synthesis and contacting for detector applications." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD005/document.

Full text
Abstract:
This PhD work was carried out under an international cotutelle agreement between the University of Salento (L3, Italy) and the University of Paris 13 (LSPM, France). The main aim was the manufacturing of ohmic contacts on diamond surfaces for detector and electronic device applications. The work at L3 was dedicated to the laser-induced diamond graphitization process in order to produce graphitic electrodes on intrinsic diamonds. An experimental set-up dedicated to the laser writing technique on diamond was developed in both its hardware and software aspects, and a protocol for manufacturing segmented graphitic contacts on large-scale (cm²) diamond surfaces was implemented. Extensive characterization work demonstrated the diamond-graphite phase transition and an ohmic electrical behaviour of the contacts, with a resistivity of the order of 10⁻⁵ Ω.m. Eventually, an all-carbon detector was developed and tested with 450 MeV electron and positron beams, proving to be a good candidate as active target for a new high-energy experiment (PADME) in the framework of dark-matter searches. The work at LSPM was dedicated to the development of a protocol for achieving ohmic contacts on lightly boron-doped, oxygen-terminated diamond grown by MPACVD. The fabrication of Ti/Au metallic contacts on top of a mesa structure relied on a He ion-implantation treatment to induce a graphitic layer underneath the diamond surface. Electrical measurements on lightly doped diamonds ([B] = 4 × 10¹⁷ cm⁻³) with metal-only or graphite/metal contacts showed that the graphitic layer makes the contacts ohmic, leading to a specific contact resistance as low as 3.3 × 10⁻⁴ Ω.cm².
APA, Harvard, Vancouver, ISO, and other styles
31

MALAGO', Marco. "FAULT DETECTION IN HEAVY DUTY WHEELS BY ADVANCED VIBRATION PROCESSING TECHNIQUES AND NUMERICAL MODELLING." Doctoral thesis, Università degli studi di Ferrara, 2012. http://hdl.handle.net/11392/2389246.

Full text
Abstract:
The research work reported in this thesis aims at developing a methodology and a procedure for the condition monitoring and diagnostics of heavy-duty wheels, based on vibration measurements at the end of the production line. The early detection of manufacturing anomalies is necessary to appreciably reduce the time and money lost due to problems that can arise during operation. Heavy-duty wheels are used in applications such as automated vehicles and motor trucks and are mainly composed of a polyurethane tread glued to a cast-iron hub. The application of adhesive between tread and hub is the most critical assembly phase, since it is performed entirely by an operator and contamination of the bonding area between polyurethane and cast iron may occur. Furthermore, the presence of rust on the hub surface can worsen the adhesion interface and reduce the operating life. To the author's knowledge, no studies by other researchers concerning fault detection in heavy-duty wheels are present in the literature. In order to develop a detection procedure, several wheels with different types of faults were manufactured ad hoc, with anomalies similar to real ones. Such anomalies consist of incorrectly bonded zones between tread and hub as well as localized or distributed rust on the hub surface. Numerous experimental tests were carried out in order to identify the vibration effects of these defects as a function of fault type and dimension. The thesis assesses the detection and diagnostic capability of different vibration processing techniques, such as Synchronous Average and Cyclostationarity Analysis, using well-suited indicators and determining pass/fail decision thresholds through Tukey's non-statistical method. At the same time, an accurate dynamic analysis of the mechanical system was conducted, both experimentally through modal analysis techniques and numerically through the finite element method, in order to establish the influence of the dynamic properties of the system components (namely the heavy-duty wheel, the support and the frame of the test set-up) on the measured vibratory signal. Based on this dynamic characterization, a multibody model of the system was developed: the heavy-duty wheel is considered rigid and the compliance is concentrated in the contact patch between wheel and drum. A nonlinear elastic contact algorithm is adopted, based on stiffness properties previously extracted from static tests conducted on both material specimens and complete components. The model makes it possible to reproduce the vibration effects of the defects and to simulate signal modifications due to different component materials and designs.
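As an illustration of the thresholding step mentioned above, the sketch below (Python; the indicator values are hypothetical) computes Tukey's fences, which flag any wheel whose vibration indicator falls outside limits built from the interquartile range:

    import numpy as np

    def tukey_fences(values, k=1.5):
        # Tukey's method: outlier fences from the quartiles, with no
        # assumption about the underlying statistical distribution.
        q1, q3 = np.percentile(values, [25, 75])
        iqr = q3 - q1
        return q1 - k * iqr, q3 + k * iqr

    # Hypothetical RMS vibration indicators from known-good wheels.
    healthy_rms = np.array([0.81, 0.78, 0.85, 0.80, 0.79, 0.83, 0.82, 0.77])
    low, high = tukey_fences(healthy_rms)
    # A tested wheel passes if its indicator lies within [low, high].

The distribution-free nature of the fences is what makes the method attractive for end-of-line testing, where the statistics of the healthy indicators are not known in advance.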
APA, Harvard, Vancouver, ISO, and other styles
32

Gianessi, Mattia. "Robotica e intelligenza artificiale applicate alla validazione automotive." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
This work reports on an internship carried out in the electronic development department, Integration and Validation team, of Automobili Lamborghini, during which an autonomous, flexible system was designed and implemented, easily integrable with the technologies already in use in the department. The system must be able to recognise and interact with the HMI input and output components of the cars (switches, button panels and touchscreen displays), making it possible to execute and evaluate the success, or failure, of a series of test cases proposed by an external user. The component of the system devoted to object recognition consists of a neural network built with the TensorFlow software library, while the one used to interact with those objects is an LBR iiwa robotic arm equipped with a KMR mobile platform. The final result was then integrated into a broader framework involving other stakeholders of the product development and validation flow of Automobili Lamborghini S.p.A.
APA, Harvard, Vancouver, ISO, and other styles
33

關子祺 and Tsz-ki Kwan. "The detection of BCR-ABL kinase domain mutation in the management of chronic myeloid leukemia." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40738358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Kwan, Tsz-ki. "The detection of BCR-ABL kinase domain mutation in the management of chronic myeloid leukemia." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B40738358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Rivera, Ivan Fernando. "RF MEMS Resonators for Mass Sensing Applications." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5817.

Full text
Abstract:
Sensing devices built on resonant microelectromechanical and nanoelectromechanical (M/NEMS) system technology have become one of the most attractive areas of research over the past decade. These devices make exceptional sensing platforms because of their minuscule dimensions and resonant modes of operation, which are extremely sensitive to added mass. Alongside their unique sensing attributes, they also offer foundry-compatible microfabrication processes, low DC power consumption, and CMOS integration compatibility. In this work, electrostatically and piezoelectrically actuated RF MEMS bulk resonators have been investigated for mass sensing applications. The capacitively-transduced resonators employed electrostatic actuation to achieve the desired resonance mode shapes. These devices were fabricated on silicon-on-insulator (SOI) substrates with a device-layer resistivity ranging from 0.005 Ω cm to 0.020 Ω cm. The electrode-to-resonator capacitive gap was defined by two different techniques: oxidation-enabled gap reduction and sacrificial atomic layer deposition (ALD). For oxidation-enabled gap reduction, a hard mask composed of silicon nitride and polysilicon is deposited, patterned, and defined using standard MEMS thin-film deposition and fabrication techniques. The initial lithographically-defined capacitive gap of 1 μm is further reduced to ~300 nm by a wet furnace oxidation process. Subsequently, the reduced gap is transferred to the device layer using a customized high-aspect-ratio dry etching technique. For the sacrificial approach, a ~100 nm-thin ALD aluminum oxide sidewall spacer is chemically etched away as the last microfabrication step to define the ~100 nm capacitive gap. The small capacitive gaps developed in this work result in small motional resistance (Rm) values, which relax the requirements on the read-out circuitry by enhancing signal transduction. Piezoelectrically-actuated resonators were developed using thin-film bulk acoustic resonator (FBAR or TFBAR) and thin-film piezoelectric-on-substrate (TPoS) technologies, with reported Q factors and resonant frequencies as high as 10,638 and 776.54 MHz, respectively, along with measured motional resistance values as low as 169 Ω. To the best of our knowledge, this work is the first to demonstrate TPoS resonators using LPCVD polysilicon as an alternative low-loss structural layer to single-crystal silicon, with Q factors as high as ~3,000 (in air) and measured motional resistance values as low as 6 kΩ, with an equivalent acoustic velocity of 6,912 m s⁻¹ for a 7 μm thick layer. Polysilicon-based TPoS devices were measured with a coefficient of resonant frequency of -3.77 ppm/°C, the lowest ever reported for this type of device. In addition, a novel releasing process, thin-piezo on single-crystal reactive etched (TPoSCRE), allows TPoS resonators to be developed without the need for SOI wafers; devices fabricated using this technique were reported with Q factors exceeding ~1,000 and measured motional resistance values as low as 9 kΩ. The sensitivity of a fourth-order contour-mode ZnO-on-SOI disk resonator based mass sensor was determined by performing multiple depositions of platinum micro-pellets, using a focused ion beam (FIB) equipped with a gas injection system, on strategically chosen locations. The sensitivity of the resonator at its points of maximal and minimal displacement was found to be 1.17 Hz fg⁻¹ and 0.334 Hz fg⁻¹, respectively. The estimated limit of detection of the resonator was a record-breaking 367 ag (1 ag = 10⁻¹⁸ g) compared to devices with similar modes of resonance. Lastly, a lateral-extensional resonator was used to measure the mass of an HKUST-1 MOF crystal cluster, which was found to be 24.75 pg and 31.19 pg by operating two lateral resonant modes, respectively.
APA, Harvard, Vancouver, ISO, and other styles
36

Jonsson, Christian. "Detection of annual rings in wood." Thesis, Linköping University, Department of Science and Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15804.

Full text
Abstract:
This report describes an annual-line detection algorithm for the WoodEye quality control system. The goal of the algorithm is to find the positions of annual lines on the four surfaces of a board; the purpose is to use this result to find the inner annual-ring structure of the board. The work was done using image processing techniques to analyze images collected with WoodEye. The report gives the reader an insight into the requirements of quality control systems in the woodworking industry and the benefits of automated quality control versus manual inspection. The appearance and formation of annual lines are explained in detail to provide insight into how the problem should be approached. A comparison between annual rings and fingerprints is made to see whether ideas from this area of pattern recognition can be adapted to annual-line detection. This comparison, together with a study of existing methods, led to the implementation of a fingerprint enhancement method, which became a central part of the annual-line detection algorithm. The algorithm consists of two main steps: enhancing the edges of the annual rings, and tracking along the edges to form lines. Different solutions for components of the algorithm were tested to compare performance. The final algorithm was tested with different input images to determine whether the annual-line detection algorithm works best with images from a grayscale or an RGB camera.
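As a pointer to what fingerprint-style enhancement typically computes first, the sketch below (Python; a generic structure-tensor orientation estimate, offered as an illustration rather than the report's exact implementation) yields the dominant local edge orientation that ring-edge enhancement and line tracking can follow:

    import numpy as np
    from scipy import ndimage

    def local_orientation(image, sigma=3.0):
        # Structure-tensor estimate of the dominant local edge orientation,
        # a core ingredient of fingerprint-style enhancement.
        img = image.astype(float)
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        jxx = ndimage.gaussian_filter(gx * gx, sigma)
        jxy = ndimage.gaussian_filter(gx * gy, sigma)
        jyy = ndimage.gaussian_filter(gy * gy, sigma)
        # Orientation angle (radians) per pixel.
        return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)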
APA, Harvard, Vancouver, ISO, and other styles
37

DAGA, ALESSANDRO PAOLO. "Vibration Monitoring: Gearbox identification and faults detection." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2763473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Xi, Min. "Image sequence guidance for mobile robot navigation." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36082/1/36082_Xi_1998.pdf.

Full text
Abstract:
Vision-based mobile robot navigation is a challenging issue in automated robot control. Using a camera as an active sensor requires the processing of a huge amount of visual data captured as an image sequence. The relevant visual information for a robot navigation system needs to be extracted from the visual data and used for real-time control. Several questions need to be answered, including: 1) What is the relevant information, and how can it be extracted from a sequence of 2D images? 2) How can the 3D surrounding environment be recognised from the extracted images? 3) How can a collision-free path be generated for robot navigation? This thesis discusses all three questions and presents the design of a complete vision-based mobile robot navigation system for an a priori unknown indoor environment. The image sequence is captured continuously via an on-board camera during robot navigation. The movement of the robot with its mounted camera causes an optical flow of image points, which is utilised for the extraction of three-dimensional information and the estimation of robot motion in the scene. The developed image-sequence processing algorithm is designed with emphasis on speed, so that the system can be fast enough to meet the real-time control requirement. The introduction of a reference image enables the prediction of regions of interest in the image sequence, reducing computational complexity. The system is able to recognise three-dimensional surroundings from the image sequence and to reconstruct them into a two-dimensional map with information about the location of obstacles. From this map, a collision-free path is generated with the grid potential algorithm and used for robot navigation. Furthermore, the system has the capability of learning and establishing the geometric structure of the building by exploration, which is a first step towards building a preliminary artificially intelligent mobile robot.
APA, Harvard, Vancouver, ISO, and other styles
39

Nguyen, Trung-Hiên. "Theoretical and experimental study of optical solutions for analog-to-digital conversion of high bit-rate signals." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S110/document.

Full text
Abstract:
Bi-dimensional modulation formats, based on amplitude and phase signal modulation, are now commonly used in optical communications thanks to breakthroughs in the fields of electronics and digital signal processing (DSP) required in coherent optical receivers. Photonic solutions could compensate for the current bandwidth limitations of electrical circuits by facilitating the parallelization of signal processing. Photonics is particularly attractive for signal sampling thanks to the available stable optical clocks. The heart of the present work concerns analog-to-digital conversion (ADC) as a key element of coherent detection. A prototype of linear optical sampling, using an original solution for the optical sampling source, was built and validated with the successful equivalent-time reconstruction of NRZ, QPSK and 16-QAM signals. Optical and electrical limitations of the system are experimentally and numerically analyzed, notably the extinction ratio of the optical source and the ADC parameters (bandwidth, integration time, effective number of bits ENOB). Moreover, new DSP tools are developed for optical transmission using bi-dimensional (amplitude and phase) modulation formats. Two solutions are proposed for IQ quadrature imbalance compensation in single-carrier optical coherent transmission: an original method of maximum signal-to-noise-ratio estimation (MSEM) and a new structure for joint compensation and equalization; both methods are experimentally and numerically validated with 16-QAM signals. Moreover, an improved solution for carrier recovery (frequency-offset and phase estimation), based on a circular harmonic expansion of a maximum log-likelihood function, is studied for the first time in the context of optical telecommunications. This solution, which can operate with any kind of bi-dimensional modulation format, is numerically validated up to 128-QAM. All the DSP tools developed in this work are finally used in the demonstration of a 10 Gbaud QPSK 100 km transmission experiment, featuring a strong nonlinear phase-noise limitation and regenerated using a phase-preserving, power-limiting function based on a photonic crystal nanocavity.
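For context on what IQ-imbalance compensation does, here is a minimal sketch (Python) of the classical Gram-Schmidt orthogonalization procedure, a standard baseline for this task; it is not the MSEM method or the joint compensation-and-equalization structure proposed in the thesis:

    import numpy as np

    def gsop(i_rail, q_rail):
        # Gram-Schmidt orthogonalization: remove from Q the component
        # correlated with I, then normalize both rails to unit power.
        i_norm = i_rail / np.sqrt(np.mean(i_rail ** 2))
        q_orth = q_rail - (np.mean(i_rail * q_rail) / np.mean(i_rail ** 2)) * i_rail
        q_norm = q_orth / np.sqrt(np.mean(q_orth ** 2))
        return i_norm, q_norm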
APA, Harvard, Vancouver, ISO, and other styles
40

Mathias, Bryn Lugh Shorney. "Search for supersymmetry in pp collisions with all-hadronic final states using the αT variable with the CMS detector at the LHC". Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/17844.

Full text
Abstract:
A search for supersymmetry in the exclusive hadronic and missing-energy channel is presented, based on 5 fb$^{-1}$ of data collected using the CMS detector at the LHC. The data were produced at a centre-of-mass energy of 7 TeV. The kinematic discriminator αT is used to select signal events, which are then binned in terms of the visible energy per event. The efficiency of the hadronic Level-1 triggers is measured throughout the data-taking period, and a scheme to reduce the effects of multiple collisions per bunch crossing on the cross section of the trigger paths is studied, implemented and tested in situ. These efficiency measurements are considered in the development of an analysis-specific trigger, whose performance is measured in situ, with the final efficiencies taken into account in the presented analysis. A data-driven background estimation method is used to predict the expected yield in the signal regions from Standard Model processes. In the absence of an observed excess, limits are set at the 95% confidence level on the production cross section and masses of new particles. In the context of the Constrained Minimal Supersymmetric Model (CMSSM), squarks and gluinos with masses of up to 1 TeV are excluded. In terms of simplified models with various light- and heavy-flavour final states, squarks and gluinos are excluded at a mass of ≈1 TeV for a Lightest Supersymmetric Particle (LSP) mass of up to ≈500 GeV. Natural units (ħ = c = 1) are used throughout.
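For reference, the αT variable mentioned here is conventionally defined for a dijet system (multi-jet events are first combined into two pseudo-jets) as

$$\alpha_T = \frac{E_T^{j_2}}{M_T}, \qquad M_T = \sqrt{\Big(\sum_{i=1,2} E_T^{j_i}\Big)^{2} - \Big(\sum_{i=1,2} p_x^{j_i}\Big)^{2} - \Big(\sum_{i=1,2} p_y^{j_i}\Big)^{2}},$$

so that a perfectly measured, balanced dijet event gives $\alpha_T = 0.5$, jet mismeasurement pushes $\alpha_T$ below 0.5, and genuine missing transverse energy can push it above 0.5; this is what makes the variable a powerful QCD discriminator. (The definition is quoted from the published CMS αT analyses, not from the thesis text itself.)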
APA, Harvard, Vancouver, ISO, and other styles
41

Rajamani, Sathish. "Small molecule signaling and detection systems in protists and bacteria." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155564098.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ali, Abdallah Abdel-Megid Mohamad [Verfasser]. "Rapid detection and quantification of Cercospora beticola in soil using PCR and ELISA assays / Abdallah Abdel-Megid Mohamad Ali." Kiel : Universitätsbibliothek Kiel, 2012. http://d-nb.info/1020283440/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Dahal, Rohini. "Bilateral Thermographic Image Comparison Software Tool for Pathology Detection in Canines with Application to Anterior Cruciate Ligament (ACL) Rupture." Thesis, Southern Illinois University at Edwardsville, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10276314.

Full text
Abstract:
Introduction: The bilateral symmetry property of animals can be used to detect pathologies by comparing body parts on the two sides. For any pathological disorder, thermal patterns differ from those of the normal body parts. A software application for veterinary clinics is under development that takes as input two thermograms of body parts on either side, one normal and the other unknown; the application compares them on the basis of extracted features and appropriate similarity and difference measures, and outputs the likelihood of pathology. Previous research was used to determine the appropriate image processing, feature extraction and comparison metrics. The comparison metrics used are the vector inner product, Tanimoto, Euclidean, city block, Minkowski and maximum value metrics. Results from the comparison experiments are also used to derive potential threshold values that separate normal from abnormal images for a specific pathology.

Objectives: The main objective of this research is to build a comparison software tool, combining the concept of bilateral symmetry in animals with IR thermography, that can be used for prescreening in veterinary clinics.

Comparison Software Tool Development: The comparison software tool was developed for veterinary clinics as a prescreening tool for pathology detection, using the concepts of thermography and the bilateral symmetry property of animals. The tool has a graphical user interface (GUI) that allows ease of use for the clinical technician, who inputs images or raw-temperature CSV files and compares thermographic images of bilateral body parts. The software extracts features from the images and calculates the difference between the feature vectors with distance and/or similarity metrics. From these metrics, the percentage deviation of the unknown (test) image from the known image is calculated. The percentage deviation between the thermograms of the same body parts on either side indicates the extent and impact of the disease [Poudel; 2015]. Previous research in veterinary thermography [Liu; 2012; Subedi; 2014; Fu; 2014; Poudel; 2015] has been combined with the real-world veterinary clinical scenario to develop a software tool that can be helpful both for researchers and for clinical technicians in the prescreening of pathologies.

Experimental Results and Discussion: Experiments were performed on ACL thermograms to determine a threshold that can separate normal and abnormal ACL images. The 18-colored Meditherm images gave poor results and could not suggest any threshold value, but results were positive for temperature-remapped 256-gray-level Meditherm images, which suggested that a percentage deviation of 40% could produce a separation. The total number of Normal-Normal pairs was greater than the total number of Normal-Abnormal pairs below 40% deviation; similarly, the total number of Normal-Abnormal pairs was greater than the total number of Normal-Normal pairs above 40%. This trend was consistent for the Euclidean, maximum value and Minkowski distances, for texture distances of both 6 and 10. The performance in terms of sensitivity and specificity was poor: the best sensitivity achieved was 55% and the best specificity 67%. This indicates better results for predicting the absence of ACL rupture than for actually finding the disease. In this case the software could be used by the clinician in conjunction with other diagnostic methods.

Conclusion: The experiments, results and analysis show that the comparison software tool can be used in veterinary clinics for the prescreening of diseases in canines and felines, estimating the extent and impact of the disease from the percentage deviation. However, more research is necessary to examine its efficacy for specific pathologies. Note that the software can be used by researchers to compare any two images of any format. For the ACL experimentation, there are indications that a threshold value separating normal from abnormal is possible, but that the spectral and texture features suggested by researchers [Subedi; 2014; Liu; 2012; Fu; 2014; Poudel; 2015] are not sufficient to determine that threshold with the given image database.
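To make the comparison step concrete, here is a minimal sketch (Python) of a percentage-deviation computation over two feature vectors; the normalization shown is an assumption for illustration, and the thesis's exact definition may differ:

    import numpy as np

    def percentage_deviation(v_known, v_test, metric="euclidean"):
        # Distance between the feature vectors of the known (normal) and
        # test thermograms, expressed relative to the known side.
        diff = np.asarray(v_test, float) - np.asarray(v_known, float)
        if metric == "euclidean":
            dist = np.linalg.norm(diff)
        elif metric == "cityblock":
            dist = np.abs(diff).sum()
        elif metric == "maximum":
            dist = np.abs(diff).max()
        else:
            raise ValueError("unsupported metric: " + metric)
        return 100.0 * dist / np.linalg.norm(v_known)

With the 40% figure reported above, a pair scoring below the threshold would be treated as Normal-Normal and one scoring above it as Normal-Abnormal.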
APA, Harvard, Vancouver, ISO, and other styles
44

Schönwald, Arne [Verfasser], Hermann [Gutachter] Kolanoski, Marek [Gutachter] Kowalski, and Sebastian [Gutachter] Böser. "Investigation of all-flavour neutrino fluxes with the IceCube detector using the cascade signature / Arne Schönwald. Gutachter: Hermann Kolanoski ; Marek Kowalski ; Sebastian Böser." Berlin : Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://d-nb.info/1102992852/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

TETTAMANZI, VERONICA. "Development of an innovative molecular assay for the simultaneous detection of the BCR-ABL Major and Minor fusion transcripts by the use of Loop Mediated Isothermal Amplification reaction (Q-LAMP)." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2016. http://hdl.handle.net/10281/103108.

Full text
Abstract:
Leukemias are diseases characterized by the uncontrolled proliferation of malignant hematopoietic stem cells. The fusion gene BCR-ABL is the result of a reciprocal translocation between chromosome 9 and chromosome 22, t(9;22), and encodes an oncogenic tyrosine kinase responsible for the neoplastic transformation observed in chronic myeloid leukemia (CML) and acute lymphoblastic leukemia (ALL). The p210 isoform is the hallmark of CML, detectable in more than 95% of cases, while in ALL the fusion gene BCR-ABL may be present in both the p190 isoform (60% of cases) and the p210 isoform (40% of cases). The molecular detection of BCR-ABL is essential to diagnose CML and Philadelphia-positive ALL, making the administration of proper treatment possible. Moreover, the discrimination of the isoform, only possible using molecular methods, facilitates the choice of the specific quantitative test for molecular monitoring during treatment. To date, the most widely used molecular techniques for the detection of the transcripts are based on RT-PCR (Reverse Transcription-Polymerase Chain Reaction). This is a time-consuming procedure consisting of several steps (reverse transcription, amplification, gel detection) which must be performed by skilled personnel in equipped laboratories. The risk of cross-contamination, due to the multistep nature of the technique, and the absence of an internal reaction control may also lead to the generation of false-positive or false-negative signals. The PhD project presented in this thesis describes the development and optimization of an innovative molecular diagnostic test, based on RT-QLAMP® technology (Reverse Transcription Loop-Mediated Isothermal Amplification), and its application in the diagnostic field for the simultaneous detection of the BCR-ABL p210 and p190 transcripts. The optimization was performed on both plasmid controls and RNA extracted from cell lines, and the final assay was validated on clinical samples. The LAMP technology proved to be very sensitive and specific, simple and rapid: amplification and detection of the transcripts occur in real time, inside a single tube, starting directly from RNA. Negative clinical samples were validated by the amplification of the internal control, and the possibility of sample cross-contamination was dramatically decreased. In addition, the method proved to be very robust thanks to its insensitivity to the major PCR inhibitors. The new diagnostic assay overcomes the limitations of the currently used diagnostic methods and represents a valid alternative for the molecular diagnosis of chronic myeloid leukemia and acute lymphoblastic leukemia.
APA, Harvard, Vancouver, ISO, and other styles
46

Nascimento, Guilherme Antonio Gomes do. "Verificação da aplicabilidade de dados obtidos por sistema LASER batimétrico aerotransportado à cartografia náutica /." Presidente Prudente, 2019. http://hdl.handle.net/11449/181407.

Full text
Abstract:
A Hydrographic Survey (HS) has as its main goal the acquisition of data for editing and updating nautical documents, which are focused on the safety of navigation. In order to establish standard uncertainty parameters for nautical charts, the International Hydrographic Organization (IHO) defines minimum confidence levels for different survey orders. These specifications were acknowledged by the Brazilian Navy, the institution responsible for producing Brazilian nautical charts, as described in NORMAM-25. One such parameter is the maximum allowed Total Vertical Uncertainty (TVU), a quality indicator of the depth measurement. Depth information influences the maximum operational draft with which a vessel can safely travel in a region, affecting port operations and limiting commercial transactions; accurately estimated depths enhance the operational parameters of the ports. Because the environment being represented is dynamic, whether as a consequence of nature itself or of anthropic activities, updating a nautical chart must be a constant concern. As a complement to the traditional survey technique conducted with a boat-mounted echosounder, a HS can be performed from aircraft using LiDAR (Light Detection And Ranging) technology, through Airborne LASER Bathymetry (ALB), which operates with LASER pulses in the green region of the electromagnetic spectrum. Considering these points, this work analyzed the differences between the... (Complete abstract: click electronic access below)
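For orientation, the IHO standard behind NORMAM-25 expresses the maximum allowable TVU as a depth-dependent bound; the form and the Special Order coefficients quoted below are from IHO S-44 and are given for illustration, not taken from the thesis:

$$\mathrm{TVU}_{\max}(d) = \sqrt{a^{2} + (b \cdot d)^{2}},$$

where $d$ is the depth, $a$ the depth-independent uncertainty component and $b$ the depth-dependent factor (e.g., $a = 0.25$ m and $b = 0.0075$ for Special Order). A sounding at $d = 20$ m must then be accurate to within about $\pm 0.29$ m at the 95% confidence level.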
APA, Harvard, Vancouver, ISO, and other styles
47

Monti, Lorenzo. "Sistema di monitoraggio modulare attraverso tecnologie mobile per la sicurezza in ambiente domestico." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6618/.

Full text
Abstract:
This project aims to create an application for smart mobile devices, in particular smartphones and smartwatches running the Android operating system, able to detect a fall and automatically send an alarm message intended to summon help in real time. Given this functionality, the app was designed primarily for elderly users.
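As an illustration of how such detection is commonly implemented on accelerometer data (a generic heuristic with hypothetical thresholds, not necessarily the algorithm used in this app):

    import numpy as np

    def fall_suspected(acc_magnitude_g, impact_thr=2.5, still_thr=0.3, still_len=50):
        # Heuristic: a strong impact spike followed by a window of
        # near-stillness (magnitude close to 1 g) suggests a fall.
        acc = np.asarray(acc_magnitude_g, float)
        for spike in np.flatnonzero(acc > impact_thr):
            window = acc[spike + 1 : spike + 1 + still_len]
            if window.size == still_len and np.all(np.abs(window - 1.0) < still_thr):
                return True
        return False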
APA, Harvard, Vancouver, ISO, and other styles
48

Mazzini, Pietro. "Analisi di integrazione su sistemi di Intrusion Detection e Incident Handling in ambito enterprise." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21851/.

Full text
Abstract:
This thesis presents a system for Intrusion Detection, Incident Handling and Response, covering its production, organizational and managerial processes as well as the purely practical and implementation-related ones. The name of this project is OSSIHR (Open Source System for Incident Handling and Response). The thesis consists of four chapters. Chapter 1 introduces the concepts, acronyms and processes that characterize the disciplines of Intrusion Detection, Incident Handling and Incident Management. Chapter 2 analyzes the state of the art and defines the mechanisms of an Incident Handling system suitable for adoption in an enterprise context. The integrations of the software components that were used and the architecture of OSSIHR are documented and discussed in depth in Chapter 3. The room for improvement and the critical aspects of the system are highlighted in Chapter 4, which also includes a comparative study between the proposed open-source system and other closed-source systems.
APA, Harvard, Vancouver, ISO, and other styles
49

Mobarez, Sarah Nagy Ali [Verfasser], Axel [Akademischer Betreuer] Dürkop, and Antje J. [Akademischer Betreuer] Bäumner. "Development of a Micro-total Analysis System for the detection of Biogenic Amines / Sarah Nagy Ali Mobarez ; Axel Dürkop, Antje J. Bäumner." Regensburg : Universitätsbibliothek Regensburg, 2020. http://d-nb.info/1217481419/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lawlor, Sarah E. "Using Advanced Land Imager (ALI) and Landsat Thematic Mapper (TM) for the detection of the invasive shrub Lonicera maackii in southwestern Ohio forests." Miami University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=miami1303831778.

Full text
APA, Harvard, Vancouver, ISO, and other styles