
Dissertations / Theses on the topic 'Optical detectors – Mathematical models'


Consult the top 50 dissertations / theses for your research on the topic 'Optical detectors – Mathematical models.'


1

Furness, Charles Zachary. "Parameter identification of a flexible beam using a modal domain optical fiber sensor." Thesis, Virginia Tech, 1990. http://hdl.handle.net/10919/42058.

Abstract:
An optical fiber sensor is used for identification of a cantilevered beam under various concentrated mass loadings. A model of the sensor, as well as of the dynamic system, is developed and used to test the reliability of the identification. Input/output data from an experiment are gathered and used in the identification. A survey of the existing areas of damage detection and parameter identification is included, along with suggestions for incorporating fiber optic sensors into existing techniques. The goal of this research was to show that the fiber sensor can be used for identification purposes and that it is sensitive to parameter changes within the system (in this case, concentrated mass changes).
Master of Science
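The identification idea in this abstract (inferring a concentrated mass from the measured dynamics of a cantilevered beam) can be illustrated with a minimal single-mode sketch in Python. The lumped-parameter model, the 0.2357 effective-mass factor, and all numbers are textbook approximations chosen for illustration, not the model used in the thesis.

```python
import math

def tip_frequency_hz(EI, L, rho_A, tip_mass):
    """First-mode frequency of a cantilever with a concentrated tip mass,
    using the single-DOF approximation k = 3*EI/L^3 with an effective
    beam mass of about 0.2357*rho_A*L lumped at the tip."""
    k = 3.0 * EI / L**3
    m_eff = 0.2357 * rho_A * L + tip_mass
    return math.sqrt(k / m_eff) / (2.0 * math.pi)

def identify_tip_mass(EI, L, rho_A, f_measured):
    """Invert the forward model to estimate the concentrated mass from a
    measured first-mode frequency (the identification step)."""
    k = 3.0 * EI / L**3
    m_total = k / (2.0 * math.pi * f_measured) ** 2
    return m_total - 0.2357 * rho_A * L
```

Because this forward model is invertible in closed form, the identified mass round-trips exactly; a real identification would instead fit noisy input/output data, as the thesis does.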
2

Franceschiello, Benedetta. "Cortical based mathematical models of geometric optical illusions." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066131/document.

Abstract:
This thesis presents mathematical models for visual perception and deals with phenomena in which there is a visible gap between what is represented and what we perceive. A phenomenon that has drawn particular interest is amodal completion, which consists in perceiving the completion of a partially occluded object, in contrast with modal completion, in which we perceive an object even though its boundaries are not present in the image [Gestalt theory, 99]. Such boundaries reconstructed by our visual system are called illusory contours, and their neural processing is performed by the primary visual cortices (V1/V2) [93]. Geometric models of the functional architecture of primary visual areas date back to Hoffman [86]. In [139] Petitot proposed a model of single-boundary completion through constraint minimization, the neural counterpart of the model of Mumford [125]. In this setting Citti and Sarti introduced a cortical-based model [28], which justifies the illusions at a neural level and provides a neurogeometrical model for V1. Another class of phenomena are geometric optical illusions (GOIs), discovered in the XIX century [83, 190], arising in the presence of a mismatch of geometrical properties between an item in object space and its associated percept. The fundamental idea developed here is that these phenomena arise due to a polarization of the connectivity of V1/V2, responsible for the misperception. Starting from [28], in which the connectivity building contours in V1 is modeled as a sub-Riemannian metric, we extend it by claiming that in GOIs the cortical response to the stimulus modulates the connectivity of the cortex, becoming a coefficient for the metric. GOIs will be tested through this model.
3

CHENG, YEOU-YEN. "MULTIPLE-WAVELENGTH PHASE SHIFTING INTERFEROMETRY (OPTICAL-TESTING, ASPHERIC SURFACE)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187928.

Abstract:
The problems of combining the ideas of phase-shifting interferometry (PSI) and synthetic-wavelength techniques to extend the phase measurement range of conventional single-wavelength PSI are investigated. This combination gives multiple-wavelength phase-shifting interferometry two advantages: (1) a larger phase measurement range and (2) higher accuracy of phase measurement. Advantages, error sources, and limitations of single-wavelength PSI are discussed. Some practical methods to calibrate the piezoelectric transducer (PZT) used to phase-shift the reference beam are presented with experimental results. Two methods of two-wavelength PSI are used to solve the 2π ambiguity problem of single-wavelength PSI. In the first method, two sets of phase data (with 2π ambiguities) for the shorter wavelengths are calculated and stored in the computer, which then calculates new phase data for the equivalent wavelength λ(eq). The "error magnification effect," which reduces the measurement precision of the first method, is then investigated. The second, more accurate method uses the results of the first method as a reference to correct the 2π ambiguities in the single-wavelength phase data. Experimental results are included to confirm theoretical predictions. An enhancement of two-wavelength PSI, which requires the phase data of a third wavelength, is also investigated. Experiments are performed to verify the capability of multiple-wavelength PSI. For the wavefront being measured, the difference in optical path difference (OPD) between adjacent pixels is as large as 3.3 waves. After temporal averaging of five sets of data, the repeatability of the measurement is better than 2.5 nm (0.0025%) rms (λ = 632.8 nm). This work concludes with recommendations for future work that should make multiple-wavelength PSI a more practical technique for the testing of steep aspheric surfaces.
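The synthetic-wavelength idea described in this abstract can be sketched numerically: the equivalent wavelength of two single wavelengths is λ_eq = λ₁λ₂/|λ₁ − λ₂|, and the coarse OPD obtained at the equivalent wavelength selects the integer fringe order that resolves the 2π ambiguity of the fine single-wavelength phase. A hedged Python sketch (the wavelength values and error magnitudes below are illustrative):

```python
import math

def equivalent_wavelength(lam1, lam2):
    # Synthetic (equivalent) wavelength of a two-wavelength measurement.
    return lam1 * lam2 / abs(lam1 - lam2)

def correct_ambiguity(phase_single, opd_coarse, lam):
    """Resolve the 2*pi ambiguity of a fine single-wavelength phase using
    a coarse OPD measured at the equivalent wavelength: choose the integer
    fringe order that brings the fine value closest to the coarse one."""
    fine_fraction = phase_single * lam / (2.0 * math.pi)
    order = round((opd_coarse - fine_fraction) / lam)
    return fine_fraction + order * lam
```

The correction tolerates coarse-measurement errors up to half a fine wavelength, which is why the second method in the abstract is more accurate than using the equivalent-wavelength data alone.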
4

HAMMEL, STEPHEN MARK. "A DISSIPATIVE MAP OF THE PLANE--A MODEL FOR OPTICAL BISTABILITY (DYNAMICAL SYSTEMS)." Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/188149.

Abstract:
We analyze a dissipative map of the plane. The map was initially defined by Ikeda as a model for bistable behavior in an optical ring cavity. Our analysis is based upon an examination of attracting sets and basins of attraction. The primary tools utilized in the analysis are stable and unstable manifolds of fixed and periodic saddle points. These manifolds determine boundaries of basins of attraction, and the extent and evolution of attracting sets. We perform extensive numerical iterations of the map with a central focus on sudden changes in the topological nature of attractors and basins. Our analysis concentrates on the destruction of the lower branch attractor as a prominent example of attractor/basin interaction. This involves an examination of a possible link between two fixed points L and M, namely the heteroclinic connection Wᵘ(L) ∩ Wˢ(M) ≠ ∅. We use two different methods to approach this question. Although the Ikeda map is used as the working model throughout, both of the techniques apply to a more general class of dissipative maps satisfying certain hypotheses. The first of these techniques analyzes Wˢ(M) when Wᵘ(M) ∩ Wˢ(M) ≠ ∅, with the result that Wˢ(M) is found to invade some minimum limiting region for Wᵘ(M) ∩ Wˢ(M) ≠ ∅ arbitrarily close to tangency. The second approach is more topological in nature. We define a mesh of subregions to bridge the spatial gap between the points L and M, and concentrate on the occurrence of Wᵘ(L) ∩ Wˢ(M) ≠ ∅ (destruction of the attractor). The first main result is a necessary condition for the heteroclinic connection in terms of the behavior of the map on these subregions. The second result is a sequence of sufficient conditions for this link. There remains a gap between these two conditions, and in the final sections we present numerical investigations indicating that the concept of intersection links between subregions is useful for resolving cases near the boundary of the destruction region.
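The map analyzed here is the Ikeda map; in a common parametrization it reads z' = a + b·z·exp(i(κ − η/(1 + |z|²))), with dissipation b < 1. A short Python sketch for iterating it numerically (the parameter values are illustrative, not those used in the dissertation):

```python
import cmath

def ikeda(z, a=1.0, b=0.9, kappa=0.4, eta=6.0):
    """One step of the Ikeda map z' = a + b*z*exp(i*(kappa - eta/(1+|z|^2))),
    a model of field evolution in an optical ring cavity; b < 1 makes the
    map dissipative. Parameter values are illustrative."""
    return a + b * z * cmath.exp(1j * (kappa - eta / (1.0 + abs(z) ** 2)))

def orbit(z0, n, **params):
    # Iterate the map n times and return the full trajectory.
    traj = [z0]
    for _ in range(n):
        traj.append(ikeda(traj[-1], **params))
    return traj
```

Since |z'| ≤ a + b|z|, every orbit eventually enters the disk of radius a/(1 − b); this kind of trapping region is the starting point for the study of attracting sets described above.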
5

Zhang, Bo. "Design, modelling and simulation of a novel micro-electro-mechanical gyroscope with optical readouts." Thesis, Cape Peninsula University of Technology, 2007. http://hdl.handle.net/20.500.11838/1101.

Abstract:
Thesis (MTech (Electrical Engineering))--Cape Peninsula University of Technology, 2007
Micro-Electro-Mechanical Systems (MEMS) are among the fastest-developing technologies at present. MEMS processes leverage mainstream IC technologies to achieve on-chip sensor interfaces and signal-processing circuitry, multi-vendor accessibility, short design cycles, more on-chip functions and low cost. MEMS fabrication is based on thin-film surface microstructures, bulk micromachining, and LIGA processes. This thesis centres on developing optical micromachined inertial sensors based on MEMS fabrication technology that incorporates bulk Si into microstructures. Micromachined inertial sensors, consisting of accelerometers and gyroscopes, are one of the most important types of silicon-based sensors. Microaccelerometers alone have the second-largest sales volume after pressure sensors, and it is believed that gyroscopes will soon be mass-produced at volumes similar to those of traditional gyroscopes. A traditional gyroscope is a device for measuring or maintaining orientation, based on the principle of conservation of angular momentum. The essence of the machine is a spinning wheel on an axle: once spinning, the device tends to resist changes to its orientation due to the angular momentum of the wheel. In physics this phenomenon is also known as gyroscopic inertia or rigidity in space. Applications of traditional gyroscopes are limited by their large volume. MEMS gyroscopes, which use MEMS fabrication technology to miniaturize gyroscope systems, are of great importance in commercial, medical, automotive and military fields. They can be used in cars for ABS systems, for anti-roll devices, and for navigation in areas with tall buildings where the GPS system might fail. They can also be used for the navigation of robots in tunnels or pipes, for guiding capsules containing medicines or diagnostic equipment through the human body, or as 3-D computer mice.
MEMS gyroscope chips have been limited in measurement precision by imprecise electrical readout systems, while the market needs highly accurate, high-G-sustainable inertial measuring units (IMUs). Optical sensors have been available for some time and have become popular because of their performance, small volume and simplicity; however, the production cost of optical devices has not satisfied consumers. MEMS fabrication technology now makes possible low-cost micro-optical devices such as light sources, waveguides, ultra-thin optical fibres, micro photodetectors, and various demodulation measurement methods. Optical sensors may be defined as a means through which a measurand interacts with light guided in an optical fiber (an intrinsic sensor), or guided to and returned from an interaction region (an extrinsic sensor) by an optical fiber, to produce an optical signal related to the parameter of interest. During its over 30 years of history, fiber optic sensor technology has been successfully applied by laboratories and industries worldwide in the detection of a large number of mechanical, thermal, electromagnetic, radiation, chemical, motion, fluid-flow and turbulence, and biomedical parameters. Fiber optic sensors offer advantages over conventional electronic sensors: survivability in harsh environments, immunity to electromagnetic interference (EMI), light weight, small size, compatibility with optical fiber communication systems, high sensitivity for many measurands, and good potential for multiplexing. In general, the transducers used in these fiber optic sensor systems are either intensity modulators or phase modulators. Optical interferometers, such as the Mach-Zehnder, Michelson, Sagnac and Fabry-Perot interferometers, have become widely accepted as phase modulators in optical sensors for ultimate sensitivity to a range of weak signals.
According to the light source being used, interferometric sensors can be classified as either coherent interferometric sensors, when the interferometer is interrogated by a coherent light source such as a laser or monochromatic light, or low-coherence interferometric sensors, when a broadband source such as a light-emitting diode (LED) or a superluminescent diode (SLD) is used. This thesis proposes a novel micro-electro-mechanical gyroscope system with an optical interferometric readout, fabricated by MEMS technology, an original contribution to the design and research of micro-opto-electro-mechanical gyroscope systems (MOEMS) intended to provide better performance than current MEMS gyroscopes. Fiber optic interferometric sensors have proved more sensitive and precise than their electrical counterparts for measuring micro-scale distances. The MOEMS gyroscope design is based on an existing, successful MEMS vibratory gyroscope and a micro fiber-optic interferometric distance sensor, avoiding the large size, heavy weight and complex fabrication processes of fiber optic gyroscopes based on the Sagnac effect. The research starts from the fiber optic gyroscope based on the Sagnac effect and existing MEMS gyroscopes, then moves to the novel MOEMS gyroscope system, discussing its operating principles and structures. In this thesis, the operating principles, mathematical models and simulated performance of the MOEMS gyroscope are introduced, and suitable MEMS fabrication processes are discussed and presented. The first prototype will be sent to a manufacturer for fabrication and further real-time performance testing. There is considerable scope for invention, further research and optimization around this novel MOEMS gyroscope chip.
In future work, the research will focus on integrating three-axis gyroscopes into one microstructure using optical sensor multiplexing principles, on new optical devices such as more powerful light sources and photosensitive materials, and on new demodulation processes, which can improve the performance and the interfaces for cooperation with other inertial sensors and navigation systems.
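The operating principle described in this abstract, a Coriolis-coupled vibratory gyroscope read out interferometrically, can be reduced to two textbook relations: the mode-matched sense amplitude y = 2QΩx/ω and a double-pass interferometric phase φ = 4πd/λ. A rough Python sketch; all symbols and numbers are illustrative assumptions, not the thesis's design values:

```python
import math

def sense_amplitude(drive_amp, omega, rate, Q):
    """Steady-state sense-axis displacement of a mode-matched vibratory
    gyroscope: the Coriolis acceleration (2*rate*x_dot) drives the sense
    mode, which amplifies by Q at resonance: y = 2*Q*rate*x/omega."""
    return 2.0 * Q * rate * drive_amp / omega

def readout_phase(displacement, lam):
    """Optical phase a double-pass (Michelson-type) interferometric
    readout would register for a mirror displacement d: phi = 4*pi*d/lam."""
    return 4.0 * math.pi * displacement / lam
```

For a 1 µm drive at 10 kHz with Q = 1000, a 1 rad/s rotation gives a tens-of-nanometres sense displacement, which is the scale an interferometric readout resolves comfortably.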
6

Tchikanda, Serge William. "Modeling for high-speed high-strength precision optical fiber drawing." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/20051.

7

Fugita, Romário Keiti Pizzatto. "Prior de regularização para problema de demosaicing com aplicação em CFA’s variados." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1841.

Abstract:
CNPq
This research presents a new proposal for demosaicing algorithms, using a more flexible approach to handling the color filter array (CFA) in single-sensor color imaging. The proposed algorithm is structured as an inverse problem, using a matrix-vector operational model that adapts to the CFA employed. From this concept, the demosaicing problem is treated as the minimization of a cost function with two terms: one expressing the dependence between the estimate and the data provided by the acquisition model, and another related to features observed in natural images that can be exploited to form a more precise estimate; this last term is known as the prior. The proposal is based on regularization algorithms focusing on the high correlation among the color channels (R, G and B) and on the local smoothness of uniform regions; these two characteristics form the prior employed in this work. The minimization is achieved iteratively through IRLS-CG, a combination of two efficient minimization algorithms that gives quick responses and can work with the L1 and L2 norms at the same time. The quality of the proposed algorithm was verified in an experiment in which various CFAs were used, both with 35 dB Gaussian noise and with no noise, on images from the Kodak dataset; the results were compared with state-of-the-art algorithms, and the proposed prior showed excellent results, including for CFAs that differ from the Bayer pattern, the most commonly used today.
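The inverse-problem structure described in this abstract (a sampling operator, a data-fidelity term and a smoothness prior, minimized with conjugate gradients) can be sketched on a 1-D analogue of the CFA model. The operators, λ value and signal below are illustrative; the thesis's actual IRLS-CG additionally reweights for the L1 norm, which is omitted here:

```python
import numpy as np

def cg_solve(A, b, iters=500, tol=1e-10):
    """Plain conjugate gradients for a symmetric positive-definite system
    A x = b (the role the CG half of IRLS-CG plays)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D analogue of the matrix-vector capture model: every other sample of a
# smooth signal is observed (the 'mosaic'); a first-difference smoothness
# prior fills the gaps by minimizing ||S x - y||^2 + lam * ||D x||^2.
n = 65
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2.0 * np.pi * t)
S = np.eye(n)[::2]                      # sampling operator (even indices)
y = S @ x_true                          # observed 'mosaic'
D = np.diff(np.eye(n), axis=0)          # first-difference prior
lam = 1e-3
x_hat = cg_solve(S.T @ S + lam * (D.T @ D), S.T @ y)
```

At unobserved samples the stationarity condition reduces to averaging the neighbours, so the reconstruction closely tracks the true signal; swapping S for a true CFA operator and adding inter-channel terms gives the 2-D colour version.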
8

McGarry, Stephen. "Irradiated silicon particle detectors." Thesis, Lancaster University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369468.

9

Bergstrom, Peter D. Jr. "Markov chain models for all-optical shared memory packet switches." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15361.

10

Chou, Chia-Peng. "A mathematical model of building daylighting based on first principles of astrometry, solid geometry and optical radiation transfer." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/82904.

Abstract:
There is a growing recognition in design professions that lighting is a significant factor in energy consideration. This has generated an interest in daylighting; the bringing of direct and diffuse daylight into buildings to reduce the use of artificial lighting. Many methods exist for quantifying diffuse daylight distribution for use in the design of buildings, but the methods vary widely both in technique and capability. Moreover, no present method deals with direct daylight (sunshine) distribution. Additionally, none have taken advantage of improvements in computer technology that make feasible more complex mathematical computational models for dealing with direct and diffuse daylight together. This dissertation describes the theoretical development and computer implementation of a new mathematical approach to analyzing the distribution of direct and diffuse daylight. This approach examines light transfer from extraterrestrial space to the inside of a room based on the principles of astrometry, solid geometry, and radiation transfer. This study discusses and analyzes certain aspects critical to develop a mathematical model for evaluating daylight performance and compares the results of the proposed model with 48 scale model studies to determine the validity of using this mathematical model to predict the daylight distribution of a room. Subsequent analysis revealed no significant variation between scale model studies and this computer simulation. Consequently, this mathematical model with the attendant computer program, has demonstrated the ability to predict direct and diffuse daylight distribution. Thus, this approach does indeed have the potential for allowing designers to predict the effect of daylight performance in the schematic design stage. A microcomputer program has been developed to calculate the diffuse daylight distribution. The computation procedures of the program use the proposed mathematical model method. 
The program was developed with a menu-driven format, where the input data can be easily chosen, stored, and changed to determine the effects of different parameters. Results can be obtained through two formats. One data format provides complete material for analyzing the aperture size and location, glass transmission, reflectance factors, and room orientation. The other provides the graphic displays which represent the illuminance in plan, section, and 3-dimensional contour. The program not only offers a design tool for determining the effects of various daylighting options quickly and accurately in the early design stage, but also presents the daylight distribution with less explanation and with more rapid communication with the clients. The program is written in BASICA language and can be used with the IBM microcomputer system.<br>Ph. D.
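The astrometric first principles this model builds on start from the sun's position. A minimal Python sketch using Cooper's declination approximation and the standard altitude relation; these are common daylighting-textbook formulas, not the dissertation's exact implementation:

```python
import math

def solar_declination_deg(day_of_year):
    # Cooper's approximation, widely used in daylighting calculations.
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def solar_altitude_deg(lat_deg, day_of_year, solar_hour):
    # sin(alt) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(hour_angle),
    # with the hour angle advancing 15 degrees per hour from solar noon.
    lat = math.radians(lat_deg)
    dec = math.radians(solar_declination_deg(day_of_year))
    h = math.radians(15.0 * (solar_hour - 12.0))
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(h))
    return math.degrees(math.asin(sin_alt))
```

At the March equinox the declination is near zero, so the noon altitude is simply 90° minus the latitude; positions like these feed the solid-geometry and radiation-transfer stages of a daylighting model.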
11

Foster, David H. "Fabry-Perot and Whispering Gallery Modes In Realistic Resonator Models." Thesis, view abstract or download file of text, 2006. http://wwwlib.umi.com/cr/uoregon/fullcit?p3211216.

Abstract:
Thesis (Ph. D.)--University of Oregon, 2006. Typescript. Includes vita and abstract. Includes bibliographical references (leaves 204-213). Also available for download via the World Wide Web; free to University of Oregon users.
12

梁耀祥 and Yiu-cheung Leung. "A reconfigurable neural network for industrial sensory systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31224751.

13

Rajah, Christopher. "Chereme-based recognition of isolated, dynamic gestures from South African sign language with Hidden Markov Models." Thesis, University of the Western Cape, 2006. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_4979_1183461652.

Abstract:
Much work has been done in building systems that can recognize gestures, e.g. as a component of sign language recognition systems. These systems typically use whole gestures as the smallest unit for recognition. Although high recognition rates have been reported, these systems do not scale well and are computationally intensive. The reason why these systems generally scale poorly is that they recognize gestures by building individual models for each separate gesture: as the number of gestures grows, so does the required number of models. Beyond a certain threshold number of gestures to be recognized, this approach becomes infeasible. This work proposed that similarly good recognition rates can be achieved by building models for subcomponents of whole gestures, so-called cheremes. Instead of building models for entire gestures, we build models for cheremes and recognize gestures as sequences of such cheremes. The assumption is that many gestures share cheremes and that the number of cheremes necessary to describe gestures is much smaller than the number of gestures. This small number of cheremes then makes it possible to recognize a large number of gestures with a small number of chereme models. This approach is akin to phoneme-based speech recognition systems, where utterances are recognized as sequences of phonemes which in turn are combined into words.
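The chereme idea can be sketched with a toy HMM: model each chereme as a shared sub-unit and score a gesture as a left-to-right concatenation of chereme states using the forward algorithm. Everything below (the symbol alphabet, emission tables and self-loop probability) is an invented toy, not the thesis's South African Sign Language models:

```python
import numpy as np

# Toy cheremes: each is a single HMM state with a self-loop and a
# categorical emission over a 3-symbol alphabet. Gesture models are built
# by concatenating chereme states left-to-right, so sub-units are shared
# across gestures instead of training one model per whole gesture.
CHEREMES = {
    "up":   np.array([0.8, 0.1, 0.1]),   # mostly emits symbol 0
    "down": np.array([0.1, 0.8, 0.1]),   # mostly emits symbol 1
    "hold": np.array([0.1, 0.1, 0.8]),   # mostly emits symbol 2
}
SELF_LOOP = 0.6  # probability of staying in the current chereme state

def gesture_loglik(chereme_names, obs):
    """Forward-algorithm log-likelihood of an observation sequence under
    the left-to-right HMM formed by concatenating the named cheremes."""
    B = np.stack([CHEREMES[c] for c in chereme_names])
    alpha = np.zeros(len(chereme_names))
    alpha[0] = B[0, obs[0]]              # must start in the first chereme
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        new = SELF_LOOP * alpha                       # stay in this chereme
        new[1:] += (1.0 - SELF_LOOP) * alpha[:-1]     # advance to the next
        alpha = new * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik
```

A gesture is classified by comparing log-likelihoods over candidate chereme sequences; adding a new gesture costs only a new sequence, not a new model, which is the scaling advantage argued for above.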
14

He, Jianqing. "Finite difference time domain simulation of subpicosecond semiconductor optical devices." Diss., This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-05042006-164534/.

15

Welch, Gisele Sawaya. "Application of coherence theory to enhanced backscatter and superresovling optical imaging systems." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/13705.

16

Wehner, Justin. "Investigation of resonant-cavity-enhanced mercury cadmium telluride infrared detectors." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0148.

Abstract:
[Truncated abstract] Infrared (IR) detectors have many applications, from homeland security and defense, to medical imaging, to environmental monitoring, to astronomy, etc. Increasingly, the wavelength dependence of the IR radiation is becoming important in many applications, not just the total intensity of infrared radiation. There are many types of infrared detectors that can be broadly categorized as either photon detectors (narrow band-gap materials or quantum structures that provide the necessary energy transitions to generate free carriers) or thermal detectors. Photon detectors generally provide the highest sensitivity; however, the small transition energy of the detector also means cooling is required to limit the noise due to intrinsic thermal generation. This thesis is concerned with the technique of resonant-cavity enhancement of detectors, which is the process of placing the detector within an optically resonant cavity. Resonant-cavity-enhanced detectors have many favourable properties, including a reduced detector volume, which allows improved operating temperature or an improved signal-to-noise ratio (or some balance between the two), along with a narrow spectral bandwidth. ... Responsivity of another sample annealed for 20 hours at 250 °C in a Hg atmosphere (ex-situ) also shows resonant performance, but indicates significant shunting due to the mirror layers. There is good agreement with model data, and the peak responsivity due to the absorber layer is 9.5×10³ V/W for a 100 µm × 100 µm photoconductor at 80 K. An effective lifetime of 50.4 ns is extracted for this responsivity measurement. The responsivity was measured as a function of varying field, and sweepout was observed for bias fields greater than 50 V/cm. The effective lifetime extracted from this measurement was 224 ns, but it is an overestimate.
Photodiodes were also fabricated by annealing p-type Hg(1-x)Cd(x)Te for 10 hours at 250 °C in vacuum and type converting in a CH4/H2 reactive ion etch plasma process to form the n-p junction. There is some degradation of the mirror structure due to the anneal in vacuum, but a clear region of high reflection is observed. Measurements of current-voltage characteristics at various temperatures show diode-like characteristics, with a peak R0 of 10 GΩ measured at 80 K (corresponding to an R0A of approximately 10⁴ Ω·cm²). There was significant signal from the mirror layers; however, there was only negligible signal from the absorber layer, and no conclusive resonant peaks.
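The resonant-cavity enhancement investigated here is commonly summarized by the standard RCE detector efficiency expression, in which the single-pass absorbance is multiplied by an Airy-type cavity enhancement factor. A hedged Python sketch with illustrative mirror reflectances, not the thesis's HgCdTe device parameters:

```python
import math

def rce_quantum_efficiency(R1, R2, alpha_d, phase):
    """Standard resonant-cavity-enhanced detector efficiency: an absorber
    with single-pass absorbance alpha_d sits between a front mirror R1 and
    a back mirror R2; 'phase' is the round-trip optical phase, zero on
    resonance."""
    t = math.exp(-alpha_d)               # single-pass power transmission
    cavity = (1.0 + R2 * t) / (
        1.0 - 2.0 * math.sqrt(R1 * R2) * t * math.cos(phase)
        + R1 * R2 * t * t)
    return (1.0 - R1) * cavity * (1.0 - t)
```

Sweeping `phase` reproduces the behaviour the abstract describes: a thin absorber that captures only a few percent of the light in a single pass reaches order-unity efficiency in a narrow band around each cavity resonance.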
17

Nayfeh, Samir Ali. "Nonlinear dynamics of systems involving widely spaced frequencies." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-06302009-040426/.

18

Webb, M. R. "Millimetre wave quasi-optical signal processing systems." Thesis, University of St Andrews, 1993. http://hdl.handle.net/10023/2827.

Abstract:
The development of spatial signal processing techniques at millimetre wavelengths represents an area of science and technology that is new. At optical wavelengths, spatial signal processing techniques are well developed and are being applied to a variety of situations. In particular they are being used in pattern recognition systems with a great deal of success. At millimetre wavelengths, the kind of technology used for signal transport and processing is typically either waveguide based or quasi-optically based, or some hybrid of the two. It is the use of quasi-optical methods that opens up the possibility of applying some of the spatial signal processing techniques that up to the present time have almost exclusively been used at optical wavelengths. A generic device that opens up this dimension of spatial signal processing to millimetre wave quasi-optical systems is at the heart of the work described within this thesis. The device could be suitably called a millimetre wave quasi-optical spatial light modulator (SLM), and is identical in operation to the spatial light modulators used in many optical signal processing systems. Within this thesis both a theoretical and an experimental analysis of a specific millimetre wave quasi-optical spatial light modulator is undertaken. This thesis thus represents an attempt to open up this new area of research and development, and to establish for it a helpful theoretical and experimental foundation. It is an area that involves a heterogeneous mix of various technologies, and it is an area that is full of potential. The development of the experimental method for measuring the beam patterns produced by millimetre wave quasi-optical spatial light modulators involved the separate development of two other components. Firstly, a sensitive, low-cost millimetre wave pyroelectric detector has been developed and characterised.
And secondly, a high performance quasi-optical Faraday rotator (a polarisation rotator) has been developed and characterised. The polarisation state of a quasi-optical beam is the parameter most often exploited for signal processing applications in millimetre wave quasi-optical systems, and thus a high performance polarisation rotator has readily found many opportunities for use.
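The polarisation-rotation signal processing described above is conveniently written in Jones calculus. The sketch below shows the defining non-reciprocal property of a 45° Faraday rotator: a double pass rotates the polarisation by 90° instead of undoing it. This is an idealised, lossless model, not the device characterised in the thesis:

```python
import numpy as np

def rotator(theta):
    # Jones matrix of an ideal polarisation rotator through angle theta.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# In the lab frame a Faraday rotator's rotation sense is fixed by the
# magnetic field, so a beam reflected back through a 45-degree rotator is
# rotated a further 45 degrees rather than un-rotated: the double pass
# leaves the polarisation orthogonal to the input.
x_pol = np.array([1.0, 0.0])                      # horizontal input
after_double_pass = rotator(np.pi / 4) @ rotator(np.pi / 4) @ x_pol
```

Combined with polarising grids, this orthogonality is what lets quasi-optical systems separate forward and returned beams, which is why the polarisation state is the parameter most often exploited for signal processing.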
APA, Harvard, Vancouver, ISO, and other styles
19

MADI, TUFIC. "Desenvolvimento de detector de neutrons usando sensor tipo barreira de superficie com conversor (n,p) e conversor (n,alpha)." reponame:Repositório Institucional do IPEN, 1999. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10736.

Full text
Abstract:
Made available in DSpace on 2014-10-09T12:43:27Z (GMT). No. of bitstreams: 0<br>Made available in DSpace on 2014-10-09T13:56:10Z (GMT). No. of bitstreams: 1 06629.pdf: 11734475 bytes, checksum: 26ac38190c26794def0e5ba95d87d535 (MD5)<br>Tese (Doutoramento)<br>IPEN/T<br>Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
APA, Harvard, Vancouver, ISO, and other styles
20

Paulley, Alan. "Model calculations of the optical absorption of poly(p-phenylene)." Thesis, University of Sheffield, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301470.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Manger, Ryan Paul. "Assessing the dose received by the victims of a radiological dispersal device with Geiger-Mueller detectors." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Males, Mladen. "Suppression of transient gain excursions in an erbium-doped fibre amplifier." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0157.

Full text
Abstract:
This thesis reports original work on the suppression of transient gain excursions in an erbium-doped fibre amplifier (EDFA). The work presented is a detailed investigation of four closed-loop systems that control the EDFA gain dynamically. The performance of the four closed-loop systems is evaluated by analytical work, supplemented by computer simulations and in-system measurements performed on a hardware EDFA. In addition, a stability analysis of the four closed-loop systems is presented, in which their nonlinear nature is taken into consideration. Beyond proving that the four closed-loop systems are stable, the analysis proves that for any practical value of the EDFA gain at the initial time of observation, the gain is restored to the desired value in steady state. These outcomes of the stability analysis are supported by simulation and experimental results. Errors in system modelling, changes in the operating point of the nonlinear closed-loop system, and measurement noise are important aspects of practical implementations of systems that control the EDFA gain dynamically. A detailed analysis of the effects these practical aspects have on the performance of the four closed-loop systems is presented and validated using computer simulations and experimental measurements. In most of the work reported in the literature on controlling the EDFA gain, controllers that include feedforward and/or feedback components are employed. In the traditional approaches to combining the feedforward and feedback components, large transient excursions of the EDFA gain can still occur due to errors in the control provided by the feedforward component. In this thesis, a novel approach to combining the feedforward and feedback components of the controller is presented. Based on the analytical work, the computer simulations and the experimental work presented in this thesis, the novel approach provides a significant reduction in the excursions of the EDFA gain during the transient period.
APA, Harvard, Vancouver, ISO, and other styles
23

Naidoo, Nathan Lyle. "South African sign language recognition using feature vectors and Hidden Markov Models." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_8533_1297923615.

Full text
Abstract:
<p>This thesis presents a system for performing whole gesture recognition for South African Sign Language. The system uses feature vectors combined with Hidden Markov models. In order to construct a feature vector, dynamic segmentation must occur to extract the signer's hand movements. Techniques and methods for normalising variations that occur when recording a signer performing a gesture are investigated. The system has a classification rate of 69%.</p>
APA, Harvard, Vancouver, ISO, and other styles
24

Von, Eden Elric Omar. "Optical arbitrary waveform generation using chromatic dispersion in silica fibers." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/24780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Mathis, Andrew Wiley. "Electromagnetic modeling of interconnects incorporating perforated ground planes." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/14822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

WEST, KAREN FRANCES. "AN EXTENSION TO THE ANALYSIS OF THE SHIFT-AND-ADD METHOD: THEORY AND SIMULATION (SPECKLE, ATMOSPHERIC TURBULENCE, IMAGE RESTORATION)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188021.

Full text
Abstract:
The turbulent atmosphere degrades images of objects viewed through it by introducing random amplitude and phase errors into the optical wavefront. Various methods have been devised to obtain true images of such objects, including the shift-and-add method, which is examined in detail in this work. It is shown theoretically that shift-and-add processing may preserve diffraction-limited information in the resulting image, both in the point source and extended object cases, and the probability of ghost peaks in the case of an object consisting of two point sources is discussed. Also, a convergence rate for the shift-and-add algorithm is established and simulation results are presented. The combination of shift-and-add processing and Wiener filtering is shown to provide excellent image restorations.
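The core shift-and-add operation analysed above admits a compact sketch (a minimal illustration, not the thesis's exact algorithm): each short-exposure frame is recentred on its brightest pixel, and the recentred frames are averaged.

```python
import numpy as np

def shift_and_add(frames):
    """Basic shift-and-add: recentre each frame on its brightest pixel,
    then average over all frames.

    `frames` is an iterable of equal-shape 2D arrays (short-exposure images).
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    h, w = frames[0].shape
    acc = np.zeros((h, w))
    for f in frames:
        iy, ix = np.unravel_index(np.argmax(f), f.shape)
        # np.roll gives a cyclic shift; adequate when the peak stays
        # well inside the frame, as in typical speckle data.
        acc += np.roll(np.roll(f, h // 2 - iy, axis=0), w // 2 - ix, axis=1)
    return acc / len(frames)
```

With a point-source object, the aligned peaks add coherently while the random speckle background averages down, which is the mechanism behind the diffraction-limited peak discussed above.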
APA, Harvard, Vancouver, ISO, and other styles
27

Hill, Deborah Ann. "X-ray excited optical luminescence (XEOL) and its application to porous silicon." Thesis, University of Warwick, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Krumpholc, Lukáš. "Metody segmentace biomedicinských obrazových signálů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-218063.

Full text
Abstract:
This work deals with methods of segmentation of biomedical image signals. It describes, summarises and compares representative methods of digital image processing. One of these is segmentation based on parametric representation: a basic parameter, for example luminance, is chosen, and the final binary image is obtained by thresholding. The next method described is segmentation based on edge representation, which can be divided into edge detection by means of edge detectors and by means of the Hough transformation. Edge detectors work with the first and second derivative. The following method is region-based segmentation, which can be used for images with noise. This category can be divided into three parts. The first is segmentation via splitting and merging regions, where the image is split and the created regions are tested against a defined condition; if the condition is satisfied, the region is merged and is not split further. The second is region-growing segmentation, where adjacent pixels with a similar luminance intensity are grouped together to create a segmented region. The third is the watershed segmentation algorithm, based on the idea of water diffusion over an uneven surface. The last group of methods is segmentation via flexible and active contours. Here an active shape model is described, which proceeds from the possibility of deforming models so that they match sample shapes. The Snakes method is also described, in which the contour is gradually shaped until it reaches the edge of the object in the image. Mathematical morphology is used for the final editing of segmented images. My aim was to survey methods of image signal segmentation, to implement the chosen methods as scripts in the Matlab programming language, and to verify their properties on images.
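The parametric (luminance-based) segmentation step described above can be sketched as follows; this is a minimal illustration with a fixed, user-chosen threshold, not any particular method from the thesis.

```python
import numpy as np

def threshold_segment(image, level):
    """Parametric (luminance-based) segmentation: pixels brighter than
    `level` become foreground (1), the rest background (0)."""
    return (np.asarray(image, dtype=float) > level).astype(np.uint8)
```

The binary image produced this way is the usual input to the morphological post-processing the abstract mentions.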
APA, Harvard, Vancouver, ISO, and other styles
29

Harmer, Stuart William. "Enhanced absorptance photocathodes." Thesis, University of Sussex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311348.

Full text
Abstract:
This thesis addresses one of the major limiting factors in the performance of photomultipliers: the photocathodes employed often absorb only a small fraction, typically less than 25%, of the power in the incident light. Current photocathodes are almost exclusively planar, and the starting point of the thesis is the mathematical modelling of both semitransparent and reflective planar photocathodes. The analysis shows that the absorptance of semitransparent photocathodes increases for light incident beyond the critical angle needed for Attenuated Total Internal Reflection (ATIR). Reflective planar photocathodes could certainly have their absorptance enhanced by the use of silver rather than nickel substrates, as increases in absorptance of 2-3 times are possible for red light. The proposed method for remedying the inherent loss in sensitivity of photomultipliers caused by the non-total absorption of light in the photocathode was to employ a ridged substrate in the photocathode. The ridged substrate, glass or metal for semitransparent and reflective photocathodes respectively, allows the light multiple interactions with the photoemissive layer. In the case of semitransparent photocathodes, ATIR means that no power is transmitted for those interactions that take place beyond the critical angle of incidence. The mathematical modelling and subsequent analysis of ridged photocathodes show enhanced absorptance (20-30 fold improvements are certainly achievable), especially for light at the red end of the operating spectral range. Further gains in quantum efficiency can follow from the reduction of the optimum photocathode thickness, resulting from the structure, while maintaining high absorptance. Some subwavelength structures are also modelled and analysed to ascertain whether this route could be used to improve the absorptance of photocathodes; the results are inconclusive but generally indicate anti-reflective, rather than absorbing, properties. Finally, the extremely sparse nature of published permittivity data has been rectified by our own measurements of the permittivities of certain photocathodes over a wide wavelength range.
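The critical angle governing ATIR follows from Snell's law, sin(theta_c) = n_rare / n_dense. A small sketch with illustrative index values (not measurements from the thesis):

```python
import math

def critical_angle_deg(n_dense, n_rare=1.0):
    """Critical angle (degrees) for total internal reflection at an
    interface from a dense medium (index n_dense) to a rarer one."""
    if n_rare >= n_dense:
        raise ValueError("total internal reflection requires n_dense > n_rare")
    return math.degrees(math.asin(n_rare / n_dense))
```

For glass with n roughly 1.5 against vacuum this gives a critical angle of about 41.8 degrees; light incident on the photoemissive layer beyond this angle cannot be transmitted, which is the effect the ridged substrate exploits.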
APA, Harvard, Vancouver, ISO, and other styles
30

Pimpalkhare, Mangesh S. "Linearly repeatered communication systems using optical amplifiers." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-05042010-020243/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Lee, Kai-Tien. "Predictive model for plume opacity." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53886.

Full text
Abstract:
In recent years, as control systems for boiler emissions have been upgraded, some utility sources have experienced increased plume opacity. Cases of plume opacity exceeding in-stack opacity are due to 1) the aerosol formed by condensation of primary sulfuric acid and water vapor onto polydisperse plume particles and 2) the presence of fine particles which grow into the visual size range by heterogeneous condensation and coagulation processes as the plume is cooled and diluted by mixing with the ambient air. In order to better understand the factors leading up to acid plume formation, a computer simulation model has been developed. This plume opacity model has been utilized to simulate sulfuric acid aerosol formation and growth. These processes result from homogeneous nucleation, condensation and coagulation, which substantially increase the concentration of submicrometer-sized aerosols. These phenomena bring about significant increases in plume opacity. Theoretical relationships have been derived and transformed into a computer model to predict plume opacity at various downwind distances resulting from pulverized coal combustion operations. This model consists of relatively independent components (an optics module, a bimodal particle size distribution module, a polydisperse coagulation module, a vapor condensation and nucleation module and a plume dispersion module) which are linked together to relate specific flue gas emissions and meteorological conditions to plume opacity. This unique, near-stack plume-opacity-model approach provides an excellent tool for understanding and dealing with such complex issues as: • increasing plume opacity observed for emissions containing sulfuric acid aerosols, • explaining the correlation between primary particle size distribution and light-scattering effects, • predicting the opacity level resulting from combustion of various coal types, • predicting control equipment effects on plume opacity.<br>Ph. D.
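The optics module of such a model ultimately relates an extinction coefficient to opacity through the Beer-Lambert law, opacity = 1 - transmittance. A minimal sketch of that final step only (the thesis's module is far more elaborate, deriving the extinction coefficient from the evolving particle size distribution):

```python
import math

def plume_opacity(b_ext, path_length):
    """Opacity from the Beer-Lambert law:
    opacity = 1 - transmittance = 1 - exp(-b_ext * L),
    where b_ext is the extinction coefficient (1/m) and L the optical
    path length through the plume (m)."""
    return 1.0 - math.exp(-b_ext * path_length)
```

As the plume disperses downwind, b_ext and the effective path length change, which is how downwind distance enters the opacity prediction.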
APA, Harvard, Vancouver, ISO, and other styles
32

Albert, Jacques. "Characterizations and design of planar optical waveguides and directional couplers by two-step K+ -Na+ ion-exchange in glass." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75759.

Full text
Abstract:
Planar optical waveguides fabricated by K$^+$-Na$^+$ ion-exchange in soda-lime glass substrates are investigated.<br>Experimental characterizations of planar waveguides with respect to a wide range of fabrication conditions have been carried out, including detailed measurements of the refractive index anisotropy resulting from the large induced surface stresses.<br>In parallel, the non-linear diffusion process of ion-exchange was simulated numerically to provide, along with the results of the characterizations, a complete description of the refractive index profile from any set of fabrication conditions.<br>The magnitude of the maximum surface index change observed was shown theoretically to be almost entirely due to the induced stress at the surface of the substrate, arising from the presence of the larger potassium ions.<br>Finally, a novel class of single-mode channel waveguides, made by a "two-step" ion-exchange, was analyzed. A simple model for these waveguides was developed and used in the design of two directional coupler structures, which were fabricated and measured.<br>The two-step process was conceived because it relaxes the dimensional control required of the waveguides, yielding single-mode guides of larger size, better suited for low-loss connections to optical fibers. It also provides an additional degree of freedom for adjusting device properties.
APA, Harvard, Vancouver, ISO, and other styles
33

Andrawis, Alfred S. "A new compound modulation technique for multi-channel analog video transmission on fiber." Diss., Virginia Tech, 1991. http://hdl.handle.net/10919/39877.

Full text
Abstract:
Present analog optical fiber multi-channel video transmission systems are very sensitive to laser nonlinearities and are consequently limited in the optical modulation depth (OMD) that may be used. This, in turn, limits the achievable power budget, signal-to-noise ratio, and channel capacity. In this dissertation a new analog transmission technique for multi-channel TV transmission on fiber using frequency modulation/pulse amplitude modulation/time division multiplexing (FM/TDM) is described and compared with present digital and analog systems. Parameters for the proposed system are selected and the relationship between the performance and parameter values is discussed. Analysis and simulations indicate that the proposed system has a very low sensitivity to nonlinearities, similar to that of digital systems and much better than current frequency modulated/frequency division multiplexed (FM/FDM) systems. This permits the use of higher OMD (as high as in digital systems), which results in a high signal-to-noise ratio and a large power budget. Analysis of the number of channels as a function of adjacent-channel intersymbol interference indicates that the proposed system has better spectral efficiency than present analog systems. Simulations are also used to predict the performance of the proposed system with laser diodes poorer than the ones presently used for multi-channel analog systems. Considerably poorer lasers may be used while achieving acceptable transmission quality. Finally, the carrier-to-noise penalty caused by timing errors and jitter effects is analyzed.<br>Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
34

Amara, Pavan Kumar. "Towards a Unilateral Sensing System for Detecting Person-to-Person Contacts." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1703441/.

Full text
Abstract:
The contact patterns among individuals can significantly affect the progress of an infectious outbreak within a population. Gathering data about these interaction and mixing patterns is essential for assessing computational models of infectious diseases. Various self-report approaches have been designed in different studies to collect data about contact rates and patterns. Recent advances in sensing technology provide researchers with bilateral automated data-collection devices that facilitate contact gathering and overcome the disadvantages of previous approaches. In this study, a novel unilateral wearable sensing architecture is proposed that overcomes the limitations of bilateral sensing. Our unilateral wearable sensing system gathers contact data using hybrid sensor arrays embedded in a wearable shirt. A smartphone application is used to transfer the collected sensor data to the cloud, where a deep learning model estimates the number of human contacts and the results are stored in the cloud database. The deep learning model was developed on hand-labelled data gathered over multiple experiments. This model was tested and evaluated, and the results are reported in the study. A sensitivity analysis was performed to choose the image resolution and format best suited for the model to estimate contacts, and to analyse the model's consumption of computer resources.
APA, Harvard, Vancouver, ISO, and other styles
35

VENTURINI, LUZIA. "Estudo de incertezas no monitoramento in vivo utilizando a tecnica de Monte Carlo." reponame:Repositório Institucional do IPEN, 2004. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11188.

Full text
Abstract:
Made available in DSpace on 2014-10-09T12:49:19Z (GMT). No. of bitstreams: 0<br>Made available in DSpace on 2014-10-09T14:01:17Z (GMT). No. of bitstreams: 1 09672.pdf: 6449551 bytes, checksum: fc741e642f1069dc9671f312a9c4532b (MD5)<br>Tese (Doutoramento)<br>IPEN/T<br>Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
APA, Harvard, Vancouver, ISO, and other styles
36

Shell, Michael David. "Cascaded All-Optical Shared-Memory Architecture Packet Switches Using Channel Grouping Under Bursty Traffic." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4892.

Full text
Abstract:
This work develops an exact logical operation model to predict the performance of the all-optical shared-memory architecture (OSMA) class of packet switches and provides a means to obtain a reasonable approximation of OSMA switch performance within certain types of networks, including the Banyan family. All-optical packet switches have the potential to far exceed the bandwidth capability of their current electronic counterparts. However, all-optical switching technology is currently not mature. Consequently, all-optical switch fabrics and buffers are more constrained in size and can cost several orders of magnitude more than those of electronic switches. The use of shared-memory buffers and/or links with multiple parallel channels (channel grouping) have been suggested as ways to maximize switch performance with buffers of limited size. However, analysis of shared-memory switches is far more difficult than for other commonly used buffering strategies. Obtaining packet loss performance by simulation is often not a viable alternative to modeling if low loss rates or large networks are encountered. Published models of electronic shared-memory packet switches (ESMP) have primarily involved approximate models to allow analysis of switches with a large number of ports and/or buffer cells. Because most ESMP models become inaccurate for small switches, and OSMA switches, unlike ESMP switches, do not buffer packets unless contention occurs, existing ESMP models cannot be applied to OSMA switches. Previous models of OSMA switches were confined to isolated (non-networked), symmetric OSMA switches using channel grouping under random traffic. This work is far more general in that it also encompasses OSMA switches that (1) are subjected to bursty traffic and/or with input links that have arbitrary occupancy probability distributions, (2) are interconnected to form a network and (3) are asymmetric.
APA, Harvard, Vancouver, ISO, and other styles
37

Parekh, Siddharth Avinash. "A comparison of image processing algorithms for edge detection, corner detection and thinning." University of Western Australia. Centre for Intelligent Information Processing Systems, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0073.

Full text
Abstract:
Image processing plays a key role in vision systems. Its function is to extract and enhance pertinent information from raw data. In robotics, processing of real-time data is constrained by limited resources. Thus, it is important to understand and analyse image processing algorithms for accuracy, speed, and quality. The theme of this thesis is an implementation and comparative study of algorithms related to various image processing techniques such as edge detection, corner detection and thinning. A re-interpretation of a standard technique, non-maxima suppression for corner detectors, was attempted. In addition, a thinning filter, Hall-Guo, was modified to achieve better results. Generally, real-time data is corrupted with noise. This thesis also incorporates a few smoothing filters that help in noise reduction. Apart from comparing and analysing algorithms for these techniques, an attempt was made to implement correlation-based optic flow.
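Non-maxima suppression, the standard technique the thesis re-interprets, can be sketched in its basic 2D form (a minimal illustration, not the re-interpretation developed in the thesis): a pixel of the corner-response map survives only if it is the maximum of its local neighbourhood.

```python
import numpy as np

def non_maxima_suppression(response, size=1):
    """Keep only local maxima of a 2D corner-response map; every other
    pixel is zeroed. `size` is the half-width of the square neighbourhood."""
    r = np.asarray(response, dtype=float)
    out = np.zeros_like(r)
    h, w = r.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - size), min(h, y + size + 1)
            x0, x1 = max(0, x - size), min(w, x + size + 1)
            # keep the pixel only if it dominates its neighbourhood
            if r[y, x] > 0 and r[y, x] == r[y0:y1, x0:x1].max():
                out[y, x] = r[y, x]
    return out
```

Applied after a corner detector such as Harris, this thins clusters of high responses down to single corner points.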
APA, Harvard, Vancouver, ISO, and other styles
38

Pinheiro, Helder Fleury 1967. "The application of Trefftz-FLAME to electromagnetic wave problems /." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115703.

Full text
Abstract:
Numerical analysis of the electromagnetic fields in large, complex structures is very challenging due to the high computational overhead. Recently, it has been shown that a new method called Trefftz-FLAME ( Flexible Local Approximation MEthod) is suitable for problems where there exist a large number of similar structures.<br>This thesis develops Trefftz-FLAME in two areas. First, a novel 2D Trefftz-FLAME method incorporates the modal analysis and port boundary condition that are essential to an accurate calculation of reflection and transmission coefficients for photonic crystal devices. The new technique outperforms existing methods in both accuracy and computational cost.<br>The second area pertains to the 3D, vector problem of electromagnetic wave scattering by aggregates of identical dielectric particles. A methodology for the development of local basis functions is introduced, applicable to particles of any shape and composition. Boundary conditions on the surface of the finite FLAME domain are described, capable of representing the incident wave and absorbing the outgoing radiation. A series of problems involving dielectric spheres is solved to validate the new method. Comparison with exact solutions is possible in some cases and shows that the method is able to produce accurate near-field results even when the computational grid spacing is equal to the radius of the spheres.
APA, Harvard, Vancouver, ISO, and other styles
39

Hill, Evelyn June. "Applying statistical and syntactic pattern recognition techniques to the detection of fish in digital images." University of Western Australia. School of Mathematics and Statistics, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0070.

Full text
Abstract:
This study is an attempt to simulate aspects of human visual perception by automating the detection of specific types of objects in digital images. The success of the methods attempted here was measured by how well the results of experiments corresponded to what a typical human's assessment of the data might be. The subject of the study was images of live fish taken underwater by digital video or digital still cameras. It is desirable to be able to automate the processing of such data for efficient stock assessment for fisheries management. In this study some well-known statistical pattern classification techniques were tested and new syntactical/structural pattern recognition techniques were developed. For testing of statistical pattern classification, the pixels belonging to fish were separated from the background pixels and the EM algorithm for Gaussian mixture models was used to locate clusters of pixels. The means and the covariance matrices of the components of the model were used to indicate the location, size and shape of the clusters. Because the number of components in the mixture is unknown, the EM algorithm has to be run a number of times with different numbers of components and the best model chosen using a model selection criterion. The AIC (Akaike Information Criterion) and the MDL (Minimum Description Length) were tested. The MDL was found to estimate the number of clusters of pixels more accurately than the AIC, which tended to overestimate cluster numbers. In order to reduce problems caused by initialisation of the EM algorithm (i.e. starting positions of mixtures and number of mixtures), the Dynamic Cluster Finding algorithm (DCF) was developed (based on the Dog-Rabbit strategy). This algorithm can produce an estimate of the locations and numbers of clusters of pixels. The Dog-Rabbit strategy is based on early studies of learning behaviour in neurons. The main difference between Dog-Rabbit and DCF is that DCF is based on a toroidal topology, which removes the tendency of cluster locators to migrate to the centre of mass of the data set and miss clusters near the edges of the image. In the second approach to the problem, data was extracted from the image using an edge detector. The edges from a reference object were compared with the edges from a new image to determine if the object occurred in the new image. In order to compare edges, the edge pixels were first assembled into curves using an UpWrite procedure; then the curves were smoothed by fitting parametric cubic polynomials. Finally, the curves were converted to arrays of numbers which represented the signed curvature of the curves at regular intervals. Sets of curves from different images can be compared by comparing the arrays of signed curvature values, as well as the relative orientations and locations of the curves. Discrepancy values were calculated to indicate how well curves and sets of curves matched the reference object. The total length of all matched curves was used to indicate what fraction of the reference object was found in the new image. The curve matching procedure gave results which corresponded well with what a human being might observe.
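The model-selection step described above can be sketched with the two-part MDL code length, -log L + (p/2) log n, which has the same functional form as BIC/2. A minimal illustration with hypothetical candidate log-likelihoods and parameter counts (not values from the thesis):

```python
import math

def mdl_score(log_likelihood, n_params, n_samples):
    """Two-part MDL code length (up to constants): data cost plus a
    (p/2) * log(n) model-complexity cost."""
    return -log_likelihood + 0.5 * n_params * math.log(n_samples)

def select_model(candidates, n_samples):
    """candidates: list of (name, log_likelihood, n_params) tuples;
    returns the name of the candidate with the smallest MDL score."""
    return min(candidates, key=lambda c: mdl_score(c[1], c[2], n_samples))[0]
```

Each candidate here would be one EM run with a different number of mixture components; the complexity penalty is what stops the richer mixtures from always winning on likelihood alone.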
APA, Harvard, Vancouver, ISO, and other styles
40

De, Vega Rodrigo Miguel. "Modeling future all-optical networks without buffering capabilities." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210455.

Full text
Abstract:
In this thesis we provide a model for bufferless optical burst switching (OBS) and optical packet switching (OPS) networks. The thesis is divided into three parts.<br>In the first part we introduce the basic functionality and structure of OBS and OPS networks. We identify the blocking probability as the main performance parameter of interest.<br>In the second part we study the statistical properties of the traffic that will likely run through these networks. We use for this purpose a set of traffic traces obtained from the Universidad Politécnica de Catalunya. Our conclusion is that traffic entering the optical domain in future OBS/OPS networks will be long-range dependent (LRD).<br>In the third part we present the model for bufferless OBS/OPS networks. This model takes into account the results from the second part of the thesis concerning the LRD nature of traffic. It also takes into account specific issues concerning the functionality of a typical bufferless packet-switching network. The resulting model presents scalability problems, so we propose an approximative method to compute the blocking probability from it. We empirically evaluate the accuracy of this method, as well as its scalability.<br>Doctorat en Sciences de l'ingénieur<br>info:eu-repo/semantics/nonPublished
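For a bufferless group of wavelength channels offered Poisson traffic, the classical Erlang-B formula gives the first-order blocking probability. This is only the Poisson baseline (the thesis's contribution is precisely to go beyond it for LRD traffic), sketched via the standard stable recurrence:

```python
def erlang_b(offered_load, n_channels):
    """Erlang-B blocking probability for `offered_load` erlangs on
    `n_channels` servers with no buffering, via the recurrence
    B(0) = 1,  B(k) = A*B(k-1) / (k + A*B(k-1))."""
    b = 1.0
    for k in range(1, n_channels + 1):
        b = offered_load * b / (k + offered_load * b)
    return b
```

LRD traffic typically produces higher blocking than this formula predicts at the same mean load, which is why the traffic characterisation in the second part of the thesis matters.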
APA, Harvard, Vancouver, ISO, and other styles
41

Amara, Pavan Kumar. "Towards a Unilateral Sensor Architecture for Detecting Person-to-Person Contacts." Thesis, University of North Texas, 2012. https://digital.library.unt.edu/ark:/67531/metadc1703441/.

Full text
Abstract:
The contact patterns among individuals can significantly affect the progress of an infectious outbreak within a population. Gathering data about these interaction and mixing patterns is essential for assessing computational models of infectious diseases. Various self-report approaches have been designed in different studies to collect data about contact rates and patterns. Recent advances in sensing technology provide researchers with bilateral automated data-collection devices that facilitate contact gathering and overcome the disadvantages of previous approaches. In this study, a novel unilateral wearable sensing architecture is proposed that overcomes the limitations of bilateral sensing. Our unilateral wearable sensing system gathers contact data using hybrid sensor arrays embedded in a wearable shirt. A smartphone application is used to transfer the collected sensor data to the cloud, where a deep learning model estimates the number of human contacts and the results are stored in the cloud database. The deep learning model was developed on hand-labelled data gathered over multiple experiments. This model was tested and evaluated, and the results are reported in the study. A sensitivity analysis was performed to choose the image resolution and format best suited for the model to estimate contacts, and to analyse the model's consumption of computer resources.
APA, Harvard, Vancouver, ISO, and other styles
42

Reis, Diego Dias dos. "Desenvolvimento de sensores planares em tecnologia de circuitos impressos para detecção de umidade em madeiras e presença de água em dutos hidráulicos." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1172.

Full text
Abstract:
CNPq; Capes<br>In this work, conventional printed-circuit-board manufacturing technology, which has the advantages of low cost, ease of production and being a well-known, widely available technology, is used to develop two basic types of planar sensors. The first is a capacitive sensor that operates by detecting variations in the permittivity of the medium; the second is a passive, wireless, resonant (PWR) sensor whose resonant frequency varies as a function of the permittivity or permeability of the medium. Both sensors are described from the point of view of mathematical analysis, simulations and measurements. Tests were performed to detect the presence of water in hydraulic ducts and the moisture content of wood samples; the results indicate that the sensors serve their measurement purpose and can be used for their specific applications.
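The physical principle behind the capacitive sensor is that capacitance scales with the relative permittivity of the medium (roughly 80 for water versus about 1 for air, which is what makes water detection possible). A minimal sketch using the idealised parallel-plate law; a planar interdigital sensor has a different geometry factor but follows the same permittivity scaling:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(eps_r, area, gap):
    """C = eps0 * eps_r * A / d: the idealised parallel-plate law that
    underlies capacitive permittivity sensing. `area` in m^2, `gap` in m."""
    return EPS0 * eps_r * area / gap
```

The roughly 80-fold jump in capacitance when the dielectric changes from air to water is the signal such a sensor detects.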
APA, Harvard, Vancouver, ISO, and other styles
43

Ramesh, Sathya. "High Resolution Satellite Images and LiDAR Data for Small-Area Building Extraction and Population Estimation." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12188/.

Full text
Abstract:
Population estimation in inter-censual years has many important applications. In this research, a high-resolution pan-sharpened IKONOS image, LiDAR data, and parcel data are used to estimate small-area population in the eastern part of the city of Denton, Texas. Residential buildings are extracted through object-based classification techniques supported by shape indices and spectral signatures. Three population indicators (building count, building volume, and building area) at the block level are derived using spatial joining and zonal statistics in GIS. Linear regression and geographically weighted regression (GWR) models generated from the three variables and the census data are used to estimate population at the census-block level. The maximum total estimation accuracy attained by the models is 94.21%. Accuracy assessments suggest that the GWR models outperformed the linear regression models due to their better handling of spatial heterogeneity. Models generated from building volume and building area gave better results. The models have lower accuracy in both densely and sparsely populated census blocks, which could be partly attributed to the lower accuracy of the LiDAR data used.
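The block-level regression step described in this abstract can be sketched as an ordinary least-squares fit of population against one of the building indicators; the numbers below are illustrative stand-ins, not the Denton data.

```python
import numpy as np

# Hypothetical block-level data: residential building area (m^2) and the
# census population of each block.  Values are illustrative, not from
# the Denton study.
area = np.array([1200.0, 3400.0, 560.0, 2100.0, 4800.0])
pop = np.array([30.0, 85.0, 14.0, 52.0, 120.0])

# Ordinary least squares fit of pop ~ a * area + b
A = np.column_stack([area, np.ones_like(area)])
(a, b), *_ = np.linalg.lstsq(A, pop, rcond=None)

# Population estimate for an unseen block with 2500 m^2 of building area
estimate = a * 2500.0 + b
```

A GWR model would fit such coefficients locally, per block neighborhood, rather than globally, which is what lets it absorb the spatial heterogeneity the abstract mentions.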
APA, Harvard, Vancouver, ISO, and other styles
44

Moura, Hector Lise de. "Reconstrução de imagens em tomografia de capacitância elétrica por representações esparsas." Universidade Tecnológica Federal do Paraná, 2018. http://repositorio.utfpr.edu.br/jspui/handle/1/3151.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Process tomography is an important tool in many sectors of industry. Its importance comes from the need to obtain knowledge of physical properties in hard-to-reach places, such as the interior of a solid object or pipe. Tomography is a versatile tool that can be adapted to investigate different physical properties. Among the many tomographic modalities is the electrical one, known as Electrical Impedance Tomography (EIT). EIT can be divided in two: Electrical Resistance Tomography (ERT) and Electrical Capacitance Tomography (ECT). While ERT can distinguish conducting materials from non-conducting ones, ECT can distinguish two non-conducting materials by their electrical permittivity. The electrical modality has advantages such as low acquisition time, low cost, and the absence of radiation. The main challenges of electrical tomography are the dependency of the field trajectory on the medium (the soft-field effect) and the low number of electrodes available for measurement, due to their size. As a result of the soft-field effect, the sum of the individual contributions of small discrete segments in a given region differs from the contribution of the entire region as one; in other words, the relation between the electrical property and the electrical measurements is non-linear. Due to the small number of measuring electrodes, commonly 8 or 12, reconstructing images with practical resolution is an ill-posed problem. To overcome these obstacles, many methods have been proposed, the majority based on solving an inverse problem for a linearized model. This work proposes an image-reconstruction method with sparsity-inducing regularization that seeks an image representation using only a few elements of a redundant basis. The elements of this basis are learned from training images and used as input to an ECT simulation. The output capacitances of the model make up the columns of a redundant sensitivity matrix, which can be viewed as a piecewise linearization of the direct problem. For validation, experimental tests were conducted on two-phase (air-water) flows. The training signals were obtained from an experiment with a capacitive wire-mesh sensor together with an ECT sensor. The results show that the proposed method can reconstruct images from a set of only 8 capacitance measurements, and the reconstructed images score better, according to different metrics, than those of other methods that also use sparse representations.
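The sparse-reconstruction idea in this abstract, explaining a handful of capacitance measurements with a few columns of a redundant sensitivity matrix, can be sketched with a simple orthogonal matching pursuit. The matrix below is random stand-in data, not a learned ECT basis, and the pursuit routine is a generic illustration rather than the thesis's algorithm.

```python
import numpy as np

def omp(S, y, k):
    """Greedy orthogonal matching pursuit: choose up to k columns of the
    redundant sensitivity matrix S that jointly explain the measurement
    vector y, then least-squares fit the coefficients on that support."""
    residual, support = y.copy(), []
    coeffs = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(S.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(S[:, support], y, rcond=None)
        residual = y - S[:, support] @ coeffs
    x = np.zeros(S.shape[1])
    x[support] = coeffs
    return x

# Stand-in data: 8 capacitance measurements, 40 learned "atoms".  A real
# sensitivity matrix would come from ECT simulations of training images.
rng = np.random.default_rng(0)
S = rng.standard_normal((8, 40))
S /= np.linalg.norm(S, axis=0)   # unit-norm columns
x_true = np.zeros(40)
x_true[[3, 17]] = [1.0, -0.5]    # "image" composed of only two atoms
y = S @ x_true                   # simulated noiseless capacitances
x_hat = omp(S, y, k=2)
```

The recovered coefficient vector is sparse by construction, which is what makes an 8-measurement inverse problem tractable despite the 40-column (underdetermined) system.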
APA, Harvard, Vancouver, ISO, and other styles
45

Zambrano, Martínez Jorge Luis. "Efficient Traffic Management in Urban Environments." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/129865.

Full text
Abstract:
Currently, one of the main challenges that large metropolitan areas have to face is traffic congestion, which has become an important problem for city authorities. To address it, an efficient traffic-control solution must be implemented that generates benefits for citizens, such as reducing vehicle journey times and, consequently, fuel consumption, noise, and environmental pollution. In fact, by properly analyzing traffic demand, it becomes possible to predict future traffic conditions and to use that information to optimize the routes taken by vehicles. Such an approach becomes especially effective when applied in the context of autonomous vehicles, which have more predictable behavior, thus enabling city management entities to mitigate the effects of traffic congestion and pollution by improving the traffic flow in a fully centralized manner. Validating this approach typically requires simulations, which should be as realistic as possible. However, achieving high degrees of realism can be complex when the actual traffic patterns, defined through an Origin/Destination (O-D) matrix for the vehicles in a city, are unknown, as occurs most of the time. Thus, the first contribution of this thesis is an iterative heuristic for improving traffic-congestion modeling: starting from real induction-loop measurements made available by the City Hall of Valencia, Spain, we were able to generate an O-D matrix for traffic simulation that resembles the real traffic distribution. If it were possible to characterize the state of traffic by predicting future conditions in order to optimize the routes of automated vehicles, and if these measures could preventively mitigate the effects of congestion and its related problems, the overall traffic flow could be improved. Thus, the second contribution of this thesis is a Traffic Prediction Equation that characterizes the different streets of a city in terms of travel time with respect to vehicle load, applying logistic regression to those data to predict future traffic conditions. The third and last contribution towards our envisioned traffic-management paradigm is a route server capable of handling all the traffic in a city and balancing traffic flows by accounting for present and future congestion conditions. We perform a simulation study using real traffic-congestion data from the city of Valencia, Spain, to demonstrate how the traffic flow on a typical day can be improved using our proposed solution. Experimental results show that our solution, combined with frequent updates of traffic conditions on the route server, achieves substantial improvements in terms of average travel speed and travel time, both indicators of lower congestion and improved traffic fluidity.
Finally, I want to thank the Republic of Ecuador, through the "Secretaría de Educación Superior, Ciencia, Tecnología e Innovación" (SENESCYT), for granting me the scholarship to finance my studies.
Zambrano Martínez, JL. (2019). Efficient Traffic Management in Urban Environments [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/129865
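The logistic-regression step of the second contribution can be sketched as a one-feature classifier mapping vehicle load to a probability of congestion. The samples below are illustrative, not the Valencia induction-loop measurements, and the plain gradient-descent fit is a generic stand-in for whatever fitting procedure the thesis uses.

```python
import numpy as np

# Illustrative samples: hourly vehicle load on a street and whether the
# observed travel time exceeded a congestion threshold (1 = congested).
load = np.array([100.0, 250.0, 400.0, 550.0, 700.0, 850.0, 1000.0])
congested = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])

x = (load - load.mean()) / load.std()   # standardise the single feature
w, b = 0.0, 0.0
for _ in range(2000):                   # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - congested) * x)
    b -= 0.5 * np.mean(p - congested)

def p_congested(vehicles_per_hour):
    """Predicted probability that the street is congested at this load."""
    z = (vehicles_per_hour - load.mean()) / load.std()
    return 1.0 / (1.0 + np.exp(-(w * z + b)))
```

A route server could query such a per-street model with forecast loads and steer vehicles away from streets whose predicted congestion probability is high.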
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Xiaowei 1970 May 5. "High-speed and high-saturation-current partially depleted absorber photodetecters [i.e. photodetectors]." 2004. http://hdl.handle.net/2152/12696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

"Segmentation based variational model for accurate optical flow estimation." 2009. http://library.cuhk.edu.hk/record=b5894018.

Full text
Abstract:
Chen, Jianing.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 47-54).
Abstract also in Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Background --- p.1
Chapter 1.2 --- Related Work --- p.3
Chapter 1.3 --- Thesis Organization --- p.5
Chapter 2 --- Review on Optical Flow Estimation --- p.6
Chapter 2.1 --- Variational Model --- p.6
Chapter 2.1.1 --- Basic Assumptions and Constraints --- p.6
Chapter 2.1.2 --- More General Energy Functional --- p.9
Chapter 2.2 --- Discontinuity Preserving Techniques --- p.9
Chapter 2.2.1 --- Data Term Robustification --- p.10
Chapter 2.2.2 --- Diffusion Based Regularization --- p.11
Chapter 2.2.3 --- Segmentation --- p.15
Chapter 2.3 --- Chapter Summary --- p.15
Chapter 3 --- Segmentation Based Optical Flow Estimation --- p.17
Chapter 3.1 --- Initial Flow --- p.17
Chapter 3.2 --- Color-Motion Segmentation --- p.19
Chapter 3.3 --- Parametric Flow Estimating Incorporating Segmentation --- p.21
Chapter 3.4 --- Confidence Map Construction --- p.24
Chapter 3.4.1 --- Occlusion detection --- p.24
Chapter 3.4.2 --- Pixel-wise motion coherence --- p.24
Chapter 3.4.3 --- Segment-wise model confidence --- p.26
Chapter 3.5 --- Final Combined Variational Model --- p.28
Chapter 3.6 --- Chapter Summary --- p.28
Chapter 4 --- Experiment Results --- p.30
Chapter 4.1 --- Quantitative Evaluation --- p.30
Chapter 4.2 --- Warping Results --- p.34
Chapter 4.3 --- Chapter Summary --- p.35
Chapter 5 --- Application - Single Image Animation --- p.37
Chapter 5.1 --- Introduction --- p.37
Chapter 5.2 --- Approach --- p.38
Chapter 5.2.1 --- Pre-Process Stage --- p.39
Chapter 5.2.2 --- Coordinate Transform --- p.39
Chapter 5.2.3 --- Motion Field Transfer --- p.41
Chapter 5.2.4 --- Motion Editing and Apply --- p.41
Chapter 5.2.5 --- Gradient-domain composition --- p.42
Chapter 5.3 --- Experiments --- p.43
Chapter 5.3.1 --- Active Motion Transfer --- p.43
Chapter 5.3.2 --- Animate Stationary Temporal Dynamics --- p.44
Chapter 5.4 --- Chapter Summary --- p.45
Chapter 6 --- Conclusion --- p.46
Bibliography --- p.47
APA, Harvard, Vancouver, ISO, and other styles
48

Ju, Chang-Yuan. "Theory and application of optical second harmonic generation on dielectric surfaces." Thesis, 1994. http://hdl.handle.net/1957/35739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Goya, Jaren M. (Jaren Minoru). "Frequency resolved cell sizes using optical coherence tomography." Thesis, 2006. http://hdl.handle.net/10125/20560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Goosen, Gerhardus Rossouw. "Using low cost sensors and kalman filtering for land-based vehicle attitude estimation." Thesis, 2011. http://hdl.handle.net/10210/4218.

Full text
Abstract:
M.Ing.
Vehicle attitude is the most significant of the navigational parameters in terms of its influence on accumulated dead-reckoning errors. To determine the attitude of the host vehicle body with respect to the earth, it is necessary to keep track of the orientation of the body axes with respect to the local earth navigational frame (north, east and down). The aim of this research is to investigate the feasibility of enhancing low-cost inertial sensors (such as gyroscopes) by the addition of a magnetometer and pitch and roll angle sensors. The focus of this research is on the use of low-cost inertial measurement systems to determine the attitude of a vehicle body. Strapdown system principles and estimation theory are applied to achieve this goal. Both Euler angles and quaternions are implemented as attitude representations and compared with one another. Work is concentrated on the mathematical models for low-cost sensors and the attitude system dynamics. A sensor cluster was constructed using three gyroscopes, a magnetometer and two inclinometers, and these inertial sensors were integrated using a Kalman filter. The mathematics, calculations and principles used are universal for all attitude systems. Practical data was recorded and then filtered to illustrate the working of the Kalman filter. The addition of a magnetometer and two inclinometers is indeed feasible for enhancing the attitude obtained from the inertial sensors. The benefits associated with the gyroscopes, when the magnetometer readings are disturbed by external magnetic anomalies, were small and of little significance. This thesis fully describes the theory and approach followed to implement the Kalman filter, making it a good example of a Kalman filter implementation, especially with the MATLAB software realisation presented in the appendix.
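The gyroscope/inclinometer fusion described in this abstract can be sketched as a one-dimensional Kalman filter per attitude angle: the gyro rate drives the prediction, and the inclinometer corrects the drift. The thesis uses MATLAB; this is an illustrative Python sketch with assumed noise parameters, not the thesis's filter.

```python
import numpy as np

def kalman_attitude(gyro_rates, incl_angles, dt=0.01, q=0.001, r=0.05):
    """One-dimensional Kalman filter for a single attitude angle:
    predict by integrating the gyro rate, correct with the noisy but
    drift-free inclinometer reading."""
    theta, P = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for rate, z in zip(gyro_rates, incl_angles):
        theta += rate * dt       # predict: integrate the gyro
        P += q                   # process noise inflates uncertainty
        K = P / (P + r)          # Kalman gain
        theta += K * (z - theta) # correct with the inclinometer
        P *= 1.0 - K
        estimates.append(theta)
    return np.array(estimates)

# Simulated level vehicle: true angle is zero, the gyro has a constant
# bias (so pure integration drifts), the inclinometer is noisy but unbiased.
rng = np.random.default_rng(1)
n = 500
gyro = 0.2 + 0.1 * rng.standard_normal(n)   # rad/s, biased rate readings
incl = 0.05 * rng.standard_normal(n)        # rad, noisy angle readings
est = kalman_attitude(gyro, incl)
```

Pure gyro integration here would drift to about 1 rad after 5 s, while the fused estimate stays near zero; the thesis performs the analogous fusion per axis, with the magnetometer playing the inclinometers' role for heading.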
APA, Harvard, Vancouver, ISO, and other styles