Dissertations on the topic "Information Technology|Engineering, General|Engineering, Electronics and Electrical"

To see the other types of publications on this topic, follow this link: Information Technology|Engineering, General|Engineering, Electronics and Electrical.

Browse the top 50 dissertations for research on the topic "Information Technology|Engineering, General|Engineering, Electronics and Electrical".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and put together a correctly formatted bibliography.

1

Tebbetts, Jo A. „Cable modems' transmitted RF: A study of SNR, error rates, transmit levels, and trouble call metrics“. Thesis, Capella University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3556737.

Annotation:

Hypotheses were developed and tested to measure how the cable modems' operational metrics responded to a reconfiguration of the cable modems' transmitted RF applied to the CMTS. The purpose of this experiment was to compare two groups on the use of non-federal RF spectrum to determine whether reconfiguring the cable modems' transmitted RF from 25.2 MHz and 31 MHz (each at 6.4 MHz wide, 64 QAM) to 34.8 MHz (6.4 MHz wide, 64 QAM) improved the data-services operational metrics that a wireline service operator measures to determine the quality of its product. The experiment tests the theory that configuring the cable modems' transmitted RF to 34.8 MHz, 6.4 MHz wide, 64 QAM on the CMTS significantly impacted a cable modem's operational metrics and, as a result, increased operational effectiveness.

A randomized experiment on 117,084 cable modems showed a significant impact on SNR and transmit rates but no significant impact on error rates and the trouble call metrics. The results showed that reconfiguring the cable modems' transmitted RF from 25.2 MHz and 31 MHz (each at 6.4 MHz wide, 64 QAM) to 34.8 MHz (6.4 MHz wide, 64 QAM) significantly increased the SNR and transmit rates but did not significantly affect error rates or the trouble call truck-roll metrics. The results are discussed in relation to other work on engineering RF management strategies and the impact on the cable modems' operational metrics of moving the cable modems' RF from the lower end of the RF spectrum into the middle of the RF spectrum configured on a wireline service operator's CMTS.
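As a rough illustration of the kind of two-group comparison this abstract describes, and not the author's actual analysis, the sketch below runs a Welch two-sample t-test on hypothetical per-modem SNR readings before and after a frequency reconfiguration; the data, sample sizes and threshold are invented.

```python
# Hypothetical illustration of a two-group comparison of modem SNR (dB).
# Data, sample sizes, and the 0.05 threshold are invented; this is not the study's analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
snr_at_25_2_mhz = rng.normal(loc=33.5, scale=1.2, size=5000)   # "before" group
snr_at_34_8_mhz = rng.normal(loc=34.1, scale=1.1, size=5000)   # "after" (reconfigured) group

# Welch's t-test (does not assume equal variances between the groups)
t_stat, p_value = stats.ttest_ind(snr_at_34_8_mhz, snr_at_25_2_mhz, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
if p_value < 0.05:
    print("Reject H0: mean SNR differs between configurations.")
else:
    print("Fail to reject H0.")
```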

2

Harms, Herbert Andrew. „Considerations on the optimal and efficient processing of information-bearing signals“. Thesis, Princeton University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3597492.

Annotation:

Noise is a fundamental hurdle that impedes the processing of information-bearing signals, specifically the extraction of salient information. Processing that is both optimal and efficient is desired; optimality ensures the extracted information has the highest fidelity allowed by the noise, while efficiency ensures limited resource usage. Optimal detectors and estimators have long been known, e.g., for maximum likelihood or minimum mean-squared error criteria, but might not admit an efficient implementation. A tradeoff often exists between the two goals. This thesis explores the tradeoff between optimality and efficiency in a passive radar system and an analog-to-digital converter. A passive radar system opportunistically uses illuminating signals from the environment to detect and track targets of interest, e.g., airplanes or vehicles. As an opportunistic user of signals, the system does not have control over the transmitted waveform. The available waveforms are not designed for radar and often have undesirable properties for radar systems, so the burden is on the receiver processing to overcome these obstacles. A novel technique is proposed for the processing of digital television signals as passive radar illuminators that eases the need for complex detection and tracking schemes while incurring only a small penalty in detection performance. An analog-to-digital converter samples analog signals for digital processing. The Shannon-Nyquist theorem describes a sufficient sampling and recovery scheme for bandlimited signals from uniformly spaced samples taken at a rate twice the bandwidth of the signal. Frequency-sparse signals are composed of relatively few frequency components and have fewer degrees of freedom than a frequency-dense bandlimited signal. Recent results in compressed sensing describe sufficient sampling and recovery schemes for frequency-sparse signals that require a sampling rate proportional to the spectral density and the logarithm of the bandwidth, while providing high fidelity and requiring many fewer samples, which saves resources. A proposed sampling and simple recovery scheme is shown to efficiently recover the locations of tones in a large bandwidth nearly-optimally using relatively few samples. The proposed sampling scheme is further optimized for full recovery of the input signal by matching the statistics of the scheme to the statistics of the input signal.
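The tone-location recovery mentioned above can be pictured with a textbook compressed-sensing toy: a greedy orthogonal-matching-pursuit loop over a partial DFT dictionary. This is a generic illustration with invented sizes, not the sampling and recovery scheme proposed in the thesis.

```python
# Toy compressed-sensing recovery of tone locations from relatively few time samples.
# Generic orthogonal matching pursuit over a partial DFT dictionary; all sizes invented.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 1024, 128, 3                      # frequency bins, samples taken, active tones
true_tones = rng.choice(N, size=K, replace=False)

t = np.sort(rng.choice(N, size=M, replace=False))        # random sampling instants
F = np.exp(2j * np.pi * np.outer(t, np.arange(N)) / N)   # partial DFT dictionary (M x N)
x = np.zeros(N, complex)
x[true_tones] = 1.0
y = F @ x                                   # the M measurements of the sparse signal

support, residual = [], y.copy()
for _ in range(K):                          # greedy OMP iterations
    idx = int(np.argmax(np.abs(F.conj().T @ residual)))  # best-correlated atom
    support.append(idx)
    A = F[:, support]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)          # re-fit on current support
    residual = y - A @ coef

print("true tones:     ", sorted(true_tones.tolist()))
print("recovered tones:", sorted(support))
```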

3

Koutrakou, Vassiliki N. „National and European collaborative programmes in information technology research in the 1980's“. Thesis, University of Kent, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317659.

4

Li, Yujin. „Mobility and Traffic Correlations in Device-to-Device (D2D) Communication Networks“. Thesis, North Carolina State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3690209.

5

Martin, Fregelius. „Power electronics and controller interface for a Voltage Source Converter“. Thesis, Uppsala universitet, Elektricitetslära, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-322903.

Annotation:
The purpose of the thesis is to develop a system for a split-rotor drive and to evaluate controllers and their internal components, such as processors, communication protocols and execution speed, for controlling magnetization currents in a hydropower station. The first part of the thesis builds the theory review and provides an introduction to the most common processors and controllers available. The processors that were evaluated were the microprocessor, DSP and FPGA, which have a high capacity and a variety of implementation possibilities. Two controllers, PLC and PAC, were evaluated; they contain one or several of the processors, offer a wide variety of inputs and outputs, and support several communication protocols. Three different communication protocols were considered: WLAN 802.11, Ethernet 802.3 and Bluetooth 802.15.1. The evaluation was made by comparing BER, throughput, speed and implementation complexity. The second part of the thesis was to develop and order an interface card for connecting power electronics and measurement circuits for the system, based on the theory and the evaluation of the controller and communication protocols.
6

Bevington, John S. „A model for generating object-based change information from multi-temporal remotely-sensed imagery“. Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/193461/.

Annotation:
As world populations increasingly are clustered in urban areas, so there is a tangible need for accurate mapping of these regions by national mapping agencies. A consequential impact of growing cities is that greater numbers of people across the globe are vulnerable to the effects of natural disasters or anthropogenic catastrophes. Tools such as remote sensing have been widely used by researchers to monitor urban areas for applications such as land use and land cover changes and population distribution to name a few. Air- and space-borne sensors with fine spatio-temporal resolutions have facilitated these analyses, offering an effective and efficient data source for multi-temporal analysis of urban areas. Alongside the increased data availability from remote sensors is a demand for efficient algorithms for interpretation of these images. This thesis describes the development of a conceptual framework for the iterative processing of fine spatial resolution optical images. It consists of two central components, object detection and object comparison. In the object detection phase, buildings are identified in the image and extracted as objects stored in a scene model. Object attributes describing the location, geometric, spectral and textural characteristics of each object are stored in a database, allowing the on-demand display as vector or raster entities. The thesis implements the model through exemplars for the detection of circular and cylindrical features on several remote sensing and simulated datasets. The object comparison phase allows automated change information to be generated describing per-object and intra-object brightness variability over time, hence, allowing change to be quantified for each detected feature. These descriptors facilitate the manual use of qualitative scales for damage assessment. A detailed discussion is presented on the merit of the conceptual model, its limitations and describes how future expansion of the model to full implementation could be achieved.
7

Karabey, Bugra. „Attack Tree Based Information Technology Security Metric Integrating Enterprise Objectives With Vulnerabilities“. Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12614100/index.pdf.

Annotation:
Security is one of the key concerns in the domain of Information Technology systems. Maintaining the confidentiality, integrity and availability of such systems mandates a rigorous prior analysis of the security risks that confront them. In order to analyze, mitigate and recover from these risks, a metrics-based methodology is essential for prioritizing the response strategies and for scheduling the resources allocated to mitigation. In addition, the Enterprise Objectives must be integrated into the definition, impact calculation and prioritization stages of this analysis to arrive at metrics that are useful for both the technical and the managerial communities within an organization. This inclusion also acts as a preliminary filter to overcome the real-life scalability issues inherent in such threat-modeling efforts. Within this study an attack-tree-based approach is utilized to offer an IT security risk evaluation method and metric called TEOREM (Tree based Enterprise Objectives Risk Evaluation Method and Metric) that integrates the Enterprise Objectives with the Information Asset vulnerability analysis within an organization. The applicability of the method has been analyzed in a real-life setting and the findings are discussed within this study.
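To make the attack-tree idea concrete, here is a minimal, generic AND/OR tree in which leaf probabilities propagate upward. It is only a sketch of the general technique with an invented scenario and numbers; it does not reproduce TEOREM's impact calculation or prioritisation.

```python
# Minimal generic attack tree: an OR node succeeds if any child does,
# an AND node only if all children do (independent leaf events assumed).
# Structure and probabilities are illustrative, not taken from TEOREM.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "LEAF"            # "LEAF", "AND" or "OR"
    prob: float = 0.0             # success probability, used for leaves only
    children: List["Node"] = field(default_factory=list)

    def success_probability(self) -> float:
        if self.kind == "LEAF":
            return self.prob
        probs = [c.success_probability() for c in self.children]
        if self.kind == "AND":
            p = 1.0
            for q in probs:
                p *= q
            return p
        fail = 1.0                # OR: 1 - product of failure probabilities
        for q in probs:
            fail *= (1.0 - q)
        return 1.0 - fail

steal_db = Node("steal customer database", "OR", children=[
    Node("exploit unpatched web server", prob=0.30),
    Node("phish admin and reuse credentials", "AND", children=[
        Node("phishing succeeds", prob=0.40),
        Node("credentials reused on DB host", prob=0.50),
    ]),
])
print(f"P(attack succeeds) = {steal_db.success_probability():.2f}")  # 0.44
```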
8

Renbi, Abdelghani. „Improved PWB test methodologies“. Licentiate thesis, Luleå tekniska universitet, EISLAB, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-18253.

Annotation:
Printed Wiring Board (PWB) and Printed Circuit Board Assembly (PCBA) testing aims to ensure an error-free board after the etching and assembly processes. After the etching process, several types of defects might occur, such as opens and bridges, which are already showstoppers in Direct Current (DC) applications, and mouse bites, spurs and other flaws such as weak traces, which can be problematic in Radio Frequency (RF) and high-speed signal applications. Loading expensive components on defective boards can be economically catastrophic, especially in high-volume production. The rule of ten reported by production experts says that a defect costs ten times more when it is detected in the next testing phase. The bare board also needs to be tested for the correctness of its characteristic impedance, because process variations and compounding raw-material tolerances can cause characteristic impedance mismatches. Although testing the characteristic impedance is not of interest in some applications, sampling the characteristic impedance for a specific design is one way to test the manufacturing process stability for better tuning; otherwise PWBs might differ from each other even within the same batch. In addition to the possibility of a defective PWB, the assembly process is never perfect enough to achieve a 100 % PCBA yield, because of possible errors in process steps such as paste application, pick-and-place operations and the soldering process, which might lead to bridges, opens, and wrong or mis-oriented components. For low-volume production, flying-probe test technology is cost-efficient compared to bed-of-nails. The performance of the flying-probe system depends on the test algorithm, the mechanical speed and the number of probes. To reduce the initial and maintenance costs of the probing technology and to accelerate the test time, Paper A introduces a new indirect method for PWB continuity and isolation testing, using a single probe to test both continuity and isolation at the same time. An RF signal is injected into the trace under test, instead of a DC current. The phase shift between the incident and the reflected signals is measured, as it carries the information about the correctness of the trace when compared with a reference value of the same trace on a correct board. The method showed an important capability for detecting PWB defects such as opens, DC and RF bridges, and lines of exceeded or different width. The margin in the measurement between a defective and a correct board, which depends on the type of defect, is about 7 % to 68 %. Applying this approach to PCBA testing led to significant margins between correct and defective interconnects; the test cases in Paper C showed 40 % and 33 %. Moreover, this margin has been proven to be significant even for a short microstrip line intended to connect two typical IC pins. The technique is strongly recommended for PCBA testing where probing is feasible. The approach can be applied to complete layout testing or to boost a test strategy whose test solutions do not cover 100 % of the possible defects. By applying this test solution to bed-of-nails equipment, 50 % of the probes can be removed; on the other hand, for a given design with NI isolated traces and NA adjacent pairs, employing this solution in a flying-probe system with two probes reduces the number of tests from (NI+NA) tests to NI tests, as isolation and continuity are performed in one go.
A flying-probe system involves mechanical movements, which dominate the test time; reducing the number of mechanical movements dramatically increases the test throughput. This method is also believed to be extremely fast for testing the correctness of the characteristic impedance, which is prone to variations due to the instability of the PWB manufacturing process; at the same time, one could employ the method to evaluate process stability by checking after each batch of PWBs. Papers B and D provide insight into the impact of PWB manufacturing variations on the characteristic impedance. Moreover, the single-probe approach is believed to have good potential for testing Sequential Build-Up (SBU) interconnects, where connections between component pads and the upper layers are often impossible to test with current test technologies.
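As a loose illustration of the pass/fail decision implied by the single-probe reflection method, the snippet below compares a measured phase shift with a golden-board reference and flags the trace when the deviation exceeds a chosen margin. The reference values, measurements and the 7 % threshold are invented and are not the thesis's calibration.

```python
# Hypothetical pass/fail check for a single-probe RF reflection test:
# compare the measured phase shift of a trace against a known-good reference.
# Reference values, measurements, and the 7 % margin are invented.
REFERENCE_PHASE_DEG = {"net_12": 141.0, "net_13": 87.5, "net_14": 203.2}
MARGIN = 0.07   # the smallest defect margin quoted above is on the order of 7 %

def check_trace(net: str, measured_phase_deg: float) -> str:
    ref = REFERENCE_PHASE_DEG[net]
    deviation = abs(measured_phase_deg - ref) / abs(ref)
    return "PASS" if deviation < MARGIN else f"FAIL (deviation {deviation:.1%})"

print(check_trace("net_12", 143.0))   # small deviation -> PASS
print(check_trace("net_13", 60.1))    # an open/bridge changes the reflection -> FAIL
```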
9

Sheriff, Ray E. „The 2010 Electronics and Telecommunications Research Seminar Series: 9th Workshop Proceedings“. University of Bradford, 2010. http://hdl.handle.net/10454/4355.

Annotation:
This is the ninth workshop to be organised under the postgraduate programmes in electrical and electronic engineering (EEE). The workshop concludes the Research Seminar Series, which has provided a platform for disseminating the latest research activities in related technologies through its weekly seminars. The EEE courses cover a broad range of technologies and this is reflected in the variety of topics presented during the workshop. In total, forty-four papers have been selected for the proceedings, which have been divided into eight sections. The workshop aims to be as close to a 'real' event as possible. Hence, authors have responded to a Call for Papers with an abstract, prior to the submission of the final paper. This has been a novel experience for many, if not all of the contributors. As usual, authors have taken up the challenge with enthusiasm, resulting in a collection of papers that reflects today's research challenges.
10

Persson, Anders. „Platform development of body area network for gait symmetry analysis using IMU and UWB technology“. Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39498.

Annotation:
Having a device capable of measuring the motions of a human gait could be of great importance in medicine and sports. Physicians or researchers could measure and analyse key features of a person's gait for the purposes of rehabilitation or research regarding neurological disabilities. In sports as well, professionals and hobbyists could use such a device to improve their technique or prevent injuries. In this master's thesis, I present research into what today's technology is capable of regarding gait analysis devices. This research has then helped the development of a suggested standalone hardware sensor node for a Body Area Network that can support research in gait analysis. Furthermore, several algorithms, for instance UWB real-time location and dead-reckoning IMU/AHRS algorithms, have been implemented and tested for the purpose of measuring motions while running on the sensor node device. The work in this thesis shows that an IMU sensor has great potential for generating high-rate motion data while running on a small mobile device. The UWB technology, on the other hand, proved disappointing in performance for the intended application but can still be useful for wireless communication between sensor nodes. The report also points out the importance of using a high-performance microcontroller for achieving high accuracy in the measurements.
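A dead-reckoning step of the kind mentioned above can be pictured as a simple double integration of acceleration. The fragment below is a naive strapdown sketch with made-up numbers; it ignores the orientation (AHRS) correction and drift compensation that a real IMU pipeline needs.

```python
# Naive dead-reckoning fragment: integrate acceleration to velocity and position.
# Sample data and rate are invented; real IMU/AHRS pipelines also rotate the
# accelerometer readings into a world frame and correct drift, omitted here.
import numpy as np

dt = 0.01                                   # 100 Hz sample period (assumed)
accel = np.array([[0.0, 0.1, 0.0],          # m/s^2, gravity already removed (assumed)
                  [0.0, 0.2, 0.0],
                  [0.0, 0.1, 0.0]])

velocity = np.zeros(3)
position = np.zeros(3)
for a in accel:
    velocity += a * dt                      # v <- v + a*dt
    position += velocity * dt               # p <- p + v*dt
print("velocity:", velocity, "position:", position)
```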
11

She, Huimin. „Network-Calculus-based Performance Analysis for Wireless Sensor Networks“. Licentiate thesis, KTH, Electronic, Computer and Software Systems, ECS, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10686.

Annotation:

Recently, the wireless sensor network (WSN) has become a promising technology with a wide range of applications such as supply chain monitoring and environment surveillance. It is typically composed of multiple tiny devices equipped with limited sensing, computing and wireless communication capabilities. The design of such networks presents several technical challenges when dealing with various requirements and diverse constraints. Performance analysis techniques are required to provide insight into design parameters and system behaviors.

Based on network calculus, we present a deterministic analysis method for evaluating the worst-case delay and buffer cost of sensor networks. To this end, three general traffic flow operators are proposed and their delay and buffer bounds are derived. These operators can be used in combination to model any complex traffic flow scenario. Furthermore, the method integrates a variable duty cycle to allow the sensor nodes to operate at low rates, thus saving power. In an attempt to balance traffic load and improve resource utilization and performance, traffic splitting mechanisms are introduced for mesh sensor networks. Based on network calculus, the delay and buffer bounds are derived for the non-splitting and splitting scenarios. In addition, the analysis of traffic splitting mechanisms is extended to sensor networks with general topologies. To provide reliable data delivery in sensor networks, retransmission has been adopted as one of the most popular schemes. We propose an analytical method to evaluate the maximum data transmission delay and energy consumption of two types of retransmission schemes: hop-by-hop retransmission and end-to-end retransmission.

We perform a case study of using sensor networks for a fresh food tracking system. Several experiments are carried out in the Omnet++ simulation environment. In order to validate the tightness of the two bounds obtained by the analysis method, the simulation results and analytical results are compared in the chain and mesh scenarios with various input traffic loads. From the results, we show that the analytic bounds are correct and tight. Therefore, network calculus is useful and accurate for the performance analysis of wireless sensor networks.
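For readers unfamiliar with network calculus, the standard deterministic bounds the abstract builds on can be stated as follows; these are the textbook results for arrival and service curves, not the specific operator bounds derived in the thesis.

```latex
% Textbook network-calculus bounds for a flow with arrival curve \alpha
% crossing a node offering service curve \beta (not the thesis's own derivations).
% Delay and backlog are bounded by the horizontal and vertical deviations:
\[
  d \;\le\; \sup_{s \ge 0}\, \inf\{\tau \ge 0 : \alpha(s) \le \beta(s+\tau)\},
  \qquad
  b \;\le\; \sup_{s \ge 0}\, \bigl(\alpha(s) - \beta(s)\bigr).
\]
% For a token-bucket arrival curve \alpha(t) = \sigma + \rho t and a
% rate-latency service curve \beta(t) = R\,(t - T)^{+} with R \ge \rho:
\[
  d \;\le\; T + \frac{\sigma}{R},
  \qquad
  b \;\le\; \sigma + \rho\, T.
\]
```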


12

Oza, Neal N. „Engineering Photonic Switches for Quantum Information Processing“. Thesis, Northwestern University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3669298.

Annotation:

In this dissertation, we describe, characterize, and demonstrate the operation of a dual-in, dual-out, all-optical, fiber-based quantum switch. This "cross-bar" switch is particularly useful for applications in quantum information processing because of its low-loss, high-speed, low-noise, and quantum-state-retention properties.

Building upon our lab's prior development of an ultrafast demultiplexer [1-3], the new cross-bar switch can be used as a tunable multiplexer and demultiplexer. In addition to this more functional geometry, we present results demonstrating faster performance with a switching window of ≈45 ps, corresponding to >20-GHz switching rates. We show a switching fidelity of >98%, i.e., switched polarization-encoded photonic qubits are virtually identical to unswitched photonic qubits. We also demonstrate the ability to select one channel from a two-channel quantum data stream with the state of the measured (recovered) quantum channel having >96% relative fidelity with the state of that channel transmitted alone. We separate the two channels of the quantum data stream by 155 ps, corresponding to a 6.5-GHz data stream.

Finally, we describe, develop, and demonstrate an application that utilizes the switch's higher-speed, lower-loss, and spatio-temporal-encoding features to perform quantum state tomographies on entangled states in higher-dimensional Hilbert spaces. Since many previous demonstrations show bipartite entanglement of two-level systems, we define "higher" as d > 2, where d represents the dimensionality of a photon. We show that we can generate and measure time-bin-entangled, two-photon, qutrit (d = 3) and ququat (d = 4) states with >85% and >64% fidelity to an ideal maximally entangled state, respectively. Such higher-dimensional states have applications in dense coding [4], loophole-free tests of nonlocality [5], simplifying quantum logic gates [6], and increasing tolerance to noise and loss for quantum information processing [7].
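The fidelity figures quoted above can be read as overlaps with an ideal maximally entangled state. The toy calculation below computes such an overlap for a noisy two-qutrit state using the generic formula F = ⟨Φ|ρ|Φ⟩ for a pure target; it has nothing to do with the dissertation's actual tomography pipeline, and the noise model is invented.

```python
# Toy fidelity check for a two-qutrit (d = 3) state against the maximally
# entangled target |Phi> = (|00> + |11> + |22>)/sqrt(3).
# The "measured" state is just the target mixed with white noise (invented model).
import numpy as np

d = 3
phi = np.zeros(d * d)
for k in range(d):
    phi[k * d + k] = 1.0
phi /= np.sqrt(d)

ideal = np.outer(phi, phi)                  # density matrix |Phi><Phi|
noise = np.eye(d * d) / (d * d)             # maximally mixed state
p = 0.9                                     # assumed weight of the ideal component
rho = p * ideal + (1 - p) * noise

fidelity = float(phi @ rho @ phi)           # F = <Phi| rho |Phi> for a pure target
print(f"fidelity with the ideal state: {fidelity:.3f}")   # ~0.911
```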

13

Larsson, Erik, and Niklas Kron. „Independent project in electrical engineering : Magnetic hand timepiece“. Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-325637.

14

Etherden, Nicholas. „Increasing the hosting capacity of distributed energy resources using storage and communication“. Doctoral thesis, Luleå tekniska universitet, Energivetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-18490.

Annotation:
This thesis develops methods to increase the amount of renewable energy sources that can be integrated into a power grid. The assessed methods include i) dynamic real-time assessment to enable the grid to be operated closer to its design limits; ii) energy storage and iii) coordinated control of distributed production units. Power grids using such novel techniques are referred to as "Smart Grids". Under favourable conditions the use of these techniques is an alternative to traditional grid planning like replacement of transformers or construction of a new power line. Distributed Energy Resources like wind and solar power will impact the performance of the grid and this sets a limit to the amount of such renewables that can be integrated. The work develops the hosting capacity concept as an objective metric to quantify the ability of a power grid to integrate new production. Several case studies are presented using actual hourly production and consumption data. It is shown how the different variability of renewables and consumption affects the hosting capacity. The hosting capacity method is extended to the application of storage and curtailment. The goal is to create greater comparability and transparency, thereby improving the factual base of discussions between grid operators, electricity producers and other stakeholders on the amount and type of production that can be connected to a grid. Energy storage allows the consumption and production of electricity to be decoupled. This in turn allows electricity to be produced as the wind blows and the sun shines while consumed when required. Yet storage is expensive, and the research defines when storage offers unique benefits not possible to achieve by other means. The focus is on comparison of storage to conventional and novel methods. As the number of distributed energy resources increases, their electronic converters need to provide services that help to keep the grid operating within its design criteria. The use of functionality from IEC Smart Grid standards, mainly IEC 61850, to coordinate the control and operation of these resources is demonstrated in a Research, Development and Demonstration site. The site contains wind power, solar power, and battery storage together with the communication and control equipment expected in future grids. Together, storage, new communication schemes and grid control strategies allow increased amounts of renewables in existing power grids, without unacceptable effects on users and grid performance.
The thesis studies how existing power grids can accommodate more production from renewable energy sources such as wind power and solar energy. A methodology is developed to objectively quantify the amount of new production a grid can accommodate. In several case studies on real grids, the potential benefits of energy storage, real-time limits on the grid's transfer capability, and coordinated control of small-scale energy resources are evaluated. The proposed solutions for storage and communication have been verified experimentally in a research, development and demonstration facility in Ludvika.
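The hosting-capacity idea described above can be illustrated with a deliberately simple sweep: keep adding generation to a feeder until a chosen performance index violates an assumed limit. The feeder model, limits and numbers below are invented and are far simpler than the case studies in the thesis.

```python
# Toy hosting-capacity sweep on a single feeder: add PV generation until the
# voltage rise at the connection point exceeds an assumed planning limit.
# Feeder impedance, load, and limits are invented for illustration only.
U_NOM_KV = 0.4            # nominal voltage (kV)
R_OHM = 0.25              # feeder resistance seen from the connection point
P_LOAD_MW = 0.05          # minimum local load (worst case for overvoltage)
V_LIMIT_PU = 1.05         # assumed overvoltage limit

def voltage_rise_pu(p_gen_mw: float) -> float:
    # crude approximation: per-unit rise ~ R * (P_gen - P_load) / U^2, reactive power ignored
    p_watts = (p_gen_mw - P_LOAD_MW) * 1e6
    u_volts = U_NOM_KV * 1000.0
    return 1.0 + R_OHM * p_watts / u_volts**2

p = 0.0
while voltage_rise_pu(p + 0.01) <= V_LIMIT_PU:
    p += 0.01
print(f"hosting capacity ~ {p:.2f} MW before the {V_LIMIT_PU} p.u. limit is hit")
```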
15

Etherden, Nicholas. „Increasing the hosting capacity of distributed energy resources using storage and communication“. Licentiate thesis, Luleå tekniska universitet, Energivetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-18009.

Annotation:
The use of electricity from Distributed Energy Resources like wind and solar power will impact the performance of the electricity network and this sets a limit to the amount of such renewables that can be connected. Investment in energy storage and communication technologies enables more renewables by operating the network closer to its limits. Electricity networks using such novel techniques are referred to as "Smart Grids". Under favourable conditions the use of these techniques is an alternative to traditional network planning like replacement of transformers or construction of a new power line. The Hosting Capacity is an objective metric to determine the limit of an electricity network to integrate new consumption or production. The goal is to create greater comparability and transparency, thereby improving the factual base of discussions between network operators and owners of Distributed Energy Resources on the quantity and type of generation that can be connected to a network. This thesis extends the Hosting Capacity method to the application of storage and curtailment and develops additional metrics such as the Hosting Capacity Coefficient. The research shows how the different intermittency of renewables and consumption affects the Hosting Capacity. Several case studies using real production and consumption measurements are presented. The focus is on how the permitted amount of renewables can be extended by means of storage, curtailment and advanced distributed protection and control schemes.
The use of electricity from renewable energy sources such as wind and solar will affect the power grid, which sets a limit to how much distributed generation can be connected. Investments in large-scale energy storage and the use of modern communication technology make it possible to increase the share of renewable energy by operating the grid closer to its limits. Grids using such new techniques are often called "Smart Grids". Implementing such smart grids can be an alternative to traditional network planning and measures such as replacing transformers or building new power lines. The grid's hosting capacity is an objective measure for determining the limit of the grid's ability to integrate new consumption or production. The goal is to create greater transparency and contribute to a better factual basis in discussions between network operators and owners of distributed energy resources. This thesis extends the hosting capacity method to applications with energy storage and curtailment, and develops additional concepts such as the hosting capacity coefficient. The research shows how the variability of different renewable energy sources interacts with consumption and affects the grid's hosting capacity. Several case studies from real grids with measured production and consumption are presented. The focus is on how the permitted amount of renewables can be increased by means of energy storage, controlled curtailment, and advanced distributed protection and control applications.



16

Ahmadi, Teshnizi Amir Pouya, Marcus Hellström, Tom Bärnheim and Hassan Soltani. „IoT Air Quality Sensor Array : Master's Programme in Electrical Engineering“. Thesis, Uppsala universitet, Institutionen för elektroteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448142.

17

Al, Kzair Christian, Altin Januzi and Andreas Blom. „Understanding the fundamentals of CPU architecture : Bachelor project in Electrical engineering“. Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353427.

Annotation:
Understanding how a computer, or rather a CPU, works can be tricky. We live today in a society full of computers, and many people do not understand how a CPU works. This project aims to explain how a CPU works and the architecture behind it. To this end, the fundamental theory is presented along with a practical computer that has been built from scratch. This computer can demonstrate the theory behind how the CPU works and also how it communicates. The computer is 8-bit, which has its limitations, but it can show the fundamental theory behind how computers work.
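The core of what such an 8-bit machine demonstrates is the fetch-decode-execute cycle. The fragment below is a generic toy interpreter with an invented three-instruction set, offered only as an illustration of that cycle; it is not the hardware built in the project.

```python
# Toy 8-bit CPU: fetch-decode-execute over a tiny invented instruction set.
# Opcodes: 0x01 = LDA imm (load accumulator), 0x02 = ADD imm, 0xFF = HLT.
memory = [0x01, 0x05,    # LDA 5
          0x02, 0x03,    # ADD 3
          0xFF]          # HLT
acc, pc, running = 0, 0, True

while running:
    opcode = memory[pc]                       # fetch
    if opcode == 0x01:                        # decode + execute: LDA
        acc = memory[pc + 1] & 0xFF
        pc += 2
    elif opcode == 0x02:                      # ADD with 8-bit wrap-around
        acc = (acc + memory[pc + 1]) & 0xFF
        pc += 2
    elif opcode == 0xFF:                      # HLT
        running = False
    else:
        raise ValueError(f"unknown opcode {opcode:#x}")

print("accumulator =", acc)   # 8
```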
18

Gupta, Shoubhik. „Ultra-thin silicon technology for tactile sensors“. Thesis, University of Glasgow, 2019. http://theses.gla.ac.uk/41053/.

Annotation:
In order to meet the requirements of high-performance flexible electronics in fast-growing portable consumer electronics, robotics and new fields such as the Internet of Things (IoT), new techniques such as electronics based on nanostructures, molecular electronics and quantum electronics have emerged recently. The importance given to silicon chips with a thickness below 50 μm is particularly interesting, as this will advance 3D IC technology as well as open new directions for high-performance flexible electronics. This doctoral thesis focusses on the development of silicon-based ultra-thin chips (UTCs) for the next generation of flexible electronics. UTCs, on the one hand, can provide processing speed on par with state-of-the-art CMOS technology, and on the other provide the mechanical flexibility to allow smooth integration on flexible substrates. These developments form the motivation behind the work presented in this thesis. As the thickness of any silicon piece decreases, the flexural rigidity decreases. The flexural rigidity is defined as the force couple required to bend a non-rigid structure to a unit curvature, and therefore the flexibility increases. The new approach presented in this thesis for achieving thin silicon exploits existing and well-established silicon infrastructure, process, and design modules. The thin chips, with thicknesses ranging between 15 μm and 30 μm, were obtained from a processed bulk wafer using anisotropic chemical etching. The thesis also presents thin wafer transfer using a two-step transfer printing approach, packaging by lamination or encapsulation between two flexible layers, and methods to get the electrical connections out of the chip. The devices realised on the wafer as part of front-end processing, consisting of capacitors and transistors, have been tested to analyse the effect of bending on the electrical characteristics. The capacitance of metal-oxide-semiconductor (MOS) capacitors increases by ~5% during bending and a similar shift is observed in the flatband and threshold voltages. Similarly, the carrier mobility in the channel region of the metal-oxide-semiconductor field-effect transistor (MOSFET) increases by 9% in tensile bending and decreases by ~5% in compressive bending. The analytical model developed to capture the effect of bending on device performance showed close matching with the experimental results. In order to employ these devices as tactile sensors, two types of piezoelectric materials are investigated and used in an extended-gate configuration with the MOSFET. Firstly, a nanocomposite of poly(vinylidene fluoride-co-trifluoroethylene), P(VDF-TrFE), and barium titanate (BT) was developed. The composite, due to the opposite piezo- and pyroelectric coefficients of its constituents, was able to suppress the sensitivity towards temperature when force and temperature varied together. The sensitivity to force in the extended-gate configuration was measured to be 630 mV/N, and the sensitivity to temperature was 6.57 mV/°C when it was varied during force application. The process optimisation for sputtering piezoelectric aluminium nitride (AlN) was also carried out with many parametric variations. AlN does not require poling to exhibit piezoelectricity and therefore offers an attractive alternative for the piezoelectric layer used in devices such as the POSFET (where the piezoelectric material is deposited directly over the gate area of the MOSFET).
The optimised process gave highly orientated columnar-structure AlN with a piezoelectric coefficient of 5.9 pC/N, and when connected in the extended-gate configuration, a sensitivity (normalised change in drain current per unit force) of 2.65 N⁻¹ was obtained.
19

Zhang, Lanyun. „Using mobile technology to facilitate the user experience of group holiday decision-making“. Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/51935/.

Annotation:
With the increasing expenditure on international tourism around the world, the topic of group holiday decision-making has drawn attention in the fields of tourism research and business management (Mottiar & Quinn, 2004; Wang et al., 2004; Carr, 2005; Jacobsen & Munar, 2012; Assayer et al., 2011). Yet the user experience of tourists in groups has been reported to be in need of improvement (Garcia et al., 2009). For example, such user experience lacks effective information sharing among group members, a convenient communication environment, and efficient decision-making support (Decrop, 2005). A possible solution is technology such as smartphones, which have evolved from single-purpose communication devices into dynamic tools that support users in a wide variety of tasks (Böhmer et al., 2011). This thesis is devoted to studying the user experience related to technology-supported group holiday decision-making. It aims to investigate how mobile technology can help a group of people make holiday decisions, with a view to enhancing the user experience. This thesis reviews theoretical approaches to help understand the concepts and related work (Chapter 2). Research methods are also discussed, including the framework of user-centred design employed in this research and the challenges of exploring user experience in this context (Chapter 3). This thesis investigates the user experience of how tourist groups plan their trips, including an understanding of user behaviour and requirements. It proposes a model of the group trip planning process to describe the core elements of group holiday planning (Chapter 4). Then, it explores a number of factors that influence the group holiday planning process (Chapter 5). Next, tourism information presentation is examined by exploring the characteristics of different types of textual tourism information on the Internet and how the perceptions of tourists are affected by these different types of information (Chapter 6). Design implications are derived and discussed to guide the design of technology for the purpose of facilitating the group holiday planning process. Chapter 7 describes the three key elements considered in this design of mobile technology: usability, personalisation, and enjoyable user experience. The development of a prototype of this technology, #GT-Planner, is also elaborated (Chapter 7). Finally, this thesis investigates the user experience of this prototype (#GT-Planner), in which both subjective approaches (i.e., questionnaires and interviews) and an objective approach (i.e., physiological measurement) are employed (Chapter 8). #GT-Planner is shown to facilitate the group holiday decision-making process and result in an enriched user experience. The thesis primarily discusses the understanding of the user experience of group holiday decision-making, the design implications for group holiday decision-making, the framework of user-centred design, and methods for examining the users in a group and evaluating the technology. Finally, findings and conclusions are specified and highlighted, along with a discussion of the contributions derived from this thesis and the avenues for future work.
20

Fitzpatrick, Dominic Michael. „Novel MMIC design process using waveform engineering“. Thesis, Cardiff University, 2012. http://orca.cf.ac.uk/47079/.

Annotation:
It has always been the case that talented individuals with an innate understanding of their subject have been able to produce works of outstanding performance. The purpose of engineering science is to define ways in which such achievements can be made on a regular, predictable basis with a high degree of confidence in success. Some tools, such as computers, have enabled an increase in speed and accuracy, whilst others have given a dramatic increase in insight into the operation or behaviour of materials; the electron microscope, for instance. Still others have enabled the creation of devices on a scale unimaginable to our predecessors, Molecular Beam Epitaxy for example. This work is the product of the availability of an understanding of complex theory on microwave transistor operation, significant increases in mathematical processing and data handling, and the assembly of a ‘tool’ that not only allows the measurement of high-frequency waveforms, but also their manipulation to simultaneously create the environments envisioned by the design engineer. It extends the operation of previous narrow-band active load-pull measurement systems to 40 GHz and, importantly, facilitates the design of high-efficiency modes at X band. The main tenet of this work is to propose that, rather than the linear approach of characterisation, design, test and re-iteration that has been the standard approach to MMIC design to date, the first three stages should be integrated into a single approach which should obviate the need for design reiteration. The result of this approach should be better performance from amplifier designs, greater probability of first-time success, and lower costs through less wafer real estate being consumed and fewer design ‘spins’.
21

Nemeth, Balazs. „Ion camera development for real–time acquisition of localised pH responses using the CMOS based 64×64–pixel ISFET sensor array technology“. Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3559/.

Annotation:
This thesis presents the development and test of an integrated ion camera chip for monitoring highly localised ion fluxes of electrochemical processes using an ion-sensitive sensor array. Ionic concentration fluctuations are shown to travel across the sensor array as a result of citric acid injection and the BZ reaction. The imaging capability for non-equilibrium chemical activities is also demonstrated by monitoring self-assembling, micrometre-sized polyoxometalate tubular and membranous architectures. Sufficient spatial resolution for the visualisation of the 10-60 µm wide growing trajectories is provided by the dense sensor array containing 64×64 pixels. In the case of citric acid injection and the BZ reaction the ion camera chip is shown to be able to resolve pH differences with a resolution as low as the area of one pixel. As a result of the transient and volatile ionic fluxes, high time resolution is required, so the signal capturing can be performed in real time at a maximum sampling rate of 40 µs per pixel, 10.2 ms per array. The extracted sensor data are reconstructed into ionic images, and thus the ionic activities can be displayed as individual figures as well as continuous video recordings. This chip is the first prototype in the envisioned establishment of a fully automated CMOS-based ion camera system which would be able to image the invisible activity of ions using a single microchip. In addition, the capability of detecting ultra-low-level pH oscillations in the extracellular space is demonstrated using cells of the slime mould organism. The detected pH oscillations, with an extent of ~0.022 pH, furthermore raise the potential for observing fluctuations of ion currents in cell-based tissue environments. The intrinsic noise of the sensor devices is measured to observe its effect on the detected low-level signals. It is experimentally shown that the ion-sensitive circuits used, similarly to CMOS, also exhibit 1/f noise. In addition, the reference-bias and pH sensitivity of the measured noise is confirmed. Corresponding to the measurement results, the noise contribution is approximated with a 28.2 µV peak-to-peak level and related to the 450 µV +/- 70 µV peak-to-peak oscillation amplitudes of the slime mould; thus a maximum intrinsic noise contribution of 6.2 +/- 1.2 % is calculated. An H+ flickering hypothesis is also presented that correlates the pH fluctuations on the surface of the device with the intrinsic 1/f noise. The ion camera chip was fabricated in an unmodified 4-metal 0.35 µm CMOS process and the ionic imaging technology was based on a 64×64-pixel ion-sensitive field-effect transistor (ISFET) array. The high-speed and synchronous operation of the 4096 ISFET sensors occupying a 715.8×715.8 µm area provided a spatial resolution as low as one pixel. Each pixel contained 4 transistors with 10.2×10.2 µm layout dimensions and the pixels were separated by a 1 µm separation gap. The ion-sensitive silicon-nitride-based passivation layer was in contact with the floating gates of the ISFET sensors. It allowed the capacitive measurement of localised changes in the ionic concentrations, e.g. pH, pNa, on the surface of the chip. The device showed an average ionic sensitivity of 20 mV/pH and 9 mV/pNa. The packaging and encapsulation were carried out using PGA-100 chip carriers and two-component epoxies. Custom-designed printed circuit boards (PCBs) were used to provide the interface between the ISFET array chip and the data acquisition system.
The data acquisition and extraction part of the developed software system was based on LabVIEW; the data processing was carried out on the Matlab platform.
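The quoted ~6 % noise contribution follows from a simple ratio of the peak-to-peak levels given in the abstract. The short calculation below reproduces that arithmetic; it is reconstructed from the quoted figures, not taken from the thesis.

```python
# Reproducing the noise-contribution estimate from the figures quoted above:
# 28.2 uV peak-to-peak intrinsic noise vs 450 +/- 70 uV peak-to-peak pH oscillations.
noise_uvpp = 28.2
signal_uvpp, signal_tol = 450.0, 70.0

nominal = noise_uvpp / signal_uvpp                     # ~6.3 %
worst   = noise_uvpp / (signal_uvpp - signal_tol)      # ~7.4 % (smallest signal)
best    = noise_uvpp / (signal_uvpp + signal_tol)      # ~5.4 % (largest signal)
print(f"noise contribution ~ {nominal:.1%} (range {best:.1%} - {worst:.1%})")
```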
22

Malek, Behzad. „Efficient private information retrieval“. Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26966.

Annotation:
In this thesis, we study Private Information Retrieval and Oblivious Transfer, two strong cryptographic tools that are widely used in various security-related applications, such as private data-mining schemes and secure function evaluation protocols. The first non-interactive, secure dot-product protocol, widely used in private data-mining schemes, is proposed based on trace functions over finite fields. We further improve the communication overhead of the best previously known Oblivious Transfer protocol from O((log n)^2) to O(log n), where n is the size of the database. Our communication-efficient Oblivious Transfer protocol is a non-interactive, single-database scheme that is generally built on Homomorphic Encryption Functions. We also introduce a new protocol that reduces the computational overhead of Private Information Retrieval protocols. This protocol is shown to be computationally secure for users, depending on the security of the McEliece public-key cryptosystem. The total online computational overhead is the same as in the case where no privacy is required. The computation-saving protocol can be implemented entirely in software, without any need for installing a secure piece of hardware or replicating the database among servers.
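For readers new to PIR, the classic two-server XOR scheme makes the privacy goal concrete: neither server alone learns which record the client wants. Note that this is an information-theoretic, multi-server toy, a simpler cousin of the single-database, homomorphic-encryption-based protocols studied in the thesis; the database contents here are invented.

```python
# Classic 2-server XOR-based PIR (toy): each server sees only a random-looking
# index mask, yet the XOR of the two answers is exactly the wanted record.
# Textbook scheme for illustration, not the thesis's single-database protocols.
import secrets

database = [0x11, 0x22, 0x33, 0x44]          # invented 1-byte records, n = 4
n = len(database)
wanted = 2                                   # the client's secret index

mask1 = [secrets.randbelow(2) for _ in range(n)]   # random subset for server 1
mask2 = mask1.copy()
mask2[wanted] ^= 1                                 # same subset, with "wanted" flipped

def server_answer(db, mask):
    ans = 0
    for record, bit in zip(db, mask):
        if bit:
            ans ^= record                    # XOR of the selected records
    return ans

a1 = server_answer(database, mask1)          # answer from server 1
a2 = server_answer(database, mask2)          # answer from server 2
print("recovered record:", hex(a1 ^ a2))     # 0x33, the wanted record
```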
23

Sörensson, Christian. „Cost efficient fluid sensor : Master’s Thesis project in Engineering Physics“. Thesis, Uppsala universitet, Mikrosystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-317792.

Annotation:
A theoretical investigation of existing sensor techniques, covering both commercial sensors and scientific studies, has been performed in order to find a cost-efficient fluid sensor with the ability to detect small amounts of non-conducting fluids. From these studies, six different techniques could be distinguished. The techniques were tested and compared, both in theory and practically, against certain criteria such as temperature and movement sensitivity. Three of the techniques were proved to work and two of them were built, installed and tested on an industrial robot manufactured by ABB Robotics. The two most promising techniques were a photo interrupter and a Quartz Crystal Microbalance sensor. After testing it could be concluded that both sensors fulfilled all preferences. However, out of the two, the Quartz Crystal Microbalance sensor performed best and could detect smaller amounts of fluid more quickly and reliably than the photo interrupter. This work has resulted in a patent application.
24

Harrison, Andre V. „Information content models of human vision“. Thesis, The Johns Hopkins University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3572710.

Annotation:

From night vision goggles, to infrared imagers, to remote controlled bomb disposal robots; we are increasingly employing electronic vision sensors to extend or enhance the limitations of our own visual sensory system. And while we can make these systems better in terms of the amount of power they use, how much information they capture, or how much information they can send to the viewer, it is also important to keep in mind the capabilities of the human who must receive this visual information from the sensor and display system. The best interface between our own visual sensory system and that of the electronic image sensor and display system is one where the least amount of visual information is sent to our own sensory system for processing, yet contains all the visual information that we need to understand the desired environment and to make decisions based on that information. In order to do this it is important to understand both the physiology of the visual sensory system and the psychophysics of how this information is used. We demonstrate this idea by researching and designing the components needed to optimize the compression of dynamic range information onto a display, for the sake of maximizing the amount of perceivable visual information shown to the human visual system.

An idea that is repeated in the construction of our optimal system is the link between designing, modeling, and validating both the design and the model through human testing. Compressing the dynamic range of images while trying to maximize the amount of visual information shown is a unique approach to dynamic range compression. So the first component needed to develop our optimal compression method is a way to measure the amount of visual information present in a compressed image. We achieve this by designing an Information Content Quality Assessment metric and we validate the metric using data from our psychophysical experiments [in preparation]. Our psychophysical experiments compare different dynamic range compression methods in terms of the amount of information that is visible after compression. Our quality assessment metric is unique in that it models visual perception using information theory rather than biology. To validate this approach, we extend our model so that it can predict human visual fixation. We compare the predictions of our model against human fixation data and the fixation predictions of similar state of the art fixation models. We show that the predictions of our model are at least comparable or better than the predictions of these fixation models. We also present preliminary results on applying the saliency model to identify potentially salient objects in out-of-focus locations due to a finite depth-of-field [in preparation]. The final component needed to implement the optimization is a way to combine the quality assessment model with the results of the psychophysical experiments to reach an optimal compression setting. We discuss how this could be implemented in the future using a generic dynamic range compression algorithm. We also present the design of a wide dynamic range image sensor and a mixed mode readout scheme to improve the accuracy of illumination measurements for each pixel over the entire dynamic range of the imager.

25

Li, Gan. „Stochastic analysis and optimization of power system steady-state with wind farms and electric vehicles“. Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3836/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Since the end of the last century, power systems have increasingly been operating under highly stressed and unpredictable conditions, owing not only to market-oriented reform but also to the rise of renewable generation and electric vehicles. The uncertain factors resulting from these changes place higher demands on the reliability of power grids. Conventional deterministic analysis and optimization methods cannot meet these requirements well, so stochastic analysis and optimization methods are becoming more and more important. This thesis covers different aspects of stochastic analysis and optimization of power systems from the perspective of steady-state operation. Its main research topics consist of four parts: deterministic power flow calculations, modelling of wind farm power output and electric vehicle charging demand, probabilistic power flow calculations, and stochastic optimal power flow. Together these topics, spanning modelling, analysis and optimization, establish a complete stochastic methodology for power systems with wind farms and electric vehicles.
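For the wind farm modelling component, a common starting point is to drive a piecewise turbine power curve with Weibull-sampled wind speeds; the sketch below is only illustrative, and the cut-in, rated and cut-out speeds, the Weibull parameters and the farm size are assumed values, not those used in the thesis.

```python
import numpy as np

def turbine_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
    """Piecewise power curve (MW): zero below cut-in and above cut-out,
    cubic rise between cut-in and rated speed, flat at rated power."""
    v = np.asarray(v, dtype=float)
    p = np.zeros_like(v)
    rising = (v >= v_in) & (v < v_rated)
    p[rising] = p_rated * (v[rising]**3 - v_in**3) / (v_rated**3 - v_in**3)
    p[(v >= v_rated) & (v <= v_out)] = p_rated
    return p

# Monte Carlo sample of farm output from Weibull-distributed wind speeds
rng = np.random.default_rng(1)
v = 8.0 * rng.weibull(2.0, size=100_000)      # scale 8 m/s, shape 2 (assumed)
p = 50 * turbine_power(v)                     # 50 identical turbines, no wake model
print(f"mean {p.mean():.1f} MW, 5th-95th percentile {np.percentile(p, [5, 95])}")
```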
26

Whyte, Griogair W. M. „Antennas for wireless sensor network applications“. Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/408/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
The objective of this thesis is to present an analysis of antennas which are applicable to wireless sensor networks and, in particular, to the requirements of the Speckled Computing Network Consortium. This was done through a review of the scientific literature on the subject, and the design, computer simulation and experimental verification of various suitable antenna designs. The first part of this thesis outlines what an antenna is and how it radiates. An insight is also given into the fundamental limitations of antennas. As the antennas investigated in this thesis are planar printed designs, an insight into the applicable types of feed lines, such as microstrip, CPW and slotline, is given. To help characterise the antennas investigated, the fundamental antenna analysis parameters, such as impedance bandwidth, S-parameters, radiation pattern, directivity, antenna efficiency, gain and polarisation, are discussed. Also discussed is the 3D electromagnetic simulation software, HFSS, which was used to simulate the antennas in this thesis. To help illustrate the use of HFSS, a proximity-coupled patch antenna, operating at 5.8 GHz, was used as an example. A range of antennas were designed, manufactured and tested. These used conventional printed circuit boards (PCBs) and Gallium Arsenide (GaAs) substrates, operating at frequencies from 2.4 GHz to 12 GHz. A review was conducted into relevant radio architectures, such as conventional narrowband systems, Ultra-Wide Band (UWB), and simplified radio architectures such as those based on the diode rectifier method and Super-Regenerative Receivers (SRR). Several UWB antennas were designed, which operate over a 3.1 – 10.16 GHz band with a VSWR ≤ 2. All the UWB antennas were required to transmit a UWB pulse with minimal distortion, which placed a requirement of linear phase and low group delay on the designs to minimise distortion of the pulse. UWB antennas investigated included a Vivaldi antenna, which was large, directional and gave excellent pulse transmission characteristics. A CPW-fed monopole was also investigated, which was small, omni-directional and had poor pulse transmission characteristics. A UWB dipole was designed for use in a UWB channel modelling experiment in collaboration with Strathclyde University. The initial UWB dipole investigated was a microstrip-fed structure that had unpredictable behaviour due to the feed, which excited leakage current down the feed cable and, as a result, distorted both the radiation pattern and the pulse. To minimise the leakage current, three other UWB dipoles were investigated. These were a CPW-fed UWB dipole with slots, a hybrid-feed UWB dipole, and a tapered-feed UWB dipole. Presented for these UWB dipoles are S-parameter results, obtained using a vector network analyser, and radiation pattern results obtained using an anechoic chamber. Several antennas investigated in this thesis were directly related to the Speckled Computing Consortium's objective of designing a 5 mm³ 'Speck'. These antennas were conventional narrowband antenna designs operating at either 2.45 GHz or 5.8 GHz. A Rectaxial antenna was designed at 2.45 GHz, which had excellent matching (S11 = -20 dB) at the frequency of operation, and an omni-directional radiation pattern with a maximum gain of 2.69 dBi as measured in a far-field anechoic chamber. Attempts were made to increase the frequency of operation but this proved unsuccessful.
Also investigated were antennas that were designed to be integrated with a 5.8 GHz MMIC transceiver. The first antenna investigated was a compact folded dipole, which provided an insight into the miniaturisation of antennas and its effect on antenna efficiency. The second antenna investigated was a 'patch' antenna. The 'patch' antenna utilised the entire geometry of the transceiver as a radiation mechanism and, as a result, had a much improved gain compared to the compact folded dipole antenna. As the entire transceiver was an antenna, an investigation was carried out into the amount of power flow through the transceiver with respect to the input power.
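The VSWR ≤ 2 matching requirement quoted above maps directly onto a return-loss figure; a minimal conversion between S11 in dB and VSWR, independent of any particular antenna in the thesis:

```python
import numpy as np

def s11_db_to_vswr(s11_db):
    """Convert a return loss S11 (dB, negative for a matched port) to VSWR."""
    gamma = 10 ** (np.asarray(s11_db, dtype=float) / 20.0)   # |reflection coefficient|
    return (1 + gamma) / (1 - gamma)

# VSWR <= 2 corresponds to S11 of roughly -9.5 dB or better;
# the -20 dB Rectaxial match corresponds to a VSWR of about 1.22
print(s11_db_to_vswr([-9.54, -10.0, -20.0]))
```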
27

Worrall, Kevin James. „Guidance and search algorithms for mobile robots : application and analysis within the context of urban search and rescue“. Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/508/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Urban Search and Rescue is a dangerous task for rescue workers and for this reason the use of mobile robots to carry out the search of the environment is becoming commonplace. These robots are remotely operated and the search is carried out by the robot operator. This work proposes that common search algorithms can be used to guide a single autonomous mobile robot in a search of an environment and locate survivors within the environment. This work then goes on to propose that multiple robots, guided by the same search algorithms, will carry out this task in a quicker time. The work presented is split into three distinct parts. The first is the development of a nonlinear mathematical model for a mobile robot. The model developed is validated against a physical system. A suitable navigation and control system is required to direct the robot to a target point within an environment; this is the second part of this work. The final part of this work presents the search algorithms used. The search algorithms generate the target points which allow the robot to search the environment. These algorithms are based on traditional and modern search algorithms that enable a single mobile robot to search an area autonomously. The best performing algorithms from the single robot case are then adapted to a multi-robot case. The mathematical model presented in the thesis describes the dynamics and kinematics of a four-wheeled mobile ground-based robot. The model is developed to allow the design and testing of control algorithms offline. With the model and accompanying simulation the search algorithms can be quickly and repeatedly tested without practical installation. The mathematical model is used as the basis of design for the manoeuvring control algorithm and the search algorithms. This design process is based on simulation studies. In the first instance the control methods investigated are Proportional-Integral-Derivative, Pole Placement and Sliding Mode. Each method is compared using the tracking error, the steady-state error, the rise time, the charge drawn from the battery and the ability to control the robot through a simple motion. Obstacle avoidance is also covered as part of the manoeuvring control algorithm. The final aspect investigated is the search algorithms. The following search algorithms are investigated: Lawnmower, Random, Hill Climbing, Simulated Annealing and Genetic Algorithms. Variations on these algorithms are also investigated. The variations are based on Tabu Search. Each of the algorithms is investigated in a single robot case, with the best performing investigated within a multi-robot case. A comparison between the different methods is made based on the percentage of the area covered within the time available, the number of targets located and the time taken to locate targets. It is shown that in the single robot case the best performing algorithms have a high random element and some structure in how points are selected. Within the multi-robot case it is shown that some algorithms work well and others do not. It is also shown that the usable number of robots is dependent on the size of the environment. This thesis concludes with a discussion on the best control and search algorithms, as indicated by the results, for guiding single and multiple autonomous mobile robots. The advantages of the methods are presented, as are the issues with using the methods stated. Suggestions for further work are also presented.
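As an illustration of how one of the named search algorithms can generate target points for a robot, the sketch below implements a basic simulated annealing loop over a 2D area with a hypothetical sensor score function; it is not the implementation used in the thesis, and the cooling schedule, step size and "survivor" location are invented.

```python
import math
import random

def simulated_annealing_targets(score, bounds, steps=200, step_size=2.0, t0=1.0):
    """Generate a sequence of target points by simulated annealing.

    `score(x, y)` is a hypothetical sensor reading (higher near a survivor).
    Worse moves are accepted with probability exp(delta / T), with T cooling
    geometrically, so early exploration is broad and later moves are greedy."""
    (xmin, xmax), (ymin, ymax) = bounds
    x, y = random.uniform(xmin, xmax), random.uniform(ymin, ymax)
    current, targets = score(x, y), [(x, y)]
    for k in range(steps):
        t = t0 * 0.95 ** k
        nx = min(max(x + random.gauss(0, step_size), xmin), xmax)
        ny = min(max(y + random.gauss(0, step_size), ymin), ymax)
        s = score(nx, ny)
        if s > current or random.random() < math.exp((s - current) / max(t, 1e-9)):
            x, y, current = nx, ny, s
            targets.append((x, y))
    return targets

# Toy score: a single 'survivor' at (12, 7) in a 20 m x 20 m area
survivor = (12.0, 7.0)
score = lambda x, y: -math.hypot(x - survivor[0], y - survivor[1])
waypoints = simulated_annealing_targets(score, ((0, 20), (0, 20)))
print(len(waypoints), waypoints[-1])
```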
28

Yanson, Dan Andreyevitch. „Generation of terahertz-modulated optical signals using AlGaAs/GaAs laser diodes“. Thesis, University of Glasgow, 2004. http://theses.gla.ac.uk/2837/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
The Thesis reports on the research activities carried out under the Semiconductor-Laser Terahertz-Frequency Converters Project at the Department of Electronics and Electrical Engineering, University of Glasgow. The Thesis presents the work leading to the demonstration of reproducible harmonic modelocked operation from a novel design of monolithic semiconductor laser, comprising a compound cavity formed by a 1-D photonic-bandgap (PBG) mirror. Modelocking was achieved at a harmonic of the fundamental round-trip frequency with pulse repetition rates from 131 GHz up to a record-high frequency of 2.1 THz. The devices were fabricated from GaAs/AlGaAs material emitting at a wavelength of 860 nm and incorporated two gain sections with an etched PBG reflector between them, and a saturable absorber section. Autocorrelation studies are reported, which allow the device behaviour for different modelocking frequencies, compound cavity ratios, and type and number of intra-cavity reflectors to be analysed. The highly reflective PBG microstructures are shown to be essential for subharmonic-free modelocking operation of the high-frequency devices. It was also demonstrated that the multi-slot PBG reflector can be replaced with two separate slots of smaller reflectivity. Some work was also done on the realisation of a dual-wavelength source using a broad-area laser diode in an external grating-loaded cavity. However, the source failed to deliver the spectrally narrow lines required for optical heterodyning applications. Photomixer devices incorporating a terahertz antenna for optical-to-microwave down-conversion were fabricated; however, no down-conversion experiments were attempted. Finally, novel device designs are proposed that exploit the remarkable spectral and modelocking properties of compound-cavity lasers. The ultrafast laser diodes demonstrated in this Project can be developed for applications in terahertz imaging, medicine, ultrafast optical links and atmospheric sensing.
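The reported repetition rates follow from harmonics of the cavity's fundamental round-trip frequency, f_rep = m·c/(2·n_g·L); the group index and cavity length below are only nominal values chosen to show the order of magnitude, not the device parameters of the thesis.

```python
c = 2.998e8          # speed of light, m/s

def repetition_rate(length_m, group_index, harmonic=1):
    """Pulse repetition rate of a mode-locked Fabry-Perot cavity (Hz)."""
    return harmonic * c / (2.0 * group_index * length_m)

# Nominal numbers: a ~1 mm GaAs/AlGaAs cavity with group index ~3.7 (assumed),
# giving a fundamental near 40 GHz and a 50th harmonic around 2 THz.
f0 = repetition_rate(1.0e-3, 3.7)
print(f"fundamental ~{f0/1e9:.0f} GHz; 50th harmonic ~{repetition_rate(1.0e-3, 3.7, 50)/1e12:.1f} THz")
```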
29

Mitra, Bhargav Kumar. „Scene segmentation using similarity, motion and depth based cues“. Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2480/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Segmentation of complex scenes to aid surveillance is still considered an open research problem. In this thesis a computational model (CM) has been developed to classify a scene into foreground, moving-shadow and background regions. It has been demonstrated how the CM, with the optional use of a channel ratio test, can be applied to demarcate foreground shadow regions in indoor scenes illuminated by a fixed incandescent source of light. A combined approach, involving the CM working in tandem with a traditional motion cue based segmentation method, has also been constructed. In the combined approach, the CM is applied to segregate the foreground shaded regions in a current frame based on a binary mask generated using a standard background subtraction process (BSP). Various popular outlier detection strategies have been investigated to assess their suitability for automatically generating the threshold required to develop a binary mask from a difference frame, the outcome of the BSP. To evaluate the full scope of the pixel labeling capabilities of the CM and to estimate the associated time constraints, the model is deployed for foreground scene segmentation in recorded real-life video streams. The observations made validate the satisfactory performance of the model in most cases. In the second part of the thesis depth based cues have been exploited to perform the task of foreground scene segmentation. An active structured light based depth-estimating arrangement has been modeled in the thesis; the choice of modeling an active system over a passive stereo vision one has been made to alleviate some of the difficulties associated with the classical correspondence problem. The model developed not only facilitates use of the set-up but also makes possible a method to increase the working volume of the system without explicitly encoding the projected structured pattern. Finally, it is explained how scene segmentation can be accomplished based solely on the structured pattern disparity information, without generating explicit depth maps. To de-noise the difference frames generated using the developed method, two median filtering schemes have been implemented. The working of one of the schemes is advocated for practical use and is described in terms of discrete morphological operators, thus facilitating hardware realisation of the method to speed up the de-noising process.
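A minimal sketch of the kind of background subtraction process with an automatically chosen threshold that the combined approach relies on, here using a median/MAD outlier rule as one of several possible strategies; the 3-sigma factor and the toy frames are assumptions, not the choices made in the thesis.

```python
import numpy as np

def binary_mask(frame, background, k=3.0):
    """Foreground mask from a difference frame using a median/MAD outlier rule.

    The threshold is set automatically: pixels whose absolute difference from
    the background deviates from the median difference by more than k robust
    standard deviations (MAD * 1.4826) are labelled foreground."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    med = np.median(diff)
    mad = np.median(np.abs(diff - med)) * 1.4826
    return diff > med + k * max(mad, 1e-6)

# Toy example: static background plus camera noise and a bright moving square
rng = np.random.default_rng(0)
bg = 0.2 + 0.01 * rng.standard_normal((120, 160))
fr = bg + 0.01 * rng.standard_normal(bg.shape)
fr[40:60, 70:90] += 0.5
print(binary_mask(fr, bg).sum(), "foreground pixels")  # mostly the 20x20 square
```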
30

Liu, Qing. „Antennas using left handed transmission lines“. Thesis, University of Birmingham, 2010. http://etheses.bham.ac.uk//id/eprint/595/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
The research described in this thesis is concerned with the analysis and design of conventional wire antenna types, dipoles and loops, based on the left-handed transmission line approach. Left-handed antennas have the unique feature that the wavelength of the induced current becomes shorter with decreasing frequency. The left-handed transmission line concept can be extended to construct reduced-size dipole or loop antennas in the VHF frequency band. The use of higher order modes allows orthogonal polarisation to be obtained, which is thought to be a feature unique to these antennas. Efficiency is a key parameter of left-handed antennas as the heavy left-handed loading increases the resistive loss. A study of the efficiency of small dipole antennas loaded with a left-handed transmission line is described in detail, together with a comparison against conventional inductively loaded dipoles. In a low-order mode, the efficiency of the L-loaded dipole is better when the number of unit cells is low. As the number of cells increases, CL-loading presents comparable and even better performance. In a high-order mode the meandered left-handed dipole gives the best efficiency due to the phase distribution, presenting orthogonal polarisation as well. The optimised dipole loaded with parallel plate capacitors and spiral inductors presents the best performance in impedance and efficiency, even better than the conventional inductive loading. A planar loop antenna using a ladder network of left-handed loading is also presented. Various modes can be obtained in the left-handed loop antenna. The zero-order mode gives rise to omnidirectional patterns in the plane of the loop, with good efficiency. By loading the loop with active components, varactors, a tunable left-handed loop antenna with a switchable radiation pattern is implemented. The loop gives an omnidirectional pattern with a null along the z axis while working in an n = 0 mode and can switch to a pattern with a null at phi = 45° in the plane of the loop in an n = 2 mode.
31

Roper, Simon Edward. „A room acoustics measurement system using non-invasive microphone arrays“. Thesis, University of Birmingham, 2010. http://etheses.bham.ac.uk//id/eprint/891/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
This thesis summarises research into adaptive room correction for small rooms and pre-recorded material, for example music or films. A measurement system to predict the sound at a remote location within a room, without a microphone at that location, was investigated. This would allow the sound within a room to be adaptively manipulated to ensure that all listeners received optimum sound, therefore increasing their enjoyment. The solution presented used small microphone arrays, mounted on the room's walls. A unique geometry and processing system was designed, incorporating three processing stages: temporal, spatial and spectral. The temporal processing identifies individual reflection arrival times from the recorded data. Spatial processing estimates the angles of arrival of the reflections so that the three-dimensional coordinates of the reflections' origins can be calculated. The spectral processing then estimates the frequency response of each reflection. These estimates allow a mathematical model of the room to be calculated, based on the acoustic measurements made in the actual room. The model can then be used to predict the sound at different locations within the room. A simulated model of a room was produced to allow fast development of algorithms. Measurements in real rooms were then conducted and analysed to verify the theoretical models developed and to aid further development of the system. Results from these measurements and simulations, for each processing stage, are presented.
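As a simplified illustration of estimating an angle of arrival from inter-microphone delay (the spatial-processing idea described above, reduced to a two-microphone case with an assumed spacing and sample rate):

```python
import numpy as np

def angle_of_arrival(sig_a, sig_b, fs, spacing, c=343.0):
    """Estimate the arrival angle (degrees from broadside) of a plane wave at
    two microphones `spacing` metres apart, from the cross-correlation peak."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # samples by which sig_a lags sig_b
    tau = lag / fs
    s = np.clip(c * tau / spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Toy test: a click delayed by 3 samples between 10 cm-spaced microphones at 48 kHz
fs, d = 48_000, 0.10
pulse = np.r_[np.zeros(100), np.hanning(32), np.zeros(100)]
a = np.r_[np.zeros(3), pulse][: len(pulse)]    # mic A hears the click 3 samples later
print(round(angle_of_arrival(a, pulse, fs, d), 1))   # about 12.4 degrees
```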
32

Adebomehin, Akeem A. „Ultrawideband IEEE802.15.4a cognitive localization methods for the 5G environment“. Thesis, University of Essex, 2017. http://repository.essex.ac.uk/20006/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
This thesis focuses on the utilization of ultra-wideband (UWB) technology for cognitive localization in the fifth generation (5G) wireless environment, which envisages seamless global connection of ubiquitous devices. This suggests the need for cognitive, high-definition, location-aware networks and devices free of the drawbacks of current positioning systems. The thesis therefore models a cognitive UWB IEEE802.15.4a LOS-sufficient technique (ULOSTECH), with a framework for an optimal UWB localization channel that utilizes a combined cluster decay rate and mistiming probability method and achieves over 90% realizations. Moreover, the ULOSTECH NLOS mitigation method achieves an improvement ratio of about 0.257 on the accuracy of cellular network localization methods. An impulse radio (IR)-UWB device-to-device (D2D) WWAN is further proposed with channel time partitioned into discrete micro-channel slots (DMCS), along with a cluster formation scheme that achieves above 350 Mbps network throughput in comparison with the 100 Mbps cellular and 250 Mbps Wi-Fi standards respectively. Additionally, the cluster cooperation method achieves a multi-user access rate over 485% above cellular network standards. Also proposed is the ULOSTECH D2D-propagation-based combined localization and communication scheme (UD-CLOCS) for ultra-dense networks. This utilizes a cooperative D2D data-hopping localization technique that achieves a mean distance error 0.54 – 3.32 shorter than trilateration and multi-dimensional scaling (MDS) methods respectively. Finally, the thesis proposes an overall IR-UWB network layout for the 5G setting. This comprises an all-IP D2D UWB network overlay of concurrent multi-layered super-core architecture (5G-COMUSA). This is significant as the proposed solutions could serve to decongest the licensed spectrum in the 5G environment.
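The comparison above is made against trilateration; for reference, a minimal linear least-squares trilateration sketch from anchor positions and measured ranges (the anchor layout, tag position and noise level are invented):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2D position from three or more anchors and measured ranges.

    Linearises the range equations by subtracting the first one:
      2(xi - x1) x + 2(yi - y1) y = (xi^2 + yi^2 - ri^2) - (x1^2 + y1^2 - r1^2)."""
    a = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    A = 2 * (a[1:] - a[0])
    b = (np.sum(a[1:]**2, axis=1) - r[1:]**2) - (np.sum(a[0]**2) - r[0]**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four UWB anchors at the corners of a 10 m x 10 m room, true tag at (3, 7)
anchors = [(0, 0), (10, 0), (10, 10), (0, 10)]
true = np.array([3.0, 7.0])
rng = np.random.default_rng(2)
ranges = np.linalg.norm(np.asarray(anchors) - true, axis=1) + 0.05 * rng.standard_normal(4)
print(trilaterate(anchors, ranges).round(2))
```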
33

Ponder, Christopher John. „A generic computer platform for efficient iris recognition“. Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6780/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
This document presents the work carried out for the purposes of completing the Engineering Doctorate (EngD) program at the Institute for System Level Integration (iSLI), which was a partnership between the universities of Edinburgh, Glasgow, Heriot-Watt and Strathclyde. The EngD is normally undertaken with an industrial sponsor, but due to a set of unforeseen circumstances this was not the case for this work. However, the work was still undertaken to the same standards as would be expected by an industrial sponsor. An individual's biometrics include fingerprint, palm-print, retinal, iris and speech patterns. Even the way people move and sign their name has been shown to be uniquely associated with that individual. This work focuses on the recognition of an individual's iris patterns. The results reported in the literature are often presented in such a manner that direct comparison between methods is difficult. There is also minimal code resource and no tool available to help simplify the process of developing iris recognition algorithms, so individual developers are required to write the necessary software almost every time. Finally, segmentation performance is currently only measurable using manual evaluation, which is time-consuming and prone to human error. This thesis presents a completely novel generic platform for developing, testing and evaluating iris recognition algorithms, designed to simplify that process. Existing open-source algorithms are integrated into the generic platform and are evaluated using the results it produces. Three iris recognition segmentation algorithms and one normalisation algorithm are proposed. Three of the algorithms increased true match recognition performance by between two and 45 percentage points when compared to the available open-source algorithms and methods found in the literature. A matching algorithm was developed that significantly speeds up the process of analysing the results of encoding. Lastly, this work also proposes a method of automatically evaluating the performance of segmentation algorithms, so minimising the need for manual evaluation.
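Iris codes are conventionally compared with a masked, rotation-compensated Hamming distance (in the style of Daugman); the sketch below illustrates that idea on synthetic codes and is not the matching algorithm developed in the thesis.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes, counting
    only bits that are valid (unoccluded) in both masks."""
    valid = mask_a & mask_b
    n = valid.sum()
    return np.count_nonzero((code_a ^ code_b) & valid) / n if n else 1.0

def match_score(code_a, code_b, mask_a, mask_b, max_shift=8):
    """Best (lowest) distance over circular bit shifts, compensating eye rotation."""
    return min(
        hamming_distance(np.roll(code_a, s, axis=1), code_b,
                         np.roll(mask_a, s, axis=1), mask_b)
        for s in range(-max_shift, max_shift + 1)
    )

rng = np.random.default_rng(3)
code = rng.integers(0, 2, (16, 256)).astype(bool)
mask = np.ones_like(code, dtype=bool)
noisy = code ^ (rng.random(code.shape) < 0.05)    # same eye, 5% bit noise
other = rng.integers(0, 2, code.shape).astype(bool)
print(match_score(code, noisy, mask, mask), match_score(code, other, mask, mask))
```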
34

Cao, Menglin. „The development of silicon compatible processes for HEMT realisation“. Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6803/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
Compound semiconductor (III-V) devices are crucially important in a range of RF/microwave applications. High Electron Mobility Transistors (HEMTs), as the best low-noise, high-frequency compound semiconductor devices, have been utilised in various applications at microwave and mm-wave frequencies such as communications, imaging, sensing and power. However, silicon based manufacturing will always be the heart of the semiconductor industry. III-V devices are conventionally fabricated using gold-based metallisation and lift-off processes, which are incompatible with silicon manufacturing processes based on blanket metal or dielectric deposition and subtractive patterning by dry etching techniques. Therefore, the challenge is to develop silicon compatible processes for the realisation of compound semiconductor devices, whilst not compromising the device performance. In this work, silicon compatible processes for HEMT realisation have been developed, including the demonstration of a copper-based T-gate with a normalised DC resistance of 42 Ω/mm, and the presentation of a gate-first process flow which can incorporate the copper-based T-gate. The copper electroplating process for fabricating the T-gate head with a maximum width of 2.5 µm, the low-damage inductively coupled plasma molybdenum etching process for realising the T-gate foot with a minimum footprint of 30 nm, and the full gate-first process flow with non-annealed ohmic contacts are described in detail. In addition, this thesis also describes the fabrication and characterisation of a 60 nm footprint gold-based T-gate HEMT realised by conventional III-V processes, yielding a cutoff frequency fT of 183 GHz and a maximum oscillation frequency fmax of 156 GHz. In the comparison between these two types of HEMT, it is anticipated that a HEMT with the copper-based T-gate would not only have a higher maximum frequency of oscillation fmax than a HEMT with the gold-based T-gate, but would also be easier to incorporate into a silicon-based manufacturing fab in terms of process technologies.
35

Meriggi, Laura. „Antimonide-based mid-infrared light-emitting diodes for low-power optical gas sensors“. Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6691/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
The 3-5 μm mid-infrared spectral region is of great interest as it contains the fundamental molecular fingerprints of a number of pollutants and toxic gases, which require remote real-time monitoring in a variety of applications. Consequently, the development of efficient optoelectronic devices operating in this wavelength range is a fascinating and pertinent area of research. In recent years, there has been a rapid development of optical technologies for the detection of carbon dioxide (CO2), where the detected optical intensity at the specific gas absorption wavelength of 4.26 μm is a direct indication of the gas concentration; the main applications are in indoor air quality control and ventilation systems. The replacement of conventional infrared thermal components with high-performance semiconductor light-emitting diodes (LEDs) and photodiodes in the 3-5 μm range makes it possible to obtain sensors with similar sensitivity, but with intrinsic wavelength selectivity, reduced power consumption and faster response. Gas Sensing Solutions Ltd. has developed a commercial CO2 optical gas sensor equipped with an AlInSb-based LED and photodiode pair, which has demonstrated a significant reduction in the energy consumption per measurement. The aim of this Ph.D. project, supported by an EPSRC Industrial CASE Studentship, was to improve the performance of mid-infrared AlInSb LEDs. This was achieved through the optimisation of the layer structure and the device design, and the application of different techniques to overcome the poor extraction efficiency (~ 1 %) which limits the LED performance as a consequence of total internal reflection and Fresnel reflection. A key understanding was gained of the electrical and optical properties of AlInSb LEDs through the characterisation of the epi-grown material and the fabrication of prototype devices. Improved LED performance, with a lower series resistance and stronger light emission, was achieved thanks to the analysis of a number of LED design parameters, including the doping concentration of the contact layers, the LED lateral dimensions and the electrode contact geometry. A Resonant-Cavity LED structure was designed, with the integration of an epitaxially-grown distributed Bragg reflector between the substrate and the LED active region. The advantage of this design is twofold, as it both redirects the light emitted towards the substrate back in the direction of the top LED surface and adds a resonant effect to the structure, resulting in a three-times higher extraction efficiency at the target wavelength of 4.26 μm, spectral narrowing and improved temperature stability. Finally, 2D-periodic metallic hole array patterns were integrated on AlInSb LEDs, showing potential advantages for spectral filtering and enhanced extraction of light emitted above the critical angle.
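The ~1 % extraction efficiency quoted above follows from the small escape cone of a high-index semiconductor; a back-of-envelope check, assuming a refractive index of about 3.9 for AlInSb at 4.26 μm (an assumed value, not taken from the thesis):

```python
import math

def single_surface_extraction(n_semi, n_out=1.0):
    """Fraction of isotropically emitted photons escaping through one planar face:
    the solid-angle fraction inside the critical-angle cone, reduced by the
    normal-incidence Fresnel transmission (used as an approximation for the whole cone)."""
    theta_c = math.asin(n_out / n_semi)
    cone_fraction = (1.0 - math.cos(theta_c)) / 2.0
    fresnel_t = 1.0 - ((n_semi - n_out) / (n_semi + n_out)) ** 2
    return cone_fraction * fresnel_t

print(f"{single_surface_extraction(3.9):.3%}")   # ~1%, consistent with the figure above
```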
36

Yang, Shuming. „A novel chip interferometry system for online surface measurements“. Thesis, University of Huddersfield, 2009. http://eprints.hud.ac.uk/id/eprint/6311/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
This thesis documents the development of a novel chip interferometry system using advanced microtechnology and optical methodologies. This is the first time that this type of system has been reported in surface metrology. The system is designed to be compact, robust and stable even though it does not involve noise compensation or feedback control. Compared to currently available techniques, this system has great potential for on-line surface measurements. The system is based on a Michelson interferometer and a wavelength-scanning method. Considering fabrication capability and cost, the system consists of two parts, an optical chip and an optical probe. The optical chip is the main focus of this research and it integrates a tuneable laser, a directional coupler, an optical isolator and a photodetector. The research approach for the optical chip is to use a planar silica motherboard for the passive circuitry, onto which are assembled silicon daughterboards containing the different chip components that are to be integrated. The theory, methodology and techniques for these individual chip components are explored. The optical probe is used to collimate, diffract and focus the light beam for surface scanning. The individual chip components and the optical probe are used to build an experimental interferometry system. Initial surface measurements with this system have been carried out. The experimental results provide substantive evidence that the chip interferometry system is feasible for on-line surface measurements.
37

Bingham, Mark. „An interest point based illumination condition matching approach to photometric registration within augmented reality worlds“. Thesis, University of Huddersfield, 2011. http://eprints.hud.ac.uk/id/eprint/11048/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
With recent and continued increases in computing power, and advances in the field of computer graphics, realistic augmented reality environments can now offer inexpensive and powerful solutions in a whole range of training, simulation and leisure applications. One key challenge to maintaining convincing augmentation, and therefore user immersion, is ensuring consistent illumination conditions between virtual and real environments, so that objects appear to be lit by the same light sources. This research demonstrates how real-world lighting conditions can be determined from the two-dimensional view of the user. Virtual objects can then be illuminated and virtual shadows cast using these conditions. This new technique uses pairs of interest points from real objects and the shadows that they cast, viewed from a binocular perspective, to determine the position of the illuminant. This research has been initially focused on single point light sources in order to show the potential of the technique and has investigated the relationships between the many parameters of the vision system. Optimal conditions have been discovered by mapping the results of experimentally varying parameters such as FoV, camera angle and pose, image resolution, aspect ratio and illuminant distance. The technique is able to provide increased robustness where greater resolution imagery is used. Under optimal conditions it is possible to derive the position of a real world light source with low average error. An investigation of available literature has revealed that other techniques can be inflexible, slow, or disrupt scene realism. This technique is able to locate and track a moving illuminant within an unconstrained, dynamic world without the use of artificial calibration objects that would disrupt scene realism. The technique operates in real time as the new algorithms are of low computational complexity. This allows high frame rates to be maintained within augmented reality applications. Illuminant updates occur several times a second on an average to high-end desktop computer. Future work will investigate the automatic identification and selection of pairs of interest points and the exploration of global illuminant conditions. The latter will include an analysis of more complex scenes and the consideration of multiple and varied light sources.
38

Cryan, R. A. „Communication systems“. Thesis, University of Huddersfield, 1999. http://eprints.hud.ac.uk/id/eprint/7477/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Naeem, Muhammad Azhar. „Monolithically integrated polarisation mode convertor with a semiconductor laser“. Thesis, University of Glasgow, 2013. http://theses.gla.ac.uk/4419/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
In this thesis, the design, optimisation, fabrication and operation of waveguide-based semiconductor lasers, integrated with polarisation mode convertors (PMCs), are described. Devices are fabricated in the GaAs/AlGaAs and InP/AlGaInAs material systems, using two types of structure: a single PMC and back-to-back PMCs. The convertor designs are based upon air trenches, of sub-wavelength dimensions, being introduced into waveguide structures in order to achieve an asymmetric cross-sectional profile, resulting in wave-plate functionality. The GaAs/AlGaAs PMCs are fabricated using reactive ion etching (RIE), and the phenomenon of RIE lag is also exploited to obtain the required asymmetric waveguide profile in a single etch step. These are then integrated with semiconductor lasers. The InP/AlGaInAs PMCs are fabricated using a combination of RIE and inductively coupled plasma (ICP) etching and are integrated with semiconductor lasers and also differential phase shifter (DPS) sections to realise devices with active polarisation control. Integrated devices fabricated in the InP/AlGaInAs material system with a semiconductor laser, a PMC and a following DPS section yield ~40 % polarisation mode conversion whilst the DPS section is held at the transparency condition. Greater than 85 % polarisation mode conversion was also obtained with back-to-back PMCs, complementing the devices fabricated with a single PMC. Furthermore, the first active polarisation controller monolithically integrated with a semiconductor laser is reported. High-speed modulation of the integrated device at 300 Mbps is also demonstrated via current injection into the phase shifter section of the device.
40

Oreshkin, Boris. „Distributed information fusion in sensor networks“. Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86916.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
This thesis addresses the problem of design and analysis of distributed in-network signal processing algorithms for efficient aggregation and fusion of information in wireless sensor networks. The distributed in-network signal processing algorithms alleviate a number of drawbacks of the centralized fusion approach. The single point of failure, complex routing protocols, uneven power consumption in sensor nodes, inefficient wireless channel utilization, and poor scalability are among these drawbacks. These drawbacks of the centralized approach lead to reduced network lifetime, poor robustness to node failures, and reduced network capacity. The distributed algorithms alleviate these issues by using simple pairwise message exchange protocols and localized in-network processing. However, for such algorithms the accuracy losses and/or the time required to complete a particular fusion task may be significant. The design and analysis of fast and accurate distributed algorithms with guaranteed performance characteristics is thus important. In this thesis two specific problems associated with the analysis and design of such distributed algorithms are addressed.
For the distributed average consensus algorithm, a memory-based acceleration methodology is proposed. The convergence of the proposed methodology is investigated. For two important settings of this methodology, optimal values of the system parameters are determined and the improvement with respect to the standard distributed average consensus algorithm is characterized theoretically. The theoretical characterization of the improvement matches the results of numerical experiments well, revealing a significant and well-scaling gain. A practical distributed on-line initialization scheme is devised. Numerical experiments reveal the feasibility of the proposed initialization scheme and the superior performance of the proposed methodology with respect to several existing acceleration approaches.
For the collaborative signal and information processing methodology, a number of theoretical performance guarantees are obtained. The collaborative signal and information processing framework consists of activating only a cluster of wireless sensors to perform the target tracking task at the cluster head using a particle filter. The optimal cluster is determined at every time instant and cluster-head hand-off is performed if necessary. To reduce communication costs, only an approximation of the filtering distribution is sent during hand-off, resulting in additional approximation errors. Time-uniform performance guarantees accounting for the additional errors are obtained in two settings: subsample approximation and parametric mixture approximation hand-off.
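A minimal sketch of distributed average consensus with Metropolis weights on a ring, plus a simple one-step memory term as a stand-in for memory-based acceleration; the mixing parameter here is hand-picked, and this is not the optimised scheme analysed in the thesis.

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic mixing matrix from an undirected adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def consensus(x0, W, iters=300, beta=0.0):
    """x(t+1) = (1+beta) W x(t) - beta x(t-1): beta=0 is plain consensus,
    beta>0 adds a one-step memory term that can speed up convergence."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x, x_prev = (1 + beta) * (W @ x) - beta * x_prev, x
    return x

rng = np.random.default_rng(4)
n = 30
adj = np.zeros((n, n), int)
for i in range(n):                         # ring topology
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
W = metropolis_weights(adj)
x0 = rng.standard_normal(n)
for beta in (0.0, 0.6):
    err = np.abs(consensus(x0, W, 300, beta) - x0.mean()).max()
    print(f"beta={beta}: max error {err:.2e}")
```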
41

Meghjani, Malika. „Bimodal information analysis for emotion recognition“. Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86954.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
We present an audio-visual information analysis system for automatic emotion recognition. We propose an approach for the analysis of video sequences which combines facial expressions observed visually with acoustic features to automatically recognize five universal emotion classes: Anger, Disgust, Happiness, Sadness and Surprise. The visual component of our system evaluates the facial expressions using a bank of 20 Gabor filters that spatially sample the images. The audio analysis is based on global statistics of voice pitch and intensity along with the temporal features like speech rate and Mel Frequency Cepstrum Coefficients. We combine the two modalities at feature and score level to compare the respective joint emotion recognition rates. The emotions are instantaneously classified using a Support Vector Machine and the temporal inference is drawn based on scores obtained as the output of the classifier. This approach is validated on a posed audio-visual database and a natural interactive database to test the robustness of our algorithm. The experiments performed on these databases provide encouraging results with the best combined recognition rate being 82%.
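A minimal sketch of the visual pipeline described above, a small Gabor filter bank feeding an SVM, run on synthetic two-class data; the filter bank size, parameters and data are placeholders rather than the 20-filter bank and databases used in the thesis.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.svm import SVC

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real (cosine) Gabor kernel with spatial frequency `freq` (cycles/pixel)
    oriented at angle `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(image, freqs=(0.1, 0.2), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean absolute response of each filter in the bank -> one feature vector."""
    return np.array([np.abs(convolve2d(image, gabor_kernel(f, t), mode="same")).mean()
                     for f in freqs for t in thetas])

# Synthetic stand-in for two expression classes: oriented texture vs. flat noise
rng = np.random.default_rng(5)
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        img = 0.1 * rng.standard_normal((32, 32))
        if label:
            img += np.sin(np.arange(32) * 0.2 * 2 * np.pi)[None, :]
        X.append(gabor_features(img))
        y.append(label)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```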
42

Gao, Qiang 1964. „Noise reduction techniques for holographic information storage“. Diss., The University of Arizona, 1998. http://hdl.handle.net/10150/282620.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
The effects of wavefront conditioning on the performance of holographic optical data storage systems are investigated. The physical origins of various noise mechanisms which degrade the SNR of holographic storage are studied for thin phase (DCG) and photorefractive crystal (LiNbO₃) recording materials. The dependence of the noise on various system parameters, such as focal length, pixel size and number of pixels, and on material parameters is studied. An algorithm is developed to design pseudorandom phase masks which can improve the signal-to-noise ratio for a given system. Noise reduction using a pseudorandom phase mask and a Galilean configuration is investigated theoretically and experimentally. A significant improvement in the signal-to-noise ratio of holographic storage systems is demonstrated experimentally.
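As an illustration of why a pseudorandom phase mask helps, the sketch below compares the Fourier-plane peak-to-mean intensity of a binary data page with and without a uniform random phase mask; the mask here is plain random rather than the optimised design produced by the dissertation's algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
page = rng.integers(0, 2, (256, 256)).astype(float)     # binary data page (SLM pixels)

def fourier_peak_to_mean(field):
    """Ratio of peak to mean intensity in the Fourier (recording) plane."""
    spectrum = np.abs(np.fft.fft2(field)) ** 2
    return spectrum.max() / spectrum.mean()

plain = fourier_peak_to_mean(page)
mask = np.exp(1j * 2 * np.pi * rng.random(page.shape))  # pseudorandom phase mask
masked = fourier_peak_to_mean(page * mask)
print(f"peak/mean without mask: {plain:.0f}, with mask: {masked:.0f}")
```

Without the mask, the DC term dominates the Fourier plane by several orders of magnitude; the random phase spreads the energy and removes the extreme peak, which is the intuition behind mask-based noise reduction.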
43

Mendonça, Tiago José Ferreira. „Electronics Based Pumping Group Controller Board“. Dissertação, 2018. https://repositorio-aberto.up.pt/handle/10216/113856.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
At CERN, particle experiments take place inside vessels that must remain under ultra-high vacuum conditions to avoid interactions between air molecules and the circulating beams. To achieve this low pressure, mobile pumping groups are used, spread across several spots along the accelerator. These groups are composed of valves and pumps. The group's operation is regulated by a dedicated control process currently running on a PLC. The purpose of this thesis is to evaluate the reliability and feasibility of replacing the PLC with an alternative solution. The proposed architecture must ensure an electrical interface with the devices and support the other features inherent to the PLC, namely the Profibus network. To satisfy these requirements, an approach based on a single-board computer, namely the BeagleBone platform, is proposed. The process is implemented on a soft-PLC following the IEC 61131 standard, and the core unit is integrated on a purpose-designed electrical board which guarantees isolation and the electrical interface with all the physical equipment. Regarding Profibus, an open-source stack previously developed in an older project was studied, together with its integration into the soft-PLC runtime environment, the CODESYS runtime.
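The pumping-group control process itself is not detailed in the abstract; the sketch below is a generic, hypothetical pump-down interlock of the kind a soft-PLC scan loop might run, with all states, thresholds and output names invented for illustration.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto(); ROUGHING = auto(); HIGH_VACUUM = auto(); FAULT = auto()

ROUGH_THRESHOLD_MBAR = 1e-1     # hypothetical cross-over pressure
FAULT_THRESHOLD_MBAR = 900.0    # hypothetical "vented" pressure

def scan(state, pressure_mbar, outputs):
    """One PLC-style scan cycle: update actuator outputs from the current state
    and the measured pressure, then return the next state."""
    if state is State.HIGH_VACUUM and pressure_mbar > FAULT_THRESHOLD_MBAR:
        outputs.update(primary_pump=False, turbo_pump=False, isolation_valve=False)
        return State.FAULT
    if state is State.IDLE:
        outputs.update(primary_pump=True, turbo_pump=False, isolation_valve=False)
        return State.ROUGHING
    if state is State.ROUGHING and pressure_mbar < ROUGH_THRESHOLD_MBAR:
        outputs.update(isolation_valve=True, turbo_pump=True)
        return State.HIGH_VACUUM
    return state

# Simulated pump-down: pressure falls by a decade per scan
outputs, state, p = {}, State.IDLE, 1000.0
for _ in range(8):
    state = scan(state, p, outputs)
    print(f"p={p:10.3g} mbar -> {state.name:12s} {outputs}")
    p /= 10.0
```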
44

Mendonça, Tiago José Ferreira. „Electronics Based Pumping Group Controller Board“. Master's thesis, 2018. https://repositorio-aberto.up.pt/handle/10216/113856.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
At CERN, particle experiments take place inside vessels that must remain under ultra-high vacuum conditions to avoid interactions between air molecules and the circulating beams. To achieve this low pressure, mobile pumping groups are used, spread across several spots along the accelerator. These groups are composed of valves and pumps. The group's operation is regulated by a dedicated control process currently running on a PLC. The purpose of this thesis is to evaluate the reliability and feasibility of replacing the PLC with an alternative solution. The proposed architecture must ensure an electrical interface with the devices and support the other features inherent to the PLC, namely the Profibus network. To satisfy these requirements, an approach based on a single-board computer, namely the BeagleBone platform, is proposed. The process is implemented on a soft-PLC following the IEC 61131 standard, and the core unit is integrated on a purpose-designed electrical board which guarantees isolation and the electrical interface with all the physical equipment. Regarding Profibus, an open-source stack previously developed in an older project was studied, together with its integration into the soft-PLC runtime environment, the CODESYS runtime.
45

Pelayo, Guilherme José Esteves. „Printed Electronics Power Supply for IoT Systems“. Dissertação, 2020. https://hdl.handle.net/10216/132431.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
CeNTI - Center for Nanotechnology and Smart Materials is an R&D institute located in Famalicão. The main goal of the institute is to promote research, technological development, innovation and engineering activities, with a special focus on smart materials and systems. The Internet of Things is the concept by which multiple devices are connected and perform activities with each other, such as communication and processing, without human interference. The development of technologies such as Artificial Intelligence, Machine Learning, smart homes and smart devices has accelerated their convergence into networks. At the edge of the system, these networks possess devices that are meant to make the connection between cyber environments and physical ones. These devices are often referred to as edge devices. They are most often low-power devices used to sense aspects of the physical system, and their dimensions can be a defining factor in deciding whether such a system is adequate for its function. In sensors, the most common forms of power supply are batteries, mains electricity with a transformer for voltage division and isolation, or a combination of both. With the increasing need for miniaturization and the technological means to achieve it, the investigation of novel forms becomes more and more relevant. The main objective of this thesis is to investigate the use of printed electronics, and in particular printed inductors, to obtain an efficient and safe power supply adequate for human handling while reducing the final volume of the system. The approach intends to use the traditional transformerless power supply circuit configuration, using capacitors to drop the mains voltage. The goal is to prevent the undesired power expenditure caused by the introduction of resistances. Besides the voltage drop and rectification, the other major concern of the system is the safety of the human operator who may touch the device. Dropping and rectifying grid voltage directly is an extremely dangerous circuit configuration because of the differing ground references, and it creates an electrocution hazard both to living beings and to devices connected to it. The way to circumvent this danger is to introduce galvanic isolation. The system proposed in this project physically separates the output of the system from the input, with energy transferred magnetically. For that purpose, printed inductors are stacked to form a planar air-core transformer. The system aims to contribute to the continued miniaturization of edge devices, which will become progressively more present in everyday life.
In recent times, connected devices have become increasingly common. Such devices, usually referred to as IoT (Internet of Things) devices, are converging towards ever smaller builds. This document aims to deepen progress in the field of miniaturization of such devices. To achieve this goal a power supply is designed. The project intends to offer an alternative to common power supplies by making use of printed inductors. These components are intended to replace the traditional transformer by supplying a reduced-volume alternative. An investigation into these inductors is conducted and an implementation of their use is presented. The investigation led to the conclusion that the inductors may be used to provide isolation, but further improvements to the fabrication process are required. Because the current fabrication process uses impure silver as the conductor, the resulting coils have an excessively high resistance. This makes it difficult to create the magnetic field and introduces a high level of losses. To address this problem, the presented implementation uses high-frequency switching to achieve better results on the receiver side of the system.
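The "traditional transformerless power supply" mentioned above drops the mains voltage across a series capacitor, whose reactance fixes the current available to the load; a quick calculation with illustrative component values makes the limitation clear.

```python
import math

def dropper_current_ma(c_farads, v_rms=230.0, f_hz=50.0):
    """Approximate RMS current through a series capacitive dropper, assuming the
    load voltage is negligible compared with the mains voltage."""
    xc = 1.0 / (2 * math.pi * f_hz * c_farads)   # capacitive reactance, ohms
    return 1000.0 * v_rms / xc

for c_nf in (100, 330, 1000):                    # typical X2 capacitor values (assumed)
    print(f"{c_nf:5d} nF -> about {dropper_current_ma(c_nf * 1e-9):5.1f} mA")
```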
46

Pelayo, Guilherme José Esteves. „Printed Electronics Power Supply for IoT Systems“. Master's thesis, 2020. https://hdl.handle.net/10216/132431.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
CeNTI - Center for Nanotechnology and Smart Materials is an R&D institute located in Famalicão. The main goal of the institute is to promote research, technological development, innovation and engineering activities, with a special focus on smart materials and systems. The Internet of Things is the concept by which multiple devices are connected and perform activities with each other, such as communication and processing, without human interference. The development of technologies such as Artificial Intelligence, Machine Learning, smart homes and smart devices has accelerated their convergence into networks. At the edge of the system, these networks possess devices that are meant to make the connection between cyber environments and physical ones. These devices are often referred to as edge devices. They are most often low-power devices used to sense aspects of the physical system, and their dimensions can be a defining factor in deciding whether such a system is adequate for its function. In sensors, the most common forms of power supply are batteries, mains electricity with a transformer for voltage division and isolation, or a combination of both. With the increasing need for miniaturization and the technological means to achieve it, the investigation of novel forms becomes more and more relevant. The main objective of this thesis is to investigate the use of printed electronics, and in particular printed inductors, to obtain an efficient and safe power supply adequate for human handling while reducing the final volume of the system. The approach intends to use the traditional transformerless power supply circuit configuration, using capacitors to drop the mains voltage. The goal is to prevent the undesired power expenditure caused by the introduction of resistances. Besides the voltage drop and rectification, the other major concern of the system is the safety of the human operator who may touch the device. Dropping and rectifying grid voltage directly is an extremely dangerous circuit configuration because of the differing ground references, and it creates an electrocution hazard both to living beings and to devices connected to it. The way to circumvent this danger is to introduce galvanic isolation. The system proposed in this project physically separates the output of the system from the input, with energy transferred magnetically. For that purpose, printed inductors are stacked to form a planar air-core transformer. The system aims to contribute to the continued miniaturization of edge devices, which will become progressively more present in everyday life.
In recent times, connected devices have become increasingly common. Such devices, usually referred to as IoT (Internet of Things) devices, are converging towards ever smaller builds. This document aims to deepen progress in the field of miniaturization of such devices. To achieve this goal a power supply is designed. The project intends to offer an alternative to common power supplies by making use of printed inductors. These components are intended to replace the traditional transformer by supplying a reduced-volume alternative. An investigation into these inductors is conducted and an implementation of their use is presented. The investigation led to the conclusion that the inductors may be used to provide isolation, but further improvements to the fabrication process are required. Because the current fabrication process uses impure silver as the conductor, the resulting coils have an excessively high resistance. This makes it difficult to create the magnetic field and introduces a high level of losses. To address this problem, the presented implementation uses high-frequency switching to achieve better results on the receiver side of the system.
47

Sousa, Duarte Fleming Oliveira de. „Human Sensing and Indoor Location: From coarse to fine detection algorithms based on consumer electronics RF mapping“. Dissertação, 2017. https://repositorio-aberto.up.pt/handle/10216/103010.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
After the full architecture for capturing the Wi-Fi data had been defined, software was developed to create a database containing position and signal-strength information. Initially, a preliminary localization algorithm was developed. Subsequently, a classification algorithm was implemented to map the data received from the sensors to the data stored previously. Tests were carried out using these algorithms and the results were compared. The document is currently at the writing stage.
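A minimal sketch of the kind of classification step described, mapping a live RSSI reading to previously stored positions with a nearest-neighbour rule; the access points, calibration points and readings below are invented, and this is not the algorithm developed in the thesis.

```python
import numpy as np

def locate(live_rssi, fingerprint_db, k=3):
    """Return the position estimate closest to the live RSSI reading: the
    average position of the k nearest database entries in RSSI space."""
    rssi = np.array([row["rssi"] for row in fingerprint_db], float)
    pos = np.array([row["pos"] for row in fingerprint_db], float)
    d = np.linalg.norm(rssi - np.asarray(live_rssi, float), axis=1)
    nearest = np.argsort(d)[:k]
    return pos[nearest].mean(axis=0)

# Invented database: RSSI (dBm) from three access points at four calibration points
db = [
    {"pos": (0, 0), "rssi": (-40, -70, -75)},
    {"pos": (5, 0), "rssi": (-55, -55, -72)},
    {"pos": (0, 5), "rssi": (-58, -73, -52)},
    {"pos": (5, 5), "rssi": (-66, -60, -50)},
]
print(locate((-56, -57, -70), db, k=1))   # expect the calibration point (5, 0)
```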
48

Sousa, Duarte Fleming Oliveira de. „Human Sensing and Indoor Location: From coarse to fine detection algorithms based on consumer electronics RF mapping“. Master's thesis, 2017. https://hdl.handle.net/10216/103010.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
Annotation:
After the full architecture for capturing the Wi-Fi data had been defined, software was developed to create a database containing position and signal-strength information. Initially, a preliminary localization algorithm was developed. Subsequently, a classification algorithm was implemented to map the data received from the sensors to the data stored previously. Tests were carried out using these algorithms and the results were compared. The document is currently at the writing stage.
49

Berezovskyi, Kostiantyn. „Timing Analysis of General Purpose Graphics Processing Units for Real-Time Systems: Models and Analyses“. Tese, 2016. https://repositorio-aberto.up.pt/handle/10216/83814.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Berezovskyi, Kostiantyn. „Timing Analysis of General Purpose Graphics Processing Units for Real-Time Systems: Models and Analyses“. Doctoral thesis, 2016. https://repositorio-aberto.up.pt/handle/10216/83814.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen

Zur Bibliographie