Academic literature on the topic 'Sensor fusion and obstacle detection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sensor fusion and obstacle detection.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Sensor fusion and obstacle detection"

1

Gálvez del Postigo Fernández, Carlos. "Grid-Based Multi-Sensor Fusion for On-Road Obstacle Detection: Application to Autonomous Driving." Thesis, KTH, Datorseende och robotik, CVAP, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173316.

Abstract:
Self-driving cars have recently become a challenging research topic, with the aim of making transportation safer and more efficient. Current advanced driving assistance systems (ADAS) allow cars to drive autonomously by following lane markings, identifying road signs and detecting pedestrians and other vehicles. In this thesis work we improve the robustness of autonomous cars by designing an on-road obstacle detection system. The proposed solution consists of the low-level fusion of radar and lidar through the occupancy grid framework. Two inference theories are implemented and evaluated: Bayesian probability theory and the Dempster-Shafer theory of evidence. Obstacle detection is performed through image processing of the occupancy grid. Finally, the additional features of Dempster-Shafer theory are leveraged by proposing a sensor performance estimation module and performing advanced conflict management. The work has been carried out at Volvo Car Corporation, where real experiments on a test vehicle have been performed under different environmental conditions and with different types of objects. The system has been evaluated according to the quality of the resulting occupancy grids, the detection rate, and the information content in terms of entropy. The results show a significant improvement of the detection rate over single-sensor approaches. Furthermore, the Dempster-Shafer implementation may slightly outperform the Bayesian one when there is conflicting information, although its high computational cost limits its practical application. Finally, we demonstrate that the proposed solution is easily scalable to include additional sensors.
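
A minimal sketch of the Bayesian branch of the kind of grid fusion described above, assuming made-up inverse sensor model probabilities for the radar and the lidar; this is an illustration, not the thesis implementation.

```python
import numpy as np

# Assumed inverse sensor models: P(cell occupied | detection / no detection).
P_HIT = {"radar": 0.7, "lidar": 0.9}
P_MISS = {"radar": 0.4, "lidar": 0.2}

def log_odds(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    def __init__(self, shape=(100, 100)):
        self.L = np.zeros(shape)          # log-odds; 0.0 means P(occupied) = 0.5

    def update(self, sensor, cells, detected):
        """Fuse one radar or lidar observation into the listed grid cells."""
        p = P_HIT[sensor] if detected else P_MISS[sensor]
        for i, j in cells:
            self.L[i, j] += log_odds(p)   # the Bayesian update is additive in log-odds

    def probability(self):
        return 1.0 - 1.0 / (1.0 + np.exp(self.L))

grid = OccupancyGrid()
grid.update("radar", [(50, 50)], detected=True)
grid.update("lidar", [(50, 50)], detected=True)    # agreeing sensors raise P(occupied)
grid.update("lidar", [(10, 10)], detected=False)   # free-space evidence lowers it
print(grid.probability()[50, 50], grid.probability()[10, 10])
```

The Dempster-Shafer variant would instead keep a mass per cell over {occupied}, {free} and the ignorance set, combined with Dempster's rule (a sketch of that rule is given under source 5 below).
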
2

Luppi, Alessandro. "Park Assist Optimization by Sensor Fusion Strategy: Development and Model-in-the-loop Validation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Abstract:
The Rear Park Assist feature is an ADAS (Advanced Driver Assistance System) function intended to warn the driver when an obstacle that could collide with the rear bumper is present behind a car moving in reverse. A Value Optimization project has been completed, with the support of Maserati S.p.A., to study the feasibility of a reconfiguration of the Rear Park Assist feature. On current vehicles on the market, the Rear Park Assist function is based on ultrasonic sensors, but the availability of a rear-view camera and corner radars on some Maserati vehicle models may permit the feature to work as intended while relying on a sensor fusion strategy, which combines data from the two sensor types – camera and radars – and brings several benefits from the elimination of the ultrasonic sensors. To achieve this goal and obtain the foreseen advantages, a design stage, following a preliminary feasibility analysis, produced algorithms for computer vision, radar target processing, and sensor fusion that allow the detection of obstacles as intended by the feature. The validation phase, which followed the development stage, made it possible to test the devised system against the expected performance documented in a related validation standard. The developed feature satisfied most of the requirements, although some undesired results remained, mainly related to wrongly detected obstacles. An enhancement process, accompanied by on-vehicle calibrations and tests, might finally allow the Rear Park Assist system based on the rear-view camera and corner radars to be implemented on series vehicles.
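
As a rough illustration of the kind of camera-radar fusion the thesis describes, the sketch below merges hypothetical rear-view-camera and corner-radar detections into a single nearest-obstacle distance and warning level. The confidence gate and distance thresholds are invented for the example and are not the Maserati calibration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float    # distance from the rear bumper
    confidence: float    # detector score in [0, 1]
    source: str          # "camera" or "radar"

def fuse_rear_detections(camera, radar, min_conf=0.3):
    """Keep confident detections from either sensor and warn on the closest one
    (assumed logic, for illustration only)."""
    confirmed = [d for d in camera + radar if d.confidence >= min_conf]
    if not confirmed:
        return None, "no warning"
    nearest = min(confirmed, key=lambda d: d.distance_m)
    if nearest.distance_m < 0.4:
        level = "continuous tone"
    elif nearest.distance_m < 1.0:
        level = "fast beeping"
    else:
        level = "slow beeping"
    return nearest, level

camera_dets = [Detection(0.9, 0.8, "camera")]
radar_dets = [Detection(0.85, 0.6, "radar"), Detection(3.0, 0.9, "radar")]
print(fuse_rear_detections(camera_dets, radar_dets))
```

A fuller implementation would also associate detections between the two sensors and track them over time before warning, which is where most wrongly detected obstacles would be filtered out.
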
3

Vandi, Damiano. "ADAS Value Optimization for Rear Park Assist: Improvement and Assessment of Sensor Fusion Strategy." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
The project for this thesis consists of an ADAS Value Optimization activity conducted during an internship at Maserati S.p.A., with the objective of removing the ultrasonic sensors used for the Rear Park Assist (RPA) ADAS feature while obtaining the same functionality and performance in detecting and signaling obstacles behind the car, through a new system based on a sensor fusion strategy between the Rear View Camera (RVC) and the Blind Spot Radars (BSD). To achieve this goal, a study of the current RPA feature has been conducted and, starting from a previous implementation of the sensor fusion strategy for the new system, multiple updates and improvements have been implemented in order to achieve the required functionality and performance. Both hardware and software components of the system were updated and redesigned in the MATLAB/Simulink environment, and the final system was tested through a standard validation procedure in a virtual simulation environment, obtaining encouraging results compatible with the RPA requirements and demonstrating the technical and economic feasibility of the developed RPA system based on a sensor fusion strategy between RVC and BSD, which, after additional tests on the actual vehicle, could go into production.
4

Rosero, Luis Alberto Rosero. "Detecção de obstáculos usando fusão de dados de percepção 3D e radar em veículos automotivos." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17042017-145443/.

Abstract:
This master's project aims to research and develop methods and algorithms related to the use of radar, computer vision, calibration, and sensor data fusion in autonomous/intelligent vehicles to detect obstacles. The obstacle detection process is divided into three stages: the first is the reading of radar and LiDAR signals and the capture of data from a properly calibrated stereo camera; the second is the fusion of the data obtained in the previous stage (radar + camera, radar + 3D LiDAR); the third is the extraction of features from the information obtained, identifying and differentiating the support plane (ground) from the obstacles, and finally detecting the obstacles resulting from the data fusion. In this way it is possible to differentiate the various types of elements identified by the radar, which are confirmed and merged with the data obtained by computer vision or LiDAR (point clouds), yielding a more precise description of their contour, shape, size, and position. In the detection task it is important to locate and segment the obstacles in order to later make decisions concerning the control of the autonomous/intelligent vehicle. It is important to note that the radar operates in adverse conditions (little or no light, dust, or fog) but provides only isolated, sparse points representing the obstacles. The stereo camera and the 3D LiDAR, on the other hand, can define the contours of objects and represent their volume more adequately, although the camera is more susceptible to variations in lighting and to restrictive environmental and visibility conditions (e.g. dust, fog, rain). Before the fusion step it is also important to spatially align the sensor data, i.e. to calibrate the sensors appropriately so that data referenced in one sensor's coordinate system can be transformed into another sensor's coordinate system or into a global coordinate system. This project was developed on the CaRINA II platform of the LRM Laboratory at ICMC/USP São Carlos and was implemented using ROS, OpenCV, and PCL, allowing experiments with real radar, LiDAR, and stereo camera data, as well as an evaluation of the quality of the data fusion and of the obstacle detection with these sensors.
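
The abstract stresses that the sensor data must be spatially aligned before fusion. A minimal sketch of that step is shown below: sparse radar returns expressed in the radar frame are mapped into the LiDAR frame with a rigid-body transform. The rotation and translation values are placeholders, not the actual CaRINA II calibration.

```python
import numpy as np

def make_transform(yaw_rad, tx, ty, tz):
    """Homogeneous 4x4 transform from the radar frame to the LiDAR frame
    (illustrative values only; a real calibration procedure estimates these)."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

def radar_to_lidar(points_radar, T):
    """points_radar: (N, 3) array of radar returns in the radar frame."""
    homogeneous = np.hstack([points_radar, np.ones((len(points_radar), 1))])
    return (T @ homogeneous.T).T[:, :3]

T_radar_lidar = make_transform(yaw_rad=0.02, tx=1.5, ty=0.0, tz=-0.3)
radar_points = np.array([[10.0, -1.2, 0.0], [25.0, 3.4, 0.0]])
print(radar_to_lidar(radar_points, T_radar_lidar))  # points now share the LiDAR frame
```

Once all returns share a common frame they can be associated with the LiDAR point cloud or the stereo disparity map, which is the confirmation step described in the abstract.
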
5

Utino, Vítor Manha. "Fusão de informações obtidas a partir de múltiplas imagens visando à navegação autônoma de veículos inteligentes em ambiente agrícola." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17102016-165459/.

Abstract:
This work presents a system to support the autonomous navigation of ground vehicles, with a focus on structured environments in an agricultural scenario. The positions of obstacles are estimated based on the fusion of detections obtained by processing the data from two cameras, one stereo and one thermal. Three obstacle detection modules were developed. The first uses monocular images from the stereo camera to detect novelties in the environment by comparing the current state with the previous state. The second uses the Stixel technique to delimit the obstacles above the ground plane. Finally, the third uses the thermal images to find signatures that reveal the presence of an obstacle. The detection modules are fused using Dempster-Shafer theory, which provides an estimate of the presence of obstacles in the environment. The experiments were carried out in a real agricultural environment. The system was validated in well-lit scenarios with uneven terrain and diverse obstacles. It showed satisfactory performance considering the use of an approach based on only three detection modules whose methods do not prioritize the confirmation of obstacles but rather the search for new ones. This dissertation presents the main components of an obstacle detection system and the steps necessary for its design, as well as results of experiments using a real vehicle.
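
The Dempster-Shafer fusion of the three modules can be sketched as follows: each detector assigns a mass to {obstacle}, {free} and the ignorance set {obstacle, free}, and Dempster's rule combines them. The mass values below are invented for illustration and are not taken from the thesis.

```python
def dempster_combine(m1, m2):
    """Dempster's rule on the frame {obstacle, free}. Masses are dicts over the
    focal sets 'obs', 'free' and 'unknown' (the whole frame)."""
    sets = {"obs": {"obs"}, "free": {"free"}, "unknown": {"obs", "free"}}
    combined = {"obs": 0.0, "free": 0.0, "unknown": 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = sets[a] & sets[b]
            if not inter:
                conflict += ma * mb              # contradictory evidence
            elif inter == {"obs", "free"}:
                combined["unknown"] += ma * mb
            else:
                combined[inter.pop()] += ma * mb
    k = 1.0 - conflict                            # normalisation in Dempster's rule
    return {key: value / k for key, value in combined.items()}

# Hypothetical outputs of the three modules (novelty, stixel, thermal).
novelty = {"obs": 0.5, "free": 0.1, "unknown": 0.4}
stixel = {"obs": 0.6, "free": 0.2, "unknown": 0.2}
thermal = {"obs": 0.3, "free": 0.1, "unknown": 0.6}

fused = dempster_combine(dempster_combine(novelty, stixel), thermal)
print(fused)   # the mass on 'obs' grows as independent modules agree
```
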
6

Doyle, Rory Stephen. "Neurofuzzy multi-sensor data fusion for helicopter obstacle avoidance." Thesis, University of Southampton, 1997. https://eprints.soton.ac.uk/250033/.

Abstract:
Hazardous weather conditions significantly limit the operational capability of civil helicopters. This limitation arises from the crew's inability to determine the location of obstacles in the environment by sight. In order to assist the crew in these circumstances a range of equipment and sensors may be installed in the helicopter. However, with multiple sensors on board, the problem of efficiently assimilating the large amount of imagery and data available generates a significant workload. A reduction of the workload may be achieved by the automation of this assimilation (sensor fusion) and the design of a system to guide the pilot along obstacle-free paths. In order to provide the guidance to avoid obstacles a system must have knowledge about the obstacles' possible positions and likely future positions relative to the system's own aircraft. Since the information being provided by the sensors will not be perfect (i.e. it will have some uncertainty associated with it), and since the process model, which must be used to predict any future positions, will also be uncertain, the required positions must be estimated. As the dynamics of moving obstacles will be a priori unknown, it will be necessary to learn process models for them. The dynamics of the obstacles cannot be guaranteed to be linear, therefore these process models must be capable of reflecting this non-linear behaviour. The uncertain information produced by the various sensors will be related to the required knowledge about the obstacles by a sensor model; however, this relationship need not be linear, and may even have to be learned. Currently used estimation techniques (e.g. the ordinary extended Kalman filter) are inadequate for estimating the uncertainty involved in the obstacles' positions for the highly non-linear processes under consideration. Neural network approaches to non-linear estimation have recently allowed process and sensor models to be learned (sometimes implicitly); however, these approaches have been quite ad hoc in their implementation and have been even more negligent in the estimation of uncertainty. The main contributions of this research are the design of non-linear estimators which may use process and sensor models that result from learning processes, and the use of the output of these estimators to determine guidance for obstacle-free paths through the environment in three dimensions.
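
For context, the recursive predict/update structure that the thesis builds on (and argues is insufficient in its ordinary Kalman form for strongly non-linear obstacle dynamics) looks roughly like the linear constant-velocity filter below; the noise levels are assumed and the example is purely illustrative.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state: [x, y, vx, vy] of one obstacle
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # the sensor measures position only
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(2) * 0.5                         # measurement noise (assumed)

def kalman_step(x, P, z):
    # predict the obstacle state forward one time step
    x = F @ x
    P = F @ P @ F.T + Q
    # correct with the new position measurement z
    innovation = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in [np.array([10.0, 2.0]), np.array([10.4, 2.1]), np.array([10.9, 2.2])]:
    x, P = kalman_step(x, P, z)
print(x)   # estimated position and velocity of the obstacle
```

In the thesis, the fixed process and sensor models (F and H here) are instead learned non-linear models, with the uncertainty estimation adapted to suit.
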
7

Yang, Fucheng. "Noncoherent fusion detection in wireless sensor networks." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/360402/.

Abstract:
The main motivation of this thesis is to design low-complexity, high-efficiency noncoherent fusion rules for parallel triple-layer wireless sensor networks (WSNs) based on frequency-hopping M-ary frequency shift keying (FH/MFSK) techniques, which are hence referred to as FH/MFSK WSNs. The FH/MFSK WSNs may be employed to monitor single or multiple source events (SEs), with each SE having multiple states. In the FH/MFSK WSNs, local decisions made by local sensor nodes (LSNs) are transmitted to a fusion center (FC) with the aid of FH/MFSK techniques. At the FC, various noncoherent fusion rules may be suggested for final detection (classification) of the SEs' states. Specifically, in the context of the FH/MFSK WSNs monitoring a single M-ary SE, three noncoherent fusion rules are considered for fusion detection, which include the benchmark equal gain combining (EGC), the proposed erasure-supported EGC (ES-EGC), and the optimum posterior fusion rules. Our studies demonstrate that the ES-EGC fusion rule may significantly outperform the EGC fusion rule in the cases where the LSNs' detection is unreliable and the channel signal-to-noise ratio (SNR) is relatively high. For the FH/MFSK WSNs monitoring multiple SEs, six noncoherent fusion rules are investigated, which include the EGC, ES-EGC, EGC-assisted N-order IIC (EGC-NIIC), ES-EGC-assisted N-order IIC (ES-EGC-NIIC), EGC-assisted r-order IIC (EGC-rIIC) and the ES-EGC-assisted r-order IIC (ES-EGC-rIIC). The complexity, characteristics, and detection performance of these fusion rules are investigated. Our studies show that the ES-EGC-related fusion rules are highly efficient: they have similar complexity to the corresponding EGC-related fusion rules but usually achieve better detection performance. Although the ES-EGC is a single-user fusion rule, it is capable of mitigating the multiple event interference (MEI) generated by multiple SEs. Furthermore, in some of the considered fusion rules, the embedded parameters may be optimized for the FH/MFSK WSNs to achieve the best detection performance. As soft-sensing is often more reliable than hard-sensing, in this thesis the FH/MFSK WSNs with the LSNs using soft-sensing are investigated in association with the EGC and ES-EGC fusion rules. Our studies reveal that the ES-EGC becomes highly efficient when the sensing at the LSNs is not very reliable. Furthermore, as one of the applications, our FH/MFSK WSN is applied to cognitive spectrum sensing of a primary radio (PR) system constituted by the interleaved frequency-division multiple access (IFDMA) scheme, which supports multiple uplink users. In association with our cognitive spectrum sensing system, three types of energy-detection-based sensing schemes are addressed, and four synchronization scenarios are considered to embrace the synchronization between the received PR IFDMA signals and the sampling operations at cognitive spectrum sensing nodes (CRSNs). The performance of the FH/MFSK WSN assisted spectrum sensing system with the EGC or ES-EGC fusion rule is investigated. Our studies show that the proposed spectrum sensing system constitutes a highly reliable spectrum sensing scheme, which is capable of exploiting the space diversity provided by the CRSNs and the frequency diversity provided by the IFDMA systems. Finally, the thesis summarises our discoveries and discusses possible future research issues.
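
A toy numerical illustration of the equal gain combining (EGC) idea: the fusion center sums the per-tone energies reported by the local sensor nodes and declares the source-event state whose tone carries the largest combined energy. The signal and noise figures are arbitrary, and the sketch ignores the frequency hopping and the erasure step of ES-EGC.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4            # number of SE states, i.e. MFSK tones
num_nodes = 8    # local sensor nodes reporting to the fusion center
true_state = 2   # state assumed active in this simulation

# Each node contributes an energy per tone: background noise on every tone plus
# extra signal energy on the tone matching its (possibly wrong) local decision.
energies = rng.exponential(scale=1.0, size=(num_nodes, M))
for n in range(num_nodes):
    local_decision = true_state if rng.random() < 0.7 else int(rng.integers(M))
    energies[n, local_decision] += 4.0

# Equal gain combining: sum the energies across nodes, pick the strongest tone.
combined = energies.sum(axis=0)
print(combined, "-> detected state", int(np.argmax(combined)))
```

The erasure-supported variant (ES-EGC) would, as its name suggests, additionally discard contributions judged unreliable before combining, which is broadly why it helps when the local detection is poor.
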
8

Abyarjoo, Fatemeh. "Sensor Fusion for Effective Hand Motion Detection." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2215.

9

Höjer, Vidar, and Alexander Sundberg. "Active Dampening: Servo controlled suspension with infrared sensor obstacle detection." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264455.

Abstract:
The purpose of this project was to create an active dampening suspension for a small prototype that was able to detect obstacles using infrared sensors. The suspension system consisted of servomotors that controlled the angle of a leg on which a wheel was mounted. The infrared distance sensor measured the height of obstacles, and the necessary raising of the suspension was calculated on an Arduino Uno microcontroller board. It was concluded that the constructed system of suspension and obstacle detection was inadequate: most subsystems worked, but not the system as a whole.
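
The height-from-distance geometry described above might, in minimal form, look like the sketch below. The mounting height, sensor tilt and leg length are invented values, and the original ran as Arduino C code rather than Python.

```python
import math

SENSOR_HEIGHT_M = 0.10                 # assumed mounting height of the IR sensor
SENSOR_TILT_RAD = math.radians(25)     # assumed downward tilt of the beam
LEG_LENGTH_M = 0.08                    # assumed length of the servo-driven leg

def obstacle_height(ir_distance_m):
    """Height of whatever the IR beam hit; 0.0 when the beam reaches flat ground."""
    flat_ground_distance = SENSOR_HEIGHT_M / math.sin(SENSOR_TILT_RAD)
    if ir_distance_m >= flat_ground_distance:
        return 0.0
    return SENSOR_HEIGHT_M - ir_distance_m * math.sin(SENSOR_TILT_RAD)

def servo_angle_for_lift(lift_m):
    """Leg angle (from horizontal) needed to raise the chassis by lift_m."""
    lift_m = min(lift_m, LEG_LENGTH_M)
    return math.degrees(math.asin(lift_m / LEG_LENGTH_M))

reading = 0.15   # metres reported by the IR distance sensor
h = obstacle_height(reading)
print(f"obstacle height {h:.3f} m -> servo angle {servo_angle_for_lift(h):.1f} deg")
```
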
10

Rouhani, Shahin. "Radar and Thermopile Sensor Fusion for Pedestrian Detection." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-115.

Abstract:
During the last decades, great steps have been taken to decrease passenger fatalities in cars. Systems such as ABS and airbags have been developed for this purpose alone, but not much effort has been put into pedestrian safety. In traffic today, pedestrians are among the most endangered participants, and in recent years there has been an increased demand for pedestrian safety from the European Enhanced Vehicle-safety Committee; the European New Car Assessment Programme has thereby developed tests in which pedestrian safety is rated. With this, detection of pedestrians has arisen as a part of automotive safety research.

This thesis surveys some of the research available in the area and gives a brief introduction to some of the sensors readily available. The objective of this work is to detect pedestrians in front of a vehicle by using thermoelectric infrared sensors fused with short-range radar sensors, and to minimize missed detections and false alarms. There has already been extensive work performed with the thermoelectric infrared sensors for this sole purpose, and this thesis is based on that work.

Information is provided about the sensors used and an explanation of how they are set up during this work. Methods used for classifying objects are given, along with the assumptions made about pedestrians in this system. A basic tracking algorithm is used to track radar-detected objects in order to provide the fusion system with better data. The approach chosen for the sensor fusion is a central-level fusion, where the probabilities for a pedestrian from the radars and the thermoelectric infrared sensors are combined using Dempster-Shafer theory and accumulated over time in the occupancy grid framework. Theories that are extensively used in this thesis are explained in detail and discussed in the corresponding chapters.

Finally, the experiments undertaken and the results obtained from the presented system are shown. A comparison is made with the previous detection system, which uses only thermoelectric infrared sensors and on which this work builds. Conclusions are drawn regarding what this system is capable of, with its inherent strengths and weaknesses.
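
The basic radar tracking step mentioned above could, in minimal form, be a gated nearest-neighbour association between existing tracks and new radar detections, as in the sketch below; the gate size is an invented value and the code is a generic illustration rather than the system described in the thesis.

```python
import numpy as np

GATE_M = 2.0   # assumed association gate in metres

def associate(tracks, detections):
    """Greedy nearest-neighbour association of radar detections to tracks.
    tracks, detections: lists of (x, y) positions. Returns (pairs, unmatched)."""
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        dists = [np.hypot(t[0] - d[0], t[1] - d[1]) if di not in used else np.inf
                 for di, d in enumerate(detections)]
        if dists and min(dists) < GATE_M:
            di = int(np.argmin(dists))
            pairs.append((ti, di))
            used.add(di)
    unmatched = [di for di in range(len(detections)) if di not in used]
    return pairs, unmatched

tracks = [(5.0, 0.5), (12.0, -1.0)]
detections = [(5.3, 0.4), (20.0, 2.0)]
print(associate(tracks, detections))   # detection 0 updates track 0; detection 1 starts a new track
```

In the thesis, such tracked radar objects provide better data to the fusion stage, where the per-sensor pedestrian probabilities are combined with Dempster-Shafer theory over the occupancy grid.
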