Academic literature on the topic 'Event-based cameras'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Event-based cameras.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Event-based cameras"

1

Gehrig, Daniel, and Davide Scaramuzza. "Low-latency automotive vision with event cameras." Nature 629, no. 8014 (2024): 1034–40. http://dx.doi.org/10.1038/s41586-024-07409-w.

Abstract:
The computer vision algorithms used currently in advanced driver assistance systems rely on image-based RGB cameras, leading to a critical bandwidth–latency trade-off for delivering safe driving experiences. To address this, event cameras have emerged as alternative vision sensors. Event cameras measure the changes in intensity asynchronously, offering high temporal resolution and sparsity, markedly reducing bandwidth and latency requirements. Despite these advantages, event-camera-based algorithms are either highly efficient but lag behind image-based ones in terms of accuracy or sacrifice the sparsity and efficiency of events to achieve comparable results. To overcome this, here we propose a hybrid event- and frame-based object detector that preserves the advantages of each modality and thus does not suffer from this trade-off. Our method exploits the high temporal resolution and sparsity of events and the rich but low temporal resolution information in standard images to generate efficient, high-rate object detections, reducing perceptual and computational latency. We show that the use of a 20 frames per second (fps) RGB camera plus an event camera can achieve the same latency as a 5,000-fps camera with the bandwidth of a 45-fps camera without compromising accuracy. Our approach paves the way for efficient and robust perception in edge-case scenarios by uncovering the potential of event cameras.
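
For readers new to the modality: an event camera does not output frames at all; it emits an asynchronous stream of (x, y, timestamp, polarity) tuples, the address-event representation. The sketch below is not the paper's hybrid detector, only a minimal illustration of how such a stream can be accumulated into a dense frame for a conventional frame-based pipeline; the toy stream and image size are made up.

```python
import numpy as np

# Minimal sketch of the address-event representation (AER): each event is
# (x, y, timestamp_us, polarity). Accumulating signed polarities yields a
# dense frame that a conventional detector could consume.

def events_to_frame(events, height, width):
    """Accumulate signed event polarities into a single 2D frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        frame[y, x] += 1 if p > 0 else -1
    return frame

# Hypothetical toy stream: three events with microsecond timestamps.
stream = [(10, 5, 1_000, 1), (10, 5, 1_250, 1), (3, 7, 1_900, 0)]
print(events_to_frame(stream, height=16, width=16)[5, 10])  # -> 2
```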
2

Tang, Sichao, Hengyi Lv, Yuchen Zhao, Yang Feng, Hailong Liu, and Guoling Bi. "Denoising Method Based on Salient Region Recognition for the Spatiotemporal Event Stream." Sensors 23, no. 15 (2023): 6655. http://dx.doi.org/10.3390/s23156655.

Abstract:
Event cameras, also known as dynamic vision sensors, are emerging bio-mimetic sensors with microsecond-level responsiveness. Due to the inherent sensitivity of event camera hardware to light sources and interference from various external factors, various types of noise are inevitably present in the camera's output. This noise can degrade the camera's perception of events and the performance of algorithms for processing event streams. Moreover, since the output of event cameras is in the form of address-event representation, efficient denoising methods for traditional frame images are no longer applicable. Most existing denoising methods for event cameras target background activity noise and sometimes remove real events as noise. Furthermore, these methods are ineffective in handling noise generated by high-frequency flickering light sources and changes in diffuse light reflection. To address these issues, we propose an event stream denoising method based on salient region recognition. This method can effectively remove conventional background activity noise as well as irregular noise caused by diffuse reflection and flickering light source changes without significantly losing real events. Additionally, we introduce an evaluation metric that can be used to assess the noise removal efficacy and the preservation of real events for various denoising methods.
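
The paper's salient-region algorithm is not reproduced here. As a reference point for what it improves upon, the sketch below shows the conventional spatiotemporal-correlation background-activity filter that the abstract contrasts with: an event survives only if a neighbouring pixel fired recently, so isolated events are discarded as noise. The window length is an assumed value.

```python
import numpy as np

# Conventional background-activity (BA) filter, the baseline such denoising
# papers compare against (not the paper's own method). An event is kept only
# if some pixel in its 3x3 neighbourhood fired within the last `dt`
# microseconds; isolated events are treated as noise.

def ba_filter(events, height, width, dt=5000):
    last_ts = np.full((height, width), -np.inf)  # latest event time per pixel
    kept = []
    for x, y, t, p in events:
        y0, y1 = max(0, y - 1), min(height, y + 2)
        x0, x1 = max(0, x - 1), min(width, x + 2)
        if (t - last_ts[y0:y1, x0:x1] <= dt).any():  # recent nearby activity?
            kept.append((x, y, t, p))
        last_ts[y, x] = t
    return kept
```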
3

Zhou, Xiaoli, and Chao Bei. "Backlight and dim space object detection based on a novel event camera." PeerJ Computer Science 10 (July 12, 2024): e2192. http://dx.doi.org/10.7717/peerj-cs.2192.

Abstract:
Background: For space object detection tasks, conventional optical cameras face various application challenges, including backlight issues and dim light conditions. As a novel optical camera, the event camera has the advantages of high temporal resolution and high dynamic range due to its asynchronous output characteristics, which provides a new solution to the above challenges. However, the asynchronous output characteristic of event cameras makes them incompatible with conventional object detection methods designed for frame images. Methods: An asynchronous convolutional memory network (ACMNet) for processing event camera data is proposed to solve the problem of backlight and dim space object detection. The key idea of ACMNet is to first characterize the asynchronous event streams with the Event Spike Tensor (EST) voxel grid through an exponential kernel function, then extract spatial features using a feed-forward feature extraction network, aggregate temporal features using a proposed convolutional spatiotemporal memory module based on ConvLSTM, and finally realize end-to-end object detection on continuous event streams. Results: Comparison experiments between ACMNet and classical object detection methods are carried out on Event_DVS_space7, a large-scale synthetic space event dataset based on event cameras. The results show that the performance of ACMNet is superior to the others, and the mAP is improved by 12.7% while maintaining the processing speed. Moreover, event cameras still perform well in backlight and dim light conditions where conventional optical cameras fail. This research offers a novel possibility for detection under intricate lighting and motion conditions, emphasizing the superior benefits of event cameras in the realm of space object detection.
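
The EST voxel grid mentioned in the abstract converts the asynchronous stream into a dense tensor that feed-forward networks can process. A minimal sketch of that conversion follows, with events binned into temporal slices and weighted by an exponential kernel; the bin count and decay constant are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of an Event Spike Tensor (EST)-style voxel grid: events are binned
# into `bins` temporal slices and weighted with an exponential kernel around
# each bin centre. Bin count and decay constant are assumed values.

def event_voxel_grid(events, height, width, bins=5, tau=0.1):
    grid = np.zeros((bins, height, width), dtype=np.float32)
    ts = np.array([t for _, _, t, _ in events], dtype=np.float64)
    t0, t1 = ts.min(), ts.max()
    norm = (ts - t0) / max(t1 - t0, 1e-9)            # timestamps -> [0, 1]
    for (x, y, _, p), tn in zip(events, norm):
        b = min(int(tn * bins), bins - 1)            # temporal bin index
        centre = (b + 0.5) / bins                    # bin centre in [0, 1]
        weight = np.exp(-abs(tn - centre) / tau)     # exponential kernel
        grid[b, y, x] += weight if p > 0 else -weight
    return grid
```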
4

Furmonas, Justas, John Liobe, and Vaidotas Barzdenas. "Analytical Review of Event-Based Camera Depth Estimation Methods and Systems." Sensors 22, no. 3 (2022): 1201. http://dx.doi.org/10.3390/s22031201.

Abstract:
Event-based cameras have increasingly become more commonplace in the commercial space as the performance of these cameras has also continued to increase to the degree where they can exponentially outperform their frame-based counterparts in many applications. However, instantiations of event-based cameras for depth estimation are sparse. After a short introduction detailing the salient differences and features of an event-based camera compared to that of a traditional, frame-based one, this work summarizes the published event-based methods and systems known to date. An analytical review of these methods and systems is performed, justifying the conclusions drawn. This work is concluded with insights and recommendations for further development in the field of event-based camera depth estimation.
5

Rajamanickam, Kuppuraj, and Yannis Hardalupas. "Time-Resolved Imaging of Wavy Interface in the Primary Atomisation Region of an Air Assist Atomiser Using an Event-Based Camera." Proceedings of the International Symposium on the Application of Laser and Imaging Techniques to Fluid Mechanics 21 (July 8, 2024): 1–13. http://dx.doi.org/10.55037/lxlaser.21st.76.

Abstract:
The current work discusses the demonstration of a low-cost event-based camera for time-resolved imaging (10000 frames/sec) of a primary atomization zone in canonical air-assist atomizers. Experiments have been conducted simultaneously with traditional high-speed and event-based cameras, enabling us to quantitatively assess the potential of event-based cameras in spray imaging applications. Three atomization breakup regimes are considered: columnar, bag, and multimode. Dynamic Mode Decomposition (DMD) is implemented over the instantaneous data sets acquired from both cameras to assess their performance in extracting turbulence statistics. The obtained DMD modes from both cameras are similar, highlighting the potential of low-cost event-based cameras in extracting coherent structures and their spectral contents. Finally, the limitations (e.g., event saturation) of event-based cameras in the context of primary atomization are also discussed.
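
Dynamic Mode Decomposition, the analysis applied to both cameras' recordings, factors a sequence of snapshots into spatial modes with associated oscillation frequencies and growth rates. A minimal exact-DMD sketch follows; the truncation rank is an assumed parameter, and the snapshot layout (one flattened image per column) is the usual convention rather than anything specific to this study.

```python
import numpy as np

# Minimal exact Dynamic Mode Decomposition (DMD). X holds consecutive
# flattened snapshots as columns; the returned eigenvalues encode each
# mode's frequency/growth and `modes` holds the spatial structures.

def dmd(X, r=10):
    X1, X2 = X[:, :-1], X[:, 1:]                  # pairs x_k -> x_{k+1}
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]         # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / S   # low-rank linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T / S @ W              # exact DMD modes
    return eigvals, modes

eigvals, modes = dmd(np.random.rand(1024, 50))    # 50 snapshots of 32x32 data
```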
6

Yılmaz, Özgün, Camille Simon-Chane, and Aymeric Histace. "Evaluation of Event-Based Corner Detectors." Journal of Imaging 7, no. 2 (2021): 25. http://dx.doi.org/10.3390/jimaging7020025.

Abstract:
Bio-inspired Event-Based (EB) cameras are a promising new technology that outperforms standard frame-based cameras in scenes with extreme lighting and fast motion. A number of EB corner detection techniques have already been developed; however, the performance of these EB corner detectors has only been evaluated based on a few author-selected criteria rather than on a unified common basis, as proposed here. Moreover, their experimental conditions are mainly limited to less interesting operational regions of the EB camera (on which frame-based cameras can also operate), and some of the criteria, by definition, could not distinguish whether the detector had any systematic bias. In this paper, we evaluate five of the seven existing EB corner detectors on a public dataset including extreme illumination conditions that have not been investigated before. Moreover, this evaluation is the first of its kind in terms of analysing not only such a high number of detectors, but also applying a unified procedure for all. Contrary to previous assessments, we employed both the intensity and trajectory information within the public dataset rather than only one of them. We show that a rigorous comparison among EB detectors can be performed without tedious manual labelling and even with challenging acquisition conditions. This study thus proposes the first standard unified EB corner evaluation procedure, which will enable better understanding of the underlying mechanisms of EB cameras and can therefore lead to more efficient EB corner detection techniques.
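
Most of the detectors such evaluations cover operate on the Surface of Active Events, a per-pixel map of the most recent event timestamp that is updated in constant time per event. The sketch below shows that shared data structure only, not any particular detector from the paper; the patch radius is an assumption.

```python
import numpy as np

# The Surface of Active Events (SAE): per-pixel timestamp of the latest
# event, kept separately per polarity as most EB corner detectors do. A
# detector then tests the local timestamp pattern around each new event.

class SurfaceOfActiveEvents:
    def __init__(self, height, width):
        self.sae = np.zeros((2, height, width), dtype=np.float64)

    def update(self, x, y, t, polarity):
        self.sae[int(polarity), y, x] = t          # O(1) per event

    def patch(self, x, y, polarity, radius=4):
        """Local patch a corner test would inspect; assumes (x, y) lies at
        least `radius` pixels away from the image border."""
        s = self.sae[int(polarity)]
        return s[y - radius:y + radius + 1, x - radius:x + radius + 1]
```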
7

Rodríguez-Gómez, J. P., A. Gómez Eguíluz, J. R. Martínez-de Dios, and A. Ollero. "Auto-Tuned Event-Based Perception Scheme for Intrusion Monitoring With UAS." IEEE Access 9 (March 17, 2021): 44840–54. https://doi.org/10.1109/ACCESS.2021.3066529.

Abstract:
This paper presents an asynchronous event-based scheme for automatic intrusion monitoring using Unmanned Aerial Systems (UAS). Event cameras are neuromorphic sensors that capture the illumination changes in the camera pixels with high temporal resolution and dynamic range. In contrast to conventional frame-based cameras, they are naturally robust against motion blur and lighting conditions, which make them ideal for outdoor aerial robot applications. The presented scheme includes two main perception components. First, an asynchronous event-based processing system efficiently detects intrusions by combining several asynchronous event-based algorithms that exploit the advantages of the sequential nature of the event stream. The second is an off-line training mechanism that adjusts the parameters of the event-based algorithms to a particular surveillance scenario and mission. The proposed perception system was implemented in ROS for on-line execution on board UAS, integrated in an autonomous aerial robot architecture, and extensively validated in challenging scenarios with a wide variety of lighting conditions, including day and night experiments in pitch dark conditions.
8

Barrios-Avilés, Juan, Taras Iakymchuk, Jorge Samaniego, Leandro Medus, and Alfredo Rosado-Muñoz. "Movement Detection with Event-Based Cameras: Comparison with Frame-Based Cameras in Robot Object Tracking Using Powerlink Communication." Electronics 7, no. 11 (2018): 304. http://dx.doi.org/10.3390/electronics7110304.

Abstract:
Event-based cameras are not common in industrial applications despite the fact that they can add multiple advantages for applications with moving objects. In comparison with frame-based cameras, the amount of generated data is very low while keeping the main information in the scene. For an industrial environment with interconnected systems, data reduction becomes very important to avoid network congestion and provide faster response times. However, the use of new sensors such as event-based cameras is not common, since they do not usually provide connectivity to industrial buses. This work develops a network node based on a Field Programmable Gate Array (FPGA), including data acquisition and position tracking for an event-based camera. It also includes spurious-event reduction and filtering algorithms while keeping the main features of the scene. The FPGA node also includes the network protocol stack to provide standard communication with other nodes. The Powerlink IEEE 61158 industrial network is used to connect the FPGA with a controller driving a self-developed two-axis servo-controlled robot. The inverse kinematics model for the robot is included in the controller. To complete the system and provide a comparison, a traditional frame-based camera is also connected to the controller. Response time and robustness to lighting conditions are tested. Results show that, using the event-based camera, the robot can follow the object using fast image recognition, achieving up to 85% data reduction and providing on average 99 ms faster position detection with less dispersion (4.96 mm vs. 17.74 mm in the Y-axis position, and 2.18 mm vs. 8.26 mm in the X-axis position) than the frame-based camera, showing that event-based cameras are more stable under light changes. Additionally, event-based cameras offer intrinsic advantages due to the low computational complexity required: small size, low power, reduced data and low cost. Thus, it is demonstrated how the development of new equipment and algorithms can be efficiently integrated into an industrial system, merging commercial industrial equipment with new devices.
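
The paper's tracking runs in FPGA logic, which is not reproduced here. As a software analogue of the idea, the sketch below tracks an object as an exponentially smoothed centroid of incoming events, a computation cheap enough to suggest why event streams allow such low position-detection latency; the smoothing factor is an assumed value.

```python
# Software analogue (not the paper's FPGA design) of event-based position
# tracking: the estimate drifts toward each incoming event, i.e. an
# exponentially smoothed centroid of recent activity.

class EventCentroidTracker:
    def __init__(self, alpha=0.05):
        self.alpha = alpha           # smoothing factor, assumed value
        self.cx = self.cy = None     # current position estimate

    def update(self, x, y):
        if self.cx is None:
            self.cx, self.cy = float(x), float(y)
        else:
            self.cx += self.alpha * (x - self.cx)
            self.cy += self.alpha * (y - self.cy)
        return self.cx, self.cy

tracker = EventCentroidTracker()
for ev in [(100, 80), (102, 81), (101, 79)]:   # hypothetical object events
    pos = tracker.update(*ev)
```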
9

Creß, Christian, Walter Zimmer, Nils Purschke, et al. "TUMTraf Event: Calibration and Fusion Resulting in a Dataset for Roadside Event-Based and RGB Cameras." IEEE Transactions on Intelligent Vehicles 9, no. 7 (2024): 1–19. https://doi.org/10.1109/TIV.2024.3393749.

Abstract:
Event-based cameras are predestined for Intelligent Transportation Systems (ITS). They provide very high temporal resolution and dynamic range, which can eliminate motion blur and improve detection performance at night. However, event-based images lack color and texture compared to images from a conventional RGB camera. Considering that, data fusion between event-based and conventional cameras can combine the strengths of both modalities. For this purpose, extrinsic calibration is necessary. To the best of our knowledge, no targetless calibration between event-based and RGB cameras can handle multiple moving objects, nor does data fusion optimized for the domain of roadside ITS exist. Furthermore, synchronized event-based and RGB camera datasets considering the roadside perspective are not yet published. To fill these research gaps, based on our previous work, we extended our targetless calibration approach with clustering methods to handle multiple moving objects. Furthermore, we developed an early fusion, a simple late fusion, and a novel spatiotemporal late fusion method. Lastly, we published the TUMTraf Event Dataset, which contains more than 4,111 synchronized event-based and RGB images with 50,496 labeled 2D boxes. During our extensive experiments, we verified the effectiveness of our calibration method with multiple moving objects. Furthermore, compared to a single RGB camera, we increased the detection performance by up to +9% mAP during the day and up to +13% mAP during the challenging night with our presented event-based sensor fusion methods. The TUMTraf Event Dataset is available at https://innovation-mobility.com/tumtraf-dataset.

Dissertations / Theses on the topic "Event-based cameras"

1

Hellberg, Simon, and Dominik Hollidt. "Evaluation of Camera Resolution in Optical Flow Estimation Using Event-Based Cameras." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280321.

Abstract:
Developments in event-based camera technology and their recent increase in pixel count raised the question of whether resolution helps the accuracy and performance of algorithms. This thesis studies the impact of resolution on optical flow estimation for event-based cameras. For this purpose, we created a data set containing a mix of synthetic scenes and real camera recordings with ground truth available. For the modeling of low-resolution data, we designed three different downsampling algorithms. The camera used for the real scene recordings was the Prophesee (CSD3SVCD), which was determined to be the best of the current state-of-the-art cameras in a prestudy. The camera investigation evaluated the camera's performance in terms of temporal and spatial accuracy. In order to answer the question of whether resolution benefits the accuracy of optical flow estimation, we ran a total of 13 algorithm variations from four algorithm families (Lucas-Kanade [1, 2], Local-Planes fitting [2, 3], direction-selective filter [2, 4] and patch match [5]) on the data set. We then analysed their performance in terms of processing time, output density, angular error, endpoint error and relative endpoint error. The results show that no global correlation between resolution and accuracy across all algorithms can be identified. However, methods show individually different behaviour on different data. The best performing methods, the patch match algorithms, seemed to prefer the less dense downsampled data. The evaluation also showed that, rather than resolution, the specific characteristics of the data seemed to have a larger impact on accuracy. Thus denoised data might increase accuracy more than a change of resolution.
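
The thesis's three downsampling algorithms are not reproduced here. For illustration only, the sketch below shows one plausible variant under stated assumptions: coordinates are divided by an integer factor, and events that re-fire at the same coarse pixel and polarity within a refractory period are dropped so that event density does not simply multiply.

```python
# Illustrative event-stream downsampling (an assumption, not one of the
# thesis's three algorithms): scale coordinates down by `factor` and apply
# a per-pixel, per-polarity refractory period in microseconds.

def downsample_events(events, factor=2, refractory=1000):
    last = {}   # (x_coarse, y_coarse, polarity) -> last emitted timestamp
    out = []
    for x, y, t, p in events:
        key = (x // factor, y // factor, p)
        if key not in last or t - last[key] > refractory:
            out.append((key[0], key[1], t, p))
            last[key] = t
    return out
```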
2

Berthelon, Xavier. "Neuromorphic analysis of hemodynamics using event-based cameras." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS404.

Abstract:
The micro-circulation plays a crucial role in the exchange of molecules between blood cells and organic tissues. Both acute and chronic illnesses can cause a degradation of the micro-circulatory network. The main alterations are characterized by a reduction of the velocity of red blood cells and the perfusion density of capillaries. The understanding of such deregulation is crucial in the pathophysiology of many diseases. Despite the recent development of some technical devices to study the micro-circulation, there is no ideal tool to evaluate the micro-circulation at the bedside. In this thesis, we present an innovative method which couples asynchronous time-based image sensors, built on the working principle of the human retina, with medical imaging devices. Thanks to the high temporal resolution of these cameras, we estimate red blood cell velocities and densities within capillaries in real time and show, for instance, that during a hemorrhagic shock our system detects deregulation of the micro-circulation within minutes. Such a quick diagnosis could improve the evaluation of patients' states and the real-time adaptation of hemodynamic treatments.
3

Monforte, Marco. "Trajectory Prediction with Event-Based Cameras for Robotics Applications." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1047290.

Abstract:
This thesis presents the study, analysis, and implementation of a framework to perform trajectory prediction using an event-based camera for robotics applications. Event-based perception represents a novel computation paradigm based on unconventional sensing technology that holds promise for data acquisition, transmission, and processing at very low latency and power consumption, crucial in the future of robotics. An event-based camera, in particular, is a sensor that responds to light changes in the scene, producing an asynchronous and sparse output over a wide illumination dynamic range. They only capture relevant spatio-temporal information - mostly driven by motion - at high rate, avoiding the inherent redundancy in static areas of the field of view. For such reasons, this device represents a potential key tool for robots that must function in highly dynamic and/or rapidly changing scenarios, or where the optimisation of the resources is fundamental, like robots with on-board systems. Prediction skills are something humans rely on daily - even unconsciously - for instance when driving, playing sports, or collaborating with other people. In the same way, predicting the trajectory or the end-point of a moving target allows a robot to plan for appropriate actions and their timing in advance, interacting with it in many different manners. Moreover, prediction is also helpful for compensating robot internal delays in the perception-action chain, due for instance to limited sensors and/or actuators. The question I addressed in this work is whether event-based cameras are advantageous or not in trajectory prediction for robotics. In particular, if classical deep learning architecture used for this task can accommodate for event-based data, working asynchronously, and which benefit they can bring with respect to standard cameras. The a priori hypothesis is that being the sampling of the scene driven by motion, such a device would allow for more meaningful information acquisition, improving the prediction accuracy and processing data only when needed - without any information loss or redundant acquisition. To test the hypothesis, experiments are mostly carried out using the neuromorphic iCub, a custom version of the iCub humanoid platform that mounts two event-based cameras in the eyeballs, along with standard RGB cameras. To further motivate the work on iCub, a preliminary step is the evaluation of the robot's internal delays, a value that should be compensated by the prediction to interact in real-time with the object perceived. The first part of this thesis sees the implementation of the event-based framework for prediction, to answer the question if Long Short-Term Memory neural networks, the architecture used in this work, can be combined with event-based cameras. The task considered is the handover Human-Robot Interaction, during which the trajectory of the object in the human's hand must be inferred. Results show that the proposed pipeline can predict both spatial and temporal coordinates of the incoming trajectory with higher accuracy than model-based regression methods. Moreover, fast recovery from failure cases and adaptive prediction horizon behavior are exhibited. Successively, I questioned how much the event-based sampling approach can be convenient with respect to the classical fixed-rate approach. The test case used is the trajectory prediction of a bouncing ball, implemented with the pipeline previously introduced. 
A comparison between the two sampling methods is analysed in terms of error for different working rates, showing how the spatial sampling of the event-based approach allows to achieve lower error and also to adapt the computational load dynamically, depending on the motion in the scene. Results from both works prove that the merging of event-based data and Long Short-Term Memory networks looks promising for spatio-temporal features prediction in highly dynamic tasks, and paves the way to further studies about the temporal aspect and to a wide range of applications, not only robotics-related. Ongoing work is now focusing on the robot control side, finding the best way to exploit the spatio-temporal information provided by the predictor and defining the optimal robot behavior. Future work will see the shift of the full pipeline - prediction and robot control - to a spiking implementation. First steps in this direction have been already made thanks to a collaboration with a group from the University of Zurich, with which I propose a closed-loop motor controller implemented on a mixed-signal analog/digital neuromorphic processor, emulating a classical PID controller by means of spiking neural networks.
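
To make the pairing of LSTMs with event data concrete, here is a hedged sketch of the general shape of such a predictor, not the thesis's architecture: it consumes a sequence of event-derived (x, y, dt) samples and regresses the next trajectory point. Layer sizes and the input encoding are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Generic LSTM trajectory predictor (layer sizes are assumptions): a sequence
# of event-derived (x, y, dt) samples in, the next (x, y) position out.

class TrajectoryLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)    # regressed (x, y)

    def forward(self, seq):                 # seq: (batch, time, 3)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])        # predict from the last state

model = TrajectoryLSTM()
next_xy = model(torch.randn(1, 20, 3))      # 20 past samples -> next point
```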
4

Maiga, Aïssata, and Johanna Löv. "Real versus Simulated data for Image Reconstruction : A comparison between training with sparse simulated data and sparse real data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302028.

Abstract:
Our study investigates how training with sparse simulated data versus sparse real data affects image reconstruction. We compared on several criteria such as number of events, speed and high dynamic range, HDR. The results indicate that the difference between simulated data and real data is not large. Training with real data often performed better, but only by 2%. The findings confirm what earlier studies have shown: training with simulated data generalises well, even when training on sparse datasets, as this study shows.
5

Sun, Haixin. "Moving Objects Detection and Tracking using Hybrid Event-based and Frame-based Vision for Autonomous Driving." Electronic Thesis or Diss., Ecole centrale de Nantes, 2023. http://www.theses.fr/2023ECDN0014.

Abstract:
The event-based camera is a bio-inspired sensor that differs from conventional frame cameras: instead of grabbing frame images at a fixed rate, it asynchronously monitors per-pixel brightness changes and outputs a stream of event data that contains the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution, high dynamic range, and low power consumption. Therefore, event cameras have enormous potential for computer vision in scenarios that challenge traditional frame cameras, such as fast motion and high dynamic range. This thesis investigated model-based and deep-learning-based methods for object detection and tracking with the event camera. A fusion strategy with the frame camera is proposed, since the frame camera is also needed to provide appearance information. The proposed perception algorithms include optical flow, object detection and motion segmentation. Tests and analyses have been conducted to prove the feasibility and reliability of the proposed perception algorithms.
6

Tabia, Ahmed. "Pose estimation with event camera." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPAST093.

Abstract:
Camera pose is used to describe the position and orientation of a camera in an absolute coordinate system, with reference to six degrees of freedom. Estimating the camera pose is essential in various application domains, such as augmented reality, robotic navigation, and autonomous vehicles. These fields rely on camera pose for subsequent calculations, such as object localization and scene perception. Estimating the pose of a camera presents challenges in different scenarios; poor lighting conditions, including extreme darkness or brightness, limit the effectiveness of most feature-based methods. These unfavorable lighting conditions hinder precise feature detection and matching, thereby affecting the accuracy of camera pose estimation. Scenes lacking distinct textures complicate the extraction of meaningful keypoints, while rapid motion leads to motion blur, affecting image quality and pose estimation accuracy. Most of these challenges are largely related to the nature of traditional cameras, which capture the world as a series of static images taken successively at a rapid pace. In cases where these difficulties are particularly pronounced, event-based cameras offer potential advantages. Event-based cameras are bio-inspired sensors that mimic the functioning of the human retina, capturing changes in pixel intensity rather than recording full images at a fixed rate, as traditional frame-based cameras do. This thesis focuses on estimating the pose of event-based cameras and aims to explore the application of deep learning methods for pose estimation and relocalization based on these cameras, leveraging their unique properties such as high temporal resolution, low latency, and wide dynamic range. The thesis makes several contributions to the field of event-based camera pose estimation using deep learning techniques. These contributions can be summarized as follows:
• The thesis provides a comprehensive overview of foundational information and related work, thus establishing a solid foundation and contextual understanding of event-based camera pose estimation.
• The thesis explores and develops specialized deep learning approaches tailored to event-based camera pose estimation. These techniques harness the power of deep learning to accurately estimate camera pose using event data.
• The thesis introduces methods to project event data into image-like data, facilitating the application of dedicated deep learning approaches. This projection process allows for efficient use of event information in the camera pose estimation task.
• The thesis proposes a novel approach that directly applies deep learning techniques to raw event data, treating them as a point cloud rather than converting them into images. This approach leverages the entirety of the information captured by the event-based camera and enables an end-to-end learning process.
7

Froude, Melanie. "Lahar dynamics in the Belham river valley, Montserrat : application of remote-camera based monitoring for improved sedimentological interpretation of post-event deposits." Thesis, University of East Anglia, 2015. https://ueaeprints.uea.ac.uk/53421/.

8

Oudjail, Veïs. "Réseaux de neurones impulsionnels appliqués à la vision par ordinateur." Electronic Thesis or Diss., Université de Lille (2022-....), 2022. http://www.theses.fr/2022ULILB048.

Abstract:
Artificial neural networks (ANN) have become a must-have technique in computer vision, a trend that started with the 2012 ImageNet challenge. However, this success comes with a non-negligible human cost for the manual data labeling that is central to model learning, and a high energy cost caused by the need for large computational resources. Spiking Neural Networks (SNN) provide solutions to these problems. They are a particular class of ANNs, close to the biological model, in which neurons communicate asynchronously by representing information through spikes. The learning of SNNs can rely on an unsupervised rule: STDP. It modulates the synaptic weights according to the local temporal correlations observed between the incoming and outgoing spikes. Different hardware architectures have been designed to exploit the properties of SNNs (asynchrony, sparse and local operation, etc.) in order to design low-power solutions, some of them dividing the cost by several orders of magnitude. SNNs are gaining popularity and there is growing interest in applying them to vision. Recent work shows that SNNs are maturing by being competitive with the state of the art on "simple" image datasets such as MNIST (handwritten digits) but not on more complex datasets. However, SNNs can potentially stand out from ANNs in video processing. The first reason is that these models incorporate an additional temporal dimension. The second reason is that they lend themselves well to the use of event-driven cameras. These are bio-inspired sensors that perceive temporal contrasts in a scene; in other words, they are sensitive to motion. Each pixel can detect a light variation (positive or negative), which triggers an event. Coupling these cameras to neuromorphic chips allows the creation of totally asynchronous and massively parallelized vision systems. The objective of this thesis is to exploit the capabilities offered by SNNs in video processing. In order to explore the potential offered by SNNs, we are interested in motion analysis and more particularly in motion direction estimation. The goal is to develop a model capable of learning incrementally, without supervision and with few examples, to extract spatiotemporal features. We have therefore performed several studies examining the different points mentioned, using synthetic event datasets. We show that the tuning of the SNN parameters is essential for the model to be able to extract useful features. We also show that the model is able to learn incrementally when presented with new classes, without deteriorating the performance on the mastered classes. Finally, we discuss some limitations, especially regarding weight learning, suggesting the possibility of learning delays instead, which are still little exploited and could mark a sharper break with ANNs.
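
The STDP rule the thesis relies on has a compact standard form: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one and weakened otherwise, with exponential dependence on the time difference. The sketch below uses generic textbook constants, not the thesis's parameters.

```python
import numpy as np

# Pair-based STDP: potentiate when pre fires before post (dt >= 0), depress
# otherwise, with exponential decay in |dt|. Constants are generic values.

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    dt = t_post - t_pre                       # milliseconds
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)     # causal: potentiation
    return -a_minus * np.exp(dt / tau)        # anti-causal: depression

w = 0.5
w = np.clip(w + stdp_dw(t_pre=10.0, t_post=14.0), 0.0, 1.0)  # w increases
```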
9

Khairallah, Mahmoud. "Flow-Based Visual-Inertial Odometry for Neuromorphic Vision Sensors." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST117.

Abstract:
Rather than generating images constantly and synchronously, neuromorphic vision sensors (also known as event-based cameras) permit each pixel to provide information independently and asynchronously whenever a brightness change is detected. Consequently, neuromorphic vision sensors do not encounter the problems of conventional frame-based cameras like image artifacts and motion blur. Furthermore, they can provide lossless data compression, higher temporal resolution and higher dynamic range. Hence, event-based cameras conveniently replace frame-based cameras in robotic applications requiring high maneuverability and varying environmental conditions. In this thesis, we address the problem of visual-inertial odometry using event-based cameras and an inertial measurement unit. Exploiting the consistency of event-based cameras with the brightness constancy conditions, we discuss the feasibility of building a visual odometry system based on optical flow estimation. We develop our approach based on the assumption that event-based cameras provide edge-like information about the objects in the scene and apply a line detection algorithm for data reduction. Line tracking allows us to gain more time for computations and provides a better representation of the environment than feature points. In this thesis, we do not only show an approach for event-based visual-inertial odometry but also event-based algorithms that can be used as stand-alone algorithms or integrated into other approaches if needed.
10

Bernard, Yann. "Calcul neuromorphique pour l'exploration et la catégorisation robuste d'environnement visuel et multimodal dans les systèmes embarqués." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0295.

Abstract:
As the quest for ever more powerful computing systems faces ever-increasing material constraints, major advances in computing efficiency are expected to benefit from unconventional approaches and new computing models such as brain-inspired computing. The brain is a massively parallel computing architecture with dense interconnections between computing units. Neurobiological systems are therefore a natural source of inspiration for computer science and engineering. Rapid technological improvements in computing media have recently reinforced this trend through two complementary but seemingly contradictory consequences: on the one hand, by providing enormous computing power, they have made it possible to simulate very large neural structures such as deep networks, and on the other hand, by reaching their technological and conceptual limits, they have motivated the emergence of alternative computing paradigms based on bio-inspired concepts. Among these, the principles of unsupervised learning are receiving increasing attention. We focus here on two main families of neural models: self-organizing maps and dynamic neural fields. Inspired by the modeling of the self-organization of cortical columns, self-organizing maps have shown their ability to represent a complex stimulus in a simplified and interpretable form, thanks to excellent performance in vector quantization and respect of the topological proximity relationships present in the input space. More inspired by competition mechanisms in cortical macro-columns, dynamic neural fields allow the emergence of simple cognitive behaviours and find more and more applications in the field of autonomous robotics. In this context, the first objective of this thesis is to combine self-organizing maps (SOM) and dynamic neural fields (DNF) for the exploration and categorisation of real environments perceived through visual sensors of different natures. The second objective is to prepare the porting of this neuromorphic computation onto a digital hardware substrate. These two objectives aim to define a hardware computing device that can be coupled to different sensors so as to allow an autonomous system to construct its own representation of the perceptual environment in which it operates. We therefore proposed and evaluated a novelty detection model based on self-organising maps. Hardware considerations then led us to significant algorithmic optimisations of SOM operations. Finally, we complemented the model with dynamic neural fields to increase the level of abstraction with an attentional target-tracking mechanism.
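
For the self-organizing-map half of that pipeline, one training step has a well-known closed form: find the best-matching unit, then pull every unit toward the input with a strength that decays with grid distance from that unit. The sketch below uses assumed grid size, learning rate and neighbourhood width, and is the textbook rule rather than the thesis's optimised hardware variant.

```python
import numpy as np

# One textbook SOM training step (parameters are assumptions): locate the
# best-matching unit (BMU), then move all codebook vectors toward the input,
# weighted by a Gaussian neighbourhood over the 2D grid.

def som_step(weights, x, lr=0.1, sigma=1.5):
    """weights: (rows, cols, dim) codebook; x: (dim,) input vector."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_d2 = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))        # neighbourhood weights
    weights += lr * h[..., None] * (x - weights)
    return weights

codebook = np.random.rand(8, 8, 3)                 # 8x8 map of 3D prototypes
codebook = som_step(codebook, np.array([0.2, 0.7, 0.1]))
```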
APA, Harvard, Vancouver, ISO, and other styles
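The thesis above builds on the Kohonen self-organizing map. As a point of reference only (not code from the thesis), here is a minimal NumPy sketch of the standard SOM update step underlying the vector quantization and topology preservation it describes; the grid size, learning rate, and neighborhood width are illustrative assumptions.

```python
import numpy as np

def som_step(weights, x, lr, sigma):
    """One self-organizing-map update: find the best-matching unit (BMU),
    then pull every unit toward the input, weighted by its grid distance
    to the winner via a Gaussian neighborhood."""
    rows, cols, dim = weights.shape
    # BMU: the grid cell whose weight vector is closest to the input x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighborhood over grid coordinates preserves input topology.
    grid = np.indices((rows, cols)).transpose(1, 2, 0)
    grid_d2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
    h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
    # Move weights toward the input, strongest at and around the BMU.
    return weights + lr * h * (x - weights)

# Toy usage: a 10x10 map learning random 3-D inputs.
rng = np.random.default_rng(0)
w = rng.random((10, 10, 3))
for _ in range(1000):
    w = som_step(w, rng.random(3), lr=0.1, sigma=2.0)
```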

Books on the topic "Event-based cameras"

1

Joshi, Mahesh K., and J. R. Klein. Lifestyle Innovations Generating New Businesses. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198827481.003.0015.

Full text
Abstract:
Life-altering technology is not only improving our lifestyle but also creating new business models. Integration of technology into everyday life is a primary driver of changes in lifestyle. Whether visible or not, today’s technology is everywhere. Consumers come home from work to a smart house that greets them with music, emails them the foods the refrigerator needs, and through spatial phase imaging technology senses their mood. Without human intervention it changes its presentation based on data indicators embedded in everything. The house recognizes mood and compares it with past behaviors, facial reactions, timeline, and acts accordingly. All this happens through a standard security camera with pixelate three-dimensional technology. The same technology can identify the anti-social elements in a crowd, enhance security at any public event venue, and allow doctors to see under our skin without intrusion.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Event-based cameras"

1

Li, Wenxuan, Yan Dong, Shaoqiang Qiu, and Bin Han. "Hardware-Free Event Cameras Temporal Synchronization Based on Event Density Alignment." In Intelligent Robotics and Applications. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6498-7_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rahman, Muhammad Rameez Ur, Jhony H. Giraldo, Indro Spinelli, Stéphane Lathuilière, and Fabio Galasso. "OVOSE: Open-Vocabulary Semantic Segmentation in Event-Based Cameras." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-78444-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Iddrisu, Khadija, Waseem Shariff, Noel E. O’Connor, Joseph Lemley, and Suzanne Little. "Evaluating Image-Based Face and Eye Tracking with Event Cameras." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-92460-6_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Belin, Matts-Åke, and Anna Vadeby. "Speed and Technology: Different Modus of Operandi." In The Vision Zero Handbook. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-76505-7_37.

Full text
Abstract:
Embedded within Vision Zero as a strategy is the fact that injuries occur when mechanical energy reaches individuals at rates that entail forces in excess of their thresholds for injury. Therefore, according to Vision Zero, there are three main strategies for eliminating fatalities and severe injuries due to road crashes: protect people from exposure to harmful energy, reduce the risk of events involving harmful energy, and protect people from harmful energy in the event of a collision. Controlling speed is therefore a task of utmost importance in a strategy such as Vision Zero. A traffic enforcement camera, or "speed camera," system makes it possible to control speed across a road system, and it can affect road users from both a macro and a micro perspective. The micro perspective primarily concerns how effective the cameras are locally, at the road sections where enforcement is concentrated, while the macro perspective concerns how the camera enforcement system and strategies, possibly together with the overall enforcement strategy, affect attitudes and norms related to driving at excessive speed. Experience worldwide has proven the effectiveness of automated speed cameras in reducing speed and, in turn, crashes and injuries. In this chapter, the rationale behind speed limits, speed management, and speed compliance strategies is first explored and analyzed, in particular from a Vision Zero perspective. Second, different approaches to speed camera systems in four European countries (Sweden, Norway, the Netherlands, and France) are analyzed and further explored. Finally, based on the similarities and differences between these countries' approaches, the last section discusses aspects of setting speed limits, the speed management strategies that underpin the choice of camera technology and modus of operandi, and the safety effects of and attitudes toward cameras.
APA, Harvard, Vancouver, ISO, and other styles
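The chapter's energy argument follows from elementary mechanics: kinetic energy grows with the square of speed, so a modest camera-enforced speed reduction removes a disproportionate share of the harmful energy available in a crash. A generic illustration, not drawn from the chapter; the vehicle mass is an arbitrary assumption.

```python
def kinetic_energy_joules(mass_kg, speed_kmh):
    """Kinetic energy E = 0.5 * m * v**2, with speed given in km/h."""
    v_ms = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * mass_kg * v_ms ** 2

# For a hypothetical 1,500 kg car, slowing from 60 to 50 km/h
# removes roughly 31% of its kinetic energy.
for speed in (60, 50):
    print(speed, "km/h:", round(kinetic_energy_joules(1500, speed)), "J")
```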
5

Belin, Matts-Åke, and Anna Vadeby. "Speed and Technology: Different Modus of Operandi." In The Vision Zero Handbook. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-23176-7_37-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mohamed, Sherif A. S., Mohammad-Hashem Haghbayan, Jukka Heikkonen, Hannu Tenhunen, and Juha Plosila. "Towards Real-Time Edge Detection for Event Cameras Based on Lifetime and Dynamic Slicing." In Advances in Intelligent Systems and Computing. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44289-7_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Han, Haiqian, Jiacheng Lyu, Jianing Li, et al. "Physical-Based Event Camera Simulator." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72995-9_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Manilii, Alessandro, Leonardo Lucarelli, Riccardo Rosati, Luca Romeo, Adriano Mancini, and Emanuele Frontoni. "3D Human Pose Estimation Based on Multi-Input Multi-Output Convolutional Neural Network and Event Cameras: A Proof of Concept on the DHP19 Dataset." In Pattern Recognition. ICPR International Workshops and Challenges. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68763-2_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sun, Haixin, and Vincent Fremont. "Object Tracking with a Fusion of Event-Based Camera and Frame-Based Camera." In Lecture Notes in Networks and Systems. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16078-3_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bugueno-Cordova, Ignacio, Miguel Campusano, Robert Guaman-Rivera, and Rodrigo Verschae. "A Color Event-Based Camera Emulator for Robot Vision." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59057-3_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Event-based cameras"

1

Sun, Jingkai, Qiang Zhang, Jiaxu Wang, Jiahang Cao, Hao Cheng, and Renjing Xu. "Event Masked Autoencoder: Point-wise Action Recognition with Event-Based Cameras." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10888760.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Yilan, Dianxi Shi, Zhe Liu, Yuanze Wang, Shiming Song, and Yuxian Li. "Semi-Dense Scene Reconstruction Based on Stereo Event Cameras." In 2025 7th International Conference on Software Engineering and Computer Science (CSECS). IEEE, 2025. https://doi.org/10.1109/csecs64665.2025.11009691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rahman, Nael Mizanur, Uday Kamal, Manish Nagaraj, Shaunak Roy, and Saibal Mukhopadhyay. "Driving Autonomy with Event-Based Cameras: Algorithm and Hardware Perspectives." In 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2024. http://dx.doi.org/10.23919/date58400.2024.10546715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Heng, Nuo Chen, Miao Li, and Wei An. "Spiking Swin Transformer for UAV Object Detection Based on Event Cameras." In 2024 12th International Conference on Information Systems and Computing Technology (ISCTech). IEEE, 2024. https://doi.org/10.1109/isctech63666.2024.10845340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sadak, Ferhat, Edison Gerena, Charlotte Dupont, Rachel Lévy, and Sinan Haliyo. "Human Sperm Detection and Tracking using Event-based Cameras and Unsupervised Learning." In 2024 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS). IEEE, 2024. http://dx.doi.org/10.1109/marss61851.2024.10612710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Guang-Cai, Wenjing Zhou, Ziang Liu, Zhou Ge, and Yingjie Yu. "Dynamic frequency digital holographic detection of MEMS galvanometers based on event cameras." In Holography, Diffractive Optics, and Applications XIV, edited by Changhe Zhou, Liangcai Cao, Ting-Chung Poon, and Hiroshi Yoshikawa. SPIE, 2024. http://dx.doi.org/10.1117/12.3035979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schuberth, Lara, Vincenzo Messina, Ramon Maria Garcia Alarcia, et al. "Leveraging Event-Based Cameras for Enhanced Space Situational Awareness: A Nanosatellite Mission Architecture Study." In 22nd IAA Symposium on Space Debris, Held at the 75th International Astronautical Congress (IAC 2024). International Astronautical Federation (IAF), 2024. https://doi.org/10.52202/078360-0131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Devitt, John W. "Raytheon event-based camera technology." In Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXXVI, edited by Gerald C. Holst and David P. Haefner. SPIE, 2025. https://doi.org/10.1117/12.3055845.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

McHarg, Matthew G., Richard L. Balthazor, Greg Cohen, Alex Marcireau, Zachry C. Theis, and Peter N. McMahon-Crabtree. "Falcon ODIN: an event based camera payload." In Unconventional Imaging, Sensing, and Adaptive Optics 2024, edited by Santasri R. Bose-Pillai, Jean J. Dolne, and Matthew Kalensky. SPIE, 2024. http://dx.doi.org/10.1117/12.3027694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Park, JongHun, and MunHo Hong. "Continuous Histogram for Event-Based Vision Camera Systems." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW). IEEE, 2025. https://doi.org/10.1109/wacvw65960.2025.00105.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Event-based cameras"

1

Mathew, Jijo K., Haydn Malackowski, Yerassyl Koshan, et al. Development of Latitude/Longitude (and Route/Milepost) Model for Positioning Traffic Management Cameras. Purdue University, 2024. http://dx.doi.org/10.5703/1288284317720.

Full text
Abstract:
Traffic Incident Management (TIM) is an FHWA Every Day Counts initiative with the objective of reducing secondary crashes, improving travel reliability, and ensuring the safety of responders. Agency roadside cameras play a critical role in TIM by helping dispatchers quickly identify the precise location of incidents when receiving reports from motorists with varying levels of spatial accuracy. Reconciling position reports, which are often mile-marker based, with cameras that operate in a Pan-Tilt-Zoom (PTZ) coordinate system relies on dispatchers having detailed knowledge of hundreds of cameras and perhaps some presets. During real-time incident dispatching, reducing the time it takes to identify the most relevant cameras and view the incident improves incident management dispatch times. This research developed a camera-to-mile-marker mapping technique that automatically sets the camera view to a specified mile marker within the camera's field of view. A new performance metric on verification time (TEYE), which captures the time it takes for TMC operators to get a first visual on roadside cameras, is proposed for integration into the FHWA TIM event sequence. Performance metrics that summarize spatial camera coverage and image quality, for use both in dispatch and in long-term statewide planning of camera deployments, were also developed. The use of mobile mapping and LiDAR geospatial data to automate the mapping of mile markers to camera PTZ settings, and the integration of connected-vehicle trajectory data to detect incidents and set the nearest camera's view on the incident, are both discussed as future work.
APA, Harvard, Vancouver, ISO, and other styles
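The report's mapping technique itself is not published as code; the sketch below is a hypothetical reconstruction of its core geometric step, computing the compass bearing from a camera's coordinates to a mile marker's coordinates so that a PTZ preset can point the pan axis at it. The function name, arguments, and coordinates are assumptions for illustration.

```python
import math

def pan_bearing_deg(cam_lat, cam_lon, mm_lat, mm_lon):
    """Initial great-circle bearing (degrees clockwise from north) from a
    camera to a mile-marker coordinate; a PTZ preset would map this bearing
    onto the camera's pan axis."""
    phi1, phi2 = math.radians(cam_lat), math.radians(mm_lat)
    dlon = math.radians(mm_lon - cam_lon)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

# Toy usage: a camera aiming at a mile marker to its northeast.
print(round(pan_bearing_deg(40.4237, -86.9212, 40.4300, -86.9100), 1))
```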
2

Maharjan, Sudan Bikash, Pradeep Dangol, Tenzing Chogyal Sherpa, et al. Insights behind the unexpected flooding in the Budhi Gandaki River, Gorkha, Nepal. International Centre for Integrated Mountain Development (ICIMOD), 2024. https://doi.org/10.53055/icimod.1084.

Full text
Abstract:
On 21 April 2024, a flood from the glacial lake Birendra Tal was triggered by a massive ice avalanche caused by calving from the Manaslu Glacier. Manaslu, the eighth-highest mountain in the world at 8,163 meters above sea level, is located in west-central Nepal. The displacement wave resulted in the sudden release of water from the lake outlet into the Budhi Gandaki River in Gorkha district. This event was not a typical glacial lake outburst flood (GLOF), as the overflows did not breach the moraine dam, and no significant impact on the dam was observed post-event. However, the threat remains. Analysis indicates a significant risk of debris flow and ice/snow avalanches from the adjacent valley, compounded by anticipated temperature rises and glacier retreat, suggesting future occurrences could exceed the moderate impact of this event. Vigilance from Disaster Risk Management Officers, community leaders, and governments is recommended. A detailed understanding of glacier dynamics, crevasse formation, and ice detachment processes is crucial for assessing potential risks. Remote sensing techniques and field-based monitoring, such as GB-InSAR and time-lapse cameras, can provide necessary information. Continuous lake monitoring via satellite imagery and in-situ sensors, along with the establishment of a flood early warning system, is recommended to mitigate risks and enhance preparedness. Weakening glaciers in the region pose various hazards, and countries remain ill-prepared to cope with these rapid changes. Urgent and strong political action is required to implement effective risk mitigation strategies.
APA, Harvard, Vancouver, ISO, and other styles
3

Sengupta, Jonah. Demystifying Event-based Camera Latency: Sensor Speed Dependence on Pixel Biasing, Light, and Spatial Activity. DEVCOM Army Research Laboratory, 2023. http://dx.doi.org/10.21236/ad1211287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Underwood, H., Madison Hand, and Donald Leopold. Abundance and distribution of white-tailed deer on First State National Historical Park and surrounding lands. National Park Service, 2024. http://dx.doi.org/10.36967/2305428.

Full text
Abstract:
We estimated both the abundance and the distribution of white-tailed deer (Odocoileus virginianus) on the Brandywine Valley unit of First State National Historical Park (FRST) and on Brandywine Creek State Park (BCSP) during 2020 and 2021 with two widely used field methods: a road-based count and a network of camera traps. We conducted 24 road-based counts, covering 260 km of roadway, and deployed up to 16 camera traps, processing over 82,000 images representing over 5,000 independent observations. In both years, we identified bucks based on their body and antler characteristics, tracking their movements between baited camera-trap locations. We tested seven estimators commonly reported in the literature, comparing their relative merits for managers of small, protected natural areas like FRST. Deer densities estimated from conventional road-based distance sampling were approximately 10 deer/km² lower than densities estimated from camera-trapping surveys. We attribute the bias in road-based distance sampling to the difficulty of recording the precise effort expended to obtain the counts. Modifying the distance sampling method addressed many of the issues associated with the conventional approach. Despite little substantive difference in land cover types between the two methods, a clear spatial segregation of male and female deer at camera-trap locations could bias road-based counts if the sexes are not encountered in proportion to their abundances. There was a distinct gradient in deer distribution across the study area, with higher proportions of deer recorded in camera traps at FRST than at BCSP, which harvests 20–60 deer annually during a regulated hunting season. The most reliable (i.e., low-bias, acceptably precise) methods, Spatial Capture-Recapture (SCR) and Density Surface Modeling (DSM), produced deer densities of approximately 50 deer/km² in each year, a number consistent with previous estimates for New Castle County, Delaware, and with our experience in similar unhunted natural areas. Across FRST and BCSP, these densities translated into area-wide (~1,000 ha) population sizes of 650–1,000 deer, with about one-half to two-thirds comprising the FRST population. Density surface modeling of the mapped locations of deer detected during surveys, combined with camera trapping and a time-to-event data analysis, might be the only practical means of reliably assessing white-tailed deer abundance in small (<2,000 ha), protected natural areas like FRST. Most other approaches are either too time-consuming or require identifying and tracking individual deer, the use of bait, or intervention by a subject-area expert.
APA, Harvard, Vancouver, ISO, and other styles
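The report's preferred estimators (SCR, DSM) are normally fitted with specialist statistical software; purely as an illustration of the "conventional road-based distance sampling" the abstract critiques, here is a textbook line-transect density estimate under a half-normal detection function. All numbers, the crude moment-based fit, and the truncation distance are hypothetical assumptions.

```python
import math

def line_transect_density(n, L_km, perp_dists_m, w_m):
    """Line-transect distance-sampling estimate D = n / (2 * L * mu), where
    mu is the effective strip half-width under a half-normal detection
    function fitted (crudely, by moments) to perpendicular detection
    distances truncated at w_m."""
    # Method-of-moments sigma for a half-normal: E[x^2] = sigma^2.
    sigma = math.sqrt(sum(d * d for d in perp_dists_m) / len(perp_dists_m))
    # Effective strip half-width: integral of exp(-x^2 / (2 sigma^2)) on [0, w].
    mu = sigma * math.sqrt(math.pi / 2) * math.erf(w_m / (sigma * math.sqrt(2)))
    area_km2 = 2 * L_km * (mu / 1000.0)  # effectively surveyed area in km^2
    return n / area_km2  # animals per km^2

# Hypothetical survey: 120 deer detected over 260 km of roadway.
dists = [10, 25, 40, 60, 80, 120] * 20  # fake perpendicular distances (m)
print(round(line_transect_density(120, 260, dists, w_m=150), 1), "deer/km^2")
```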