Academic literature on the topic 'Smart camera embedded system'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Smart camera embedded system.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Smart camera embedded system"

1

Zarezadeh, Ali Akbar, and Christophe Bobda. "Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras." International Journal of Reconfigurable Computing 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/615824.

Full text
Abstract:
Tracking individuals is a prominent application in domains such as surveillance or smart environments. This paper presents the development of a multiple-camera setup with joint views that observes moving persons at a site. It focuses on a geometry-based approach to establish correspondence among the different views. The computationally expensive parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time spent traversing the TCP/IP stack, for both the software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces network latency considerably, by up to a factor of 100 compared to the software ORB.
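The latency-probing idea in this abstract can be illustrated with a minimal software-side sketch. The local echo server and probe loop below are illustrative stand-ins, not the paper's ORB implementation:

```python
import socket
import threading
import time

def echo_server(listener):
    """Accept one connection and echo every message back (stand-in for a server node)."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def probe_latency(host, port, n_probes=100):
    """Mean TCP round-trip time: the stack-traversal latency the paper probes."""
    with socket.create_connection((host, port)) as client:
        client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        samples = []
        for _ in range(n_probes):
            t0 = time.perf_counter()
            client.sendall(b"ping")
            client.recv(1024)
            samples.append(time.perf_counter() - t0)
    return sum(samples) / len(samples)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()
mean_rtt = probe_latency("127.0.0.1", port)
print(f"mean round-trip latency: {mean_rtt * 1e6:.1f} us")
```

On a loopback interface this measures only software-stack traversal, which is exactly the quantity the paper's hardware ORB is designed to shrink.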
APA, Harvard, Vancouver, ISO, and other styles
2

Choi, Won Hyuck, and Min Seok Jie. "Development of Smart Remote Local Information Embedded System Using Global Positioning System." Applied Mechanics and Materials 681 (October 2014): 51–56. http://dx.doi.org/10.4028/www.scientific.net/amm.681.51.

Abstract:
Existing localization systems usually depend on video information from a camera; the camera therefore has to operate 24 hours a day. Position recognition works for a special region of interest, but in most cases, once a target moves beyond the camera's observed extent, position recognition is no longer available. To compensate for these disadvantages, a Wireless Sensor Network using GPS and various sensors can detect and monitor data in areas that image information cannot cover. To this end, the thesis proposes an efficient monitoring system using GPS and a human-body detection sensor.
3

Singh, Sanjay, Srinivasa Murali Dunga, AS Mandal, Chandra Shekhar, and Santanu Chaudhury. "FPGA Based Embedded Implementation of Video Summary Generation Scheme in Smart Camera." Advanced Materials Research 403-408 (November 2011): 516–21. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.516.

Abstract:
In any remote surveillance scenario, smart cameras have to make intelligent decisions to generate summary frames that minimize communication and processing overhead. Video summary generation, in the context of a smart camera, is the process of merging the information from multiple frames. A summary generation scheme based on a clustering-based change detection algorithm has been implemented in our smart camera system for generating frames that deliver the requisite information. In this paper we propose an embedded-platform framework for implementing the summary generation scheme using a HW-SW co-design methodology. The complete system is implemented on a Xilinx XUP Virtex-II Pro FPGA board. The overall algorithm runs on a PowerPC405, and the blocks that are computationally intensive and most frequently called are implemented in hardware using VHDL. The system is designed using the Xilinx Embedded Development Kit (EDK).
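The summary-selection idea (keep a frame only when it changes enough from the last kept frame) can be sketched in a few lines. The paper implements a clustering-based change detector in FPGA hardware; the mean-absolute-difference score below is a simplified software stand-in:

```python
import numpy as np

def select_summary_frames(frames, threshold=10.0):
    """Keep a frame only when it differs enough from the last kept frame.

    The mean absolute pixel difference acts as the change score; this is a
    simplified stand-in for the paper's clustering-based change detection.
    """
    kept = [0]                            # always keep the first frame
    reference = frames[0].astype(np.float32)
    for i, frame in enumerate(frames[1:], start=1):
        score = np.mean(np.abs(frame.astype(np.float32) - reference))
        if score > threshold:             # significant change -> summary frame
            kept.append(i)
            reference = frame.astype(np.float32)
    return kept

# Synthetic stack: a static scene, an object appearing in frame 3, gone in frame 4.
rng = np.random.default_rng(0)
static = rng.integers(0, 50, size=(64, 64), dtype=np.uint8)
frames = [static.copy() for _ in range(5)]
frames[3] = static.copy()
frames[3][20:40, 20:40] = 255             # bright object enters the scene
summary = select_summary_frames(frames)
print(summary)                            # frames where the scene changed
```

Only the kept frames would then be transmitted, which is the communication saving the abstract describes.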
4

Hsu, Ting-Yu, and Xiang-Ju Kuo. "A Stand-Alone Smart Camera System for Online Post-Earthquake Building Safety Assessment." Sensors 20, no. 12 (June 15, 2020): 3374. http://dx.doi.org/10.3390/s20123374.

Abstract:
Computer vision-based approaches are very useful for dynamic displacement measurement, damage detection, and structural health monitoring. However, for the application using a large number of existing cameras in buildings, the computational cost of videos from dozens of cameras using a centralized computer becomes a huge burden. Moreover, when a manual process is required for processing the videos, prompt safety assessment of tens of thousands of buildings after a catastrophic earthquake striking a megacity becomes very challenging. Therefore, a decentralized and fully automatic computer vision-based approach for prompt building safety assessment and decision-making is desired for practical applications. In this study, a prototype of a novel stand-alone smart camera system for measuring interstory drifts was developed. The proposed system is composed of a single camera, a single-board computer, and two accelerometers with a microcontroller unit. The system is capable of compensating for rotational effects of the camera during earthquake excitations. Furthermore, by fusing the camera-based interstory drifts with the accelerometer-based ones, the interstory drifts can be measured accurately even when residual interstory drifts exist. Algorithms used to compensate for the camera’s rotational effects, algorithms used to track the movement of three targets within three regions of interest, artificial neural networks used to convert the interstory drifts to engineering units, and some necessary signal processing algorithms, including interpolation, cross-correlation, and filtering algorithms, were embedded in the smart camera system. As a result, online processing of the video data and acceleration data using decentralized computational resources is achieved in each individual smart camera system to obtain interstory drifts. Using the maximum interstory drifts measured during an earthquake, the safety of a building can be assessed right after the earthquake excitation. 
We validated the feasibility of the prototype of the proposed smart camera system through the use of large-scale shaking table tests of a steel building. The results show that the proposed smart camera system had very promising results in terms of assessing the safety of steel building specimens after earthquake excitations.
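The fusion of camera-based and accelerometer-based drifts can be sketched as a complementary filter on synthetic signals. This is a hedged illustration of the idea, not the paper's algorithm; the signal shapes and the `alpha` constant are assumptions:

```python
import numpy as np

def fuse_drifts(camera, accel, alpha=0.05):
    """Complementary filter: low-pass the camera drift, high-pass the accelerometer drift."""
    fused = np.empty_like(camera)
    cam_lp, acc_lp = camera[0], accel[0]
    for i in range(len(camera)):
        cam_lp = (1 - alpha) * cam_lp + alpha * camera[i]   # keeps slow/residual drift
        acc_lp = (1 - alpha) * acc_lp + alpha * accel[i]
        fused[i] = cam_lp + (accel[i] - acc_lp)             # adds fast motion, drops bias
    return fused

# Synthetic signals (assumed shapes): the camera estimate captures the slow
# residual drift but misses the 5 Hz oscillation; the accelerometer-derived
# estimate captures the oscillation but carries a constant integration bias.
t = np.linspace(0.0, 10.0, 1000)
true_drift = 0.01 * t + 0.01 * np.sin(2 * np.pi * 5 * t)
camera = 0.01 * t
accel = 0.01 * np.sin(2 * np.pi * 5 * t) + 0.005
fused = fuse_drifts(camera, accel)
```

The fused estimate retains the residual (static) drift from the camera channel, which is what makes post-earthquake assessment possible even when the structure does not return to its original position.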
5

Saidi, Youcef, Larbi Boumediene, Mohammed Amine Benmahdjoub, and Abdelkader Mezouar. "Smart embarked electrical network based on embedded system and monitoring camera." International Journal of Computational Science and Engineering 22, no. 1 (2020): 15. http://dx.doi.org/10.1504/ijcse.2020.10029208.

6

Benmahdjoub, Mohammed Amine, Abdelkader Mezouar, Larbi Boumediene, and Youcef Saidi. "Smart embarked electrical network based on embedded system and monitoring camera." International Journal of Computational Science and Engineering 22, no. 1 (2020): 15. http://dx.doi.org/10.1504/ijcse.2020.107236.

7

Mosqueron, Romuald, Julien Dubois, Marco Mattavelli, and David Mauvilet. "Smart Camera Based on Embedded HW/SW Coprocessor." EURASIP Journal on Embedded Systems 2008, no. 1 (2008): 597872. http://dx.doi.org/10.1155/2008/597872.

8

Cob-Parro, Antonio Carlos, Cristina Losada-Gutiérrez, Marta Marrón-Romera, Alfredo Gardel-Vicente, and Ignacio Bravo-Muñoz. "Smart Video Surveillance System Based on Edge Computing." Sensors 21, no. 9 (April 23, 2021): 2958. http://dx.doi.org/10.3390/s21092958.

Abstract:
New processing methods based on artificial intelligence (AI) and deep learning are replacing traditional computer vision algorithms. The more advanced systems can process huge amounts of data in large computing facilities. In contrast, this paper presents a smart video surveillance system that executes AI algorithms on low-power-consumption embedded devices. The computer vision algorithm, typical of surveillance applications, aims to detect, count, and track people's movements in the area. This application requires a distributed smart camera system. The proposed AI application detects people in the surveillance area using a MobileNet-SSD architecture. In addition, using a robust Kalman filter bank, the algorithm can keep track of people in the video while also providing people-counting information. The detection results are excellent considering the constraints imposed on the process. The selected architecture for the edge node is based on an UpSquared2 device that includes a vision processing unit (VPU) capable of accelerating the AI CNN inference. The results section provides information about the image-processing time when multiple video cameras are connected to the same edge node, people-detection precision and recall curves, and the energy consumption of the system. The discussion of the results shows the usefulness of deploying this smart camera node throughout a distributed surveillance system.
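A constant-velocity Kalman filter, the building block of such a filter bank, can be sketched as follows. This is illustrative only; the paper's bank, tuning, and association logic are not specified here:

```python
import numpy as np

def track(measurements, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter over noisy (x, y) detections."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    estimates = []
    for z in measurements:
        x = F @ x                        # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y                    # update
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return np.array(estimates)

rng = np.random.default_rng(1)
true_path = np.stack([np.arange(50.0), 0.5 * np.arange(50.0)], axis=1)
detections = true_path + rng.normal(0.0, 1.0, true_path.shape)  # noisy detector output
smoothed = track(detections)
```

In the paper's setting, one such filter per tracked person smooths the per-frame MobileNet-SSD detections and bridges short detection gaps.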
9

Sabri, Naseer, M. S. Salim, S. Fouad, S. Alwee Aljunid, F. T. AL-Dhief, and C. B. M. Rashidi. "Design and Implementation of an Embedded Smart Intruder Surveillance System." MATEC Web of Conferences 150 (2018): 06019. http://dx.doi.org/10.1051/matecconf/201815006019.

Abstract:
Remote and scattered valuable and sensitive locations, such as labs and offices on a university campus, need an efficient monitoring and warning system, as do scattered areas and belongings. This research presents a real-time intruder surveillance system based on a single-board computer (SBC): the design and development of a cost-effective SBC-based surveillance management system that can be deployed efficiently in remote and scattered locations such as university premises. The fusion of embedded Python code with an SBC attached to cameras, long-distance sensors, alerting circuitry, and a wireless module provides a novel, cost-effective integrated solution, and the flexibility of the SBC leaves much room for improvement and further development for pervasive remote locations. The system proved to integrate smoothly with a web application; because it is cost-effective, many units can be deployed to concisely cover remote and scattered areas such as university premises and departments. The system can be administered by a remote user, however geographically distant, from any networked workstation. The proposed solution offers efficient stand-alone operation, flexibility to upgrade, and cheap development and installation, as well as cost-effective, ubiquitous surveillance. In conclusion, the system's acceptable boundaries for successful intruder recognition and warning alerts were computed to lie between 1 m and 3 m of intruder distance from the system camera. Recognition rates of 95% and 83% were achieved, and successful warning alerts were in the range of 86-97%.
10

Zhang, Peng Ju, Gai Zhi Guo, and Zong Zuo Yu. "Application of Wireless Sensor Network in Embedded Smart Home System." Applied Mechanics and Materials 738-739 (March 2015): 74–78. http://dx.doi.org/10.4028/www.scientific.net/amm.738-739.74.

Abstract:
This paper presents an embedded smart home system solution using a wireless sensor network (WSN). The smart home system can be divided into four parts: the wireless sensor network, the embedded smart control centre, the server, and the clients. The core wireless technology of the sensor network is ZigBee. The network includes a coordinator node and sensor nodes, and is developed based on the Z-Stack protocol stack and the CC2530 wireless chip. It is mainly responsible for collecting the environmental parameters of the house and controlling the electrical equipment in the house; it also supports RFID access control and camera monitoring. The control centre communicates with the wireless sensor network through the serial port. It communicates with the server via a TCP socket and transmits data to each client, or communicates with a client directly using the wireless communication module. A partial hardware electrical diagram and software flowcharts are provided. Field use indicates that this system is economical and flexible.
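The serial link between the coordinator and the control centre can be illustrated with a toy frame codec. The frame layout below is entirely hypothetical, invented for illustration; it is neither the Z-Stack wire format nor the paper's protocol:

```python
import struct

# Hypothetical serial frame for one sensor report forwarded by the coordinator:
#   0xAA   start byte
#   u16    node id                    (little-endian)
#   i16    temperature in 0.1 degC
#   u16    relative humidity in 0.1 %RH
#   u8     checksum = sum of the field bytes mod 256

def encode_report(node_id, temp_c, rh):
    body = struct.pack("<HhH", node_id, round(temp_c * 10), round(rh * 10))
    return b"\xaa" + body + bytes([sum(body) % 256])

def decode_report(frame):
    if frame[0] != 0xAA:
        raise ValueError("bad start byte")
    body, checksum = frame[1:-1], frame[-1]
    if sum(body) % 256 != checksum:
        raise ValueError("checksum mismatch")
    node_id, temp, rh = struct.unpack("<HhH", body)
    return node_id, temp / 10.0, rh / 10.0

frame = encode_report(7, 21.5, 48.2)
print(decode_report(frame))
```

Fixed-point fields (0.1-unit resolution) and a trailing checksum are typical choices on low-bandwidth serial links like the one described.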

Dissertations / Theses on the topic "Smart camera embedded system"

1

Pélissier, Frantz. "Modélisation et développement d'une plateforme intelligente pour la capture d'images panoramiques cylindriques." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22486/document.

Abstract:
In most robotic applications, vision systems can significantly improve the perception of the environment. The panoramic view is particularly attractive because it allows omnidirectional perception. However, it is rarely used in practice because the methods that provide panoramic views also have significant drawbacks. Most omnidirectional vision systems combine a matrix camera with a mirror, rotating matrix cameras, or a wide-angle lens. The major drawbacks of this type of sensor are the great distortions of the images and the heterogeneity of the resolution. Some other methods, while providing homogeneous resolution, produce a huge data flow that is difficult to process in real time, and are either too slow or lacking in precision. To address these problems, we propose a smart panoramic vision system that presents technological improvements over other rotating linear-sensor cameras. It captures homogeneous 360-degree cylindrical panoramas with a resolution of 6600 × 2048 pixels, and a precision turntable synchronizes the angular position with acquisition. We also propose a solution to the bandwidth problem with the implementation of a feature extractor that selects only the invariant features of the images, so that the panoramic vision system delivers only relevant data at high speed. A general geometric model has been developed to describe the image-formation process, and a calibration method specially designed for rotating cylindrical systems is presented. Finally, localization and 3D-reconstruction experiments are described to show a practical use of the system in Simultaneous Localization And Mapping (SLAM) applications.
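The idea of extracting salient features on-camera so that only relevant data is transmitted can be sketched with a Harris corner response, a classic interest-point operator used here as an illustrative stand-in for the thesis's hardware extractor:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response: high values mark features worth transmitting."""
    img = img.astype(np.float64)
    # Image gradients via central differences.
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):
        """Average over a 3x3 window (structure-tensor smoothing)."""
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / 9.0

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Synthetic image: a bright square has corners; edges and flat areas do not.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
corner_score = R[8, 8]     # at a corner of the square: strongly positive
edge_score = R[8, 16]      # middle of an edge: negative
```

Transmitting only pixels with a high response (or descriptors computed around them) is the bandwidth reduction the thesis obtains in hardware.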
2

Baykent, Hayri Kerem. "Implementation Of A Low-cost Smart Camera Application On A COTS System." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12613944/index.pdf.

Abstract:
The objective of this study is to implement a low-cost smart camera application on a commercial off-the-shelf system based on Texas Instruments' DM3730 System-on-Chip processor. Although there are different architectures for smart camera applications, an ARM-plus-DSP System-on-Chip architecture was selected for implementation because of the distinct capabilities of its cores. The BeagleBoard-xM platform, which has such an ARM-plus-DSP System-on-Chip processor, was chosen as the commercial off-the-shelf platform. This thesis first describes the design steps of porting embedded Linux to the ARM core of the System-on-Chip processor in order to bring up the platform. The design steps necessary for implementing smart camera applications on both the ARM and DSP cores in parallel are then given in detail. Furthermore, the real-time image-processing performance of the BeagleBoard-xM platform for smart camera applications is evaluated with simple implementations.
3

Szczepanski, Michał. "Online stereo camera calibration on embedded systems." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC095.

Abstract:
This thesis describes an approach for the online calibration of stereo cameras on embedded systems. It introduces a new functionality for cyber-physical systems by measuring the quality of service of the calibration. The manuscript thus proposes dynamic monitoring and computation of the internal sensor parameters required for many computer vision tasks. The method improves both the safety and the efficiency of systems using stereo cameras. It prolongs the life of the devices thanks to this self-repair capability, and can increase autonomy. Systems such as mobile robots or smart glasses in particular can directly benefit from this technique.
The stereo camera is a sensor capable of providing a wide spectrum of data. Beforehand, the sensor must be extrinsically calibrated, i.e., the relative positions of the two cameras must be determined. However, this extrinsic calibration can change over time due to interactions with the external environment (shocks, vibrations, and so on), and a recalibration operation corrects these effects; misinterpreted data can otherwise lead to errors and the malfunction of applications. To counter such a scenario, the system must have an internal mechanism, a quality of service, to decide whether the current parameters are correct and/or to compute new ones if necessary.
The approach proposed in this thesis is a self-calibration method based solely on data from the observed scene, without controlled models. First of all, we consider calibration as a system process running in the background that must operate continuously and in real time. This internal calibration is not the main task of the system, but the procedure on which high-level applications rely. For this reason, system constraints severely limit the algorithm in terms of complexity, memory, and time. The proposed calibration method requires few resources and uses standard data from computer vision applications, so it is hidden within the application pipeline.
In this manuscript, many discussions are devoted to topics related to online stereo calibration on embedded systems, such as the extraction of robust points of interest, the computation of the scale factor, hardware implementation aspects, the high-level applications requiring this approach, etc. Finally, the thesis describes and explains a methodology for building a new type of dataset that represents a change of camera position, in order to validate the approach. The manuscript also explains the different work environments used in the creation of the datasets and the camera calibration procedure. In addition, it presents a first prototype of a smart helmet on which the proposed self-calibration service runs dynamically. Finally, the real-time behaviour is characterized on an embedded ARM Cortex-A7 processor.
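One cheap quality-of-service signal for a rectified stereo rig is the vertical disparity of matched points, which should stay near zero when the calibration is valid. The sketch below is a hedged illustration of that monitoring idea, not the thesis's metric; the threshold and point sets are synthetic assumptions:

```python
import numpy as np

def rectification_error(pts_left, pts_right):
    """Mean vertical disparity of matched points across a stereo pair.

    On a correctly calibrated, rectified rig, matches lie on the same image
    row; growth of this value over time signals that recalibration is needed.
    """
    return float(np.mean(np.abs(pts_left[:, 1] - pts_right[:, 1])))

THRESHOLD_PX = 1.0   # hypothetical acceptance threshold

rng = np.random.default_rng(2)
left = rng.uniform(0, 480, size=(200, 2))                          # (x, y) matches
good_right = left + np.column_stack([rng.uniform(5, 40, 200),      # horizontal disparity
                                     rng.normal(0.0, 0.2, 200)])   # tiny row noise
bad_right = left + np.column_stack([rng.uniform(5, 40, 200),
                                    rng.normal(3.0, 0.5, 200)])    # decalibrated rows

print(rectification_error(left, good_right) < THRESHOLD_PX)   # calibration acceptable
print(rectification_error(left, bad_right) < THRESHOLD_PX)    # recalibration needed
```

Because it only reuses point matches that vision applications already compute, such a check fits the thesis's constraint of hiding calibration monitoring inside the application pipeline.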
4

Hasanzadeh, Mujtaba, and Alexandra Hengl. "Real-Time Pupillary Analysis By An Intelligent Embedded System." Thesis, Mälardalens högskola, Inbyggda system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44352.

Abstract:
With no online pupillary analysis methods available today, both the medical and the research fields are left to carry out a lengthy, manual, and often faulty examination. A real-time, intelligent, embedded-systems solution to pupillary analysis would help reduce faulty diagnoses, speed up the analysis procedure by eliminating the human expert operator, and, in general, provide a versatile and highly adaptable research tool. This thesis has therefore sought to investigate, develop, and test possible system designs for pupillary analysis, with the aim of caffeine detection. A pair of LED manipulator glasses was designed to standardize the illumination method across tests. A data analysis method for the raw pupillary data was established offline and then adapted to a real-time platform. An ANN was chosen as the classification algorithm; its accuracy in the offline analysis was 94%, while for online classification the obtained accuracy was 17%. A real-time data communication and synchronization method was also developed. The resulting system showed reliable and fast execution: data analysis and classification took no longer than 2 ms, faulty-data detection showed consistent results, and data communication suffered no message loss. In conclusion, a real-time, intelligent, embedded solution is feasible for pupillary analysis.
5

Abtahi, Shabnam. "Driver Drowsiness Monitoring Based on Yawning Detection." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23295.

Abstract:
Driving while drowsy is a major cause of road accidents and exposes the driver to a much higher crash risk than driving while alert. Therefore, assistive systems that monitor a driver's level of vigilance and alert the fatigued driver can be significant in the prevention of accidents. This thesis introduces three different methods for detecting driver drowsiness based on yawning measurement. All three approaches involve several steps, including the real-time detection of the driver's face, mouth, and yawning. The last approach, which is the most accurate, is based on the Viola-Jones framework for face and mouth detection and on back-projection theory for measuring both the rate and the amount of change in the mouth for yawning detection. Test results demonstrate that the proposed system can efficiently measure the aforementioned parameters and detect the yawning state as a sign of driver drowsiness.
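Histogram back projection, the measuring step of the third method, can be sketched on synthetic gray-level patches. This is an illustrative stand-in; the thesis applies back projection to real camera frames after Viola-Jones detection:

```python
import numpy as np

def backproject(patch, hist, nbins=16):
    """Histogram back projection: per-pixel likelihood under a reference histogram."""
    bins = np.clip((patch * nbins).astype(int), 0, nbins - 1)
    return hist[bins]

def mouth_model(patch, nbins=16):
    """Normalized intensity histogram of a reference (closed-mouth) patch."""
    hist, _ = np.histogram(patch, bins=nbins, range=(0.0, 1.0), density=True)
    return hist / hist.max()

# Synthetic mouth patches in [0, 1]: lips are mid-toned; a yawning mouth opens
# onto a dark cavity, so back projection under the closed-mouth model collapses.
rng = np.random.default_rng(3)
closed = rng.uniform(0.5, 0.7, size=(20, 40))            # closed mouth: lip tones
yawning = closed.copy()
yawning[5:15, 10:30] = rng.uniform(0.0, 0.1, (10, 20))   # open mouth: dark cavity

model = mouth_model(closed)
score_closed = backproject(closed, model).mean()
score_yawn = backproject(yawning, model).mean()
print(score_closed > score_yawn)   # a large drop in score signals a yawn
```

Tracking both how far and how fast this score drops gives the "amount" and "rate" of mouth change that the thesis uses as the yawning signature.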
6

Bouderbane, Mustapha. "Système de vision à haute gamme dynamique auto adaptable." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCK048.

Abstract:
High dynamic range (HDR) image generation using temporal exposure bracketing is widely used to recover the whole dynamic range of a filmed scene by fusing two or more low dynamic range (LDR) images. The technique is meant for static scenes and cannot be applied directly to dynamic scenes: motions introduced by moving objects in the LDR image stack create ghost artifacts in the reconstructed HDR image. In this thesis, we studied and evaluated a large number of algorithms used to correct or avoid these artifacts, and we made a trade-off between robustness and complexity in selecting the ghost-removal method, in order to propose a real-time HDR video generation system (a smart camera) implemented on an FPGA circuit. This FPGA-based smart camera is presented with experimental results demonstrating the efficiency of the selected method and of the design. The proposed system generates HDR video streams, including the ghost-removal processing, at 60 frames/s for the full sensor resolution (1280 × 1024).
7

Burbano, Andres. "Système de caméras intelligentes pour l’étude en temps-réel de personnes en mouvement." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS139/document.

Abstract:
We propose a system for detecting and tracking people moving in large spaces. Our solution is based on a network of smart cameras that extracts spatio-temporal information about the observed people. Each smart camera is composed of a 3D sensor, an onboard system, and a communication and power-supply system. We showed the efficacy of placing the 3D sensors in an overhead position to reduce occlusions and scale variation. Processing runs in real time (~20 fps), detecting fast movements with a precision of up to 99% and allowing parametric filtering of unwanted targets such as children or shopping carts. Finally, we carried out a study on the use of space and a global trajectory analysis of the information recovered by our system and by other systems able to track people in large and complex spaces, as well as a study of the technological viability of the results for large spaces, making the solution ready for industrialization.
8

Boussadi, Mohamed Amine. "Conception et développement d'un circuit multiprocesseurs en ASIC dédié à une caméra intelligente." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22552/document.

Abstract:
Smart sensors today require processing components powerful enough to run algorithms at the rate of high-performance image sensors while maintaining low power consumption. Single-processor systems can no longer meet the requirements of this field. Thanks to technological advances, and building on earlier work on parallel machines, multiprocessor systems-on-chip (MPSoC) represent an attractive and promising solution. Work preceding this thesis used FPGAs as the technological target, but the results showed the limits of that target in terms of hardware resources and performance (speed in particular). This observation led to a change of target, from FPGA to ASIC, which required a deep rework of the architecture and of the IPs built around the existing method (called HNCP, for Homogeneous Network of Communicating Processors). To benefit from the performance offered by the ASIC target, the proposed multiprocessor systems rely on the flexibility of the architecture. Combined with parallel skeletons that ease programming of the architecture, the proposed circuits support real-time implementation of different classes of image processing algorithms. This work led to the fabrication of an integrated circuit, based on a single processor and its peripherals, in ST CMOS 65 nm technology with an area of about 1 mm², and to the definition of two flexible multiprocessor architectures based on the parallel-skeleton concept (a 16-core architecture in ST CMOS 65 nm and a 64-core architecture in ST CMOS FD-SOI 28 nm).
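The parallel skeletons mentioned in this abstract are a general programming model: predefined patterns such as "farm" or "map" that hide the communication plumbing from the programmer. The thesis's HNCP skeletons target an on-chip network of processors, but the idea can be illustrated, purely as an analogy and not as the thesis's actual code, with a thread-pool "farm" in Python:

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, n_workers=4):
    """Minimal 'farm' skeleton: fan identical tasks out to n workers,
    collect the results in submission order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, tasks))

# Illustrative use: apply a per-tile operation over tiles of a frame.
# (The tile operation here is an invented stand-in, not from the thesis.)
tiles = [list(range(i, i + 4)) for i in range(0, 16, 4)]
results = farm(lambda tile: [x * x for x in tile], tiles)
```

On a real MPSoC, the skeleton would additionally fix the mapping of workers to cores and the inter-processor communication, which is what the HNCP IPs provide in hardware.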
APA, Harvard, Vancouver, ISO, and other styles
9

Uddin-Al-Hasan, Main. "Real-time Embedded Panoramic Imaging for Spherical Camera System." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2518.

Full text
Abstract:
Panoramas, or stitched images, are used in topographical mapping, panoramic 3D reconstruction, deep-space exploration image processing, medical image processing, multimedia broadcasting, system automation, photography and numerous other fields. Generating real-time panoramic images on a small embedded computer is of particular importance, since the result is a lighter, smaller and more mobile imaging system; such lightweight panoramic imaging systems are also used for various kinds of industrial or home inspection. A real-time handheld panorama imaging system was developed using embedded real-time Linux as the software module and a Gumstix Overo and PandaBoard ES as the hardware module. The proposed algorithm takes 62.66 milliseconds to generate a panorama frame from three images using a homography matrix, i.e. roughly 16 frames per second, and could run considerably faster with a more optimal homography matrix. During development, Ångström Linux and Ubuntu Linux were used as the operating systems on the Gumstix Overo and PandaBoard ES respectively, and a real-time kernel patch was applied to configure the non-real-time Linux distributions for real-time operation. The serial-communication tools C-Kermit and Minicom were used for terminal emulation between the development computer and the embedded computer. The software framework of the system comprises the UVC driver, the V4L/V4L2, OpenCV and FFmpeg APIs, GStreamer, x264, CMake and Make, together with a stitching algorithm adapted from available stitching methods with the necessary modifications. The proposed stitching process automatically determines the motion model of the spherical camera system and saves the matrix in a lookup file; the homography matrix is then read back from that file and used to generate real-time panorama images.
The developed system generates a real-time 180° panorama image from a spherical camera system. In addition, a test environment was developed to experiment with calibration and real-time stitching under different image parameters: it accepts images of different resolutions as input and produces high-quality real-time panoramas. The Qt framework was used to build a multifunctional standalone application with functions for displaying algorithm performance in real time through data visualization, for camera-system calibration, and for other stitching options; the software runs on both Linux and Windows. Moreover, the system was realized as a prototype chimney-inspection system for a local company.
Main Uddin-Al-Hasan, E-mail: main.hasan@gmail.com
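The reuse of a stored homography described in this abstract can be sketched with NumPy alone. The fragment below is illustrative (the function name and the example matrix are invented here, not taken from the thesis); it shows how a 3×3 homography H maps pixel coordinates via homogeneous coordinates:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # back to Cartesian

# A homography estimated once and saved to a file can be reloaded per frame.
H = np.array([[1.0, 0.0, 320.0],   # here: a pure horizontal shift of 320 px,
              [0.0, 1.0, 0.0],     # standing in for a real estimated matrix
              [0.0, 0.0, 1.0]])
print(warp_points(H, [[0, 0], [100, 50]]))  # both points shift right by 320 px
```

In the actual pipeline the matrix would be estimated once from feature matches, written to the file, and then applied per frame in exactly this way, warping whole images rather than individual points.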
APA, Harvard, Vancouver, ISO, and other styles
10

Birem, Merwan. "Localisation et détection de fermeture de boucle basées saillance visuelle : algorithmes et architectures matérielles." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22558/document.

Full text
Abstract:
In several robotics tasks, vision is considered the essential element through which perception of the environment or interaction with other users is realized. However, artifacts potentially present in captured images make the recognition and interpretation of visual information extremely complicated. It is therefore very important to use robust, stable primitives with a high repeatability rate in order to achieve good performance. This thesis deals with the problems of localization and loop-closure detection for a mobile robot using visual saliency. The accuracy and efficiency of the localization and loop-closure detection applications are evaluated and compared with state-of-the-art approaches on different sequences of images acquired in outdoor environments. The main drawback of the models proposed for extracting salient regions is their computational complexity, which leads to significant processing times. To obtain real-time processing, this thesis also presents the implementation of the salient-region detector on the reconfigurable platform DreamCam.
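As a concrete illustration of why salient-region extraction is computationally heavy, and therefore a natural candidate for the hardware acceleration pursued in this thesis, here is a NumPy sketch of one well-known lightweight saliency model, the spectral residual of Hou and Zhang. This is an assumed stand-in for exposition, not the specific saliency model used in the thesis:

```python
import numpy as np

def spectral_residual_saliency(img, k=3):
    """Spectral-residual saliency map (Hou & Zhang style) for a 2-D grayscale image."""
    f = np.fft.fft2(img.astype(float))
    amp = np.abs(f) + 1e-8          # amplitude spectrum (epsilon avoids log(0))
    log_amp = np.log(amp)
    phase = np.angle(f)
    # Local average of the log-amplitude spectrum via a k x k box filter.
    h, w = log_amp.shape
    pad = np.pad(log_amp, k // 2, mode="edge")
    avg = np.zeros_like(log_amp)
    for dy in range(k):
        for dx in range(k):
            avg += pad[dy:dy + h, dx:dx + w]
    avg /= k * k
    residual = log_amp - avg        # the "spectral residual"
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)  # normalize to [0, 1]

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0             # small bright square on a dark background
sal = spectral_residual_saliency(img)
```

Even this "cheap" model needs two full-image FFTs per frame; the bio-inspired saliency models typically evaluated in such work involve multi-scale filter banks and are costlier still, hence the interest in FPGA platforms like the DreamCam.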
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Smart camera embedded system"

1

Bobda, Christophe, and Senem Velipasalar. Distributed Embedded Smart Cameras: Architectures, Design and Applications. Springer, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bobda, Christophe, and Senem Velipasalar. Distributed Embedded Smart Cameras: Architectures, Design and Applications. Springer, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

United States. National Aeronautics and Space Administration., ed. Robust, Brillouin active embedded fiber-is-the-sensor system in smart composite structures: Grant no. NAG-1835, April 22, 1996-April 21, 1997, annual progress report. [Washington, DC]: National Aeronautics and Space Administration, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Joshi, Mahesh K., and J. R. Klein. Lifestyle Innovations Generating New Businesses. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198827481.003.0015.

Full text
Abstract:
Life-altering technology is not only improving our lifestyle but also creating new business models. Integration of technology into everyday life is a primary driver of changes in lifestyle. Whether visible or not, today’s technology is everywhere. Consumers come home from work to a smart house that greets them with music, emails them the foods the refrigerator needs, and through spatial phase imaging technology senses their mood. Without human intervention it changes its presentation based on data indicators embedded in everything. The house recognizes mood and compares it with past behaviors, facial reactions, timeline, and acts accordingly. All this happens through a standard security camera with pixelate three-dimensional technology. The same technology can identify the anti-social elements in a crowd, enhance security at any public event venue, and allow doctors to see under our skin without intrusion.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Smart camera embedded system"

1

Pelissier, Frantz, and François Berry. "Design of a Real-Time Embedded Stereo Smart Camera." In Advanced Concepts for Intelligent Vision Systems, 344–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17688-3_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ozer, Burak. "The Journey of a Project Through the Eyes of a Smart Camera." In Embedded, Cyber-Physical, and IoT Systems, 233–44. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16949-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fularz, Michał, Marek Kraft, Adam Schmidt, and Andrzej Kasiński. "The Architecture of an Embedded Smart Camera for Intelligent Inspection and Surveillance." In Advances in Intelligent Systems and Computing, 43–52. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15796-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

El Zant, Chawki, Quentin Charrier, Khaled Benfriha, and Patrick Le Men. "Enhanced Manufacturing Execution System “MES” Through a Smart Vision System." In Lecture Notes in Mechanical Engineering, 329–34. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70566-4_52.

Full text
Abstract:
The level of industrial performance is a vital issue for any company wishing to develop and gain market share. This article presents a novel approach that integrates intelligent visual inspection into manufacturing execution system (MES) control in order to gain performance. The idea is to deploy an intelligent image-processing system using in-situ cameras to monitor the production system. Images are analyzed in real time via machine learning, interpreting the visualized scene and interacting with MES features such as maintenance, quality control, security and operations. This technological building block, combined with production flexibility, helps optimize the system's autonomy and responsiveness in detecting anomalies, whether already encountered or new. The smart visual-inspection system is treated as a Cyber-Physical System (CPS) component integrated into the manufacturing system and acting as an edge-computing node in the final platform architecture: it provides the first level of real-time computation and analysis through embedded intelligence, while cloud computing is envisaged as a second, deferred-time level of computation for analyzing newly encountered anomalies and identifying potential solutions to integrate into the MES. Ultimately, this approach strengthens the robustness of the control systems and increases the overall performance of industrial production.
APA, Harvard, Vancouver, ISO, and other styles
5

Ahmadinia, Ali, and David Watson. "A Survey of Systems-on-Chip Solutions for Smart Cameras." In Distributed Embedded Smart Cameras, 25–41. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7705-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mefenza, Michael, Franck Yonga, and Christophe Bobda. "Design and Verification Environment for High-Performance Video-Based Embedded Systems." In Distributed Embedded Smart Cameras, 69–90. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7705-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ozcan, Koray, Anvith Mahabalagiri, and Senem Velipasalar. "Automatic Fall Detection and Activity Classification by a Wearable Camera." In Distributed Embedded Smart Cameras, 151–72. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7705-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lee, Kuan-Hui, Chun-Te Chu, Younggun Lee, Zhijun Fang, and Jenq-Neng Hwang. "Consistent Human Tracking Over Self-organized and Scalable Multiple-camera Networks." In Distributed Embedded Smart Cameras, 189–209. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7705-1_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shubiksha, T. V., S. Karthick, M. Mohammad Sharukh, M. Naveen, and K. Shanthi. "Smart Irrigation Using Embedded System." In Advances in Automation, Signal Processing, Instrumentation, and Control, 725–34. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8221-9_66.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Christie, Marc, Fabrice Lamarche, and Frédéric Benhamou. "A Spatio-temporal Reasoning System for Virtual Camera Planning." In Smart Graphics, 119–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02115-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Smart camera embedded system"

1

Dube, Swaraj, Khor Jeen Ghee, Wong Weng Onn, and Quek Zhen Han. "Embedded user interface for smart camera." In 2017 7th IEEE International Conference on System Engineering and Technology (ICSET). IEEE, 2017. http://dx.doi.org/10.1109/icsengt.2017.8123416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

"Intelligent Video Analysis Algorithm Embedded Smart Camera System." In Annual International Conference on Intelligent Computing, Computer Science and Information Systems. International Academy of Engineers, 2016. http://dx.doi.org/10.15242/iae.iae0416017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Burbano, Andres. "3D-Sensing Distributed Embedded System for the Study of Human Kinetic Behavior." In ICDSC '16: 10th international conference on distributed smart camera. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2967413.2974032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mhalla, Ala, Thierry Chateau, Sami Gazzah, and Najoua Essoukri Ben Amara. "A Faster R-CNN Multi-Object Detector on a Nvidia Jetson TX1 Embedded System." In ICDSC '16: 10th international conference on distributed smart camera. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2967413.2974033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Leon-Salas, W. D., Senem Velipasalar, Nathan Schemm, and Sina Balkir. "A Low-Cost, Tiled Embedded Smart Camera System for Computer Vision Applications." In 2007 First ACM/IEEE International Conference on Distributed Smart Cameras. IEEE, 2007. http://dx.doi.org/10.1109/icdsc.2007.4357515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Benamara, Adel, Mihaela Scuturici, and Serge Miguet. "Multiple Object Tracking on Smart Embedded Camera For Automated Conveying Systems." In ICDSC 2017: International Conference on Distributed Smart Cameras. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3131885.3131919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Giordano, Marco, Philipp Mayer, and Michele Magno. "A Battery-Free Long-Range Wireless Smart Camera for Face Detection." In SenSys '20: The 18th ACM Conference on Embedded Networked Sensor Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3417308.3430273.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pantho, Md Jubaer Hossain, Festus Hategekimana, and Christophe Bobda. "A System on FPGA for Fast Handwritten Digit Recognition in Embedded Smart Cameras." In ICDSC 2017: International Conference on Distributed Smart Cameras. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3131885.3131927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pandey, J. G., A. Karmakar, and C. Shekhar. "An embedded architecture for implementation of a video acquisition module of a smart camera system." In 2012 International Conference on Devices, Circuits and Systems (ICDCS 2012). IEEE, 2012. http://dx.doi.org/10.1109/icdcsyst.2012.6188702.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Shariff, Saleem Ulla, Maheboob Hussain, and Mohammed Farhaan Shariff. "Smart unusual event detection using low resolution camera for enhanced security." In 2017 4th International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS). IEEE, 2017. http://dx.doi.org/10.1109/iciiecs.2017.8275833.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Smart camera embedded system"

1

Agamy, Mohammed. Module Embedded Micro-inverter Smart Grid Ready Residential Solar Electric System. Office of Scientific and Technical Information (OSTI), October 2015. http://dx.doi.org/10.2172/1350097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Khorrami, F., S. U. Pillai, and S. Nourbakhsh. Modeling, Identification, and Control Design for a Flexible Pointing System with Embedded Smart Materials. Fort Belvoir, VA: Defense Technical Information Center, July 1997. http://dx.doi.org/10.21236/ada328831.

Full text
APA, Harvard, Vancouver, ISO, and other styles