A selection of scholarly literature on the topic "Multi-camera tracking data"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Multi-camera tracking data".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in your preferred citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Multi-camera tracking data"

1

Mahalakshmi, N., and S. R. Saranya. "Robust Visual Tracking for Multiple Targets with Data Association and Track Management." International Journal of Advance Research and Innovation 3, no. 2 (2015): 68–71. http://dx.doi.org/10.51976/ijari.321516.

Full text of the source
Abstract:
Multi-object tracking is still a challenging task in computer vision. A robust approach is proposed to realize multi-object tracking using camera networks. Detection algorithms are used to detect object regions with confidence scores, which initialize individual particle filters. Since data association is the key issue in the tracking-by-detection paradigm, an efficient HOG descriptor and an SVM classifier are used for tracking multiple objects. Tracking within a single camera is realized by a greedy matching method. Afterwards, 3D positions are obtained from the rectangular relationship between objects, and corresponding objects are tracked across cameras to exploit the advantages of camera-based tracking. The proposed algorithm runs online, requires no prior information about the scene, no restrictions on entry and exit zones, and no assumptions about the areas in which objects move, and it can be extended to any class of object. Experimental results show the benefit of the camera network through higher accuracy in tracking and detecting objects.
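The greedy matching step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the Euclidean cost and the `max_dist` threshold are assumptions:

```python
import math

def greedy_match(tracks, detections, max_dist=50.0):
    """Greedily associate tracks with detections, cheapest pair first.

    tracks, detections: lists of (x, y) positions.
    Returns {track_index: detection_index} for all accepted pairs.
    """
    # Enumerate every track/detection pair, sorted by Euclidean distance.
    pairs = sorted(
        (math.dist(t, d), ti, di)
        for ti, t in enumerate(tracks)
        for di, d in enumerate(detections)
    )
    matches, used_tracks, used_dets = {}, set(), set()
    for dist, ti, di in pairs:
        if dist > max_dist:
            break  # all remaining pairs are even farther apart
        if ti in used_tracks or di in used_dets:
            continue  # one side of this pair is already matched
        matches[ti] = di
        used_tracks.add(ti)
        used_dets.add(di)
    return matches
```

Greedy matching is suboptimal compared with a global assignment (e.g. the Hungarian algorithm) but is simple and fast, which suits the online setting the abstract describes.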
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Bamrungthai, Pongsakon, and Viboon Sangveraphunsiri. "CU-Track: A Multi-Camera Framework for Real-Time Multi-Object Tracking." Applied Mechanics and Materials 415 (September 2013): 325–32. http://dx.doi.org/10.4028/www.scientific.net/amm.415.325.

Full text of the source
Abstract:
This paper presents CU-Track, a multi-camera framework for real-time multi-object tracking. The developed framework includes a processing unit, the target object, and the multi-object tracking algorithm. A PC cluster has been developed as the processing unit of the framework to process data in real time. To set up the PC cluster, two PCs are connected with PCI interface cards so that memory can be shared between them, ensuring high-speed data transfer and low latency. A novel mechanism for PC-to-PC communication is proposed, realized by a dedicated software processing module called the Cluster Module. Six processing modules have been implemented to realize system operations such as camera calibration, camera synchronization, and 3D reconstruction of each target. Multiple spherical objects of the same size are used as the targets to be tracked; two configurations, active and passive, can be used for tracking by the system. The tracking algorithm is based on a Kalman filter and nearest neighbor searching. Two applications have been implemented on the system, which confirm the validity and effectiveness of the developed framework.
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Sharma, Anil, Saket Anand, and Sanjit K. Kaul. "Reinforcement Learning Based Querying in Camera Networks for Efficient Target Tracking." Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 555–63. http://dx.doi.org/10.1609/icaps.v29i1.3522.

Full text of the source
Abstract:
Surveillance camera networks are a useful monitoring infrastructure that can be used for various visual analytics applications, where high-level inferences and predictions could be made based on target tracking across the network. Most multi-camera tracking works focus on re-identification and trajectory association problems. However, as camera networks grow in size, the volume of data generated is enormous, and scalable processing of this data is imperative for deploying practical solutions. In this paper, we address the largely overlooked problem of scheduling cameras for processing by selecting the one where the target is most likely to appear next. The inter-camera handover can then be performed on the selected cameras via re-identification or another target association technique. We model this scheduling problem using reinforcement learning and learn the camera selection policy using Q-learning. We do not assume knowledge of the camera network topology, but we observe that the resulting policy implicitly learns it. We evaluate our approach on the NLPR MCT dataset, a real multi-camera multi-target tracking benchmark, and show that the proposed policy substantially reduces the number of frames that must be processed, at the cost of a small reduction in recall.
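A tabular Q-learning camera-selection policy of the kind described above can be sketched on a toy problem. The deterministic ring topology, the +1 reward for querying the correct next camera, and the learning rates are illustrative assumptions, not the paper's NLPR MCT setup:

```python
import random

def train_camera_policy(transitions, n_cams, episodes=2000,
                        alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn which camera to query next, given the camera the target is in.

    transitions[c] is the camera the target actually moves to after c
    (a deterministic toy topology); the reward is +1 for querying it.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_cams for _ in range(n_cams)]  # Q[state][action]
    for _ in range(episodes):
        s = rng.randrange(n_cams)
        # epsilon-greedy choice of which camera to query next
        if rng.random() < eps:
            a = rng.randrange(n_cams)
        else:
            a = max(range(n_cams), key=lambda c: Q[s][c])
        s_next = transitions[s]
        r = 1.0 if a == s_next else 0.0
        # standard Q-learning update toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
    # Greedy policy: the camera to select from each state
    return [max(range(n_cams), key=lambda c: Q[s][c]) for s in range(n_cams)]
```

On the 3-camera ring `[1, 2, 0]`, the learned policy recovers the topology without it ever being given explicitly, mirroring the paper's observation that the policy implicitly learns the network structure.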
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Cant, Olivia, Stephanie Kovalchik, Rod Cross, and Machar Reid. "Validation of ball spin estimates in tennis from multi-camera tracking data." Journal of Sports Sciences 38, no. 3 (November 29, 2019): 296–303. http://dx.doi.org/10.1080/02640414.2019.1697189.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Nikodem, Maciej, Mariusz Słabicki, Tomasz Surmacz, Paweł Mrówka, and Cezary Dołęga. "Multi-Camera Vehicle Tracking Using Edge Computing and Low-Power Communication." Sensors 20, no. 11 (June 11, 2020): 3334. http://dx.doi.org/10.3390/s20113334.

Full text of the source
Abstract:
Typical approaches to visual vehicle tracking across a large area require several cameras and complex algorithms to detect, identify, and track the vehicle route. Due to memory requirements, computational complexity, and hardware constraints, the video images are transmitted to a dedicated workstation equipped with powerful graphics processing units. However, this requires large volumes of data to be transmitted and may raise privacy issues. This paper presents dedicated deep-learning detection and tracking algorithms that can run directly on the camera's embedded system. This method significantly reduces the stream of data from the cameras, lowers the required communication bandwidth, and expands the range of usable communication technologies. Consequently, it allows short-range radio communication to transmit vehicle-related information directly between the cameras and implements multi-camera tracking directly in the cameras. The proposed solution includes detection and tracking algorithms and a dedicated low-power short-range communication scheme for multi-target multi-camera tracking systems that can be applied in parking and intersection scenarios. System components were evaluated in various scenarios, including different environmental and weather conditions.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Lyu, Pengfei, Minxiang Wei, and Yuwei Wu. "Multi-Vehicle Tracking Based on Monocular Camera in Driver View." Applied Sciences 12, no. 23 (November 30, 2022): 12244. http://dx.doi.org/10.3390/app122312244.

Full text of the source
Abstract:
Multi-vehicle tracking is used in advanced driver assistance systems to track obstacles, which is fundamental for high-level tasks. It requires real-time performance while dealing with object illumination variations and deformations. To this end, we propose a novel multi-vehicle tracking algorithm based on a monocular camera in driver view. It follows the tracking-by-detection paradigm and integrates detection and appearance descriptors into a single network. The one-stage detection approach consists of a backbone, a modified BiFPN as a neck layer, and three prediction heads. The data association consists of a two-step matching strategy together with a Kalman filter. Experimental results demonstrate that the proposed approach outperforms state-of-the-art algorithms. It is also able to solve the tracking problem in driving scenarios while maintaining 16 FPS on the test dataset.
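A two-step matching strategy like the one named in the abstract can be illustrated as below. This is a sketch under assumed inputs: detection confidence scores split candidates into a high-score first pass and a low-score second pass, and a Euclidean distance stands in for the paper's (unspecified here) association cost:

```python
import math

def two_step_match(tracks, detections, scores, score_thr=0.5, dist_thr=30.0):
    """Two-step association: match confident detections first, then give
    still-unmatched tracks a second chance against low-score detections.
    """
    def match(track_ids, det_ids, matches):
        for ti in list(track_ids):
            cands = [(math.dist(tracks[ti], detections[di]), di)
                     for di in det_ids]
            if not cands:
                continue
            d, di = min(cands)           # nearest remaining detection
            if d <= dist_thr:
                matches[ti] = di
                track_ids.remove(ti)
                det_ids.remove(di)

    high = {i for i, s in enumerate(scores) if s >= score_thr}
    low = set(range(len(scores))) - high
    unmatched = set(range(len(tracks)))
    matches = {}
    match(unmatched, high, matches)   # step 1: high-confidence detections
    match(unmatched, low, matches)    # step 2: leftover tracks vs. the rest
    return matches
```

The second pass is what lets partially occluded or blurred vehicles (which tend to score low) keep their track identity instead of being dropped.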
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Straw, Andrew D., Kristin Branson, Titus R. Neumann, and Michael H. Dickinson. "Multi-camera real-time three-dimensional tracking of multiple flying animals." Journal of The Royal Society Interface 8, no. 56 (July 14, 2010): 395–409. http://dx.doi.org/10.1098/rsif.2010.0230.

Full text of the source
Abstract:
Automated tracking of animal movement allows analyses that would not otherwise be possible by providing great quantities of data. The additional capability of tracking in real time—with minimal latency—opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behaviour. Here, we describe a system capable of tracking the three-dimensional position and body orientation of animals such as flies and birds. The system operates with less than 40 ms latency and can track multiple animals simultaneously. To achieve these results, a multi-target tracking algorithm was developed based on the extended Kalman filter and the nearest neighbour standard filter data association algorithm. In one implementation, an 11-camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers. This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system. An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster. At low contrasts, speed is more variable and faster on average than at high contrasts. Thus, the system is already a useful tool to study the neurobiology and behaviour of freely flying animals. If combined with other techniques, such as ‘virtual reality’-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals.
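The predict/associate/update cycle underlying such trackers can be illustrated with a scalar constant-velocity filter. The alpha-beta filter below is a deliberately simplified stand-in for the paper's extended Kalman filter, and its gains are illustrative:

```python
def alpha_beta_track(measurements, dt=1.0 / 60, alpha=0.85, beta=0.05):
    """Track a scalar position under a constant-velocity motion model.

    Each frame: predict the position forward, then correct the prediction
    toward the new measurement via the innovation (residual).
    """
    x, v = measurements[0], 0.0  # initialise on the first detection
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict with constant velocity
        residual = z - x_pred        # innovation
        x = x_pred + alpha * residual
        v = v + (beta / dt) * residual
        estimates.append(x)
    return estimates
```

Fed measurements from steady linear motion, the estimates converge toward the true positions within a few frames; the real system additionally maintains full covariances (EKF) and resolves which fly produced which measurement before each update.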
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Yi, Chunlei, Kunfan Zhang, and Nengling Peng. "A multi-sensor fusion and object tracking algorithm for self-driving vehicles." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 233, no. 9 (August 2019): 2293–300. http://dx.doi.org/10.1177/0954407019867492.

Full text of the source
Abstract:
Vehicles need to detect threats on the road, anticipate emerging dangerous driving situations, and take proactive actions for collision avoidance. Therefore, the study of target detection and recognition methods is of practical value to a self-driving system. However, any single sensor has its weaknesses, such as the poor weather adaptability of lidar and camera. In this article, we propose a novel spatial calibration method for multi-sensor systems based on rotation and translation of the coordinate system. The validity of the proposed spatial calibration method is tested through comparisons with calibrated data. In addition, a target-level multi-sensor fusion and object tracking algorithm for detecting and recognizing targets is tested. The sensors comprise lidar, radar, and camera. The fusion algorithm takes advantage of the strengths of each sensor, such as target location from lidar, target velocity from radar, and target type from camera. Moreover, multi-sensor fusion and object tracking can achieve information redundancy and increase environmental adaptability. Compared with the results of a single sensor, the new approach is verified on real data to provide accurate location, velocity, and recognition.
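The rotation-and-translation calibration described above amounts to a rigid transform of each sensor's measurements into a common vehicle frame. A minimal 2D sketch follows; the frame names and the yaw-only rotation are simplifying assumptions (real calibrations use a full 3D rotation):

```python
import math

def sensor_to_vehicle(point, yaw_deg, translation):
    """Transform a 2D point from a sensor frame into the vehicle frame.

    Rotate through the sensor's mounting yaw, then translate by its
    mounting offset: p_vehicle = R(yaw) @ p_sensor + t.
    """
    yaw = math.radians(yaw_deg)
    c, s = math.cos(yaw), math.sin(yaw)
    x, y = point
    tx, ty = translation
    return (c * x - s * y + tx, s * x + c * y + ty)
```

Once lidar, radar, and camera detections all live in the same vehicle frame, the target-level fusion (position from lidar, velocity from radar, class from camera) reduces to associating and merging per-target records.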
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Wu, Zhihong, Fuxiang Li, Yuan Zhu, Ke Lu, and Mingzhi Wu. "Design of a Robust System Architecture for Tracking Vehicle on Highway Based on Monocular Camera." Sensors 22, no. 9 (April 27, 2022): 3359. http://dx.doi.org/10.3390/s22093359.

Full text of the source
Abstract:
Multi-target tracking is a central aspect of modeling the environment of autonomous vehicles. A mono camera is a necessary component in an autonomous driving system. One of its biggest advantages is that it can identify the vehicle type, and cameras are the only sensors able to interpret 2D information such as road signs or lane markings. Besides this, it has the advantage of estimating the lateral velocity of a moving object. The mono camera is now being used by companies all over the world to build autonomous vehicles. In the expressway scenario, the forward-looking camera generates raw images from which information is extracted, ultimately tracking multiple vehicles at the same time. This requires a multi-object tracking system composed of a convolutional neural network module, a depth estimation module, a kinematic state estimation module, a data association module, and a track management module. This paper applies the YOLO detection algorithm combined with a depth estimation algorithm, an Extended Kalman Filter, and a nearest-neighbor algorithm with a gating trick to build the tracking system. Finally, the tracking system is tested on a vehicle equipped with a forward mono camera, and the results show that the lateral and longitudinal position and velocity satisfy the needs of Adaptive Cruise Control (ACC), Navigation On Pilot (NOP), Automatic Emergency Braking (AEB), and other applications.
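The "gating trick" for nearest-neighbor association typically validates candidate detections with a chi-square threshold on the squared Mahalanobis distance before the nearest one is accepted. A minimal sketch, where the identity innovation covariance is an assumption:

```python
def nn_with_gate(pred, detections, gate=9.21,
                 inv_cov=((1.0, 0.0), (0.0, 1.0))):
    """Pick the detection nearest to a track's predicted 2D position,
    rejecting any candidate outside the validation gate.

    The squared Mahalanobis distance is compared against a chi-square
    threshold (9.21 is the 99% quantile with 2 degrees of freedom).
    """
    best, best_d2 = None, gate
    for i, (zx, zy) in enumerate(detections):
        dx, dy = zx - pred[0], zy - pred[1]
        # d^2 = [dx dy] @ inv_cov @ [dx dy]^T
        d2 = (dx * (inv_cov[0][0] * dx + inv_cov[0][1] * dy)
              + dy * (inv_cov[1][0] * dx + inv_cov[1][1] * dy))
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best  # None if every detection fails the gate
```

Gating keeps implausible detections from ever competing for a track, which is what makes plain nearest-neighbor association workable at highway speeds.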
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Gai, Wei, Meng Qi, Mingcong Ma, Lu Wang, Chenglei Yang, Juan Liu, Yulong Bian, Gerard de Melo, Shijun Liu, and Xiangxu Meng. "Employing Shadows for Multi-Person Tracking Based on a Single RGB-D Camera." Sensors 20, no. 4 (February 15, 2020): 1056. http://dx.doi.org/10.3390/s20041056.

Full text of the source
Abstract:
Although there are many algorithms to track people that are walking, existing methods mostly fail to cope with occluded bodies in the setting of multi-person tracking with one camera. In this paper, we propose a method to use people’s shadows as a clue to track them instead of treating shadows as mere noise. We introduce a novel method to track multiple people by fusing shadow data from the RGB image with skeleton data, both of which are captured by a single RGB Depth (RGB-D) camera. Skeletal tracking provides the positions of people that can be captured directly, while their shadows are used to track them when they are no longer visible. Our experiments confirm that this method can efficiently handle full occlusions. It thus has substantial value in resolving the occlusion problem in multi-person tracking, even with other kinds of cameras.
Styles: APA, Harvard, Vancouver, ISO, etc.
More sources

Dissertations on the topic "Multi-camera tracking data"

1

Mikić, Ivana. "Human body model acquisition and tracking using multi-camera voxel data /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2002. http://wwwlib.umi.com/cr/ucsd/fullcit?p3036991.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text of the source
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of tracking performance is often done in staged traffic scenarios, where additional sensors mounted on the vehicles are used to obtain their true positions and velocities. The difficulty of evaluating tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing instead of filtering, to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle.
For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
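The filtering-versus-smoothing comparison at the heart of this thesis can be illustrated on a scalar random-walk model: the forward pass is the causal Kalman filter, and the backward Rauch-Tung-Striebel (RTS) pass uses future measurements to refine earlier estimates. The noise parameters below are illustrative:

```python
def kalman_filter_scalar(zs, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Causal (forward) Kalman filter for a scalar random-walk state."""
    xs, ps, xs_pred, ps_pred = [], [], [], []
    x, p = x0, p0
    for z in zs:
        x_pred, p_pred = x, p + q      # predict: state persists, variance grows
        k = p_pred / (p_pred + r)      # Kalman gain
        x = x_pred + k * (z - x_pred)  # update with the measurement
        p = (1.0 - k) * p_pred
        xs.append(x); ps.append(p)
        xs_pred.append(x_pred); ps_pred.append(p_pred)
    return xs, ps, xs_pred, ps_pred

def rts_smoother_scalar(xs, ps, xs_pred, ps_pred):
    """Non-causal RTS pass: refine the filtered estimates backward in time."""
    xs_smooth = xs[:]
    for t in range(len(xs) - 2, -1, -1):
        g = ps[t] / ps_pred[t + 1]  # smoother gain
        xs_smooth[t] = xs[t] + g * (xs_smooth[t + 1] - xs_pred[t + 1])
    return xs_smooth
```

On recorded measurements of a roughly constant state, the smoothed estimate of the earliest sample is markedly closer to the truth than the filtered one, because the backward pass propagates information from later, more certain states; this is exactly the mechanism the thesis exploits to build validation data without extra sensors.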
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Betrabet, Siddhant Srinath. "Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3D LIDAR and Multi-Camera Setup." Thesis, 2021.

Find the full text of the source
Abstract:

Analyzing behaviors of objects on the road is a complex task that requires data from various sensors and their fusion to recreate the movement of objects with a high degree of accuracy. A data collection and processing system is thus needed to track the objects accurately in order to make an accurate and clear map of object trajectories relative to the various coordinate frame(s) of interest in the map. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are tasks that need to be achieved in conjunction to create a clear map of the road comprising the moving and static objects.

These computational problems are commonly solved and used to aid scenario reconstruction for the objects of interest. The tracking of objects can be done in various ways, utilizing sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, as well as Inertial Navigation System (INS) rigs. One relatively common method for solving DATMO and SLAM utilizes a 3D LIDAR and multiple monocular cameras in conjunction with an inertial measurement unit (IMU); this provides redundancies that maintain object classification and tracking, with the help of sensor fusion, in cases where sensor-specific traditional algorithms prove ineffectual because a sensor falls short due to its limitations. The use of the IMU and sensor fusion methods largely eliminates the need for an expensive INS rig. Fusing these sensors allows tracking to exploit the maximum potential of each sensor while increasing perceptual accuracy.

The focus of this thesis will be the dock-less e-scooter, and the primary goal will be to track its movements effectively and accurately with respect to cars on the road and the world. Since it is relatively more common to observe a car on the road than an e-scooter, we propose a data collection system that can be built on top of an e-scooter and an offline processing pipeline that can be used to collect data in order to understand the behaviors of the e-scooters themselves. In this thesis, we plan to explore a data collection system involving a 3D LIDAR sensor, multiple monocular cameras, and an IMU mounted on an e-scooter, as well as an offline method for processing the data to generate data that aids scenario reconstruction.


Styles: APA, Harvard, Vancouver, ISO, etc.
4

Betrabet, Siddhant S. "Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3d Lidar and Multi-Camera Setup." Thesis, 2020. http://hdl.handle.net/1805/24776.

Full text of the source
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Analyzing behaviors of objects on the road is a complex task that requires data from various sensors and their fusion to recreate the movement of objects with a high degree of accuracy. A data collection and processing system is thus needed to track the objects accurately in order to make an accurate and clear map of object trajectories relative to the various coordinate frame(s) of interest in the map. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are tasks that need to be achieved in conjunction to create a clear map of the road comprising the moving and static objects. These computational problems are commonly solved and used to aid scenario reconstruction for the objects of interest. The tracking of objects can be done in various ways, utilizing sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, as well as Inertial Navigation System (INS) rigs. One relatively common method for solving DATMO and SLAM utilizes a 3D LIDAR and multiple monocular cameras in conjunction with an inertial measurement unit (IMU); this provides redundancies that maintain object classification and tracking, with the help of sensor fusion, in cases where sensor-specific traditional algorithms prove ineffectual because a sensor falls short due to its limitations. The use of the IMU and sensor fusion methods largely eliminates the need for an expensive INS rig. Fusing these sensors allows tracking to exploit the maximum potential of each sensor while increasing perceptual accuracy. The focus of this thesis will be the dock-less e-scooter, and the primary goal will be to track its movements effectively and accurately with respect to cars on the road and the world. Since it is relatively more common to observe a car on the road than an e-scooter, we propose a data collection system that can be built on top of an e-scooter and an offline processing pipeline that can be used to collect data in order to understand the behaviors of the e-scooters themselves. In this thesis, we plan to explore a data collection system involving a 3D LIDAR sensor, multiple monocular cameras, and an IMU mounted on an e-scooter, as well as an offline method for processing the data to generate data that aids scenario reconstruction.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Cant, Olivia. "Exploring the effects of ball speed and spin in Grand Slam tennis match-play." Thesis, 2020. https://vuir.vu.edu.au/42175/.

Full text of the source
Abstract:
This thesis featured modern technology to investigate the effect of ball speed and spin on aspects of on-court hitting performance. Adjusting a shot’s ball flight – be that in the form of speed and/or spin – is a tennis tactic that features in almost every point that is played. Past research has highlighted the importance of generating high shot speeds for on-court performance, while the limited empirical work that has examined the influence of ball spin has largely relied on indirect measures. Indeed, even with ball-tracking systems such as Hawk-Eye being commonplace at professional-level tournaments, the precision of proprietary spin measures is not well understood and limits the extent to which they can be used to derive insight by scientists and practitioners. During rally play, it is rare for players to produce just ball speed or spin for any given shot; more logically generating varying combinations of both speed and spin. The interplay between these characteristics has been largely overlooked in the literature; so much so that the popular concept of stroke heaviness, thought to capture the unique combined effects of speed and spin, has not been explored. Further, research relating shot characteristics (i.e., speed) with point outcomes is too simplistic as it essentially disregards the influence of one shot on the next, including how incoming shot characteristics shape the impact and quality of an opponent’s reply. To address these gaps in the literature, this thesis validated methods to estimate ball spin from the sport’s most common multi-camera tracking technology (Hawk-Eye), finding that a theoretical ball trajectory model applied to Hawk-Eye outputs was most accurate. This method estimated spin rate with a root mean square error (RMSE) of 221.93 RPM and correctly classified the spin direction of all trials, thus, outperforming Hawk-Eye’s proprietary spin rate (RMSE: 549.56 RPM) and direction (97.60% correctly classified) measure. 
This has widespread applications given the extent to which Hawk-Eye is used during professional matches and allowed the thesis’s subsequent studies to probe spatiotemporal data from Grand Slam matches. This involved the novel exploration of player and data-driven views of the attributes and effects of stroke heaviness and then investigation of the effect of incoming shot characteristics (i.e., speed, spin, landing depth) on aspects of on-court hitting performance (i.e., player impact, return stroke quality). Investigating the concept of stroke heaviness highlighted the complexity of this style of shot-making, while further examination of the influence of incoming shot speed and spin on player impact and ball-striking revealed that producing a consistent contact point and return stroke was outside of a player’s full control. To summarise, developing a method to accurately estimate ball spin from ball-tracking data allowed this thesis to extend current knowledge on the influence of incoming shot characteristics on aspects of performance during Grand Slam matches. Accordingly, this thesis provides coaches and players with a method to estimate spin in practice and match contexts and highlights how shot characteristics can be varied to influence an opponent’s contact point and the quality of their next shot.
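The RMSE figures quoted above are computed in the standard way over paired spin estimates; for reference, a minimal implementation (the numbers in the usage example are illustrative, not the thesis data):

```python
import math

def spin_rmse(estimated_rpm, reference_rpm):
    """Root mean square error between estimated and reference spin rates."""
    errs = [(e - r) ** 2 for e, r in zip(estimated_rpm, reference_rpm)]
    return math.sqrt(sum(errs) / len(errs))
```

For example, estimates of 100 and 200 RPM against references of 110 and 190 RPM give an RMSE of 10 RPM; the thesis compares exactly this kind of aggregate error between its trajectory-model estimator and Hawk-Eye's proprietary measure.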
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "Multi-camera tracking data"

1

Ristani, Ergys, Francesco Solera, Roger Zou, Rita Cucchiara, and Carlo Tomasi. "Performance Measures and a Data Set for Multi-target, Multi-camera Tracking." In Lecture Notes in Computer Science, 17–35. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48881-3_2.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Multi-camera tracking data"

1

Du, Wei, and Justus Piater. "Data Fusion by Belief Propagation for Multi-Camera Tracking." In 2006 9th International Conference on Information Fusion. IEEE, 2006. http://dx.doi.org/10.1109/icif.2006.301712.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Hamid, A. K., L. S. Melaku, M. Pelillo, and A. Prati. "Using dominant sets for data association in multi-camera tracking." In ICDSC '15: International Conference on distributed Smart Cameras. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2789116.2789126.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Heimsch, Dominik, Yan Han Lau, Chinmaya Mishra, Sutthiphong Srigrarom, and Florian Holzapfel. "Re-Identification for Multi-Target-Tracking Systems Using Multi-Camera, Homography Transformations and Trajectory Matching." In 2022 Sensor Data Fusion: Trends, Solutions, Applications (SDF). IEEE, 2022. http://dx.doi.org/10.1109/sdf55338.2022.9931703.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Poschmann, Johannes, Tim Pfeifer, and Peter Protzel. "Optimization based 3D Multi-Object Tracking using Camera and Radar Data." In 2021 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2021. http://dx.doi.org/10.1109/iv48863.2021.9575636.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Panev, Stanislav, and Agata Manolova. "Improved multi-camera 3D Eye Tracking for human-computer interface." In 2015 IEEE 8th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). IEEE, 2015. http://dx.doi.org/10.1109/idaacs.2015.7340743.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Arar, Nuri Murat, and Jean-Philippe Thiran. "Estimating fusion weights of a multi-camera eye tracking system by leveraging user calibration data." In ETRA '16: 2016 Symposium on Eye Tracking Research and Applications. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2857491.2857510.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Byeon, Moonsub, Songhwai Oh, Kikyung Kim, Haan-Ju Yoo, and Jin Young Choi. "Efficient Spatio-Temporal Data Association Using Multidimensional Assignment in Multi-Camera Multi-Target Tracking." In British Machine Vision Conference 2015. British Machine Vision Association, 2015. http://dx.doi.org/10.5244/c.29.68.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Choi, Hyunguk, and Moongu Jeon. "Data association for non-overlapping multi-camera multi-object tracking based on similarity function." In 2016 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia). IEEE, 2016. http://dx.doi.org/10.1109/icce-asia.2016.7804834.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Panev, Stanislav, Plamen Petrov, Ognian Boumbarov, and Krasimir Tonchev. "Human gaze tracking in 3D space with an active multi-camera system." In 2013 IEEE 7th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). IEEE, 2013. http://dx.doi.org/10.1109/idaacs.2013.6662719.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Campbell, Mark, and Daniel E. Clark. "Joint stereo camera calibration and multi-target tracking using the linear-complexity factorial cumulant filter." In 2019 Sensor Data Fusion: Trends, Solutions, Applications (SDF). IEEE, 2019. http://dx.doi.org/10.1109/sdf.2019.8916653.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
