Academic literature on the topic 'Camera synchronisation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Camera synchronisation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Camera synchronisation"

1

Schmitt, Robert, and Yu Cai. "Single camera-based synchronisation within a concept of robotic assembly in motion." Assembly Automation 34, no. 2 (April 1, 2014): 160–68. http://dx.doi.org/10.1108/aa-04-2013-040.

Full text
Abstract:
Purpose – Automated robotic assembly on a moving workpiece, referred to as assembly in motion, demands that an assembly robot is synchronised in all degrees of freedom to the moving workpiece, on which assembly parts are installed. Currently, this requirement cannot be met due to the lack of robust estimation of 3D positions and the trajectory of the moving workpiece. The purpose of this paper is to develop a camera system that measures the 3D trajectory of the moving workpiece for robotic assembly in motion.
Design/methodology/approach – For the trajectory estimation, an assembly robot-guided, monocular camera system is developed. The motion trajectory of a workpiece is estimated, as the trajectory is considered as a linear combination of trajectory bases, such as discrete cosine transform bases.
Findings – The developed camera system for trajectory estimation is tested within the robotic assembly of a cylinder block in motion. The experimental results show that the proposed method is able to reconstruct arbitrary trajectories of an assembly point on a workpiece moving in 3D space.
Research limitations/implications – With the developed technology, a point trajectory can be recovered offline only after all measurement images are acquired. For practical assembly tasks in real production, this method should be extended to determine the trajectory online during the motion of a workpiece.
Practical implications – For practical, robotic assembly in motion, such as assembling tires, wheels and windscreens on conveyed vehicle bodies, the developed technology can be used for positioning a moving workpiece, which is in the distant field of an assembly robot.
Originality/value – Besides laser trackers, indoor global positioning systems and stereo cameras, this paper provides a solution of trajectory estimation by using a monocular camera system.
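The trajectory model named in the abstract above, a point trajectory expressed as a linear combination of discrete cosine transform bases, can be illustrated with a minimal sketch (the basis count and coefficients below are invented for illustration, not taken from the paper):

```python
import numpy as np

def dct_basis(n_frames: int, k: int) -> np.ndarray:
    """k-th discrete cosine transform basis sampled at n_frames instants."""
    t = np.arange(n_frames)
    return np.cos(np.pi * k * (2 * t + 1) / (2 * n_frames))

def reconstruct_trajectory(coeffs: np.ndarray, n_frames: int) -> np.ndarray:
    """3D trajectory (n_frames x 3) as a linear combination of DCT bases.

    coeffs has shape (n_bases, 3): one coefficient triple per basis.
    """
    B = np.stack([dct_basis(n_frames, k) for k in range(len(coeffs))])
    return B.T @ coeffs  # (n_frames, 3)

# A smooth 3D trajectory described by only 4 basis coefficients per axis
coeffs = np.array([[0.50, 0.00, 1.00],
                   [0.20, 0.10, 0.00],
                   [0.00, 0.30, 0.10],
                   [0.05, 0.00, 0.02]])
traj = reconstruct_trajectory(coeffs, 100)
print(traj.shape)  # (100, 3)
```

The low-dimensional basis is what makes the estimation tractable: only a handful of coefficients, rather than one 3D point per frame, must be recovered from the images.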
APA, Harvard, Vancouver, ISO, and other styles
2

Tay, Cheryl Sihui, and Pui Wah Kong. "A Video-Based Method to Quantify Stroke Synchronisation in Crew Boat Sprint Kayaking." Journal of Human Kinetics 65, no. 1 (December 31, 2018): 45–56. http://dx.doi.org/10.2478/hukin-2018-0038.

Full text
Abstract:
The study aimed to quantify stroke synchronisation in two-seater crew boat sprint kayaking (K2) using a video-based method, and to assess the intra- and inter-rater reliabilities of this method. Twelve sub-elite sprint kayakers (six males and six females) from a national team were paired into six single-gender K2 crews. The crews were recorded at 120 Hz with a sagittal-view video camera during 200-m time trials. Video analysis identified four meaningful positions of a stroke (catch, immersion, extraction and release). The timing difference (termed “offset”) between the front and back paddlers, within each K2, at each stroke position was calculated, with zero offset indicating perfect synchronisation. Results showed almost perfect intra-rater reliability of this method. The intra-class correlation (ICC) ranged from .87 to 1.00, and the standard error of measurement (SEM) from 0 to 5 milliseconds (ms). Inter-rater reliability was substantial to almost perfect (ICC .72 – .94, SEM 2 – 6 ms). On average, 35 strokes were analysed for each crew and the mean offset was 17 ms, or 5.7% of water phase duration. Crews were more synchronised at the catch (11 ms, 3.8%) than the release (21 ms, 7.2%). However, the stroke synchronisation profiles of the six sub-elite crews varied considerably from each other. For example, the best performing male and female crews had directly contrasting profiles. This suggests that there is no universal stroke synchronisation profile for well-trained sprint kayakers. This video-based method may aid future investigations on improving performance.
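The offset described above reduces to simple frame arithmetic at the camera's 120 Hz frame rate (a sketch; the event frames and water-phase duration below are invented for illustration):

```python
FPS = 120  # camera frame rate used in the study

def offset_ms(front_frame: int, back_frame: int, fps: int = FPS) -> float:
    """Timing difference ('offset') between front and back paddler at one
    stroke position, in milliseconds; zero means perfect synchronisation."""
    return (back_frame - front_frame) * 1000.0 / fps

def offset_percent(offset: float, water_phase_ms: float) -> float:
    """Offset expressed as a percentage of the water-phase duration."""
    return 100.0 * offset / water_phase_ms

# Hypothetical catch event: back paddler lags the front paddler by 2 frames
catch_offset = offset_ms(front_frame=240, back_frame=242)
print(round(catch_offset, 1))                         # 16.7 (ms)
print(round(offset_percent(catch_offset, 300.0), 1))  # 5.6 (% of a 300 ms phase)
```

At 120 Hz one frame corresponds to about 8.3 ms, which is why the reported SEM of a few milliseconds amounts to sub-frame precision.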
3

Adorna, Marcel, Petr Zlámal, Tomáš Fíla, Jan Falta, Markus Felten, Michael Fries, and Anne Jung. "TESTING OF HYBRID NICKEL-POLYURETHANE FOAMS AT HIGH STRAIN-RATES USING HOPKINSON BAR AND DIGITAL IMAGE CORRELATION." Acta Polytechnica CTU Proceedings 18 (October 23, 2018): 72. http://dx.doi.org/10.14311/app.2018.18.0072.

Full text
Abstract:
In this paper, a Split Hopkinson pressure bar (SHPB) was used for dynamic testing of nickel-coated polyurethane hybrid foams. The foams were manufactured by electrodeposition of a nickel coating on standard open-cell polyurethane foam. High-strength aluminium alloy bars instrumented with foil strain-gauges were used for dynamic loading of the specimens. Experiments were observed using a high-speed camera with the frame rate set to approximately 100-150 kfps. Precise synchronisation of the high-speed camera and the strain-gauge record was achieved using a through-beam photoelectric sensor. Dynamic equilibrium in the specimen was achieved in all measurements. The digital image correlation (DIC) technique was used to evaluate in-plane displacements and deformations of the samples. Specimens of two different dimensions were tested to investigate the collapse of the foam structure under high-speed loading at the specific strain-rate and strain.
4

Crump, Duncan A., Janice M. Dulieu-Barton, and Marco L. Longana. "Approaches to Synchronise Conventional Measurements with Optical Techniques at High Strain Rates." Applied Mechanics and Materials 70 (August 2011): 75–80. http://dx.doi.org/10.4028/www.scientific.net/amm.70.75.

Full text
Abstract:
Polymer composites are increasingly being used in high-end and military applications, mainly due to their excellent tailorability to specific loading scenarios and strength/stiffness-to-weight ratios. The overall purpose of the research project is to develop an enhanced understanding of the behaviour of fibre reinforced polymer composites when subjected to high velocity loading. This is particularly important in military applications, where composite structures are at a high risk of receiving high strain rate loading, such as that resulting from collisions or blasts. The work described here considers an approach that allows the collection of full-field temperature and strain data to investigate the complex viscoelastic behaviour of composite material at high strain rates. To develop such a data-rich approach, digital image correlation (DIC) is used to collect the displacement data and infra-red thermography (IRT) is used to collect temperature data. The use of optical techniques at the sampling rates necessary to capture the behaviour of composites subjected to high loading rates is novel and requires using imaging systems at the far extent of their design specification. One of the major advantages of optical techniques is that they are non-contact; however, this also forms one of the challenges to their application to high speed testing. The separate camera systems and the test machine/loading system must be synchronised to ensure that the correct strain/temperature measurement is correlated with the correct temporal value of the loading regime. The loading rate exacerbates the situation: even at high sampling rates the data are discrete, and it is therefore difficult to match values. The work described in the paper concentrates on investigating the feasibility of high-speed DIC and its synchronisation.
The limitations of bringing together the techniques are discussed in detail, and a discussion of the relative merits of each synchronisation approach is included, which takes into consideration ease of use, accuracy, repeatability etc.
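The matching problem described above, pairing a discrete conventional record with discrete camera frames, is often handled by interpolating the measurement at each frame's timestamp. A hedged sketch with synthetic data (the sampling rates and trigger delay are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical records: a load cell sampled at 100 kHz and a high-speed
# camera at 20 kfps whose trigger is delayed by 12 microseconds.
t_load = np.arange(100) * 1e-5                    # load-cell sample times (s)
load = 1e3 * np.sin(2 * np.pi * 1e3 * t_load)     # synthetic load signal (N)
t_frames = np.linspace(0.0, 9.5e-4, 20) + 1.2e-5  # frame exposure times (s)

# Linear interpolation of the discrete load record at each frame's exposure
# instant pairs every image with a load value, instead of taking whichever
# discrete sample happens to fall nearest.
load_at_frames = np.interp(t_frames, t_load, load)
print(load_at_frames.shape)  # (20,)
```

The quality of this pairing depends entirely on how well the trigger delay between the systems is known, which is why the paper weighs the synchronisation approaches by accuracy and repeatability.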
5

Szymczyk, Tomasz, and Stanisław Skulimowski. "The study of human behaviour in a laboratory set-up with the use of innovative technology." MATEC Web of Conferences 252 (2019): 02010. http://dx.doi.org/10.1051/matecconf/201925202010.

Full text
Abstract:
Virtual reality technologies known since the 1990s have virtually disappeared. They have been replaced by their new editions characterised by a very faithful reproduction of details. The laboratory described in the article was created to study the intensity of various sensations experienced by a person in virtual worlds. A number of different scenarios have been developed that are designed to trigger specific reactions, to produce symptoms similar to those observed in arachnophobia, acrophobia or claustrophobia. The person examined can be equipped with a series of non-invasive sensors placed on the body in such a way that they do not interfere with immersion in VR. The laboratory instruments enable the acquisition and synchronisation of many signals. Body movement data is recorded by means of Kinect. Involuntary hand movements are measured with 5DT gloves. In addition, body temperature, ambient temperature and skin moisture are continuously monitored. Apart from recording the image from VR goggles, it is also possible to record the entire session on camera in 4K resolution in order to interpret facial expressions. The results are then analysed in detail and checked for patterns. The article describes both the test set-up itself as well as several test scenarios and presents the results of pilot studies.
6

Iseli, C., and A. Lucieer. "TREE SPECIES CLASSIFICATION BASED ON 3D SPECTRAL POINT CLOUDS AND ORTHOMOSAICS ACQUIRED BY SNAPSHOT HYPERSPECTRAL UAS SENSOR." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 4, 2019): 379–84. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-379-2019.

Full text
Abstract:
In recent years, there has been a growing number of small hyperspectral sensors suitable for deployment on unmanned aerial systems (UAS). The introduction of the hyperspectral snapshot sensor provides interesting opportunities for acquisition of three-dimensional (3D) hyperspectral point clouds based on the structure-from-motion (SfM) workflow. In this study, we describe the integration of a 25-band hyperspectral snapshot sensor (PhotonFocus camera with IMEC 600–875 nm 5x5 mosaic chip) on a multi-rotor UAS. The sensor was integrated with a dual frequency GNSS receiver for accurate time synchronisation and geolocation. We describe the sensor calibration workflow, including dark current and flat field characterisation. An SfM workflow was implemented to derive hyperspectral 3D point clouds and orthomosaics from overlapping frames. On-board GNSS coordinates for each hyperspectral frame assisted in the SfM process and allowed for accurate direct georeferencing (<10 cm absolute accuracy). We present the processing workflow to generate seamless hyperspectral orthomosaics from hundreds of raw images. Spectral reference panels and in-field spectral measurements were used to calibrate and validate the spectral signatures. This process provides a novel data type which contains both 3D geometric structure and detailed spectral information in a single format. To determine the potential improvements that such a format could provide, the core aim of this study was to compare the use of 3D hyperspectral point clouds to conventional hyperspectral imagery in the classification of two Eucalyptus tree species found in Tasmania, Australia. The IMEC SM5x5 hyperspectral snapshot sensor was flown over a small native plantation plot, consisting of a mix of the Eucalyptus pauciflora and E. tenuiramis species. High overlap hyperspectral imagery was captured and then processed using SfM algorithms to generate both a hyperspectral orthomosaic and a dense hyperspectral point cloud. Additionally, to ensure the optimum spectral quality of the data, the characteristics of the hyperspectral snapshot imaging sensor were analysed utilising measurements captured in a laboratory environment. To coincide with the generated hyperspectral point cloud data, both a file format and additional processing and visualisation software were developed to provide the necessary tools for a complete classification workflow. Results based on the classification of the E. pauciflora and E. tenuiramis species revealed that the hyperspectral point cloud produced an increased classification accuracy over conventional hyperspectral imagery based on random forest classification. This was represented by an increase in classification accuracy from 67.2% to 73.8%. It was found that even when applied separately, the geometric and spectral feature sets from the point cloud both provided increased classification accuracy over the hyperspectral imagery.

Dissertations / Theses on the topic "Camera synchronisation"

1

Nguyen, Thanh-Tin. "Auto-calibration d'une multi-caméra omnidirectionnelle grand public fixée sur un casque." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC060/document.

Full text
Abstract:
360-degree and spherical multi-cameras built by fixing together several consumer cameras have become popular and are convenient for recent applications like immersive videos, 3D modeling and virtual reality. This type of camera includes the whole scene in a single view. When the goal is to merge monocular videos into one cylindrical video or to obtain 3D information from the environment, several basic steps should be performed beforehand. Among these tasks, we consider the synchronisation between cameras; the calibration of the multi-camera system, including intrinsic and extrinsic parameters (i.e. the relative poses between cameras); and the rolling shutter calibration. The goal of this thesis is to develop and apply a user-friendly method that does not require a calibration pattern. First, the multi-camera is initialized thanks to assumptions that are suitable to an omnidirectional camera without a privileged direction: the cameras have the same settings (frequency, image resolution, field of view) and are roughly equiangular. Second, a frame-accurate synchronisation is estimated from the instantaneous angular velocities of each camera provided by monocular structure-from-motion. Third, both inter-camera poses and intrinsic parameters are refined using multi-camera structure-from-motion and bundle adjustment. Last, we introduce a bundle adjustment that estimates not only the usual parameters but also a subframe-accurate synchronisation and the rolling shutter. We experiment in a context that we believe useful for applications (3D modeling and 360 videos): several consumer cameras or a spherical camera mounted on a helmet and moving along trajectories of several hundreds of meters or kilometers.
2

Zara, Henri. "Système d'acquisition vidéo rapide : application à la mécanique des fluides." Saint-Etienne, 1997. http://www.theses.fr/1997STET4012.

Full text
Abstract:
Vision systems are now widely used for experimental studies in fluid mechanics. Image acquisition techniques, however, run into a technological limitation concerning frame rate and image resolution. New fast electronic image sensors, as well as certain exposure methods, offer solutions to this problem. This thesis presents a high-speed video acquisition system built around an experimental fluid mechanics platform. The system consists in particular of: a digital (8-bit) CCD camera with a resolution of 512x512 pixels at a rate of 100 frames per second; and a synchronisation system that provides the various image exposure modes as well as the synchronisation of the platform's components. The whole is controlled by a PC, both for configuring parameters and for storing images. The illumination technique used is laser tomography. An original technological choice led us to couple the camera to a light intensifier. This choice allows the use of low-power continuous-wave laser sources, and it offers wide image-exposure possibilities through control of the intensifier's optical gate. The first part presents a detailed study of semiconductor image sensors and of image intensifiers. The technical descriptions of the camera and of the synchronisation system are then given. The final video acquisition setup uses two synchronised cameras to record a sequence of image pairs with very close occurrence (50 ns). Several experimental examples confirm the capabilities of the system.
3

Pooley, Daniel William. "The automated synchronisation of independently moving cameras." 2008. http://hdl.handle.net/2440/49461.

Full text
Abstract:
Computer vision is concerned with the recovery of useful scene or camera information from a set of images. One classical problem is the estimation of the 3D scene structure depicted in multiple photographs. Such estimation fundamentally requires determining how the cameras are related in space. For a dynamic event recorded by multiple video cameras, finding the temporal relationship between cameras has a similar importance. Estimating such synchrony is key to a further analysis of the dynamic scene components. Existing approaches to synchronisation involve using visual cues common to both videos, and consider a discrete uniform range of synchronisation hypotheses. These prior methods exploit known constraints which hold in the presence of synchrony, from which both a temporal relationship, and an unchanging spatial relationship between the cameras can be recovered. This thesis presents methods that synchronise a pair of independently moving cameras. The spatial configuration of cameras is assumed to be known, and a cost function is developed to measure the quality of synchrony even for accuracies within a fraction of a frame. A Histogram method is developed which changes the approach from a consideration of multiple synchronisation hypotheses, to searching for seemingly synchronous frame pairs independently. Such a strategy has increased efficiency in the case of unknown frame rates. Further savings can be achieved by reducing the sampling rate of the search, by only testing for synchrony across a small subset of frames. Two robust algorithms are devised, using Bayesian inference to adaptively seek the sampling rate that minimises total execution time. These algorithms have a general underlying premise, and should be applicable to a wider class of robust estimation problems. A method is also devised to robustly synchronise two moving cameras when their spatial relationship is unknown. 
It is assumed that the motion of each camera has been estimated independently, so that these motion estimates are unregistered. The algorithm recovers both a synchronisation estimate, and a 3D transformation that spatially registers the two cameras.
Thesis (Ph.D.) - University of Adelaide, School of Computer Science, 2008
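The Histogram method's core idea, independently detected synchronous-looking frame pairs voting for an offset, can be sketched as follows (a toy illustration; the pair data and names are hypothetical):

```python
from collections import Counter

def offset_from_pairs(pairs):
    """Histogram voting: each seemingly synchronous frame pair (i, j),
    found independently, votes for the offset j - i; the histogram mode
    is the synchronisation estimate, robust to a few wrong pairs."""
    votes = Counter(j - i for i, j in pairs)
    return votes.most_common(1)[0][0]

# Hypothetical matched frame pairs, one of them an outlier
pairs = [(10, 14), (25, 29), (40, 44), (60, 63), (75, 79)]
print(offset_from_pairs(pairs))  # 4
```

Because each pair is tested independently, only a subset of frames needs to be examined, which is the efficiency gain the thesis then tunes adaptively with Bayesian inference.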
4

Benrhaiem, Rania. "Méthodes d’analyse de mouvement en vision 3D : invariance aux délais temporels entre des caméras non synchronisées et flux optique par isocontours." Thèse, 2016. http://hdl.handle.net/1866/18469.

Full text
Abstract:
In this thesis we focused on two computer vision subjects, both concerning motion analysis in a dynamic scene seen by one or more cameras. The first subject is motion capture with unsynchronised cameras, which generally causes 2D correspondence errors and, consequently, 3D reconstruction errors. In contrast with existing hardware solutions that try to minimise or eliminate the temporal delay between the cameras, we propose a software solution that is invariant to the delay: a method that finds the correct correspondence between the points to be reconstructed regardless of the temporal delay, resolves the resulting spatial shift, and recovers the correct position of the shifted points. The second subject is the optical flow problem, approached differently from the methods in the state of the art. In most applications, optical flow is used for real-time motion analysis, so it must be computed quickly. Existing optical flow methods generally fall into two main categories: either dense and precise but computationally intensive, or fast but less dense and less precise. We propose an alternative that accounts for both computation time and accuracy: extracting intensity isocontours and matching them to recover the corresponding optical flow. Addressing these problems led to the two main contributions presented in the chapters of the thesis.
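The isocontour idea in the second contribution can be caricatured with a toy nearest-neighbour matcher (a heavily simplified sketch: real isocontours are dense curves and the thesis's matching is more elaborate; all data and names here are illustrative):

```python
import numpy as np

def level_points(img, level, tol=0.5):
    """Pixel coordinates (x, y) whose intensity lies on the chosen level."""
    ys, xs = np.nonzero(np.abs(img - level) <= tol)
    return np.stack([xs, ys], axis=1).astype(float)

def flow_by_isocontour_matching(img0, img1, level):
    """Match each level point in frame 0 to its nearest same-level point
    in frame 1; the displacements are the optical-flow vectors."""
    p0, p1 = level_points(img0, level), level_points(img1, level)
    d = np.linalg.norm(p0[:, None] - p1[None, :], axis=2)
    return p1[d.argmin(axis=1)] - p0

# Toy frames: three well-separated features translated by (dx=2, dy=1)
img0 = np.zeros((32, 32))
for x, y in [(8, 8), (20, 10), (12, 24)]:
    img0[y, x] = 100.0
img1 = np.roll(np.roll(img0, 1, axis=0), 2, axis=1)
flow = flow_by_isocontour_matching(img0, img1, level=100.0)
print(flow)  # every row is [2. 1.]
```

Matching contour points instead of every pixel is what trades a little density for speed, which is the compromise the abstract describes.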

Conference papers on the topic "Camera synchronisation"

1

Imre, Evren, and Adrian Hilton. "Through-the-Lens Synchronisation for Heterogeneous Camera Networks." In British Machine Vision Conference 2012. British Machine Vision Association, 2012. http://dx.doi.org/10.5244/c.26.97.

Full text
2

Duckworth, Tobias, and David J. Roberts. "Camera Image Synchronisation in Multiple Camera Real-Time 3D Reconstruction of Moving Humans." In 2011 IEEE/ACM 15th International Symposium on Distributed Simulation and Real Time Applications (DS-RT). IEEE, 2011. http://dx.doi.org/10.1109/ds-rt.2011.15.

Full text
3

Imre, Evren, Jean-Yves Guillemaut, and Adrian Hilton. "Through-the-Lens Multi-camera Synchronisation and Frame-Drop Detection for 3D Reconstruction." In 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT). IEEE, 2012. http://dx.doi.org/10.1109/3dimpvt.2012.31.

Full text
4

Aerts, Peter, and Eric Demeester. "Time Synchronisation of Low-cost Camera Images with IMU Data based on Similar Motion." In 16th International Conference on Informatics in Control, Automation and Robotics. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007837602920299.

Full text
5

Ainsworth, Roger W., and Steven J. Thorpe. "The Development of a Doppler Global Velocimeter for Transonic Turbine Applications." In ASME 1994 International Gas Turbine and Aeroengine Congress and Exposition. American Society of Mechanical Engineers, 1994. http://dx.doi.org/10.1115/94-gt-146.

Full text
Abstract:
The development of a Doppler Global Velocimeter (DGV) for the measurement of transonic turbo-machinery flows in the Oxford Isentropic Light Piston Tunnel rotor facility is described. A novel optical arrangement for capturing both reference and iodine cell discriminated images with a single CCD camera and frame grabber is presented. Practical arrangements for determination of the iodine cell transmission properties as a function of temperature and light frequency are discussed in the context of using an argon ion continuous wave laser for illumination. Flow seeding aspects of the experiment are described with particular emphasis on particle dynamics and light scattering. Error bounds for the DGV measurements are assessed and quantified in respect to the frame grabber resolution and Gaussian beam profile. Results of measurements of the velocity of a rotating disc with tip speed of nominally 90 m/s, obtained with 0.5 W single mode argon ion laser illumination are presented. Practical aspects for employing the DGV on the established Oxford rotor facility, such as seeding of the flow, optical access and synchronisation of data acquisition are addressed.
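For background, Doppler global velocimetry recovers a velocity component from the frequency shift of scattered light, which the iodine cell converts into an intensity change on the camera. The standard DGV relation, as commonly given in the literature (not quoted from this abstract), is:

```latex
\Delta\nu = \frac{(\hat{\mathbf{o}} - \hat{\mathbf{l}}) \cdot \mathbf{V}}{\lambda}
```

where \(\hat{\mathbf{o}}\) is the observation direction, \(\hat{\mathbf{l}}\) the laser illumination direction, \(\mathbf{V}\) the tracer-particle velocity and \(\lambda\) the laser wavelength; the measured component of \(\mathbf{V}\) therefore lies along \(\hat{\mathbf{o}} - \hat{\mathbf{l}}\).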