Academic literature on the topic 'Camera motion estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Camera motion estimation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Camera motion estimation"

1

Guan, Banglei, Xiangyi Sun, Yang Shang, Xiaohu Zhang, and Manuel Hofer. "Multi-camera networks for motion parameter estimation of an aircraft." International Journal of Advanced Robotic Systems 14, no. 1 (January 1, 2017): 172988141769231. http://dx.doi.org/10.1177/1729881417692312.

Abstract:
A multi-camera network is proposed to estimate an aircraft's motion parameters relative to the reference platform in large outdoor fields. Multiple cameras are arranged to cover the aircraft's large-scale motion spaces by field stitching. A camera calibration method using dynamic control points created by a multirotor unmanned aerial vehicle is presented under the conditions that the field of view of the cameras is void. The relative deformation of the camera network caused by external environmental factors is measured and compensated using a combination of cameras and laser rangefinders. A series of field experiments has been carried out using a fixed-wing aircraft without artificial markers, and the accuracy is evaluated using an onboard Differential Global Positioning System. The experimental results show that the multi-camera network is precise, robust, and highly dynamic and can improve the aircraft's landing accuracy.
2

Mansour, Mostafa, Pavel Davidson, Oleg Stepanov, and Robert Piché. "Relative Importance of Binocular Disparity and Motion Parallax for Depth Estimation: A Computer Vision Approach." Remote Sensing 11, no. 17 (August 23, 2019): 1990. http://dx.doi.org/10.3390/rs11171990.

Abstract:
Binocular disparity and motion parallax are the most important cues for depth estimation in human and computer vision. Here, we present an experimental study to evaluate the accuracy of these two cues in depth estimation to stationary objects in a static environment. Depth estimation via binocular disparity is most commonly implemented using stereo vision, which uses images from two or more cameras to triangulate and estimate distances. We use a commercial stereo camera mounted on a wheeled robot to create a depth map of the environment. The sequence of images obtained by one of these two cameras as well as the camera motion parameters serve as the input to our motion parallax-based depth estimation algorithm. The measured camera motion parameters include translational and angular velocities. Reference distance to the tracked features is provided by a LiDAR. Overall, our results show that at short distances stereo vision is more accurate, but at large distances the combination of parallax and camera motion provide better depth estimation. Therefore, by combining the two cues, one obtains depth estimation with greater range than is possible using either cue individually.
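The two depth cues compared in this paper reduce to the same triangulation geometry: stereo disparity uses the baseline between two cameras, while motion parallax uses the camera's own translation. A rough illustrative sketch (not the paper's code; all numbers are invented):

```python
# Hedged sketch of the two depth cues the paper compares.
# Assumed values (focal length in pixels, baseline/translation in metres,
# disparity/shift in pixels) are purely illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Classic stereo triangulation: Z = f * B / d
    return focal_px * baseline_m / disparity_px

def depth_from_motion_parallax(focal_px, translation_m, image_shift_px):
    # For a sideways-translating camera observing a static point, the
    # point's image shift plays the role of disparity: Z = f * t / shift
    return focal_px * translation_m / image_shift_px

z_stereo = depth_from_disparity(700.0, 0.12, 8.4)            # ≈ 10 m
z_parallax = depth_from_motion_parallax(700.0, 0.50, 35.0)   # ≈ 10 m
```

The paper's observation that parallax wins at long range follows from this geometry: the effective baseline `t` of a moving camera can grow far beyond a stereo rig's fixed `B`.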
3

Holešovský, Ondřej, Radoslav Škoviera, Václav Hlaváč, and Roman Vítek. "Experimental Comparison between Event and Global Shutter Cameras." Sensors 21, no. 4 (February 6, 2021): 1137. http://dx.doi.org/10.3390/s21041137.

Abstract:
We compare event-cameras with fast (global shutter) frame-cameras experimentally, asking: “What is the application domain, in which an event-camera surpasses a fast frame-camera?” Surprisingly, finding the answer has been difficult. Our methodology was to test event- and frame-cameras on generic computer vision tasks where event-camera advantages should manifest. We used two methods: (1) a controlled, cheap, and easily reproducible experiment (observing a marker on a rotating disk at varying speeds); (2) selecting one challenging practical ballistic experiment (observing a flying bullet having a ground truth provided by an ultra-high-speed expensive frame-camera). The experimental results include sampling/detection rates and position estimation errors as functions of illuminance and motion speed; and the minimum pixel latency of two commercial state-of-the-art event-cameras (ATIS, DVS240). Event-cameras respond more slowly to positive than to negative large and sudden contrast changes. They outperformed a frame-camera in bandwidth efficiency in all our experiments. Both camera types provide comparable position estimation accuracy. The better event-camera was limited by pixel latency when tracking small objects, resulting in motion blur effects. Sensor bandwidth limited the event-camera in object recognition. However, future generations of event-cameras might alleviate bandwidth limitations.
4

Ogawa, Shota, Kenichi Asami, and Mochimitsu Komori. "Design and Evaluation of Compact Real-time Descriptor for Camera Motion Estimation." Journal of the Institute of Industrial Applications Engineers 5, no. 2 (April 25, 2017): 90–99. http://dx.doi.org/10.12792/jiiae.5.90.

5

Kwon, Soon-Kak, and Seong-Woo Kim. "Motion Estimation Method by Using Depth Camera." Journal of Broadcast Engineering 17, no. 4 (July 30, 2012): 676–83. http://dx.doi.org/10.5909/jbe.2012.17.4.676.

6

Lv, Yao-wen, Jian-li Wang, Hao-jing Wang, Wei Liu, Liang Wu, and Jing-tai Cao. "Estimation of camera poses by parabolic motion." Optics and Precision Engineering 22, no. 4 (2014): 1078–85. http://dx.doi.org/10.3788/ope.20142204.1078.

7

Nilsson, Emil, Christian Lundquist, Thomas B. Schön, David Forslund, and Jacob Roll. "Vehicle Motion Estimation Using an Infrared Camera." IFAC Proceedings Volumes 44, no. 1 (January 2011): 12952–57. http://dx.doi.org/10.3182/20110828-6-it-1002.03037.

8

Tang, Jiexiong, John Folkesson, and Patric Jensfelt. "Geometric Correspondence Network for Camera Motion Estimation." IEEE Robotics and Automation Letters 3, no. 2 (April 2018): 1010–17. http://dx.doi.org/10.1109/lra.2018.2794624.

9

Özyeşil, Onur, Amit Singer, and Ronen Basri. "Stable Camera Motion Estimation Using Convex Programming." SIAM Journal on Imaging Sciences 8, no. 2 (January 2015): 1220–62. http://dx.doi.org/10.1137/140977576.

10

Jonchery, C., F. Dibos, and G. Koepfler. "Camera Motion Estimation Through Planar Deformation Determination." Journal of Mathematical Imaging and Vision 32, no. 1 (April 12, 2008): 73–87. http://dx.doi.org/10.1007/s10851-008-0086-1.


Dissertations / Theses on the topic "Camera motion estimation"

1

Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems." The Australian National University, Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Abstract:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from the wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras which can be solved using our proposed, modified SVD method. This is the first study to find linear relations for certain types of generalized cameras and to perform experiments using our proposed linear method. Third, we present a linear 6-point method (5 points from the same camera and 1 point from another camera) that estimates the relative motion of multi-camera systems where cameras have no overlapping views. In addition, we discuss the theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast searching method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an earlier stage, we reduce the computation time of solving the LP.

We tested our proposed methods by performing experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimation of the translation of omnidirectional cameras and in the estimation of the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution using L∞ to estimate the relative motion of multi-camera systems could be achieved.
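The linear step this abstract describes (known rotation, constraints stacked into a homogeneous system, translation recovered by SVD) can be sketched as follows. This is an illustrative toy with a synthetic constraint matrix, not the thesis implementation:

```python
import numpy as np

# Hedged sketch: with rotation known, each bilinear/trilinear relation
# contributes one row of a homogeneous system A x = 0, and the translation
# direction is the right singular vector of A with the smallest singular
# value. Here A is synthetic: rows built to be orthogonal to a known t_true.
rng = np.random.default_rng(0)
t_true = np.array([1.0, -2.0, 0.5])
t_true /= np.linalg.norm(t_true)

rows = []
for _ in range(20):
    v = rng.standard_normal(3)
    rows.append(v - v.dot(t_true) * t_true)  # project out the t_true component
A = np.array(rows)

_, _, Vt = np.linalg.svd(A)
t_est = Vt[-1]                # null-space direction, defined up to sign
if t_est.dot(t_true) < 0:     # resolve the sign ambiguity for comparison
    t_est = -t_est
```

With noisy real constraints the smallest singular value is nonzero, and the same singular vector gives the least-squares translation direction; note that homogeneous systems only fix translation up to sign and scale.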
2

Kim, Jae-Hak. "Camera motion estimation for multi-camera systems /." View thesis entry in Australian Digital Theses Program, 2008. http://thesis.anu.edu.au/public/adt-ANU20081211.011120/index.html.

3

Srestasathiern, Panu. "Line Based Estimation of Object Space Geometry and Camera Motion." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345401748.

4

Hannuksela, J. (Jari). "Camera based motion estimation and recognition for human-computer interaction." Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514289781.

Abstract:
Communicating with mobile devices has become an unavoidable part of our daily life. Unfortunately, the current user interface designs are mostly taken directly from desktop computers. This has resulted in devices that are sometimes hard to use. Since more processing power and new sensing technologies are already available, there is a possibility to develop systems to communicate through different modalities. This thesis proposes some novel computer vision approaches, including head tracking, object motion analysis and device ego-motion estimation, to allow efficient interaction with mobile devices. For head tracking, two new methods have been developed. The first method detects a face region and facial features by employing skin detection, morphology, and a geometrical face model. The second method, designed especially for mobile use, detects the face and eyes using local texture features. In both cases, Kalman filtering is applied to estimate the 3-D pose of the head. Experiments indicate that the methods introduced can be applied on platforms with limited computational resources. A novel object tracking method is also presented. The idea is to combine Kalman filtering and EM-algorithms to track an object, such as a finger, using motion features. This technique is also applicable when some conventional methods such as colour segmentation and background subtraction cannot be used. In addition, a new feature-based camera ego-motion estimation framework is proposed. The method introduced exploits gradient measures for feature selection and feature displacement uncertainty analysis. Experiments with a fixed-point implementation testify to the effectiveness of the approach on a camera-equipped mobile phone. The feasibility of the methods developed is demonstrated in three new mobile interface solutions. One of them estimates the ego-motion of the device with respect to the user's face and utilises that information for browsing large documents or bitmaps on small displays. The second solution is to use device or finger motion to recognize simple gestures. In addition to these applications, a novel interactive system to build document panorama images is presented. The motion estimation and recognition techniques presented in this thesis have clear potential to become practical means for interacting with mobile devices. In fact, cameras in future mobile devices may, for most of the time, be used as sensors for intuitive user interfaces rather than for digital photography.
5

Kurz, Christian, and Hans-Peter Seidel (advisor). "Constrained camera motion estimation and 3D reconstruction." Saarbrücken: Saarländische Universitäts- und Landesbibliothek, 2014. http://d-nb.info/1063330734/34.

6

Hughes, Lloyd Haydn. "Enhancing mobile camera pose estimation through the inclusion of sensors." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95917.

Abstract:
Thesis (MSc)--Stellenbosch University, 2014.
Monocular structure from motion (SfM) is a widely researched problem; however, many of the existing approaches prove too computationally expensive for use on mobile devices. In this thesis we investigate how inertial sensors can be used to increase the performance of SfM algorithms on mobile devices. Making use of the low-cost inertial sensors found on most mobile devices, we design and implement an extended Kalman filter (EKF) to exploit their complementary nature in order to produce an accurate estimate of the attitude of the device. We make use of a quaternion-based system model in order to linearise the measurement stage of the EKF, thus reducing its computational complexity. We use this attitude estimate to enhance the feature tracking and camera localisation stages in our SfM pipeline. In order to perform feature tracking, we implement a hybrid tracking algorithm which makes use of Harris corners and an approximate nearest neighbour search to reduce the search space for possible correspondences. We increase the robustness of this approach by using inertial information to compensate for inter-frame camera rotation. We further develop an efficient bundle adjustment algorithm which only optimises the pose of the previous three key frames and the 3D map points common between at least two of these frames. We implement an optimisation-based localisation algorithm which makes use of our EKF attitude estimate and the tracked features in order to estimate the pose of the device relative to the 3D map points. This optimisation is performed in two steps, the first of which optimises only the translation and the second the full pose. We integrate the aforementioned three sub-systems into an inertial-assisted pose estimation pipeline. We evaluate our algorithms on datasets captured with an iPhone 5 in the presence of a Vicon motion capture system for ground-truth data.

We find that our EKF can estimate the device's attitude with an average dynamic accuracy of ±5°. Furthermore, we find that the inclusion of sensors in the visual pose estimation pipeline can lead to improvements in the robustness and computational efficiency of the algorithms and is unlikely to negatively affect the accuracy of such a system. Even though we managed to reduce execution time dramatically compared to typical existing techniques, our full system is still too computationally expensive for real-time performance and currently runs at 3 frames per second; however, the ever-improving computational power of mobile devices, together with the future work we describe, will lead to improved performance. From this study we conclude that inertial sensors make a valuable addition to a visual pose estimation pipeline implemented on a mobile device.
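The quaternion-based attitude propagation at the heart of such an EKF can be illustrated in a few lines. This is a minimal sketch of only the gyroscope-driven prediction step, not the thesis code; the measurement update that fuses the complementary sensors is omitted:

```python
import numpy as np

# Hedged sketch: quaternion prediction step of an attitude EKF, propagating
# orientation from a gyroscope reading over a time step dt. Quaternions are
# [w, x, y, z]; all sample values below are invented.

def quat_multiply(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def predict(q, gyro_rad_s, dt):
    # Rotation vector accumulated over dt, converted to a delta quaternion
    theta = np.asarray(gyro_rad_s, dtype=float) * dt
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        axis = theta / angle
        dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q_new = quat_multiply(q, dq)
    return q_new / np.linalg.norm(q_new)   # renormalise against drift

q = np.array([1.0, 0.0, 0.0, 0.0])         # identity orientation
q = predict(q, [0.0, 0.0, np.pi], 0.5)     # 90° yaw accumulated over 0.5 s
```

In a full filter this prediction would also propagate the error-state covariance, and the quaternion-based model is what lets the measurement stage stay (near-)linear, as the abstract notes.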
7

Fathollahi, Ghezelghieh Mona. "Estimation of Human Poses Categories and Physical Object Properties from Motion Trajectories." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6835.

Abstract:
Despite the impressive advancements in people detection and tracking, safety is still a key barrier to the deployment of autonomous vehicles in urban environments [1]. For example, in non-autonomous technology, there is an implicit communication between the people crossing the street and the driver to make sure they have communicated their intent to the driver. Therefore, it is crucial for the autonomous car to infer the future intent of the pedestrian quickly. We believe that human body orientation with respect to the camera can help the intelligent unit of the car to anticipate the future movement of pedestrians. To further improve the safety of pedestrians, it is important to recognize whether they are distracted, carrying a baby, or pushing a shopping cart. Therefore, estimating the fine-grained 3D pose, i.e. the (x,y,z)-coordinates of the body joints, provides additional information for the decision-making units of driverless cars. In this dissertation, we have proposed a deep learning-based solution to classify the categorized body orientation in still images. We have also proposed an efficient framework based on our body orientation classification scheme to estimate human 3D pose in monocular RGB images. Furthermore, we have utilized the dynamics of human motion to infer the body orientation in image sequences. To achieve this, we employ a recurrent neural network model to estimate continuous body orientation from the trajectories of body joints in the image plane. The proposed body orientation and 3D pose estimation frameworks are tested on the largest 3D pose estimation benchmark, Human3.6M (both in still images and video), and we have proved the efficacy of our approach by benchmarking it against state-of-the-art approaches. Another critical feature of a self-driving car is obstacle avoidance. In the current prototypes, the car either stops or changes its lane even if this causes other traffic disruptions. However, there are situations when it is preferable to collide with the object, for example a foam box, rather than take an action that could result in a much more serious accident than collision with the object. In this dissertation, for the first time, we present a novel method to discriminate between physical properties of these types of objects, such as bounciness and elasticity, based on their motion characteristics. The proposed algorithm is tested on synthetic data and, as a proof of concept, its effectiveness on a limited set of real-world data is demonstrated.
8

Almatrafi, Mohammed Mutlaq. "Optical Flow for Event Detection Camera." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1576188397203882.

9

Lee, Hong Yun. "Deep Learning for Visual-Inertial Odometry: Estimation of Monocular Camera Ego-Motion and its Uncertainty." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu156331321922759.

10

Ekström, Marcus. "Road Surface Preview Estimation Using a Monocular Camera." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-151873.

Abstract:
Recently, sensors such as radars and cameras have been widely used in automotive applications, especially in Advanced Driver-Assistance Systems (ADAS), to collect information about the vehicle's surroundings. Stereo cameras are very popular as they can be used passively to construct a 3D representation of the scene in front of the car. This has allowed the development of several ADAS algorithms that need 3D information to perform their tasks. One interesting application is Road Surface Preview (RSP), where the task is to estimate the road height along the future path of the vehicle. An active suspension control unit can then use this information to regulate the suspension, improving driving comfort, extending the durability of the vehicle and warning the driver about potential risks on the road surface. Stereo cameras have been successfully used in RSP and have demonstrated very good performance. However, their main disadvantages are high production cost and high power consumption, which limits installing several ADAS features in economy-class vehicles. A less expensive alternative is the monocular camera, which has significantly lower cost and power consumption. Therefore, this thesis investigates the possibility of solving the Road Surface Preview task using a monocular camera. We try two different approaches: structure-from-motion and Convolutional Neural Networks. The proposed methods are evaluated against the stereo-based system. Experiments show that both structure-from-motion and CNNs have good potential for solving the problem, but they are not yet reliable enough to be a complete solution to the RSP task and to be used in an active suspension control unit.

Books on the topic "Camera motion estimation"

1

Schoepflin, Todd Nelson. Algorithms for estimating mean vehicle speed using uncalibrated traffic management cameras. [Olympia, Wash.]: Washington State Dept. of Transportation, 2003.


Book chapters on the topic "Camera motion estimation"

1

Utsumi, Akira, Hiroki Mori, Jun Ohya, and Masahiko Yachida. "Multiple camera based human motion estimation." In Computer Vision — ACCV'98, 655–62. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63931-4_274.

2

Peng, Xin, Yifu Wang, Ling Gao, and Laurent Kneip. "Globally-Optimal Event Camera Motion Estimation." In Computer Vision – ECCV 2020, 51–67. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58574-7_4.

3

Dani, Ashwin P., and Warren E. Dixon. "Single Camera Structure and Motion Estimation." In Visual Servoing via Advanced Numerical Methods, 209–29. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84996-089-2_12.

4

Kurz, Christian, Thorsten Thormählen, Bodo Rosenhahn, and Hans-Peter Seidel. "Exploiting Mutual Camera Visibility in Multi-camera Motion Estimation." In Advances in Visual Computing, 391–402. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10331-5_37.

5

Lertrusdachakul, Thitiporn, Terumasa Aoki, and Hiroshi Yasuda. "Camera Motion Estimation by Image Feature Analysis." In Pattern Recognition and Image Analysis, 618–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11552499_68.

6

Alyousefi, Khaled, and Jonathan Ventura. "Multi-camera Motion Estimation with Affine Correspondences." In Lecture Notes in Computer Science, 417–31. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50347-5_36.

7

Liang, Xuefeng, Cuicui Zhang, and Takashi Matsuyama. "Inlier Estimation for Moving Camera Motion Segmentation." In Computer Vision -- ACCV 2014, 352–67. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16817-3_23.

8

Almeida, Jurandy, Rodrigo Minetto, Tiago A. Almeida, Ricardo da S. Torres, and Neucimar J. Leite. "Robust Estimation of Camera Motion Using Optical Flow Models." In Advances in Visual Computing, 435–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10331-5_41.

9

Ventura, Jonathan, Clemens Arth, and Vincent Lepetit. "Approximated Relative Pose Solvers for Efficient Camera Motion Estimation." In Computer Vision - ECCV 2014 Workshops, 180–93. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16178-5_12.

10

Shimada, N., Y. Shirai, and Y. Kuno. "Model Adaptation and Posture Estimation of Moving Articulated Object Using Monocular Camera." In Articulated Motion and Deformable Objects, 159–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/10722604_14.


Conference papers on the topic "Camera motion estimation"

1

"OMNIDIRECTIONAL CAMERA MOTION ESTIMATION." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and and Technology Publications, 2008. http://dx.doi.org/10.5220/0001084505770584.

2

Ratshidaho, Terence, Jules Raymond Tapamo, Jonathan Claassens, and Natasha Govender. "ToF camera ego-motion estimation." In 2012 5th Robotics and Mechatronics Conference of South Africa (ROBMECH). IEEE, 2012. http://dx.doi.org/10.1109/robomech.2012.6558458.

3

Ma, Lili, Chengyu Cao, Amanda Young, and Naira Hovakimyan. "Motion Estimation via a Zoom Camera." In AIAA Guidance, Navigation and Control Conference and Exhibit. Reston, Virigina: American Institute of Aeronautics and Astronautics, 2008. http://dx.doi.org/10.2514/6.2008-7446.

4

Mahyari, Mohsen Yaghobi, and Mohammad Ghanbari. "Robust estimation of camera motion parameters." In 2014 22nd Iranian Conference on Electrical Engineering (ICEE). IEEE, 2014. http://dx.doi.org/10.1109/iraniancee.2014.6999655.

5

Meng, Zelin, Xiangbo Kong, Lin Meng, and Hiroyuki Tomiyama. "Camera Motion Estimation and Optimization Approach." In 2019 International Conference on Advanced Mechatronic Systems (ICAMechS). IEEE, 2019. http://dx.doi.org/10.1109/icamechs.2019.8861680.

6

"CAMERA MOTION ESTIMATION USING PARTICLE FILTERS." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and and Technology Publications, 2008. http://dx.doi.org/10.5220/0001086906700673.

7

Yuan, Ding, Miao Liu, and Hong Zhang. "Camera motion estimation using normal flows." In 2012 International Conference on Graphic and Image Processing, edited by Zeng Zhu. SPIE, 2013. http://dx.doi.org/10.1117/12.2010851.

8

Farin, Dirk, and Peter H. N. de With. "Estimating physical camera parameters based on multisprite motion estimation." In Electronic Imaging 2005, edited by Amir Said and John G. Apostolopoulos. SPIE, 2005. http://dx.doi.org/10.1117/12.587557.

9

"ROBUST CAMERA MOTION ESTIMATION IN VIDEO SEQUENCES." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and and Technology Publications, 2006. http://dx.doi.org/10.5220/0001361702940302.

10

Zou, Xiao-chun, Ming-yi He, Xin-bo Zhao, and Yan Feng. "A Robust Feature-Based Camera Motion Estimation Method." In 2010 International Conference on Innovative Computing & Communication and 2010 Asia-Pacific Conference on Information Technology & Ocean Engineering, (CICC-ITOE). IEEE, 2010. http://dx.doi.org/10.1109/cicc-itoe.2010.20.

