A collection of scientific literature on the topic "Computer vision and multimedia computation"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, theses, and other scientific sources on the topic "Computer vision and multimedia computation".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if the corresponding data are available in the metadata.

Journal articles on the topic "Computer vision and multimedia computation":

1

Babar, Muhammad, Mohammad Dahman Alshehri, Muhammad Usman Tariq, Fasee Ullah, Atif Khan, M. Irfan Uddin, and Ahmed S. Almasoud. "IoT-Enabled Big Data Analytics Architecture for Multimedia Data Communications." Wireless Communications and Mobile Computing 2021 (December 17, 2021): 1–9. http://dx.doi.org/10.1155/2021/5283309.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The current spread of the Internet of Things (IoT) has led to millions of IoT devices being connected to the Internet. With the increase in connected devices, the vision of gigantic multimedia big data (MMBD) is also gaining prominence and has been broadly acknowledged. MMBD management offers computation, exploration, storage, and control to resolve the QoS issues of multimedia data communications. However, it becomes challenging for multimedia systems to tackle the diverse multimedia-enabled IoT settings, including healthcare, traffic videos, automation, parking images, and surveillance, that produce a massive amount of big multimedia data to be processed and analyzed efficiently. There are several challenges in the existing structural design of IoT-enabled data management systems handling MMBD, including high-volume storage and processing of data, data heterogeneity due to various multimedia sources, and intelligent decision-making. In this article, an architecture is proposed to process and store MMBD efficiently in an IoT-enabled environment. The proposed architecture is a layered architecture integrated with a parallel and distributed module to accomplish big data analytics for multimedia data. A preprocessing module is also integrated to prepare the MMBD and speed up processing. The proposed system is realized and experimentally tested using real-time multimedia big data sets from authentic sources, which demonstrates the effectiveness of the proposed architecture.
2

WANG, CHIA-JEN, SHWU-HUEY YEN, and PATRICK S. WANG. "A MULTIMEDIA WATERMARKING TECHNIQUE BASED ON SVMs." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 08 (December 2008): 1487–511. http://dx.doi.org/10.1142/s0218001408006934.

Abstract:
In this paper we present an improved support vector machine (SVM) watermarking system for still images and video sequences. Through a thorough study of feature selection for training the SVM, the proposed system shows significant improvements in computational efficiency and robustness to various attacks. The improved algorithm is extended into a scene-based video watermarking technique. In a given scene, the algorithm uses the first h' frames to train an embedding SVM, and uses the trained SVM to watermark the rest of the frames. In the extraction phase, the detector uses only the center h frames of the first h' frames to train an extracting SVM. The final extracted watermark in a given scene is the average of the watermarks extracted from the remaining frames. Watermarks are embedded in the l longest scenes of a video, so that the scheme is computationally efficient and able to resist possible frame swapping/deletion/duplication attacks. Two collusion attacks on video watermarking, namely temporal frame averaging and watermark estimation remodulation, are discussed and examined. The proposed video watermarking algorithm is shown to be robust to compression and collusion attacks, and it is novel and practical for SVM applications.
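The scene-based extraction described above ends by averaging the watermarks pulled from a scene's remaining frames. As a hedged illustration only (not the paper's SVM pipeline), the sketch below assumes per-frame binary watermark estimates are already available and shows how averaging plus thresholding suppresses per-frame extraction noise; the helper name `fuse_watermarks` is ours:

```python
import numpy as np

def fuse_watermarks(extracted, threshold=0.5):
    """Fuse per-frame binary watermark estimates by averaging.

    extracted: (n_frames, h, w) array with values in {0, 1}, one noisy
    estimate per non-training frame of a scene.  Averaging and
    thresholding amounts to a per-pixel majority vote, which is why
    averaging over many frames makes the final watermark robust.
    """
    mean = np.asarray(extracted, dtype=float).mean(axis=0)
    return (mean >= threshold).astype(np.uint8)

# toy run: three noisy copies of a 2x2 watermark, one bit flipped
estimates = np.array([[[1, 0], [0, 1]],
                      [[1, 0], [0, 1]],
                      [[1, 1], [0, 1]]])
print(fuse_watermarks(estimates))
```

With the flipped bit outvoted, the fused result equals the original watermark.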
3

Tellaeche Iglesias, Alberto, Ignacio Fidalgo Astorquia, Juan Ignacio Vázquez Gómez, and Surajit Saikia. "Gesture-Based Human Machine Interaction Using RCNNs in Limited Computation Power Devices." Sensors 21, no. 24 (December 8, 2021): 8202. http://dx.doi.org/10.3390/s21248202.

Abstract:
The use of gestures is one of the main forms of human-machine interaction (HMI) in many fields, from advanced industrial robotics setups to multimedia devices at home. Almost every gesture detection system uses computer vision as the fundamental technology, with the well-known problems of image processing: changes in lighting conditions, partial occlusions, and variations in color, among others. Deep learning techniques have proven very effective at solving these potential issues. This research proposes a hand gesture recognition system based on convolutional neural networks and color images that is robust against environmental variations, achieves real-time performance on embedded systems, and addresses the principal problems mentioned above. A new CNN has been specifically designed with a small architecture, in terms of number of layers and total number of neurons, for use in computationally limited devices. The obtained results achieve an average success rate of 96.92%, a better score than those obtained by previous algorithms discussed in the state of the art.
4

LAI, JIAN HUANG, and PONG C. YUEN. "FACE AND EYE DETECTION FROM HEAD AND SHOULDER IMAGE ON MOBILE DEVICES." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 07 (November 2006): 1053–75. http://dx.doi.org/10.1142/s0218001406005150.

Abstract:
With the advance of semiconductor technology, current mobile devices support multimodal input and multimedia output. In turn, human-computer communication applications can be developed on mobile devices such as mobile phones and PDAs. This paper addresses the research issues of face and eye detection on mobile devices. The major obstacles to overcome are the relatively low processor speed, low storage memory, and low image (CMOS sensor) quality. To solve these problems, this paper proposes a novel and efficient method for face and eye detection. The proposed method is based on color information because its computation time is small. However, color information is sensitive to illumination changes. In view of this limitation, this paper proposes an adaptive Illumination Insensitive (AI2) Algorithm, which dynamically calculates the skin color region based on an image's color distribution. Moreover, to handle strong sunlight, which saturates skin-color pixels, a dual-color-space model is also developed. Based on the AI2 algorithm and face boundary information, the face region is located. The eye detection method is based on an average integral of density, projection techniques, and Gabor filters. To quantitatively evaluate the performance of face and eye detection, a new metric is proposed. 2158 head-and-shoulder images captured under uncontrolled indoor and outdoor lighting conditions are used for evaluation. The accuracies of face detection and eye detection are 98% and 97%, respectively. Moreover, the average computation time for one image using Matlab code on a Pentium III 700 MHz computer is less than 15 seconds. The computation time would be reduced to tens or hundreds of milliseconds (ms) if a low-level programming language were used for implementation. The results are encouraging and show that the proposed method is suitable for mobile devices.
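The paper's AI2 algorithm is not reproduced here, but the general idea it relies on (deriving the skin-color region from the image's own color distribution rather than from fixed thresholds) can be sketched in a few lines. Everything below, including the helper name `adaptive_skin_mask` and the chromaticity window, is an illustrative assumption, not the published method:

```python
import numpy as np

def adaptive_skin_mask(rgb, band=0.08):
    """Distribution-adaptive skin detection, heavily simplified.

    Works in normalized r-g chromaticity (which discards overall
    brightness), centres an acceptance window on the image's dominant
    chromaticity, and keeps pixels inside it.  NOT the paper's AI2
    algorithm -- only a sketch of adapting the skin region to the
    image's own colour distribution.  Assumes skin dominates the frame.
    """
    x = rgb.astype(float)
    s = x.sum(axis=2) + 1e-6
    r, g = x[..., 0] / s, x[..., 1] / s
    r0, g0 = np.median(r), np.median(g)  # adapt to this image
    return (np.abs(r - r0) < band) & (np.abs(g - g0) < band)
```

On a head-and-shoulder shot dominated by skin, the median chromaticity lands on skin, so the window keeps face pixels while rejecting strongly different colors.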
5

Mahmoudi, Sidi Ahmed, Mohammed Amin Belarbi, El Wardani Dadi, Saïd Mahmoudi, and Mohammed Benjelloun. "Cloud-Based Image Retrieval Using GPU Platforms." Computers 8, no. 2 (June 14, 2019): 48. http://dx.doi.org/10.3390/computers8020048.

Abstract:
Image retrieval is an interesting tool for different domains related to computer vision, such as multimedia retrieval, pattern recognition, medical imaging, video surveillance, and movement analysis. Visual characteristics of images such as color, texture, and shape are used to identify their content. However, the retrieval process becomes very challenging due to the difficulty of managing large databases in terms of storage, computational complexity, temporal performance, and similarity representation. In this paper, we propose a cloud-based platform integrating several feature extraction algorithms used for content-based image retrieval (CBIR) systems. Moreover, we propose an efficient combination of SIFT and SURF descriptors that allows extracting and matching image features and hence improves the image retrieval process. The proposed algorithms have been implemented on the CPU and also adapted to fully exploit the power of GPUs. Our platform is presented as a responsive web solution that offers users the possibility to exploit, test, and evaluate image retrieval methods. It provides simple access to algorithms such as SIFT and SURF descriptors, without the need to set up an environment or install anything, while requiring minimal effort for preprocessing and configuration. On the other hand, our cloud-based CPU and GPU implementations are scalable, which means that they can be used even with large databases of multimedia documents. The obtained results showed: 1. retrieval improvement in terms of recall and precision; 2. performance improvement in terms of computation time as a result of exploiting GPUs in parallel; 3. reduction of energy consumption.
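Descriptor matching is the step where combined SIFT and SURF features pay off in a pipeline like this. As a minimal sketch (pure NumPy, descriptors assumed to be precomputed by whatever extractor is in use; `ratio_match` is our illustrative helper, not the authors' GPU implementation), here is standard nearest-neighbour matching with Lowe's ratio test:

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in
    desc_b, keeping the pair only if the best distance is clearly
    smaller than the second best (Lowe's ratio test).  desc_b must
    contain at least two descriptors."""
    # pairwise squared Euclidean distances, shape (len(a), len(b))
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = d2[rows, best] < (ratio ** 2) * d2[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]

# query descriptors vs. database descriptors (2-D toy vectors)
a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.0, 0.1], [3.0, 3.0], [5.0, 5.1]])
print(ratio_match(a, b))
```

Pairs that survive the ratio test can then be counted or scored to rank database images against the query.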
6

Li, Xiaojing, and Shuting Ding. "Interpersonal Interface System of Multimedia Intelligent English Translation Based on Deep Learning." Scientific Programming 2022 (April 14, 2022): 1–9. http://dx.doi.org/10.1155/2022/8027003.

Abstract:
Artificial intelligence is a very challenging science, and people engaged in this work must understand computer science, psychology, and philosophy. Artificial intelligence encompasses a wide range of sciences and is composed of different fields, such as machine learning and computer vision. In recent years, driven jointly by technologies such as the Internet, big data, the Internet of Things, and voice recognition, the rapid development of artificial intelligence has presented new features such as deep learning, cross-border integration, and human-machine collaboration. Intelligent English translation is an innovation and experiment of the English language industry in the field of science and technology. Machine translation is the use of computer programs to translate one natural language into another. This interdisciplinary subject spans three disciplines: artificial intelligence, computational linguistics, and mathematical logic. Machine translation is becoming more and more popular with the public, mainly because it is much cheaper than manual translation. This study shows that the accuracy rate of the traditional ICTCLAS method is 76.40%, while the accuracy rate of the method researched in this article is 94.58%, indicating that the proposed method is better. Machine translation greatly reduces the cost of translation because very few steps require human participation; translation is done essentially by computers. It is also an important research topic in the field of English studies. Owing to the wide range of language cultures and the influence of local culture, thought, and semantic environment, traditional human translation methods have many shortcomings, and the current demand for translation is not being met. Based on this, this paper analyzes the current application status and prospects of intelligent translation, and presents an optimized design of the human interface system for intelligent English translation. Experiments show that the multimedia intelligent English translation system based on deep learning not only improves the accuracy of English translation but also greatly improves the efficiency of learning English.
7

Lee, Hyowon, Nazlena Mohamad Ali, and Lynda Hardman. "Designing Interactive Applications to Support Novel Activities." Advances in Human-Computer Interaction 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/180192.

Abstract:
R&D in media-related technologies, including multimedia, information retrieval, computer vision, and the semantic web, is experimenting with a variety of computational tools that, if sufficiently matured, could support many novel activities that are not practiced today. Interactive technology demonstration systems, typically produced at the end of such projects, show great potential for taking advantage of technological possibilities. These demo systems or "demonstrators" are, even if crude or far-fetched, a significant manifestation of the technologists' visions in transforming emerging technologies into novel usage scenarios and applications. In this paper, we reflect on the design processes and crucial design decisions made while designing some successful web-based interactive demonstrators developed by the authors. We identify methodological issues in applying today's requirement-driven usability engineering methods to designing this type of novel application, and call for a clearer distinction between designing mainstream applications and designing novel applications. More solution-oriented approaches leveraging design thinking are required, and more pragmatic evaluation criteria are needed that assess the role of a system in exploiting technological possibilities to provoke further brainstorming and discussion. Such an approach will support a more efficient channelling of the technology-to-application transformations that are becoming increasingly crucial in today's context of rich technological possibilities.
8

He, Guanqi, and Guo Lu. "Research on saliency detection method based on depth and width neural network." Journal of Physics: Conference Series 2037, no. 1 (September 1, 2021): 012034. http://dx.doi.org/10.1088/1742-6596/2037/1/012034.

Abstract:
Image saliency detection segments the most important areas of an image. Solving this problem usually involves knowledge from computer vision, neuroscience, cognitive psychology, and other fields. In recent years, as deep learning has made great achievements in computer vision, it has also performed well in image saliency detection; algorithms based on deep convolutional neural networks have therefore become the most effective methods for the task. Researchers generally improve the computational efficiency of neural-network-based saliency detection algorithms from two angles: one is to trim the network structure and combine it with traditional feature extraction methods; the other is to use a lighter network to solve the saliency detection problem. Based on these two lines of thinking, this paper proposes two efficient and accurate neural-network-based saliency detection algorithms. In recent years, with the rapid development of multimedia and Internet technologies, a huge amount of picture information is generated every day on blogs and social networking or shopping platforms. Such a flood of information not only enriches people's lives but also makes it difficult for network management platforms to manage this image information efficiently and accurately. Therefore, how to understand and process this image information more intelligently and efficiently has become a hot topic for many image processing and computer vision researchers. Saliency detection technology plays a key role in the intelligent understanding and processing of images. Put simply, saliency detection is a technology that automatically computes or detects the most important areas in an image, and its results provide a basis for understanding and processing the image content. Saliency detection is a basic problem in computer vision, neuroscience, and visual perception; the algorithm detects and extracts the most interesting or significant areas in an image.
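The abstract's neural models are out of scope here, but the basic notion of a saliency map can be illustrated with a classical non-learned baseline. The sketch below is a grayscale simplification inspired by the frequency-tuned approach (score each pixel by how far its smoothed value lies from the global mean); `ft_saliency` is our illustrative name, not code from the paper:

```python
import numpy as np

def ft_saliency(gray):
    """Grayscale simplification of frequency-tuned saliency:
    score each pixel by how far its locally smoothed value lies
    from the global mean intensity, then normalize to [0, 1]."""
    g = gray.astype(float)
    h, w = g.shape
    pad = np.pad(g, 1, mode='edge')
    # 3x3 box blur with 'same' output size
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    sal = np.abs(blur - g.mean())
    return sal / (sal.max() + 1e-8)

# a bright square on a dark field scores near 1, background near 0
img = np.zeros((32, 32))
img[8:12, 8:12] = 255.0
print(ft_saliency(img)[10, 10], ft_saliency(img)[0, 0])
```

The odd-one-out region gets the highest score, which is exactly what a saliency map is meant to express.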
9

Zhang, Wenqiang, Jiemin Fang, Xinggang Wang, and Wenyu Liu. "EfficientPose: Efficient human pose estimation with neural architecture search." Computational Visual Media 7, no. 3 (April 7, 2021): 335–47. http://dx.doi.org/10.1007/s41095-021-0214-z.

Abstract:
Human pose estimation from images and video is a key task in many multimedia applications. Previous methods achieve great performance but rarely take efficiency into consideration, which makes it difficult to deploy the networks on lightweight devices. Nowadays, real-time multimedia applications call for more efficient models for better interaction. Moreover, most deep neural networks for pose estimation directly reuse networks designed for image classification as the backbone, which are not optimized for the pose estimation task. In this paper, we propose an efficient framework for human pose estimation with two parts, an efficient backbone and an efficient head. By implementing a differentiable neural architecture search method, we customize the backbone network design for pose estimation and reduce computational cost with negligible accuracy degradation. For the efficient head, we slim the transposed convolutions and propose a spatial information correction module to improve the final prediction. In experiments, we evaluate our networks on the MPII and COCO datasets. Our smallest model requires only 0.65 GFLOPs with 88.1% PCKh@0.5 on MPII, and our large model needs only 2 GFLOPs while its accuracy is competitive with the state-of-the-art large model, HRNet, which takes 9.5 GFLOPs.
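The GFLOPs numbers quoted above come from summing per-layer operation counts. A hedged sketch of the usual convolution count (conventions differ; here a multiply-accumulate is counted as 2 FLOPs, and 'same' padding is assumed so the output only shrinks by the stride; `conv2d_flops` is our illustrative helper, not the paper's profiler):

```python
def conv2d_flops(h_in, w_in, c_in, c_out, k, stride=1):
    """Operation count of one k x k convolution layer.

    Counts a multiply-accumulate as 2 FLOPs (one common convention)
    and assumes 'same'-style padding, so the spatial size only
    shrinks by the stride.
    """
    h_out, w_out = h_in // stride, w_in // stride
    macs = h_out * w_out * c_out * (k * k * c_in)
    return 2 * macs

# a 3x3 convolution, 64 -> 64 channels, on a 64x64 feature map
print(conv2d_flops(64, 64, 64, 64, 3) / 1e9, "GFLOPs")  # about 0.3 GFLOPs
```

Summing this over every layer of a network yields totals directly comparable to the 0.65, 2, and 9.5 GFLOPs figures cited.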
10

Garcia Freitas, Pedro, Luísa da Eira, Samuel Santos, and Mylene Farias. "On the Application LBP Texture Descriptors and Its Variants for No-Reference Image Quality Assessment." Journal of Imaging 4, no. 10 (October 4, 2018): 114. http://dx.doi.org/10.3390/jimaging4100114.

Abstract:
Automatically assessing the quality of an image is a critical problem for a wide range of applications in computer vision and image processing. For example, many computer vision applications, such as biometric identification, content retrieval, and object recognition, rely on input images within a specific range of quality. Therefore, efforts have been made to develop image quality assessment (IQA) methods that can automatically estimate quality. Among the possible IQA approaches, No-Reference IQA (NR-IQA) methods are of fundamental interest, since they can be used in most real-time multimedia applications. NR-IQA methods assess the quality of an image without using the reference (or pristine) image. In this paper, we investigate the use of texture descriptors in the design of NR-IQA methods. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate quality. To investigate whether this premise is valid, we analyze the use of a set of state-of-the-art Local Binary Pattern (LBP) texture descriptors in IQA methods. In particular, we present a comprehensive review with a detailed description of the considered methods. Additionally, we propose a framework for using texture descriptors in NR-IQA methods. Our experimental results indicate that, although not all texture descriptors are suitable for NR-IQA, many can be used for this purpose, achieving good accuracy with the advantage of low computational complexity.
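The plain 8-neighbour LBP at the heart of these descriptors is compact enough to sketch. The helper below is a minimal illustration (the paper surveys many LBP variants beyond this basic form), computing the per-pixel codes and the normalized histogram that an NR-IQA method would feed to a quality regressor:

```python
import numpy as np

def lbp_histogram(gray):
    """Plain 8-neighbour Local Binary Pattern feature.

    Each interior pixel is encoded by comparing it with its 8
    neighbours (bit = 1 where neighbour >= centre), giving a code in
    [0, 255]; the normalized code histogram is the texture feature a
    quality regressor would consume.
    """
    g = np.asarray(gray, dtype=int)
    c = g[1:-1, 1:-1]                      # interior pixels only
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Visible impairments such as blur or blocking shift mass between histogram bins, which is exactly the statistic the NR-IQA premise exploits.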

Dissertations on the topic "Computer vision and multimedia computation":

1

Gong, Shaogang. "Parallel computation of visual motion." Thesis, University of Oxford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238149.

2

Gavin, Andrew S. (Andrew Scott). "Low computation vision-based navigation for mobile robots." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/38006.

3

Bryant, Bobby. "A computer-based multimedia prototype for night vision goggles." Monterey, Calif.: Naval Postgraduate School; Springfield, Va.: Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA286208.

Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, September 1994.
Thesis advisor(s): Kishore Sengupta, Alice Crawford. "September 1994." Bibliography: p. 35. Also available online.
4

Bryant, Bobby. "A computer-based multimedia prototype for night vision goggles." Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/30923.

Abstract:
Naval aviators who employ night vision goggles (NVG) face additional risks during nighttime operations. In an effort to reduce these risks, increased training with NVGs is suggested. Our goal was to design a computer-based, interactive multimedia system that would assist in the training of pilots who use NVGs. This thesis details the methods and techniques used in the development of the NVG multimedia prototype. It describes which hardware components and software applications were utilized as well as how the prototype was developed. Several facets of multimedia technology (sound, animation, video, and three-dimensional graphics) have been incorporated into the interactive prototype. For a more robust successive prototype, recommendations are submitted for future enhancements, including alternative methodologies as well as expanded interactions. Keywords: multimedia, computer aided instruction, night vision goggles.
5

Sahiner, Ali Vahit. "A computation model for parallelism : self-adapting parallel servers." Thesis, University of Westminster, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305872.

6

Liu, Jianguo, and 劉建國. "Fast computation of moments with applications to transforms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31235086.

7

Liu, Jianguo. "Fast computation of moments with applications to transforms /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B17664986.

8

Battiti, Roberto Fox Geoffrey C. "Multiscale methods, parallel computation, and neural networks for real-time computer vision /." Diss., Pasadena, Calif. : California Institute of Technology, 1990. http://resolver.caltech.edu/CaltechETD:etd-06072007-074441.

9

Hsiao, Hsu-Feng. "Multimedia streaming congestion control over heterogeneous networks : from distributed computation and end-to-end perspectives /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/5946.

10

Nóbrega, Rui Pedro da Silva. "Interactive acquisition of spatial information from images for multimedia applications." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/11079.

Abstract:
Dissertation submitted for the degree of Doctor in Informatics
This dissertation addresses the problem of creating interactive mixed reality applications where virtual objects interact in a real-world scenario. These scenarios are intended to be captured by the users with cameras. In other words, the goal is to produce applications where virtual objects are introduced into photographs taken by the users. This is relevant for creating games and architectural and space planning applications that interact with visual elements in the images such as walls, floors, and empty spaces. Introducing virtual objects into photographs or video sequences presents several challenges, such as pose estimation and visually correct interaction with the boundaries of such objects. Furthermore, the introduced virtual objects should be interactive and respond to the real physical environment. The proposed detection system is semi-automatic and thus depends partially on the user to obtain the elements it needs. This operation should be simple enough to accommodate the needs of a non-expert user. The system analyzes a photo captured by the user and detects high-level features such as vanishing points, floor, and scene orientation. Using these features it is possible to create virtual mixed and augmented reality applications where the user takes one or more photos of a certain place and interactively introduces virtual objects or elements that blend with the picture in real time. This document discusses the computer vision, computer graphics, and human-computer interaction techniques required to acquire images and information about the scenario involving the user. To demonstrate the framework and the proposed solutions, several proof-of-concept projects are presented and studied. Additionally, to validate the solution, several system tests are described, and each case-study interface was the subject of different user studies.
Fundação para a Ciência e Tecnologia - research grant SFRH/BD/47511/2008

Books on the topic "Computer vision and multimedia computation":

1

Salerno, Emanuele. Computational Intelligence for Multimedia Understanding: International Workshop, MUSCLE 2011, Pisa, Italy, December 13-15, 2011, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

2

Petra, Perner, ed. Case-based reasoning on images and signals. Berlin: Springer, 2008.

3

Camara, Oscar. Statistical Atlases and Computational Models of the Heart. Imaging and Modelling Challenges: Second International Workshop, STACOM 2011, Held in Conjunction with MICCAI 2011, Toronto, ON, Canada, September 22, 2011, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

4

Kanatani, Kenʼichi. Geometric computation for machine vision. Oxford: Clarendon Press, 1993.

5

Steinmetz, Ralf. Multimedia Applications. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004.

6

Mandal, Mrinal Kr. Multimedia signals and systems. Boston: Kluwer Academic Publishers, 2003.

7

Kanatani, Kenʼichi. Statistical optimization for geometric computation: Theory and practice. Amsterdam: Elsevier, 1996.

8

Foresti, Gian Luca. Multimedia Video-Based Surveillance Systems: Requirements, Issues and Solutions. Boston, MA: Springer US, 2000.

9

Kevitt, Paul Mc. Integration of Natural Language and Vision Processing: (Volume II) Intelligent Multimedia. Dordrecht: Springer Netherlands, 1995.

10

Sarfraz, Muhammad. Computer vision and image processing in intelligent systems and multimedia technologies. Hershey, PA: Information Science Reference, an imprint of IGI Global, 2014.


Book chapters on the topic "Computer vision and multimedia computation":

1

Gong, Shaogang, and Michael Brady. "Parallel computation of optic flow." In Computer Vision — ECCV 90, 124–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/bfb0014858.

2

Hartley, Richard I. "Computation of the quadrifocal tensor." In Computer Vision — ECCV'98, 20–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0055657.

3

Kim, Sujung, Hee-Dong Kim, Wook-Joong Kim, and Seong-Dae Kim. "Fast Computation of a Visual Hull." In Computer Vision – ACCV 2010, 1–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19282-1_1.

4

Günyel, Bertan, Rodrigo Benenson, Radu Timofte, and Luc Van Gool. "Stixels Motion Estimation without Optical Flow Computation." In Computer Vision – ECCV 2012, 528–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33783-3_38.

5

Zhong, Yiran, Yuchao Dai, and Hongdong Li. "Stereo Computation for a Single Mixture Image." In Computer Vision – ECCV 2018, 441–56. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01240-3_27.

6

Zhou, Yi-Tong, and Rama Chellappa. "Computation of Optical Flow." In Artificial Neural Networks for Computer Vision, 83–121. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-2834-9_6.

7

Barath, Daniel, Michal Polic, Wolfgang Förstner, Torsten Sattler, Tomas Pajdla, and Zuzana Kukelova. "Making Affine Correspondences Work in Camera Geometry Computation." In Computer Vision – ECCV 2020, 723–40. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58621-8_42.

8

Bab-Hadiashar, Alireza, and David Suter. "Robust total least squares based optic flow computation." In Computer Vision — ACCV'98, 566–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63930-6_168.

9

Hu, Ronghang, Jacob Andreas, Trevor Darrell, and Kate Saenko. "Explainable Neural Computation via Stack Neural Module Networks." In Computer Vision – ECCV 2018, 55–71. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01234-2_4.

10

Park, Sang-Cheol, and Seong-Whan Lee. "Fast Distance Computation with a Stereo Head-Eye System." In Biologically Motivated Computer Vision, 434–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45482-9_44.


Conference papers on the topic "Computer vision and multimedia computation":

1

Olague, Gustavo. "Evolutionary computer vision." In GECCO '20: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3377929.3389861.

2

Cheng, Wen-Huang. "Fashion Meets Computer Vision." In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3552468.3554360.

3

Sotirios, Manitsaris, and Pekos Georgios. "Computer Vision Method in Music Interaction." In 2009 First International Conference on Advances in Multimedia (MMEDIA). IEEE, 2009. http://dx.doi.org/10.1109/mmedia.2009.34.

4

Yan, Tao, Jiamin Wu, Tiankuang Zhou, Hao Xie, Feng Xu, Jingtao Fan, Lu Fang, Xing Lin, and Qionghai Dai. "Solving computer vision tasks with diffractive neural networks." In Optoelectronic Imaging and Multimedia Technology VI, edited by Qionghai Dai, Tsutomu Shimura, and Zhenrong Zheng. SPIE, 2019. http://dx.doi.org/10.1117/12.2545609.

5

Kruger, Antonio, and Xiaoyi Jiang. "Improving Human Computer Interaction Through Embedded Vision Technology." In 2007 International Conference on Multimedia & Expo. IEEE, 2007. http://dx.doi.org/10.1109/icme.2007.4284743.

6

Wang, Jingya, Mohammed Korayem, Saul Blanco, and David J. Crandall. "Tracking Natural Events through Social Media and Computer Vision." In MM '16: ACM Multimedia Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2964284.2984067.

7

Loten, Tom, and Richard Green. "Embedded computer vision framework on a multimedia processor." In 2008 23rd International Conference Image and Vision Computing New Zealand (IVCNZ). IEEE, 2008. http://dx.doi.org/10.1109/ivcnz.2008.4762142.

8

Schlessman, Jason, Mark Lodato, Burak Ozer, and Wayne Wolf. "Heterogeneous MPSoC Architectures for Embedded Computer Vision." In Multimedia and Expo, 2007 IEEE International Conference on. IEEE, 2007. http://dx.doi.org/10.1109/icme.2007.4285039.

9

Zadnik, Jakub, Markku Makitalo, and Pekka Jaaskelainen. "Pruned Lightweight Encoders for Computer Vision." In 2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2022. http://dx.doi.org/10.1109/mmsp55362.2022.9949477.

10

Martorell, O., A. Buades, and B. Coll. "Matching of Line Segment for Stereo Computation." In International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2017. http://dx.doi.org/10.5220/0006121004100417.


To the bibliography