To view other types of publications on this topic, follow the link: Computer vision and multimedia computation.

Journal articles on the topic "Computer vision and multimedia computation"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Computer vision and multimedia computation."

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these details are available in the source's metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Babar, Muhammad, Mohammad Dahman Alshehri, Muhammad Usman Tariq, Fasee Ullah, Atif Khan, M. Irfan Uddin, and Ahmed S. Almasoud. "IoT-Enabled Big Data Analytics Architecture for Multimedia Data Communications." Wireless Communications and Mobile Computing 2021 (December 17, 2021): 1–9. http://dx.doi.org/10.1155/2021/5283309.

Abstract:
The ongoing spread of the Internet of Things (IoT) has led to millions of IoT devices being connected to the Internet. With the increase in connected devices, the vision of gigantic multimedia big data (MMBD) is also gaining prominence and has been broadly acknowledged. MMBD management offers computation, exploration, storage, and control to resolve the QoS issues for multimedia data communications. However, it becomes challenging for multimedia systems to tackle the diverse multimedia-enabled IoT settings, including healthcare, traffic videos, automation, society parking images, and surveillance, which produce a massive amount of big multimedia data to be processed and analyzed efficiently. There are several challenges in the existing structural design of IoT-enabled data management systems for handling MMBD, including high-volume storage and processing of data, data heterogeneity due to various multimedia sources, and intelligent decision-making. In this article, an architecture is proposed to process and store MMBD efficiently in an IoT-enabled environment. The proposed architecture is a layered architecture integrated with a parallel and distributed module to accomplish big data analytics for multimedia data. A preprocessing module is also integrated with the proposed architecture to prepare the MMBD and speed up the processing mechanism. The proposed system is realized and experimentally tested using real-time multimedia big data sets from authentic sources, which demonstrates the effectiveness of the proposed architecture.
2

WANG, CHIA-JEN, SHWU-HUEY YEN, and PATRICK S. WANG. "A MULTIMEDIA WATERMARKING TECHNIQUE BASED ON SVMs." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 08 (December 2008): 1487–511. http://dx.doi.org/10.1142/s0218001408006934.

Abstract:
In this paper we present an improved support vector machines (SVMs) watermarking system for still images and video sequences. Through a thorough study of feature selection for training the SVM, the proposed system shows significant improvements in computational efficiency and robustness to various attacks. The improved algorithm is extended into a scene-based video watermarking technique. In a given scene, the algorithm uses the first h' frames to train an embedding SVM, and uses the trained SVM to watermark the rest of the frames. In the extracting phase, the detector uses only the center h frames of the first h' frames to train an extracting SVM. The final extracted watermark in a given scene is the average of the watermarks extracted from the remaining frames. Watermarks are embedded in the l longest scenes of a video, which makes the scheme computationally efficient and capable of resisting possible frame swapping/deleting/duplicating attacks. Two collusion attacks on video watermarking, namely temporal frame averaging and watermark estimation remodulation, are discussed and examined. The proposed video watermarking algorithm is shown to be robust to compression and collusion attacks, and it is a novel and practical application of SVMs.
3

Tellaeche Iglesias, Alberto, Ignacio Fidalgo Astorquia, Juan Ignacio Vázquez Gómez, and Surajit Saikia. "Gesture-Based Human Machine Interaction Using RCNNs in Limited Computation Power Devices." Sensors 21, no. 24 (December 8, 2021): 8202. http://dx.doi.org/10.3390/s21248202.

Abstract:
The use of gestures is one of the main forms of human machine interaction (HMI) in many fields, from advanced industrial robotics setups to multimedia devices at home. Almost every gesture detection system uses computer vision as the fundamental technology, with the already well-known problems of image processing: changes in lighting conditions, partial occlusions, and variations in color, among others. To solve these potential issues, deep learning techniques have proven to be very effective. This research proposes a hand gesture recognition system based on convolutional neural networks and color images that is robust against environmental variations, achieves real-time performance on embedded systems, and addresses the problems mentioned above. A new CNN has been specifically designed with a small architecture, in terms of the number of layers and the total number of neurons, so that it can be used on computationally limited devices. The obtained results achieve an average success rate of 96.92%, a better score than those obtained by previous algorithms discussed in the state of the art.
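To make the idea of a deliberately small gesture-classification CNN concrete, the sketch below shows a minimal PyTorch network of that general shape. It is an illustrative assumption, not the architecture from the paper: the 64x64 input resolution, channel widths, and ten-class output are placeholders.

```python
# A minimal sketch (not the authors' network) of a compact CNN for RGB hand-gesture
# classification, sized for computationally limited devices.
import torch
import torch.nn as nn

class TinyGestureNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

if __name__ == "__main__":
    model = TinyGestureNet(num_classes=10)
    dummy = torch.randn(1, 3, 64, 64)                 # one RGB frame
    print(model(dummy).shape)                         # torch.Size([1, 10])
    print(sum(p.numel() for p in model.parameters()), "parameters")
```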
4

LAI, JIAN HUANG, and PONG C. YUEN. "FACE AND EYE DETECTION FROM HEAD AND SHOULDER IMAGE ON MOBILE DEVICES." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 07 (November 2006): 1053–75. http://dx.doi.org/10.1142/s0218001406005150.

Abstract:
With the advance of semiconductor technology, current mobile devices support multimodal input and multimedia output. In turn, human computer communication applications can be developed on mobile devices such as mobile phones and PDAs. This paper addresses the research issues of face and eye detection on mobile devices. The major obstacles to overcome are the relatively low processor speed, low storage memory, and low image (CMOS sensor) quality. To solve these problems, this paper proposes a novel and efficient method for face and eye detection. The proposed method is based on color information because its computation time is small. However, color information is sensitive to illumination changes. In view of this limitation, this paper proposes an adaptive Illumination Insensitive (AI2) algorithm, which dynamically calculates the skin color region based on the image color distribution. Moreover, to handle the strong sunlight effect, which drives skin color pixels into saturation, a dual-color-space model is also developed. Based on the AI2 algorithm and face boundary information, the face region is located. The eye detection method is based on an average integral of density, projection techniques, and Gabor filters. To quantitatively evaluate the performance of face and eye detection, a new metric is proposed. A total of 2158 head-and-shoulder images captured under uncontrolled indoor and outdoor lighting conditions are used for evaluation. The accuracies of face detection and eye detection are 98% and 97%, respectively. Moreover, the average computation time for one image using Matlab code on a Pentium III 700 MHz computer is less than 15 seconds. The computation time would be reduced to tens or hundreds of milliseconds (ms) if a low-level programming language were used for the implementation. The results are encouraging and show that the proposed method is suitable for mobile devices.
5

Mahmoudi, Sidi Ahmed, Mohammed Amin Belarbi, El Wardani Dadi, Saïd Mahmoudi, and Mohammed Benjelloun. "Cloud-Based Image Retrieval Using GPU Platforms." Computers 8, no. 2 (June 14, 2019): 48. http://dx.doi.org/10.3390/computers8020048.

Abstract:
The process of image retrieval presents an interesting tool for different domains related to computer vision, such as multimedia retrieval, pattern recognition, medical imaging, video surveillance, and movement analysis. Visual characteristics of images such as color, texture, and shape are used to identify the content of images. However, the retrieval process becomes very challenging due to the difficulty of managing large databases in terms of storage, computational complexity, temporal performance, and similarity representation. In this paper, we propose a cloud-based platform in which we integrate several feature extraction algorithms used for content-based image retrieval (CBIR) systems. Moreover, we propose an efficient combination of SIFT and SURF descriptors that allows extracting and matching image features and hence improves the process of image retrieval. The proposed algorithms have been implemented on the CPU and also adapted to fully exploit the power of GPUs. Our platform is presented as a responsive web solution that offers users the possibility to exploit, test, and evaluate image retrieval methods. The platform gives users simple access to different algorithms, such as the SIFT and SURF descriptors, without the need to set up the environment or install anything, while spending minimal effort on preprocessing and configuration. On the other hand, our cloud-based CPU and GPU implementations are scalable, which means that they can be used even with large databases of multimedia documents. The obtained results showed: 1. improved retrieval quality in terms of recall and precision; 2. improved performance in terms of computation time as a result of exploiting GPUs in parallel; 3. reduced energy consumption.
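As a concrete illustration of the local-feature matching that such a CBIR pipeline builds on, the sketch below extracts SIFT keypoints with OpenCV and scores a candidate image by ratio-test matches. It is a minimal CPU-only sketch and does not reproduce the paper's combined SIFT and SURF descriptor or its GPU implementation; the file names are placeholders.

```python
# Rank a candidate image against a query image by counting good SIFT matches.
import cv2

def match_sift(path_query: str, path_candidate: str, ratio: float = 0.75) -> int:
    img1 = cv2.imread(path_query, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path_candidate, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < ratio * n.distance]
    return len(good)  # a crude similarity score for ranking candidates

if __name__ == "__main__":
    print(match_sift("query.jpg", "candidate.jpg"))   # placeholder image paths
```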
6

Li, Xiaojing, and Shuting Ding. "Interpersonal Interface System of Multimedia Intelligent English Translation Based on Deep Learning." Scientific Programming 2022 (April 14, 2022): 1–9. http://dx.doi.org/10.1155/2022/8027003.

Abstract:
Artificial intelligence is a very challenging science, and people who are engaged in this work must understand computers, psychology, and philosophy. Artificial intelligence covers a wide range of sciences; it is composed of different fields, such as machine learning and computer vision. In recent years, with the rise and joint drive of technologies such as the Internet, big data, the Internet of Things, and voice recognition, the rapid development of artificial intelligence technology has presented new features such as deep learning, cross-border integration, and human-machine collaboration. Intelligent English translation is an innovation and experiment of the English language industry in the field of science and technology. Machine translation is the use of computer programmes to translate one natural language into another specified natural language. This interdisciplinary subject includes three disciplines: artificial intelligence, computational linguistics, and mathematical logic. Machine translation is becoming more and more popular with the public, and its main advantage is that it is much cheaper than manual translation. This study shows that the accuracy rate of the traditional ICTCLAS method is 76.40%, while the accuracy rate of the method proposed in this article is 94.58%, indicating that the proposed method is better. Machine translation greatly reduces the cost of translation because very few steps require human participation, and the translation is basically done by computers. It is also an important research topic in the field of English studies. Due to the wide range of language cultures and the influence of local culture, thought, and semantic environment, traditional human translation methods have many shortcomings, and the current demand for translation cannot yet be fully met. Based on this, this paper analyzes the current state and prospects of intelligent translation and carries out an optimized design of the interpersonal interface system for intelligent English translation. Experiments show that the multimedia intelligent English translation system based on deep learning not only improves the accuracy of English translation but also greatly improves the efficiency of people learning English.
7

Lee, Hyowon, Nazlena Mohamad Ali, and Lynda Hardman. "Designing Interactive Applications to Support Novel Activities." Advances in Human-Computer Interaction 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/180192.

Abstract:
R&D in media-related technologies, including multimedia, information retrieval, computer vision, and the semantic web, is experimenting with a variety of computational tools that, if sufficiently matured, could support many novel activities that are not practiced today. Interactive technology demonstration systems, typically produced at the end of such projects, show great potential for taking advantage of technological possibilities. These demo systems or "demonstrators" are, even if crude or far-fetched, a significant manifestation of the technologists' visions in transforming emerging technologies into novel usage scenarios and applications. In this paper, we reflect on design processes and crucial design decisions made while designing some successful, web-based interactive demonstrators developed by the authors. We identify methodological issues in applying today's requirement-driven usability engineering method to designing this type of novel application and call for a clearer distinction between designing mainstream applications and designing novel applications. More solution-oriented approaches leveraging design thinking are required, and more pragmatic evaluation criteria are needed that assess the role of the system in exploiting technological possibilities to provoke further brainstorming and discussion. Such an approach will support a more efficient channelling of the technology-to-application transformation, which is becoming increasingly crucial in today's context of rich technological possibilities.
8

He, Guanqi, and Guo Lu. "Research on saliency detection method based on depth and width neural network." Journal of Physics: Conference Series 2037, no. 1 (September 1, 2021): 012034. http://dx.doi.org/10.1088/1742-6596/2037/1/012034.

Abstract:
Image saliency detection aims to segment the most important areas in an image. Solving the saliency detection problem usually involves knowledge from computer vision, neuroscience, cognitive psychology, and other fields. In recent years, as deep learning has made great achievements in the field of computer vision, it has also proven very useful for image saliency detection, and algorithms based on deep convolutional neural networks have become the most effective methods for the task. For researchers, improving the computational efficiency of neural network-based saliency detection algorithms generally starts from two perspectives: one is to tailor the network structure and combine it with traditional feature extraction methods; the other is to use a lighter network to solve the saliency detection problem. Based on these two ideas, this paper proposes two efficient and accurate neural network-based saliency detection algorithms. In recent years, with the rapid development of multimedia and Internet technologies, a huge amount of picture information is generated every day on blogs, social networking, and shopping platforms. This flood of information not only enriches people's lives but also makes it difficult for platforms to manage the image data efficiently and accurately. Therefore, how to understand and process this image information more intelligently and efficiently has become a hot topic for many image processing and computer vision researchers. Saliency detection technology plays a key role in the intelligent understanding and processing of images: put simply, saliency detection automatically computes or detects the most important areas in an image, and its results provide a basis for understanding and processing the image content. Saliency detection is a basic problem in computer vision, neuroscience, and visual perception, and the algorithm detects and extracts the most interesting or significant areas in an image.
9

Zhang, Wenqiang, Jiemin Fang, Xinggang Wang, and Wenyu Liu. "EfficientPose: Efficient human pose estimation with neural architecture search." Computational Visual Media 7, no. 3 (April 7, 2021): 335–47. http://dx.doi.org/10.1007/s41095-021-0214-z.

Abstract:
Human pose estimation from image and video is a key task in many multimedia applications. Previous methods achieve great performance but rarely take efficiency into consideration, which makes it difficult to deploy the networks on lightweight devices. Nowadays, real-time multimedia applications call for more efficient models for better interaction. Moreover, most deep neural networks for pose estimation directly reuse networks designed for image classification as the backbone, which are not optimized for the pose estimation task. In this paper, we propose an efficient framework for human pose estimation with two parts, an efficient backbone and an efficient head. By implementing a differentiable neural architecture search method, we customize the backbone network design for pose estimation and reduce computational cost with negligible accuracy degradation. For the efficient head, we slim the transposed convolutions and propose a spatial information correction module to promote the performance of the final prediction. In experiments, we evaluate our networks on the MPII and COCO datasets. Our smallest model requires only 0.65 GFLOPs with 88.1% PCKh@0.5 on MPII, and our large model needs only 2 GFLOPs while its accuracy is competitive with the state-of-the-art large model, HRNet, which takes 9.5 GFLOPs.
10

Garcia Freitas, Pedro, Luísa da Eira, Samuel Santos, and Mylene Farias. "On the Application LBP Texture Descriptors and Its Variants for No-Reference Image Quality Assessment." Journal of Imaging 4, no. 10 (October 4, 2018): 114. http://dx.doi.org/10.3390/jimaging4100114.

Abstract:
Automatically assessing the quality of an image is a critical problem for a wide range of applications in the fields of computer vision and image processing. For example, many computer vision applications, such as biometric identification, content retrieval, and object recognition, rely on input images with a specific range of quality. Therefore, an effort has been made to develop image quality assessment (IQA) methods that are able to automatically estimate quality. Among the possible IQA approaches, No-Reference IQA (NR-IQA) methods are of fundamental interest, since they can be used in most real-time multimedia applications. NR-IQA methods are capable of assessing the quality of an image without using the reference (or pristine) image. In this paper, we investigate the use of texture descriptors in the design of NR-IQA methods. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate quality. To investigate whether this premise is valid, we analyze the use of a set of state-of-the-art Local Binary Patterns (LBP) texture descriptors in IQA methods. In particular, we present a comprehensive review with a detailed description of the considered methods. Additionally, we propose a framework for using texture descriptors in NR-IQA methods. Our experimental results indicate that, although not all texture descriptors are suitable for NR-IQA, many can be used for this purpose, achieving good accuracy with the advantage of low computational complexity.
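To illustrate the general premise, the following sketch computes a uniform LBP histogram with scikit-image and fits a support vector regressor that maps the histogram to a quality score. This is a generic LBP feature pipeline over placeholder data, not the framework proposed in the paper.

```python
# Map a uniform-LBP histogram of an image to a (learned) quality score.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVR

def lbp_histogram(gray: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2                        # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = (rng.random((20, 64, 64)) * 255).astype("uint8")  # placeholder grayscale images
    scores = rng.random(20)                                    # placeholder subjective quality scores
    features = np.stack([lbp_histogram(im) for im in images])
    model = SVR().fit(features, scores)                        # learn histogram -> quality mapping
    print(model.predict(features[:3]))
```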
11

Costa Pereira, Jose, Emanuele Coviello, Gabriel Doyle, Nikhil Rasiwasia, Gert R. G. Lanckriet, Roger Levy, and Nuno Vasconcelos. "On the Role of Correlation and Abstraction in Cross-Modal Multimedia Retrieval." IEEE Transactions on Pattern Analysis and Machine Intelligence 36, no. 3 (March 2014): 521–35. http://dx.doi.org/10.1109/tpami.2013.142.

12

Khan, Seemab, Muhammad Attique Khan, Majed Alhaisoni, Usman Tariq, Hwan-Seung Yong, Ammar Armghan, and Fayadh Alenezi. "Human Action Recognition: A Paradigm of Best Deep Learning Features Selection and Serial Based Extended Fusion." Sensors 21, no. 23 (November 28, 2021): 7941. http://dx.doi.org/10.3390/s21237941.

Abstract:
Human action recognition (HAR) has gained significant attention recently as it can be adopted for a smart surveillance system in Multimedia. However, HAR is a challenging task because of the variety of human actions in daily life. Various solutions based on computer vision (CV) have been proposed in the literature which did not prove to be successful due to large video sequences which need to be processed in surveillance systems. The problem exacerbates in the presence of multi-view cameras. Recently, the development of deep learning (DL)-based systems has shown significant success for HAR even for multi-view camera systems. In this research work, a DL-based design is proposed for HAR. The proposed design consists of multiple steps including feature mapping, feature fusion and feature selection. For the initial feature mapping step, two pre-trained models are considered, such as DenseNet201 and InceptionV3. Later, the extracted deep features are fused using the Serial based Extended (SbE) approach. Later on, the best features are selected using Kurtosis-controlled Weighted KNN. The selected features are classified using several supervised learning algorithms. To show the efficacy of the proposed design, we used several datasets, such as KTH, IXMAS, WVU, and Hollywood. Experimental results showed that the proposed design achieved accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively, on these datasets. Furthermore, the feature selection step performed better in terms of computational time compared with the state-of-the-art.
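The serial fusion step described above amounts to concatenating the feature vectors produced by the two networks before classification. A minimal sketch of that idea is shown below, with random placeholder features standing in for the DenseNet201 and InceptionV3 outputs and a plain KNN in place of the paper's kurtosis-controlled weighted KNN.

```python
# Serial (concatenation-based) feature fusion followed by a KNN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_clips = 100
feats_densenet = rng.random((n_clips, 1920))    # placeholder DenseNet201-style features
feats_inception = rng.random((n_clips, 2048))   # placeholder InceptionV3-style features
labels = rng.integers(0, 6, size=n_clips)       # placeholder action labels

# Serial fusion: stack the two feature vectors end to end for every sample.
fused = np.concatenate([feats_densenet, feats_inception], axis=1)

clf = KNeighborsClassifier(n_neighbors=5).fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```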
13

Tian, Jing, and Li Chen. "Computer vision for multimedia." Multimedia Tools and Applications 69, no. 1 (August 30, 2013): 1–2. http://dx.doi.org/10.1007/s11042-013-1680-9.

14

Pransky, Joanne. "The Pransky interview: Dr James Kuffner, CEO at Toyota Research Institute Advanced Development, Coinventor of the rapidly exploring random tree algorithm." Industrial Robot: the international journal of robotics research and application 47, no. 1 (December 7, 2019): 7–11. http://dx.doi.org/10.1108/ir-11-2019-0226.

Abstract:
Purpose The following article is a “Q&A interview” conducted by Joanne Pransky of Industrial Robot Journal as a method to impart the combined technological, business and personal experience of a prominent, robotic industry PhD-turned entrepreneur regarding his pioneering efforts of bringing technological inventions to market. The paper aims to discuss these issues. Design/methodology/approach The interviewee is Dr James Kuffner, CEO at Toyota Research Institute Advanced Development (TRI-AD). Kuffner is a proven entrepreneur and inventor in robot and motion planning and cloud robotics. In this interview, Kuffner shares his personal and professional journey from conceptualization to commercial realization. Findings Dr Kuffner received BS, MS and PhD degrees from the Stanford University’s Department of Computer Science Robotics Laboratory. He was a Japan Society for the Promotion of Science (JSPS) Postdoctoral Research Fellow at the University of Tokyo where he worked on software and planning algorithms for humanoid robots. He joined the faculty at Carnegie Mellon University’s Robotics Institute in 2002 where he served until March 2018. Kuffner was a Research Scientist and Engineering Director at Google from 2009 to 2016. In January 2016, he joined TRI where he was appointed the Chief Technology Officer and Area Lead, Cloud Intelligence and is presently an Executive Advisor. He has been CEO of TRI-AD since April of 2018. Originality/value Dr Kuffner is perhaps best known as the co-inventor of the rapidly exploring random tree (RRT) algorithm, which has become a key standard benchmark for robot motion planning. He is also known for introducing the term “Cloud Robotics” in 2010 to describe how network-connected robots could take advantage of distributed computation and data stored in the cloud. Kuffner was part of the initial engineering team that built Google’s self-driving car. He was appointed Head of Google’s Robotics Division in 2014, which he co-founded with Andy Rubin to help realize the original Cloud Robotics concept. Kuffner also co-founded Motion Factory, where he was the Senior Software Engineer and a member of the engineering team to develop C++ based authoring tools for high-level graphic animation and interactive multimedia content. Motion Factory was acquired by SoftImage in 2000. In May 2007, Kuffner founded, and became the Director of Robot Autonomy where he coordinated research and software consulting for industrial and consumer robotics applications. In 2008, he assisted in the iOS development of Jibbigo, the first on-phone, real-time speech recognition, translation and speech synthesis application for the iPhone. Jibbigo was acquired by Facebook in 2013. Kuffner is one of the most highly cited authors in the field of robotics and motion planning, with over 15,000 citations. He has published over 125 technical papers and was issued more than 50 patents related to robotics and computer vision technology.
15

Venkanna, Mood, and Rameshwar Rao. "Static Worst-Case Execution Time Optimization using DPSO for ASIP Architecture." Ingeniería Solidaria 14, no. 25 (May 1, 2018): 1–11. http://dx.doi.org/10.16925/.v14i0.2230.

Abstract:
Introduction: The application of specific instructions significantly improves the energy, performance, and code size of configurable processors. These instructions are designed by converting patterns related to application-specific operations into effective complex instructions. This research was presented at the ICITKM Conference, University of Delhi, India, in 2017.
Methods: Static analysis was a prominent research method during the late 1980s. However, end-to-end measurements constitute the standard approach in industrial settings. Static analysis tools work at a high level to determine the program structure, operating either on source code or on a disassembled binary executable. It is possible to work at a low level if real hardware timing information with the desired features is available for the executable task.
Results: We experimented, tested, and evaluated using an H.264 encoder application that uses nine custom instructions (CIs), covering most of the computation-intensive kernels. Multimedia applications in the field of computer vision are frequently subject to hard real-time constraints. The H.264 encoder consists of complicated control flow with many decisions and nested loops. The parameters evaluated were different numbers of partitions (300 slices each on a Xilinx Virtex 7), reconfiguration bandwidths, as well as the ratio of CPU frequency to fabric frequency, f_CPU/f_fabric. f_fabric remains constant at 100 MHz, and we selected a multiplicity of realistic values for f_CPU. Note that while we anticipate the WCET in seconds (WCET_cycles / f_CPU) to be lower (better) with higher f_CPU, the WCET in cycles increases (at a constant f_fabric) because hardware CIs perform fewer computations on the reconfigurable fabric within one CPU cycle.
Conclusions: The method is similar to tree hybridization and path-based methods, which are less precise, and to the global IPET method, which is more precise. Optimization of the WCET is evaluated with the Discrete Particle Swarm Optimization (DPSO) algorithm. For several real-world applications involving embedded processors, the proposed technique develops improved instruction sets in comparison to native instruction sets.
Originality: For WCET estimation, the flow analysis, low-level analysis, and calculation phases of the program need to be considered. The flow analysis phase, or high-level analysis, helps to extract the program's dynamic behavior, which gives information on the functions being called, the number of loop iterations, dependencies among if-statements, etc. This is because the analysis is unaware of the execution path corresponding to the longest execution time.
Limitations: This path is executed within a kernel iteration that depends on the nature of the macroblock (MB), either an I-MB or a P-MB, determined by the motion estimation kernel; that is, its input depends on the I-MB and P-MB paths, which also contain separate CIs, leading to instability of the worst-case path: adding more partitions to the current worst-case path can result in the other path becoming the worst case. The pipeline stalls for the reconfiguration delay and continues when entering the kernel once the reconfiguration process finishes.
16

Sebe, Nicu, Qi Tian, Michael S. Lew, and Thomas S. Huang. "Similarity Matching in Computer Vision and Multimedia." Computer Vision and Image Understanding 110, no. 3 (June 2008): 309–11. http://dx.doi.org/10.1016/j.cviu.2008.04.001.

17

Bai, Li, William Tompkinson, and Yan Wang. "Computer vision techniques for traffic flow computation." Pattern Analysis and Applications 7, no. 4 (December 2004): 365–72. http://dx.doi.org/10.1007/s10044-004-0238-x.

18

Luo, Jiebo, Dhiraj Joshi, Jie Yu, and Andrew Gallagher. "Geotagging in multimedia and computer vision—a survey." Multimedia Tools and Applications 51, no. 1 (October 19, 2010): 187–211. http://dx.doi.org/10.1007/s11042-010-0623-y.

19

DRAPER, BRUCE A., and J. ROSS BEVERIDGE. "TEACHING IMAGE COMPUTATION: FROM COMPUTER GRAPHICS TO COMPUTER VISION." International Journal of Pattern Recognition and Artificial Intelligence 15, no. 05 (August 2001): 823–31. http://dx.doi.org/10.1142/s0218001401001179.

Abstract:
This paper describes a course in image computation that is designed to follow and build up an established course in computer graphics. The course is centered on images: how they are generated, manipulated, matched and symbolically described. It builds on the student's knowledge of coordinate systems and the perspective projection pipeline. It covers image generation techniques not covered by the computer graphics course, most notably ray tracing. It introduces students to basic image processing concepts such as Fourier analysis and then to basic computer vision topics such as principal components analysis, edge detection and symbolic feature matching. The goal is to prepare students for advanced work in either computer vision or computer graphics.
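A small exercise of the kind such a course might assign, combining edge detection and Fourier analysis on a synthetic image, could look like the following sketch; the image and parameters are arbitrary illustrations, not course material from the paper.

```python
# Sobel gradient magnitude and a centered Fourier spectrum for a synthetic image.
import numpy as np
from scipy import ndimage

# Synthetic image: a bright square on a dark background.
image = np.zeros((128, 128))
image[32:96, 32:96] = 1.0

# Edge detection: combine horizontal and vertical Sobel responses.
gx = ndimage.sobel(image, axis=1)
gy = ndimage.sobel(image, axis=0)
edges = np.hypot(gx, gy)

# Fourier analysis: log-magnitude spectrum with the zero frequency at the center.
spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))

print("strongest edge response:", edges.max())
print("spectrum shape:", spectrum.shape)
```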
20

Fukuda, Toshio, and Kaoru Hirota. "Message from Editors-in-Chief." Journal of Advanced Computational Intelligence and Intelligent Informatics 1, no. 1 (October 20, 1997): 0. http://dx.doi.org/10.20965/jaciii.1997.p0000.

Abstract:
We are very pleased and honored to have an opportunity to publish a new journal, the "International Journal of Advanced Computational Intelligence" (JACI). The JACI is a new, bimonthly journal covering the field of computer science. This journal focuses on advanced computational intelligence, including the synergetic integration of neural networks, fuzzy logic and evolutionary computations, in order to assist in fostering the application of intelligent systems to industry. This new field is called computational intelligence or soft computing. It has already been studied by many researchers, but no single, integrated journal exists anywhere in the world. This new journal gives readers the state of the art of the theory and application of Advanced Computational Intelligence. The topics include, but are not limited to: Fuzzy Logic, Neural Networks, GA and Evolutionary Computation, Hybrid Systems, Network Systems, Multimedia, the Human Interface, Biologically-Inspired Evolutionary Systems, Artificial Life, Chaos, Fractal, Wavelet Analysis, Scientific Applications and Industrial Applications. The journal, JACI, is supported by many researchers and scientific organizations, e.g., the International Fuzzy Systems Association (IFSA), the Japan Society of Fuzzy Theory and Systems (SOFT), the Brazilian Society of Automatics (SBA) and The Society of Instrument and Control Engineers (SICE), and we are currently negotiating with the John von Neumann Computer Society (in Hungary). Our policy is to have world-wide communication with many societies and researchers in this field. We would appreciate it if those organizations and people who have an interest in co-sponsorship or have proposals for special issues in this journal, as well as paper submissions, could contact us. Finally, our special thanks go to the editorial office of Fuji Technology Press Ltd., especially to its president, Mr. K. Hayashi, and to the editor, Mr. Y. Inoue, for their efforts in publishing this new journal.

Lotfi A. Zadeh: The publication of the International Journal of Advanced Computational Intelligence (JACI) is an important milestone in the advancement of our understanding of how intelligent systems can be conceived, designed, built, and deployed. When one first hears of computational intelligence, a question that naturally arises is: What is the difference, if any, between computational intelligence (CI) and artificial intelligence (AI)? As one who has witnessed the births of both AI and CI, I should like to suggest an answer. As a branch of science and technology, artificial intelligence was born about four decades ago. From the outset, AI was based on classical logic and symbol manipulation. Numerical computations were not welcomed and probabilistic techniques were proscribed. Mainstream AI continued to evolve in this spirit, with symbol manipulation still occupying the center of the stage, but not to the degree that it did in the past. Today, probabilistic techniques and neurocomputing are not unwelcome, but the focus is on distributed intelligence, agents, man-machine interfaces, and networking. With the passage of time, it became increasingly clear that symbol manipulation is quite limited in its ability to serve as a foundation for the design of intelligent systems, especially in the realms of robotics, computer vision, motion planning, speech recognition, handwriting recognition, fault diagnosis, planning, and related fields.
The inability of mainstream AI to live up to expectations in these application areas has led in the mid-eighties to feelings of disenchantment and widespread questioning of the effectiveness of AI's armamentarium. It was at this point that the name computational intelligence was employed by Professor Nick Cercone of Simon Fraser University in British Columbia to start a new journal named Computational Intelligence -a journal that was, and still is, intended to provide a broader conceptual framework for the conception and design of intelligent systems than was provided by mainstream AI. Another important development took place. The concept of soft computing (SC) was introduced in 1990-91 to describe an association of computing methodologies centering on fuzzy logic (FL), neurocomputing (NC), genetic (or evolutionary) computing (GC), and probabilistic computing (PC). In essence, soft computing differs from traditional hard computing in that it is tolerant of imprecision, uncertainty and partial truth. The basic guiding principle of SC is: Exploit the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, low solution cost, and better rapport with reality. More recently, the concept of computational intelligence had reemerged with a meaning that is substantially different from that which it had in the past. More specifically, in its new sense, CI, like AI, is concerned with the conception, design, and deployment of intelligent systems. However, unlike mainstream AI, CI methodology is based not on predicate logic and symbol manipulation but on the methodologies of soft computing and, more particularly, on fuzzy logic, neurocomputing, genetic(evolutionary) computing, and probabilistic computing. In this sense, computational intelligence and soft computing are closely linked but not identical. In basic ways, the importance of computational intelligence derives in large measure from the effectiveness of the techniques of fuzzy logic, neurocomputing, genetic (evolutionary) computing, and probabilistic computing in the conception and design of information/intelligent systems, as defined in the statements of the aims and scope of the new journal of Advanced Computational Intelligence. There is one important aspect of both computational intelligence and soft computing that should be stressed. The methodologies which lie at the center of CI and SC, namely, FL, NC, genetic (evolutionary) computing, and PC are for the most part complementary and synergistic, rather than competitive. Thus, in many applications, the effectiveness of FL, NC, GC, and PC can be enhanced by employing them in combination, rather than in isolation. Intelligent systems in which FL, NC, GC, and PC are used in combination are frequently referred to as hybrid intelligent systems. Such systems are likely to become the norm in the not distant future. The ubiquity of hybrid intelligent systems is likely to have a profound impact on the ways in which information/intelligent systems are conceived, designed, built, and interacted with. At this juncture, the most visible hybrid intelligent systems are so-called neurofuzzy systems, which are for the most part fuzzy-rule-based systems in which neural network techniques are employed for system identification, rule induction, and tuning. 
The concept of neurofuzzy systems was originated by Japanese scientists and engineers in the late eighties, and in recent years has found a wide variety of applications, especially in the realms of industrial control, consumer products, and financial engineering. Today, we are beginning to see a widening of the range of applications of computational intelligence centered on the use of neurofuzzy, fuzzy-genetic, neurogenetic, neurochaotic and neuro-fuzzy-genetic systems. The editors-in-chief of Advanced Computational Intelligence, Professors Fukuda and Hirota, have played and are continuing to play major roles both nationally and internationally in the development of fuzzy logic, soft computing, and computational intelligence. They deserve our thanks and congratulations for conceiving the International Journal of Advanced Computational Intelligence and making it a reality. International in both spirit and practice, JACI is certain to make a major contribution in the years ahead to the advancement of the science and technology of man-made information/intelligence systems -- systems that are at the center of the information revolution, which is having a profound impact on the ways in which we live, communicate, and interact with the real world. Lotfi A. Zadeh, Berkeley, CA, July 24, 1997
21

Zhu, Hongliang, Meiqi Chen, Maohua Sun, Xin Liao, and Lei Hu. "Outsourcing Set Intersection Computation Based on Bloom Filter for Privacy Preservation in Multimedia Processing." Security and Communication Networks 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/5841967.

Abstract:
With the development of cloud computing, the advantages of low cost and high computation ability meet the demands of complicated computation of multimedia processing. Outsourcing computation of cloud could enable users with limited computing resources to store and process distributed multimedia application data without installing multimedia application software in local computer terminals, but the main problem is how to protect the security of user data in untrusted public cloud services. In recent years, the privacy-preserving outsourcing computation is one of the most common methods to solve the security problems of cloud computing. However, the existing computation cannot meet the needs for the large number of nodes and the dynamic topologies. In this paper, we introduce a novel privacy-preserving outsourcing computation method which combines GM homomorphic encryption scheme and Bloom filter together to solve this problem and propose a new privacy-preserving outsourcing set intersection computation protocol. Results show that the new protocol resolves the privacy-preserving outsourcing set intersection computation problem without increasing the complexity and the false positive probability. Besides, the number of participants, the size of input secret sets, and the online time of participants are not limited.
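To make the role of the Bloom filter concrete, here is a minimal stand-alone sketch of a plain Bloom filter and a naive candidate-intersection check. It omits the GM homomorphic encryption layer that the protocol adds for privacy, and the filter size and hash count are arbitrary rather than tuned for a target false-positive rate.

```python
# A simple Bloom filter: k hash positions per item over an m-bit array.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)              # one byte per bit, for simplicity

    def _positions(self, item: str):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

if __name__ == "__main__":
    server_set = BloomFilter()
    for element in ["alice", "bob", "carol"]:
        server_set.add(element)
    client_set = ["bob", "dave"]
    # Candidate intersection: client elements that pass the Bloom test (may include false positives).
    print([e for e in client_set if e in server_set])
```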
22

Gupta, Akash. "Implementation of Fabric Defects Detection in TI-OMAP." Journal of Electronics and Communication Systems 7, no. 2 (August 22, 2022): 23–27. http://dx.doi.org/10.46610/joecs.2022.v07i02.004.

Abstract:
Breakages of both warp and weft yarns are one of the most prevalent issues in textile technology today. This issue lowers the quality of the fabric produced as well as the production pace. With contemporary weaving techniques, numerous kinds of fabric flaws still need to be found by hand examination. The purpose of this work is to present an efficient and precise technique for automatically identifying fabric defects. Using Aphelion Dev software, statistical characteristics including the mean, standard deviation, kurtosis, skewness, and histogram representation of fabric photographs are taken into consideration when comparing fabrics with and without faults. To provide a precise result for fabric flaws, morphological approaches are applied. The finished image has been integrated into an open platform for multimedia applications, speeding up processing time by 60 ns. This method excels at finding fabric flaws with greater accuracy and effectiveness, and in less time.
23

Moens, Marie-Francine, Katerina Pastra, Kate Saenko, and Tinne Tuytelaars. "Vision and Language Integration Meets Multimedia Fusion." IEEE MultiMedia 25, no. 2 (April 2018): 7–10. http://dx.doi.org/10.1109/mmul.2018.023121160.

24

Harjanto, Arif. "Perancangan Media Pembelajaran Region Based-Segmentation Pada Mata Kuliah Computer Vision Berbasis Web Multimedia." Informatika Mulawarman : Jurnal Ilmiah Ilmu Komputer 10, no. 2 (September 11, 2015): 30. http://dx.doi.org/10.30872/jim.v10i2.188.

Abstract:
For many electrical engineering students, the region-based segmentation material in the computer vision course is difficult to understand. Questionnaire data collected from 13 students who had taken the computer vision course show that 13.46% of students understood the region-based segmentation material, while 86.53% found it difficult, especially the transformation between the original image and the image produced by the filtering process. This study aims to build a web multimedia learning aid for the region-based segmentation material of the computer vision course, which is expected to help students understand the topic. The subject of the study is a multimedia application used as a learning tool for region-based segmentation in the computer vision course. Data were collected through literature study, observation, and interviews. The application was developed through a procedure covering problem identification, feasibility study, system requirements analysis, concept design, content design, design documentation, script design, graphic design, system production, and system testing with a black box test and an alpha test. The result is a web multimedia application that serves as a learning aid for region-based segmentation in the computer vision course. Testing with the black box test and the alpha test produced the following assessments of the system: strongly agree 52.5%, agree 42.5%, somewhat disagree 5%, disagree 0%. It is therefore concluded that the system is suitable for use as a learning aid for region-based segmentation in the computer vision course.
25

WECHSLER, HARRY. "AN OVERVIEW OF PARALLEL HARDWARE ARCHITECTURES FOR COMPUTER VISION." International Journal of Pattern Recognition and Artificial Intelligence 06, no. 04 (October 1992): 629–49. http://dx.doi.org/10.1142/s0218001492000333.

Abstract:
The sheer complexity of the visual task and the need for robust behavior provide a unifying theme for computer vision — what can be accomplished is constrained by relatively limited computational resources, but the resulting performance must be robust. As a consequence, parallel computation over space and time becomes essential for machine vision systems. Parallel computation is all-encompassing and includes both image representations and processing. In this paper we give an overview of a taxonomy of parallel hardware architectures, which includes pipelining, single instruction multiple data (SIMD), multiple instruction multiple data (MIMD), data-flow, and neurocomputing. Specific applications and the relevance of parallel hardware architectures for computer vision tasks are discussed as well.
26

Tsai, Yichang (James), Jianping Wu, Zhaohua Wang, and Zhaozheng Hu. "Horizontal Roadway Curvature Computation Algorithm Using Vision Technology." Computer-Aided Civil and Infrastructure Engineering 25, no. 2 (February 2010): 78–88. http://dx.doi.org/10.1111/j.1467-8667.2009.00622.x.

27

Belyakov, P. V., M. B. Nikiforov, E. R. Muratov, and O. V. Melnik. "Stereo vision-based variational optical flow estimation." E3S Web of Conferences 224 (2020): 01027. http://dx.doi.org/10.1051/e3sconf/202022401027.

Abstract:
Optical flow computation is one of the most important tasks in computer vision. The article deals with a modification of the variational method of optical flow computation for its application in stereo vision. Such approaches are traditionally based on a brightness constancy assumption and a gradient constancy assumption during pixel motion. A smoothness assumption also restricts motion discontinuities, i.e., the vector field of pixel velocities is assumed to be smooth. It is proposed to extend the functional of optical flow computation in a similar way by adding the a priori known extrinsic parameters of the stereo cameras and to minimize this joint model of optical flow computation. The article presents a partial differential equation framework for image processing and a numerical scheme for its implementation. The experimental evaluation performed demonstrates that the proposed method gives smaller errors than traditional methods of optical flow computation.
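For context, a classical dense optical flow baseline can be computed in a few lines with OpenCV. The sketch below uses the Farneback method on two synthetic frames; it is only a conventional baseline for the task, not the variational, stereo-constrained formulation proposed in the article.

```python
# Dense optical flow between two synthetic frames with OpenCV's Farneback method.
import cv2
import numpy as np

# Two synthetic grayscale frames: a bright square that moves 3 pixels to the right.
prev = np.zeros((64, 64), dtype=np.uint8)
prev[20:40, 20:40] = 255
curr = np.roll(prev, 3, axis=1)

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)

magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement inside the square:", float(magnitude[20:40, 20:40].mean()))
```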
28

CAPPELLINI, V., A. MECOCCI, and A. DEL BIMBO. "MOTION ANALYSIS AND REPRESENTATION IN COMPUTER VISION." Journal of Circuits, Systems and Computers 03, no. 04 (December 1993): 797–831. http://dx.doi.org/10.1142/s0218126693000472.

Abstract:
Motion analysis is of high interest in many different fields for a number of crucial applications. Short-term motion analysis addresses the computation of motion parameters or the qualitative estimation of the motion field. Long-term motion analysis aims at the understanding of motion and includes reasoning on motion properties. Image sequences are in general processed to perform the above motion analysis. These subjects are considered in this review with reference to the more significant results in the literature both at theory and application levels.
29

Zhang, Qingchen, Laurence T. Yang, Xingang Liu, Zhikui Chen, and Peng Li. "A Tucker Deep Computation Model for Mobile Multimedia Feature Learning." ACM Transactions on Multimedia Computing, Communications, and Applications 13, no. 3s (August 10, 2017): 1–18. http://dx.doi.org/10.1145/3063593.

30

Wang, Lin, Yangfan Zhou, Xin Wang, Zhihang Ji, and Xin Liu. "A Projection-Free Adaptive Momentum Optimization Algorithm for Mobile Multimedia Computing." Wireless Communications and Mobile Computing 2022 (April 20, 2022): 1–12. http://dx.doi.org/10.1155/2022/8533687.

Abstract:
In mobile multimedia applications, deep learning has received significant interest. Due to the limited computation and storage resources of mobile devices, however, general training methods are hardly suited for mobile multimedia computing. For this reason, we propose an adaptive momentum training (FWAdaBound) algorithm to reduce computation and storage cost, where the Frank-Wolfe method is employed. Furthermore, we rigorously prove that a regret bound of O(T^(3/4)) can be achieved, where T is the time horizon. Finally, the convergence, cost reduction, and generalization ability of FWAdaBound are validated through various experiments on public datasets.
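The projection-free ingredient here is the Frank-Wolfe method, which replaces projection onto the constraint set with a linear minimization oracle. The sketch below shows the generic Frank-Wolfe iteration on an L1-ball constraint; it is not the FWAdaBound algorithm or its adaptive momentum scheme, and the step-size rule and test problem are illustrative.

```python
# Generic Frank-Wolfe (projection-free) minimization over an L1 ball.
import numpy as np

def l1_ball_lmo(gradient: np.ndarray, radius: float = 1.0) -> np.ndarray:
    """Vertex of the L1 ball that minimizes <gradient, s>."""
    s = np.zeros_like(gradient)
    i = np.argmax(np.abs(gradient))
    s[i] = -radius * np.sign(gradient[i])
    return s

def frank_wolfe(grad_fn, dim: int, radius: float = 1.0, steps: int = 200) -> np.ndarray:
    x = np.zeros(dim)
    for t in range(1, steps + 1):
        s = l1_ball_lmo(grad_fn(x), radius)
        gamma = 2.0 / (t + 2.0)                # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s      # convex combination keeps x feasible
    return x

if __name__ == "__main__":
    target = np.array([0.6, -0.2, 0.1])        # L1 norm 0.9, so the solution is near the target
    grad = lambda x: 2.0 * (x - target)        # gradient of ||x - target||^2
    print(frank_wolfe(grad, dim=3, radius=1.0))
```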
31

Pun, T., T. I. Alecu, G. Chanel, J. Kronegg, and S. Voloshynovskiy. "Brain-Computer Interaction Research at the Computer Vision and Multimedia Laboratory, University of Geneva." IEEE Transactions on Neural Systems and Rehabilitation Engineering 14, no. 2 (June 2006): 210–13. http://dx.doi.org/10.1109/tnsre.2006.875544.

32

Amerini, Irene, Aris Anagnostopoulos, Luca Maiano, and Lorenzo Ricciardi Celsi. "Deep Learning for Multimedia Forensics." Foundations and Trends® in Computer Graphics and Vision 12, no. 4 (2021): 309–457. http://dx.doi.org/10.1561/0600000096.

33

Hu, Zhaozheng, and Yichang (James) Tsai. "Homography-Based Vision Algorithm for Traffic Sign Attribute Computation." Computer-Aided Civil and Infrastructure Engineering 24, no. 6 (August 2009): 385–400. http://dx.doi.org/10.1111/j.1467-8667.2009.00598.x.

34

Yin, Zhouping. "Application and Development of Computer Intelligent Vision Based on Evolutionary Computation." Journal of Computational and Theoretical Nanoscience 13, no. 12 (December 1, 2016): 9857–63. http://dx.doi.org/10.1166/jctn.2016.5941.

35

Saleh, Shadi. "Artificial Intelligence & Machine Learning in Computer Vision Applications." Embedded Selforganising Systems 7, no. 1 (February 20, 2020): 2–3. http://dx.doi.org/10.14464/ess71432.

Abstract:
Deep learning and machine learning innovations are at the core of the ongoing revolution in Artificial Intelligence for the interpretation and analysis of multimedia data. The convergence of large-scale datasets and more affordable Graphics Processing Unit (GPU) hardware has enabled the development of neural networks for data analysis problems that were previously handled by traditional handcrafted features. Several deep learning architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short Term Memory (LSTM)/Gated Recurrent Unit (GRU), Deep Belief Networks (DBN), and Deep Stacking Networks (DSNs) have been used with new open-source software and library options to shape an entirely new scenario in computer vision processing.
36

Li, Xianwei, Guolong Chen, Liang Zhao, and Bo Wei. "Multimedia Applications Processing and Computation Resource Allocation in MEC-Assisted SIoT Systems with DVS." Mathematics 10, no. 9 (May 7, 2022): 1593. http://dx.doi.org/10.3390/math10091593.

Abstract:
Due to the advancements of information technologies and the Internet of Things (IoT), the number of distributed sensors and IoT devices in the social IoT (SIoT) systems is proliferating. This has led to various multimedia applications, face recognition and augmented reality (AR). These applications are computation-intensive and delay-sensitive and have become popular in our daily life. However, IoT devices are well-known for their constrained computational resources, which hinders the execution of these applications. Mobile edge computing (MEC) has appeared and been deemed a prospective paradigm to solve this issue. Migrating the applications of IoT devices to be executed in the edge cloud can not only provide computational resources to process these applications but also lower the transmission latency between the IoT devices and the edge cloud. In this paper, computation resource allocation and multimedia applications offloading in MEC-assisted SIoT systems are investigated. We aim to optimize the resource allocation and application offloading by jointly minimizing the execution latency of multimedia applications and the consumed energy of IoT devices. The studied problem is a formulation of the total computation overhead minimization problem by optimizing the computational resources in the edge servers. Besides, as the technology of dynamic voltage scaling (DVS) can offer more flexibility for the MEC system design, we incorporate it into the application offloading. Since the studied problem is a mixed-integer nonlinear programming (MINP) problem, an efficient method is proposed to address it. By comparing with the baseline schemes, the theoretic analysis and simulation results demonstrate that the proposed multimedia applications offloading method can improve the performances of MEC-assisted SIoT systems for the most part.
37

Gati, Nicholaus J., Laurence T. Yang, Jun Feng, Yijun Mo, and Mamoun Alazab. "Differentially Private Tensor Train Deep Computation for Internet of Multimedia Things." ACM Transactions on Multimedia Computing, Communications, and Applications 16, no. 3s (January 8, 2021): 1–20. http://dx.doi.org/10.1145/3421276.

38

Oswal, Prateek, and Divakar Singh. "Survey paper on various mining methods on multimedia Images." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 8, no. 3 (June 30, 2013): 898–901. http://dx.doi.org/10.24297/ijct.v8i3.3400.

Повний текст джерела
Анотація:
Multimedia mining is a young but challenging subfield in data mining .Multimedia explanation represents an application of computer vision that presents the recognition of objects or ideas related to a multimedia document as a image. There is not unified conclusion in the concept, content and methods of Multimedia mining, Multimedia mining architecture and framework has to be further studied. there are various mining methods that we can apply on multimedia images like association rule mining, sequence mining, sequence pattern mining etc. In this survey paper we are focusing all this methods. We also discussed feature selection methods of various images.
39

Kanatani, Kenichi, and Yasuyuki Sugaya. "Compact Fundamental Matrix Computation." IPSJ Transactions on Computer Vision and Applications 2 (2010): 59–70. http://dx.doi.org/10.2197/ipsjtcva.2.59.

40

GREWE, LYNNE. "EFFECTIVE COMPUTER VISION INSTRUCTION THROUGH EXPERIMENTAL LEARNING EXPERIENCES." International Journal of Pattern Recognition and Artificial Intelligence 15, no. 05 (August 2001): 805–21. http://dx.doi.org/10.1142/s021800140100112x.

Abstract:
The inclusion of video and image-based communications over the Internet and the explosion of multimedia in the last two decades have led to a demand for education in computer imaging and vision at the undergraduate level. A curriculum developed for such an audience, featuring experimental learning experiences as a vehicle for student education and exploration in the field of computer imaging/vision, is discussed. As computer imaging/vision has many unsolved problems and challenges, it is crucial that we impart to students not only knowledge but also the investigative skills necessary for tackling these problems. This paper presents the results of student work on a number of such experiences involving both software development and vision hardware system configuration. The paper culminates with the results of a student-composed survey discussing one of these experimental learning experiences, which illustrates the usefulness of this form of learning.
41

Yan, Yan, Yahong Han, Petia Radeva, and Qi Tian. "Guest Editorial: Intermediate representation for vision and multimedia applications." Journal of Visual Communication and Image Representation 44 (April 2017): 227–28. http://dx.doi.org/10.1016/j.jvcir.2017.01.023.

42

ZHU, H., I. I. LUICAN, and F. BALASA. "Memory Size Computation for Real-Time Multimedia Applications Based on Polyhedral Decomposition." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E89-A, no. 12 (December 1, 2006): 3378–86. http://dx.doi.org/10.1093/ietfec/e89-a.12.3378.

43

Bhatia, Surbhi, Razan Ibrahim Alsuwailam, Deepsubhra Guha Roy, and Arwa Mashat. "Improved Multimedia Object Processing for the Internet of Vehicles." Sensors 22, no. 11 (May 29, 2022): 4133. http://dx.doi.org/10.3390/s22114133.

Abstract:
The combination of edge computing and deep learning helps create intelligent edge devices that can make conditional decisions using comparatively secure and fast machine learning algorithms. An automated car acting as the data-source node of an intelligent Internet of Vehicles (IoV) system is one such example. Our motivation is to obtain more accurate and rapid object detection using the intelligent cameras of a smart car. The supervision camera of the smart automobile model uses multimedia data for real-time automation in threat detection. The corresponding comprehensive network combines cooperative multimedia data processing, Internet of Things (IoT) data handling, validation, computation, precise detection, and decision making. These actions face real-time delays when offloading data to the cloud and synchronizing with other nodes. The proposed model follows a cooperative machine learning technique, distributing the computational load by slicing real-time object data among comparable intelligent IoT nodes and performing parallel vision processing across connected edge clusters. As a result, the system increases the computational rate and improves accuracy through responsible resource utilization and active–passive learning. Low latency and higher accuracy for object identification are achieved through real-time multimedia data objectification.
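The load-sharing idea described here can be illustrated with a small sketch: a frame is sliced into strips and the strips are processed in parallel, with a thread pool standing in for the cooperating edge nodes. The detect_objects placeholder is an assumption for illustration and is not the authors' detector or system.

```python
# Hedged sketch of slicing a frame and processing the slices in parallel.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def slice_frame(frame: np.ndarray, n_slices: int):
    """Split a frame into n_slices horizontal strips, one per cooperating node."""
    return np.array_split(frame, n_slices, axis=0)

def detect_objects(strip: np.ndarray):
    """Placeholder detector: report the strip shape and its mean intensity."""
    return strip.shape, float(strip.mean())

def cooperative_detect(frame: np.ndarray, n_nodes: int = 4):
    strips = slice_frame(frame, n_nodes)
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:  # stands in for n_nodes edge devices
        return list(pool.map(detect_objects, strips))

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    for result in cooperative_detect(frame):
        print(result)
```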
44

Shaptala, V. S., and M. V. Korman. "DCT-IV computation." Pattern Recognition and Image Analysis 18, no. 1 (January 2008): 58–62. http://dx.doi.org/10.1134/s1054661808010070.

45

Papakostas, G. A., E. G. Karakasis, and D. E. Koulouriotis. "Accurate and speedy computation of image Legendre moments for computer vision applications." Image and Vision Computing 28, no. 3 (March 2010): 414–23. http://dx.doi.org/10.1016/j.imavis.2009.06.011.

46

Zhang, Baicheng. "Research on the Construction and Development Prospect of Aided Business English Teaching System Based on Computer Multimedia Technology." Mobile Information Systems 2022 (July 31, 2022): 1–9. http://dx.doi.org/10.1155/2022/1807062.

Abstract:
The introduction of multimedia technology has had a profound impact on education, renewing educators' ideas, teaching means, and educational forms. To some extent, its influence on educational technology has opened a new chapter in educational reform; the advantages of multimedia education technology accelerate teaching reform and meet the needs of the times. This paper analyzes the construction and development prospects of an aided business English teaching system based on computer multimedia technology, using computer vision technology to collect teaching information and VR for instruction. Computer vision studies how to use video sensing devices and computers to simulate the human visual system in collecting and processing external visual information, so that machines have visual abilities comparable to humans and even the ability to "think." Through virtual teaching design and research on virtual teaching methods, combined with the characteristics of business English as a subject, computer vision technology is connected with business English course teaching to develop new teaching methods; the teaching purpose, content, and feasibility are analyzed; and students' difficulties and needs in learning business English are investigated to determine how educational methods in a smart environment can be applied in actual business English teaching. The study employs literature research, questionnaires, experimental research, and interviews. It also uses the multidimensional design functions of Maya and other software to illustrate educational methods in a smart environment, sets up a virtual hypothetical experiment to objectively analyze the development prospects of such methods in business English teaching in terms of curriculum teaching, classroom effects, and learner efficiency, and summarizes the aspects that need improvement.
47

Schreer, O., R. Englert, P. Eisert, and R. Tanger. "Real-Time Vision and Speech Driven Avatars for Multimedia Applications." IEEE Transactions on Multimedia 10, no. 3 (April 2008): 352–60. http://dx.doi.org/10.1109/tmm.2008.917336.

48

Sharma, Neha. "Report from ACM Multimedia Systems 2021." ACM SIGMultimedia Records 13, no. 4 (December 2021): 1. http://dx.doi.org/10.1145/3578508.3578510.

Abstract:
Neha Sharma (@NehaSharma) is a PhD student working with Dr Mohamed Hefeeda in the Network and Multimedia Systems Lab at Simon Fraser University. Her research interests are in computer vision and machine learning, with a focus on next-generation multimedia systems and applications. Her current work focuses on designing an inexpensive hyperspectral camera using a hybrid approach that leverages both hardware and software solutions. She was awarded Best Social Media Reporter of the conference for promoting sharing among researchers on social networks. To celebrate this award, here is a more complete report on the conference.
49

Tojo, Takuya, Tomoya Enokido, and Makoto Takizawa. "Notification-Based QOS Control Protocol for Group Communication." Journal of Interconnection Networks 04, no. 02 (June 2003): 211–25. http://dx.doi.org/10.1142/s0219265903000830.

Abstract:
This paper discusses a group communication protocol for exchanging multimedia messages among a group of multiple peer processes while supporting the Quality of Service (QoS) required by an application. In traditional protocols such as RTP, a process can reliably deliver messages to one or more processes. In group communication, each process sends messages to multiple processes while also receiving messages from multiple processes in the group. Due to limited computation and communication capacity, traditional one-to-one or one-to-many transmission schemes cannot be adopted for group communication. We propose a notification-based data transmission procedure for exchanging multimedia messages among multiple processes so as to satisfy the QoS requirements.
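As a very loose illustration of a notification-based exchange (the paper's actual protocol details are not reproduced here), the sketch below has a sender announce a message and transmit it only to peers whose QoS profile accepts that media type. All class and method names are hypothetical.

```python
# Loose sketch of a notify-then-deliver exchange; an assumption about the general
# idea of notification-based transmission, not the protocol defined in the paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Peer:
    name: str
    wants_video: bool
    inbox: List[str] = field(default_factory=list)

    def on_notify(self, kind: str) -> bool:
        # A peer answers the notification only for media kinds its QoS profile accepts.
        return kind != "video" or self.wants_video

    def on_deliver(self, payload: str) -> None:
        self.inbox.append(payload)

def notify_and_send(sender: str, kind: str, payload: str, group: List[Peer]) -> List[str]:
    """Announce the message to the group, then transmit only to interested peers."""
    interested = [p for p in group if p.name != sender and p.on_notify(kind)]
    for p in interested:  # data flows only to peers that answered the notification
        p.on_deliver(payload)
    return [p.name for p in interested]

group = [Peer("a", True), Peer("b", False), Peer("c", True)]
print(notify_and_send("a", "video", "frame-0001", group))  # ['c']
```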
50

Doyle, Joseph, Vasileios Giotsas, Mohammad Ashraful Anam, and Yiannis Andreopoulos. "Dithen: A Computation-as-a-Service Cloud Platform for Large-Scale Multimedia Processing." IEEE Transactions on Cloud Computing 7, no. 2 (April 1, 2019): 509–23. http://dx.doi.org/10.1109/tcc.2016.2617363.
