Dissertations / Theses on the topic 'Videos'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Videos.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Lindskog, Eric, and Jesper Wrang. "Design of video players for branched videos." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148592.
Ogata, Atsushi. "Meditative videos." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/78990.
Includes bibliographical references (leaves 82-85). Filmography: leaves 86-87. Videography: leaves 88-92.
My intention is to provide "meditative" moments to all of us who must struggle with the fast pace of the modern world. These "meditative" moments are both calming and engaging. They resemble the moment of "satori," or "opening of mind," in Zen. Zen, embedded in the culture of Japan, is closely related to my work and sensibility. In realizing my intention, I have chosen the medium of video. Video can reach a wide audience through broadcasting and home videos. Its photographic ability allows us to directly record and celebrate our natural environment. As video is an experience in time, it can create quiet soothing sounds and slow subtle movements. It allows us time to "tune in" to the rhythm of the piece. In my video work, I depict basic natural elements such as light, water, and clouds, and their relationship to animate beings.
by Atsushi Ogata.
M.S.V.S.
Stewart, Richard Christopher. "Effective audio for music videos : the production of an instructional video outlining audio production techniques for amateur music videos." Kutztown University, 1996. http://www.kutztown.edu/library/services/remote_access.asp. Remote access available to Kutztown University faculty, staff, and students only.
Sedlařík, Vladimír. "Informační strategie firmy." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2012. http://www.nusl.cz/ntk/nusl-223526.
Lindgren, Björn. "Erfarenheter och åsikter om videos : Instrumentlärare om videos som undervisningsmaterial." Thesis, Linnéuniversitetet, Institutionen för musik och bild (MB), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-65800.
Chen, Juan. "Content-based Digital Video Processing. Digital Videos Segmentation, Retrieval and Interpretation." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4256.
Full textPotapov, Danila. "Supervised Learning Approaches for Automatic Structuring of Videos." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM023/document.
Automatic interpretation and understanding of videos still remains at the frontier of computer vision. The core challenge is to lift the expressive power of current visual features (as well as features from other modalities, such as audio or text) to automatically recognize typical video sections with low temporal saliency yet high semantic expression. Examples of such long events include video sections where someone is fishing (TRECVID Multimedia Event Detection), or where the hero argues with a villain in a Hollywood action movie (Inria Action Movies). In this manuscript, we present several contributions towards this goal, focusing on three video analysis tasks: summarization, classification, and localisation. First, we propose an automatic video summarization method, yielding a short and highly informative video summary of potentially long videos, tailored for specified categories of videos. We also introduce a new dataset for evaluation of video summarization methods, called MED-Summaries, which contains complete importance-scoring annotations of the videos, along with a complete set of evaluation tools. Second, we introduce a new dataset, called Inria Action Movies, consisting of long movies annotated with non-exclusive semantic categories (called beat-categories), whose definition is broad enough to cover most of the movie footage. Categories such as "pursuit" or "romance" in action movies are examples of beat-categories. We propose an approach for localizing beat-events based on classifying shots into beat-categories and learning the temporal constraints between shots. Third, we overview the Inria event classification system developed within the TRECVID Multimedia Event Detection competition and highlight the contributions made during the work on this thesis from 2011 to 2014.
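As a rough illustration of the importance-based selection this abstract describes, the sketch below picks video segments with the highest importance per second under a duration budget. The function and data layout are hypothetical; the thesis's actual pipeline learns the importance scores from annotated data.

```python
def summarize(segments, budget):
    """Greedy selection of video segments by importance density.

    segments: list of (start_sec, duration_sec, importance) tuples,
        e.g. produced by a per-segment importance scorer.
    budget: maximum total duration of the summary, in seconds.
    Returns the chosen segments in temporal order.
    """
    # Rank segments by importance per second of footage.
    ranked = sorted(segments, key=lambda s: s[2] / s[1], reverse=True)
    chosen, used = [], 0.0
    for seg in ranked:
        if used + seg[1] <= budget:
            chosen.append(seg)
            used += seg[1]
    # Present the summary in the original temporal order.
    return sorted(chosen, key=lambda s: s[0])
```

A real system would also handle near-ties and temporal coherence between adjacent segments, which this greedy sketch ignores.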
Liu, Yunjun 1977. "Creating animated mosaic videos." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84053.
Cui, Yingnan. "On learning from videos." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120233.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (pages 95-97).
The robot phone disassembly task is difficult in many ways: it requires high precision and high speed, and must generalize to all types of cell phones. Previous work on robot learning from demonstration is hardly applicable due to the complexity of teaching, the huge amounts of data involved, and the difficulty of generalization. To tackle these problems, we try to learn from videos and extract useful information for the robot. To reduce the amount of data we need to process, we generate a mask for the video and observe only the region of interest. Inspired by the idea that a spatio-temporal interest point (STIP) detector may yield meaningful points, such as the contact point between the tool and the part, we design a new method of detecting STIPs based on optical flow. We also design a new descriptor by modifying the histogram of optical flow. Together, the STIP detector and descriptor ensure that the features are invariant to scale, rotation, and noise. Using the modified histogram-of-optical-flow descriptor, we show that even without considering raw pixels of the original video, we can achieve good classification results.
by Yingnan Cui.
S.M.
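The histogram-of-optical-flow idea underlying the descriptor above can be illustrated, under simplifying assumptions, by binning flow vectors by orientation with magnitude weighting. This is a generic HOF sketch, not the thesis's exact modification:

```python
import numpy as np

def hof_descriptor(flow, n_bins=8):
    """Histogram-of-optical-flow descriptor for one patch.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements.
    Each vector votes into an orientation bin, weighted by its
    magnitude; L1 normalisation makes the histogram insensitive
    to the overall flow magnitude.
    """
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx) % (2 * np.pi)        # orientation in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins, weights=mag, minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

For example, a patch whose flow points uniformly to the right puts all its mass in the first orientation bin.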
Touliatou, Georgia. "Diegetic stories in a video mediation : a narrative analysis of four videos." Thesis, University of Surrey, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397132.
Full textKarlsson, Simon Gustav. "Subgenres in Swedish music videos - A neo-formalistic analysis of fifteen music videos." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-96274.
Full textBerigny, Wall Caitilin de. "Documentary transforms into video installation via the processes of intertextuality and detournement /." Canberra : University of Canberra, 2006. http://erl.canberra.edu.au/public/adt-AUC20070723.103335/index.html.
Submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy at the University of Canberra, May 2007. Includes filmography (leaves 124-126) and bibliography (leaves 130-136). Also available online.
Anegekuh, Louis. "Video content-based QoE prediction for HEVC encoded videos delivered over IP networks." Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3377.
Full textDye, Brigham R. "Reliability of Pre-Service Teachers Coding of Teaching Videos Using Video-Annotation Tools." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/990.
Full textNaji, Yassine. "Abnormal events detection in videos." Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASG008.
The detection of abnormal events in videos is a challenging task due to the wide variety of possible anomalies, the limited availability of labeled anomaly data for model training, and the contextual nature of normality. Moreover, normal and abnormal data can exhibit significant intra-class variability, often making them difficult to distinguish. An additional challenge lies in the lack of explainability of deep-learning-based anomaly detection methods, which, although effective, often remain opaque. These factors make anomaly detection in videos an open research problem. To address the imbalance between the abundance of normal data and the scarcity of anomalous data, video anomaly detection is usually approached with the "one-class" learning paradigm: models learn a distribution of normal data and identify anomalies as outliers relative to this learned distribution. In this thesis, we introduce approaches to better represent the diversity of normal data. Additionally, we propose a method that not only detects anomalies but also provides explanations for them. Finally, we present new metrics to evaluate the explainability performance of anomaly detection models.
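The one-class paradigm this abstract describes can be sketched minimally as fitting a distribution to normal data and thresholding a deviation score. The toy model below uses a diagonal Gaussian over clip features, far simpler than the thesis's approaches:

```python
import numpy as np

class OneClassScorer:
    """Minimal one-class anomaly scorer (illustrative only).

    Fits a diagonal Gaussian to features of normal clips and flags
    test clips whose deviation from that distribution exceeds a
    threshold calibrated on the training data itself.
    """

    def fit(self, normal, quantile=0.99):
        self.mean = normal.mean(axis=0)
        self.std = normal.std(axis=0) + 1e-8
        # Calibrate the threshold so ~1% of normal data scores above it.
        self.threshold = np.quantile(self.score(normal), quantile)
        return self

    def score(self, x):
        # Mean squared standardised deviation from the normal model.
        return (((x - self.mean) / self.std) ** 2).mean(axis=1)

    def is_anomalous(self, x):
        return self.score(x) > self.threshold
```

Real detectors replace the Gaussian with learned representations (autoencoders, density models), but the detect-as-outlier logic is the same.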
Rossi, Silvia. "Content characterization of 3D Videos." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10192/.
Full textKrause, Uwe. "Videos related to the maps." Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2013/6574/.
Full textChaptini, Bassam H. 1978. "Intelligent segmentation of lecture videos." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/84314.
Full textWang, Ami M. "Lifecycle of viral YouTube videos." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/97377.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (page 28).
YouTube was founded in 2005 as a video-sharing website. Today, it's a powerhouse social media platform where users can upload, view, comment on, and share content. For many, it's the first site visited when looking for songs, music videos, TV shows, or just general entertainment. Combined with the sharing potential provided by social media like Twitter, Facebook, Tumblr, and more, YouTube videos have the potential to spread like wildfire. The term coined to describe such videos is "viral videos," drawing on the scientific sense of viral: the contagious spread of a virus. Virality on the Internet is not a new concept; back when email was the hottest new technology, chain e-mails spreading hoaxes and scams were widely forwarded. As the Internet aged, however, new forms of virality evolved. This thesis looks at a series of 20 viral videos as case studies and analyzes their growth over time via the Lifecycle Theory. Analyzing viral videos in this manner aids a deeper understanding of the human affinity for content, the sociology of online sharing, and the context of today's media culture. This thesis proposes that the phenomenon of virality supports the claim of the Internet as heterotopia.
by Ami M. Wang.
S.B.
Kandakatla, Rajeshwari. "Identifying Offensive Videos on YouTube." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1484751212961772.
Full textSun, Shuyang. "Designing Motion Representation in Videos." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/19724.
Full textFan, Quanfu. "Matching Slides to Presentation Videos." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/195757.
Full textEstrada, Rayna Allison. "Appropriate exercise videos for adolescents." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2165.
Full textDye, Brigham R. "Reliability of pre-service teachers' coding of teaching videos using a video-analysis tool /." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2020.pdf.
Full textJuul, Lisa. "Examining 360° storytelling in immersive music videos." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20828.
Full textPortocarrero, Rodriguez Marco Antonio. "Diseño de la arquitectura de transformada discreta directa e inversa del coseno para un decodificador HEVC." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2018. http://tesis.pucp.edu.pe/repositorio/handle/123456789/13002.
Full textTesis
Wang, Yi. "Design and Evaluation of Contextualized Video Interfaces." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/28798.
Full textPh. D.
Dalal, Navneet. "Finding People in Images and Videos." PhD thesis, Grenoble INPG, 2006. http://tel.archives-ouvertes.fr/tel-00390303.
Full textWang, Ping. "Social game retrieval from unstructured videos." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34673.
Full textErdem, Elif. "Constructing Panoramic Scenes From Aerial Videos." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609083/index.pdf.
Full textSivic, Josef. "Efficient visual search of images videos." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436952.
Full textDubba, Krishna Sandeep Reddy. "Learning relational event models from videos." Thesis, University of Leeds, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.590428.
Full textRaza, Syed H. "Temporally consistent semantic segmentation in videos." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53455.
Full textMehran, Ramin. "Analysis of behaviors in crowd videos." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4801.
Full textID: 031001560; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Title from PDF title page (viewed August 26, 2013).; Thesis (Ph.D.)--University of Central Florida, 2011.; Includes bibliographical references (p. 100-104).
Ph.D.
Doctorate
Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering
Pusiol, Guido Thomas. "Découvertes d'activités humaines dans des videos." PhD thesis, Université Nice Sophia Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00944617.
Full textFernandez, Arguedas Virginia. "Automatic object classification for surveillance videos." Thesis, Queen Mary, University of London, 2012. http://qmro.qmul.ac.uk/xmlui/handle/123456789/3354.
Full textRoss, Candace Cheronda. "Grounded semantic parsing using captioned videos." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118036.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (pages 45-47).
We develop a semantic parser that is trained in a grounded setting using pairs of videos captioned with sentences. This setting is both data-efficient, requiring little annotation, and far more similar to the experience of children, who observe their environment and listen to speakers. The semantic parser recovers the meaning of English sentences despite not having access to any annotated sentences, and despite the ambiguity inherent in vision, where a sentence may refer to any combination of objects, object properties, relations, or actions taken by any agent in a video. We introduce a new corpus for grounded language acquisition. Learning to understand language (turning sentences into logical forms) by using captioned video will significantly expand the range of data that parsers can be trained on, lower the effort of training a semantic parser, and ultimately lead to a better understanding of child language acquisition.
by Candace Cheronda Ross.
S.M.
Sotomaior, Gabriel de Barcelos 1982. "Auto-representação em videos na internet." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/284043.
Full textDissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Artes
Abstract: What happens when we turn the camera on ourselves? This work studies the phenomenon of self-representation in internet videos. The research reflects on processes of subjectivation and the performative acts of subjects who represent themselves using new technologies, especially the internet. I intend to understand the consequences for the transformation of the audiovisual, observing some possible tendencies within contemporary culture. With these questions in mind, I analyzed different kinds of internet videos, along with the hypertextual environment in which they are embedded. The work points to the importance of new individuals taking on protagonist roles in a far more multiple, diverse scenery that is "under construction", but it questions the ideology of a "saviour" technology that by itself would bring the great transformations society needs.
Master's
Master in Multimeios (Multimedia)
Duraivelan, Shreenivasan. "Group Trajectory Analysis in Sport Videos." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1619636056814278.
Full textMahfoudi, Gaël. "Authentication of Digital Images and Videos." Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0043.
Full textDigital media are parts of our day-to-day lives. With years of photojournalism, we have been used to consider them as an objective testimony of the truth. But images and video retouching software are becoming increasingly more powerful and easy to use and allow counterfeiters to produce highly realistic image forgery. Consequently, digital media authenticity should not be taken for granted any more. Recent Anti-Money Laundering (AML) relegation introduced the notion of Know Your Customer (KYC) which enforced financial institutions to verify their customer identity. Many institutions prefer to perform this verification remotely relying on a Remote Identity Verification (RIV) system. Such a system relies heavily on both digital images and videos. The authentication of those media is then essential. This thesis focuses on the authentication of images and videos in the context of a RIV system. After formally defining a RIV system, we studied the various attacks that a counterfeiter may perform against it. We attempt to understand the challenges of each of those threats to propose relevant solutions. Our approaches are based on both image processing methods and statistical tests. We also proposed new datasets to encourage research on challenges that are not yet well studied
Laake, Rebecca A. "Depiction of Sexuality in Music Videos." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1104784016.
Full textEl, Ghazouani Anas. "Spatial Immersion Dimensions in 360º Videos." Thesis, Södertörns högskola, Medieteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-32972.
Full textDias, Moreira De Souza Fillipe. "Semantic Description of Activities in Videos." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6649.
Full textWang, Dongang. "Action Recognition in Multi-view Videos." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/19740.
Full textSharma, Nabin. "Multi-lingual Text Processing from Videos." Thesis, Griffith University, 2015. http://hdl.handle.net/10072/367489.
Full textThesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology.
Science, Environment, Engineering and Technology
Kapoor, Aditi. "Saliency detection in images and videos." Thesis, IIT Delhi, 2017. http://localhost:8080/xmlui/handle/12345678/7234.
Full textHouten, Ynze van. "Searching for videos the structure of video interaction in the framework of information foraging theory /." Enschede : University of Twente [Host], 2009. http://doc.utwente.nl/60628.
Full textBai, Yannan. "Video analytics system for surveillance videos." Thesis, 2018. https://hdl.handle.net/2144/30739.
Full textSINGHAL, AKSHAT. "DETECTING FAKE VIDEOS." Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16585.
Full textLu, Yi-Chun, and 魯怡君. "Video Summarization for Multi-intensity Illuminated Infrared Videos." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/37670354304076414799.
Full text國立交通大學
多媒體工程研究所
101
In nighttime video surveillance, proper illumination plays a key role in image quality. With ordinary IR-illuminators of fixed intensity, faraway objects are often hard to identify due to insufficient illumination, while nearby objects may suffer from over-exposure, resulting in poor foreground/background image quality. In this thesis we propose a novel video summarization method that utilizes a multi-intensity IR-illuminator to capture images of human activities at different illumination levels. First, a GMM-based foreground extraction procedure is applied to the images acquired at each illumination level. After assessing the quality of the extracted foregrounds, the system then selects the visually most plausible foreground regions from the different illumination levels to generate a new set of input data. Finally, an automatic summarization method identifies key frames in these data and merges them with a preselected still-background representation. The result is a reasonable video summary of the moving foreground, which is generally unachievable in nighttime surveillance videos.
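The per-pixel background modeling step described above can be sketched with a single running Gaussian per pixel, a simplification of the GMM actually used in the thesis:

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a per-pixel running-Gaussian background model.

    A single Gaussian per pixel stands in for the full GMM; pixels
    further than k standard deviations from the background mean are
    marked foreground.
    frame, mean, var: float arrays of identical shape.
    Returns (foreground_mask, new_mean, new_var).
    """
    diff = frame - mean
    fg = diff ** 2 > (k ** 2) * var            # outside the k-sigma band
    # Update the model only where the pixel matches the background,
    # so foreground objects do not pollute the background estimate.
    upd = ~fg
    mean = np.where(upd, (1 - alpha) * mean + alpha * frame, mean)
    var = np.where(upd, (1 - alpha) * var + alpha * diff ** 2, var)
    return fg, mean, np.maximum(var, 1e-4)
```

Running this per illumination level, as the thesis does with a full GMM, yields one foreground mask per level from which the best-quality regions can be chosen.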