Academic literature on the topic 'Photographs and videos'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Photographs and videos.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Photographs and videos"
Sebastian, Maciej, Agata Sebastian, and Jerzy Rudnicki. "Recommendation for Photographic Documentation of Safe Laparoscopic Cholecystectomy." World Journal of Surgery 45, no. 1 (September 4, 2020): 81–87. http://dx.doi.org/10.1007/s00268-020-05776-9.
Pevec, Iza, and Lukas Birk. "Keeping a Story Alive: Interview with Lukas Birk." Membrana Journal of Photography 3, no. 2 (2018): 4–13. http://dx.doi.org/10.47659/m5.004.int.
Chandrappa, Ashok Basur, Pradeep Kumar Nagaraj, Srikanth Vasudevan, Anantheswar Yelampalli Nagaraj, Krithika Jagadish, and Ankit Shah. "Use of selfie sticks and iPhones to record operative photos and videos in plastic surgery." Indian Journal of Plastic Surgery 50, no. 1 (January 2017): 82–84. http://dx.doi.org/10.4103/ijps.ijps_26_17.
McHugh, Susan. "Video Dog Star: William Wegman, Aesthetic Agency, and the Animal in Experimental Video Art." Society & Animals 9, no. 3 (2001): 229–51. http://dx.doi.org/10.1163/156853001753644390.
Houkin, Kiyohiro, and Satoshi Kuroda. "Digital recording in microsurgery." Journal of Neurosurgery 92, no. 1 (January 2000): 176–80. http://dx.doi.org/10.3171/jns.2000.92.1.0176.
Hood, C. A., T. Hope, and P. Dove. "Videos, photographs, and patient consent." BMJ 316, no. 7136 (March 28, 1998): 1009–11. http://dx.doi.org/10.1136/bmj.316.7136.1009.
Pallen, M., N. Loman, D. Nicholl, D. Davies, P. J. Buxton, D. J. Vasallo, J. H. Kilbey, and P. D. Welsby. "Videos, photographs, and patient consent." BMJ 317, no. 7171 (November 28, 1998): 1522. http://dx.doi.org/10.1136/bmj.317.7171.1522.
Derr, Robert Ladislas. "Artist, Robert Ladislas Derr uses die rolls and cameras to map his walk through cities worldwide." Surveillance & Society 7, no. 2 (June 5, 2009): 94–97. http://dx.doi.org/10.24908/ss.v7i2.4135.
Raveane, William, Pedro Luis Galdámez, and María Angélica González Arrieta. "Ear Detection and Localization with Convolutional Neural Networks in Natural Images and Videos." Processes 7, no. 7 (July 17, 2019): 457. http://dx.doi.org/10.3390/pr7070457.
Sharma, Vishesh Kumar, et al. "Focaltheorem – A portfolio Web Application." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 10, 2021): 3809–14. http://dx.doi.org/10.17762/turcomat.v12i3.1667.
Dissertations / Theses on the topic "Photographs and videos"
Morago, Brittany. "Multi-Modality Fusion: Registering Photographs, Videos, and Lidar Range Scans." Thesis, University of Missouri - Columbia, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10629015.
2D images and 3D LIDAR range scans provide very different but complementary information about a single subject and, when registered, can be used for a variety of exciting applications. Video sets can be fused with a 3D model and played in a single multi-dimensional environment. Imagery with temporal changes can be visualized simultaneously, unveiling changes in architecture, foliage, and human activity. Depth information for 2D photos and videos can be computed. Real-world measurements can be provided to users through simple interactions with traditional photographs. However, fusing multi-modality data is a very challenging task given the repetition and ambiguity that often occur in man-made scenes as well as the variety of properties different renderings of the same subject can possess. Image sets collected over a period of time during which the lighting conditions and scene content may have changed, different artistic renderings, varying sensor types, focal lengths, and exposure values can all contribute to visual variations in data sets. This dissertation addresses these obstacles using the common theme of incorporating contextual information to visualize regional properties that intuitively exist in each imagery source. We combine hard features that quantify the strong, stable edges that are often present in imagery along object boundaries and depth changes with soft features that capture distinctive texture information that can be unique to specific areas. We show that our detector and descriptor techniques can provide more accurate keypoint match sets between highly varying imagery than many traditional and state-of-the-art techniques, allowing us to fuse and align photographs, videos, and range scans containing both man-made and natural content.
Dupont, de Dinechin Grégoire. "Towards comfortable virtual reality viewing of virtual environments created from photographs of the real world." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLM049.
There are many applications of capturing and digitally recreating real-world people and places for virtual reality (VR), such as preserving and promoting cultural heritage sites, placing users face-to-face with faraway family and friends, and creating photorealistic replicas of specific locations for therapy and training. This is typically done by transforming sets of input images, i.e. photographs and videos, into immersive 360° scenes and interactive 3D objects. However, such image-based virtual environments are often flawed such that they fail to provide users with a comfortable viewing experience. In particular, accurately recovering the scene's 3D geometry is a difficult task, causing many existing approaches to make approximations that are likely to cause discomfort, e.g. as the scene appears distorted or seems to move with the viewer during head motion. In the same way, existing solutions most often fail to accurately render the scene's visual appearance in a comfortable fashion. Standard 3D reconstruction pipelines thus commonly average out captured view-dependent effects such as specular reflections, whereas complex image-based rendering algorithms often fail to achieve VR-compatible framerates, and are likely to cause distracting visual artifacts outside of a small range of head motion. Finally, further complications arise when the goal is to virtually recreate people, as inaccuracies in the appearance of the displayed 3D characters or unconvincing responsive behavior may be additional sources of unease. Therefore, in this thesis, we investigate the extent to which users can be made more comfortable when viewing digital replicas of the real world in VR, by enhancing, combining, and designing new solutions for creating virtual environments from input sets of photographs. We thus demonstrate and evaluate solutions for (1) providing motion parallax during the viewing of 360° images, using a VR interface for estimating depth information, (2) automatically generating responsive 3D virtual agents from 360° videos, by combining pre-trained deep learning networks, and (3) rendering captured view-dependent effects at high framerates in a game engine widely used for VR development, which we apply to digitally recreate a museum's mineralogy collection. We evaluate and discuss each approach by way of user studies, and make our codebase available as an open-source toolkit.
Alhazmi, Nouran Husain. "Maintenance as Spectacle: Imagery of the Ka’ba’s Cleaning and Kiswa." Ohio University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1618917807830704.
Gregory, Ronald Joseph. "Test target display: an M.F.A. photography portfolio as applied to optical laser disc." Online version of thesis, 1987. http://hdl.handle.net/1850/10314.
Full textPark, Ja Yong. "Aux frontières du virtuel : depuis ma fenêtre." Thesis, Paris 1, 2015. http://www.theses.fr/2015PA010558.
We always find "borders" to contemplate: a front door, a window, a frame, a space, or a mirror. My reflection starts from my window, my intimate space. "At the borders of the virtual: from my window" leads us to seek where the essence of things, or their "pure existence," lies hidden, through the concept of the virtual, which is barely perceptible. This thesis has three main parts: virtuality as internal and external space; virtuality as the appearance beyond, in photographs and reflections in art; and virtuality beyond disappearance. In both Eastern philosophy (Buddhism and Taoism) and Western philosophy, the concept of the virtual concerns all the universal phenomena around us, including our own eyes and mind. It evokes pairs of opposing concepts: "the visible and the invisible," "presence and absence," "appearance and disappearance," "emergence and evanescence," and so on, and binds each pair together as two sides of the same coin. In Eastern thought, the virtual can be found in the concept of emptiness, which is intimately linked to that of existence, since this research concerns what is not always visible and is rarely discernible by the eye. In shan shui painting, emptiness may signify a cloud, the wind, or water. We are always situated in a time of transition, or in a physical place where it is difficult to see what will appear and what is already present before us.
Glen, Gregory D. "High-Speed Photography Using Television Techniques." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/611602.
There are many applications for high-speed photography, and most rely on film as the primary medium of data acquisition. One such application of interest to the military services is the study of stores separation from aircraft. This type of testing has traditionally used high-speed film to gather data; however, there are many disadvantages to using film, such as the high cost of raw film, as well as the high processing expense after it has been exposed. In addition, there is no way to review data from film until it has been processed, nor is there any way to preview in real time other conditions, such as lighting, which may affect the outcome of a test event. This paper discusses the characteristics of television systems with respect to motion picture systems, the challenges of recording and transmitting pictures, as well as the nature of what the first and eventual desired systems might be.
Thömmes, Katja [Verfasser]. "The Aesthetic Appeal of Photographs : Leveraging Instagram Data in Empirical Aesthetics / Katja Thömmes." Konstanz : KOPS Universität Konstanz, 2020. http://d-nb.info/1226093035/34.
Thoma, Andrea. "Thought dwellings: time and space in painting, photography and video." Thesis, University of Leeds, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.658561.
Magagnoli, P. "Reclaiming the past: historical representation in contemporary photography and video art." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1355956/.
Smith, Ian Richard. "Optimising international links between departments of photography, film and video production." Thesis, University of Southampton, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259958.
Full textBooks on the topic "Photographs and videos"
Baldridge, Aimee. Organize your digital life: How to store your photographs, music, videos, and personal documents in a digital world. Washington, D.C.: National Geographic, 2009.
Ray, Sidney F. Applied photographic optics: Imaging systems for photography, film, and video. London: Focal Press, 1988.
Video animation and photography. North Mankato, Minnesota: Rourke Educational Media, 2018.
Drew, Marian. Marian Drew: Photographs + video works. Bulimba, Qld: Queensland Centre for Photography, 2006.
Solomon R. Guggenheim Museum and Museo Guggenheim Bilbao, eds. Haunted: Contemporary photography, video, performance. New York, N.Y.: Guggenheim Museum Publications, 2010.
Book chapters on the topic "Photographs and videos"
Meyer, Jeanine. "Origami Directions: Using Math-Based Line Drawings, Photographs, and Videos." In HTML5 and JavaScript Projects, 225–82. Berkeley, CA: Apress, 2011. http://dx.doi.org/10.1007/978-1-4302-4033-4_7.
Meyer, Jeanine. "Origami Directions: Using Math-Based Line Drawings, Photographs, and Videos." In HTML5 and JavaScript Projects, 223–89. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3864-6_7.
Sato, Yuri, and Koji Mineshima. "Depicting Negative Information in Photographs, Videos, and Comics: A Preliminary Analysis." In Diagrammatic Representation and Inference, 485–89. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54249-8_40.
Di Stefano, John. "The Presence of Video." In Photography and Ontology, 104–18. Routledge History of Photography 4. New York, NY: Routledge, 2018. http://dx.doi.org/10.4324/9781351187756-8.
Pulli, Kari, and Alejandro Troccoli. "Mobile Computational Photography with FCam." In Registration and Recognition in Images and Videos, 257–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-44907-9_11.
Elmansy, Rafiq. "Shooting and Editing Video." In Developing Professional iPhone Photography, 259–86. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-3186-9_9.
Liu, Dongwei, and Reinhard Klette. "Blur Estimation for Natural Edge Appearance in Computational Photography." In Image and Video Technology, 300–310. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92753-4_24.
Liu, Dongwei, Haokun Geng, and Reinhard Klette. "Star-Effect Simulation for Photography Using Self-calibrated Stereo Vision." In Image and Video Technology, 228–40. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29451-3_19.
Jin, Ge, and James K. Hahn. "High-Resolution Video from Series of Still Photographs." In Advances in Visual Computing, 901–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11919476_90.
Buades, A., and J. L. Lisani. "Patch-Based Methods for Video Denoising." In Denoising of Photographic Images and Video, 175–205. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96029-6_7.
Conference papers on the topic "Photographs and videos"
Shomin, Michael, and Jonathan Fiene. "Teaching Manipulator Kinematics by Painting With Light." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-47670.
Schrapp, H., U. Stark, I. Goltz, G. Kosyna, and S. Bross. "Structure of the Rotor Tip Flow in a Highly-Loaded Single-Stage Axial-Flow Pump Approaching Stall: Part I — Breakdown of the Tip-Clearance Vortex." In ASME 2004 Heat Transfer/Fluids Engineering Summer Conference. ASMEDC, 2004. http://dx.doi.org/10.1115/ht-fed2004-56780.
Kelm, Pascal, Sebastian Schmiedeke, and Thomas Sikora. "A hierarchical, multi-modal approach for placing videos on the map using millions of Flickr photographs." In the 2011 ACM workshop. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2072627.2072634.
Mehdizadeh, N. Z., and S. Chandra. "Effect of Impact Velocity and Substrate Temperature on Boiling of Water Droplets Impinging on a Hot Stainless Steel Surface." In ASME 2004 Heat Transfer/Fluids Engineering Summer Conference. ASMEDC, 2004. http://dx.doi.org/10.1115/ht-fed2004-56179.
Uğur, Latif Onur, and Kadir Penbe. "A Social Media Supported Distance Education Application for the Building Cost Course Given in Civil Engineering Education During the COVID 19 Quarantine." In 4th International Conference of Contemporary Affairs in Architecture and Urbanism – Full book proceedings of ICCAUA2020, 20-21 May 2021. Alanya Hamdullah Emin Paşa University, 2021. http://dx.doi.org/10.38027/iccaua2021tr0030n9.
Takamatsu, Misao, Kazuyuki Imaizumi, Akinori Nagai, Takashi Sekine, and Yukimoto Maeda. "Development of Observation Techniques in Reactor Vessel of Experimental Fast Reactor Joyo." In 17th International Conference on Nuclear Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/icone17-75088.
Wallace, Rebekah. "P62 Is the use of patient simulation a more effective method of teaching medical students to recognise the early signs of clinical deterioration, compared to using videos and photographs?" In Abstracts of the Association for Simulated Practice in Healthcare Annual Conference, 6th to 7th November 2017, Telford, UK. The Association for Simulated Practice in Healthcare, 2017. http://dx.doi.org/10.1136/bmjstel-2017-aspihconf.144.
Holloway, Jason, Aswin C. Sankaranarayanan, Ashok Veeraraghavan, and Salil Tambe. "Flutter Shutter Video Camera for compressive sensing of videos." In 2012 IEEE International Conference on Computational Photography (ICCP). IEEE, 2012. http://dx.doi.org/10.1109/iccphot.2012.6215211.
Essa, Irfan. "Computational photography and video." In the working conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1385569.1385572.
Liu, Wei, and Hongyun Li. "Time-lapse photography applied to educational videos." In 2012 2nd International Conference on Consumer Electronics, Communications and Networks (CECNet). IEEE, 2012. http://dx.doi.org/10.1109/cecnet.2012.6202303.