Selected scientific literature on the topic "Visual learning"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Visual learning".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is available in the metadata.
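Purely as an illustration of the workflow described above (one stored record rendered in whichever citation style you need), here is a hedged Python sketch; the record fields, the `apa` and `mla` helpers, and the formatting rules are simplified assumptions, not the site's actual citation engine.

```python
# Minimal sketch: render one bibliographic record (entry 1 below) in two citation styles.
# Field names and formatting rules are simplified assumptions for illustration.
record = {
    "authors": ["Sze, Daniel Y."],
    "title": "Visual Learning",
    "journal": "Journal of Vascular and Interventional Radiology",
    "volume": 32, "issue": 3, "year": 2021, "pages": "331",
    "doi": "10.1016/j.jvir.2021.01.265",
}

def apa(r):
    # APA-style journal article (simplified; real APA abbreviates given names).
    return (f'{r["authors"][0]} ({r["year"]}). {r["title"]}. {r["journal"]}, '
            f'{r["volume"]}({r["issue"]}), {r["pages"]}. https://doi.org/{r["doi"]}')

def mla(r):
    # MLA-style journal article (simplified).
    return (f'{r["authors"][0]} "{r["title"]}." {r["journal"]}, '
            f'vol. {r["volume"]}, no. {r["issue"]}, {r["year"]}, p. {r["pages"]}.')

print(apa(record))
print(mla(record))
```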

Journal articles on the topic "Visual learning"

1

Sze, Daniel Y. "Visual Learning." Journal of Vascular and Interventional Radiology 32, no. 3 (2021): 331. http://dx.doi.org/10.1016/j.jvir.2021.01.265.

2

Nida, Diini Fitrahtun, Muhyiatul Fadilah, Ardi Ardi, and Suci Fajrina. "Characteristics of Visual Literacy-Based Biology Learning Module Validity on Photosynthesis Learning Materials." JURNAL PAJAR (Pendidikan dan Pengajaran) 7, no. 4 (2023): 785. http://dx.doi.org/10.33578/pjr.v7i4.9575.

Abstract:
Visual literacy is the skill to interpret and give meaning to information in the form of images or visuals. Visual literacy is included in the list of 21st-century skills. The observation results indicate that most of the students have not mastered visual literacy well. One of the efforts that can be made to improve visual literacy is the provision of appropriate and right teaching materials. The research is an R&D (Research and Development) using a 4-D model, which is modified to 3-D (define, design, develop). The instruments used were content analysis sheets and validation questionnaires
3

A Sowe, Ebou. "Momentum Contrast for Unsupervised Visual Representation Learning." Journal of Advances in Civil and Mechanical Engineering 2, no. 1 (2025): 01–06. https://doi.org/10.64030/3067-2457.02.01.02.

Abstract (see the illustrative sketch after this list):
This brief report presents a novel unsupervised learning representation learning method called momentum contrast. Momentum contrast uses a contrastive learning technique to learn representations by comparing features of related yet dissimilar images for efficient feature extraction and unsupervised representation learning. Similar images are grouped together, and dissimilar images are placed far apart. The method builds upon previous works in contrastive learning but includes a momentum optimisation step to improve representation learning performance and generate better quality representations
4

Liu, Yan, Yang Liu, Shenghua Zhong, and Songtao Wu. "Implicit Visual Learning." ACM Transactions on Intelligent Systems and Technology 8, no. 2 (2017): 1–24. http://dx.doi.org/10.1145/2974024.

5

Cruz, Rodrigo Santa, Basura Fernando, Anoop Cherian, and Stephen Gould. "Visual Permutation Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 12 (2019): 3100–3114. http://dx.doi.org/10.1109/tpami.2018.2873701.

6

Jones, Rachel. "Visual learning visualized." Nature Reviews Neuroscience 4, no. 1 (2003): 10. http://dx.doi.org/10.1038/nrn1014.

7

Lu, Zhong-Lin, Tianmiao Hua, Chang-Bing Huang, Yifeng Zhou, and Barbara Anne Dosher. "Visual perceptual learning." Neurobiology of Learning and Memory 95, no. 2 (2011): 145–51. http://dx.doi.org/10.1016/j.nlm.2010.09.010.

8

Richler, Jennifer J., and Thomas J. Palmeri. "Visual category learning." Wiley Interdisciplinary Reviews: Cognitive Science 5, no. 1 (2013): 75–94. http://dx.doi.org/10.1002/wcs.1268.

9

Guinibert, Matthew. "Learn from your environment: A visual literacy learning model." Australasian Journal of Educational Technology 36, no. 4 (2020): 173–88. http://dx.doi.org/10.14742/ajet.5200.

Abstract:
Based on the presupposition that visual literacy skills are not usually learned unaided by osmosis, but require targeted learning support, this article explores how everyday encounters with visuals can be leveraged as contingent learning opportunities. The author proposes that a learner’s environment can become a visual learning space if appropriate learning support is provided. This learning support may be delivered via the anytime and anywhere capabilities of mobile learning (m-learning), which facilitates peer learning in informal settings. The study propositioned a rhizomatic m-learning mo
10

Taga, Tadashi, Kazuhito Yoshizaki, and Kimiko Kato. "Visual field difference in visual statistical learning." Proceedings of the Annual Convention of the Japanese Psychological Association 79 (September 22, 2015): 2EV—074–2EV—074. http://dx.doi.org/10.4992/pacjpa.79.0_2ev-074.

More sources
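Entry 3 in the article list above summarizes momentum contrast: two augmented views of an image are encoded, matching views are pulled together, other images (held in a queue of past keys) are pushed apart, and the key encoder is updated as a slow momentum copy of the query encoder. As a hedged illustration only (a minimal sketch with assumed toy encoders and hyperparameters, not the cited paper's code), the pieces might fit together like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoCoSketch(nn.Module):
    """Toy momentum-contrast sketch; encoders, sizes, and hyperparameters are illustrative assumptions."""
    def __init__(self, dim=128, queue_size=4096, momentum=0.99, temperature=0.07):
        super().__init__()
        # Query encoder is trained by backprop; key encoder is its slow momentum copy.
        self.encoder_q = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
        self.encoder_k = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
        self.encoder_k.load_state_dict(self.encoder_q.state_dict())
        for p in self.encoder_k.parameters():
            p.requires_grad = False
        self.m, self.t = momentum, temperature
        # Queue of past keys acts as the pool of "dissimilar" (negative) examples.
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=1))

    @torch.no_grad()
    def _momentum_update(self):
        # Exponential moving average: the key encoder follows the query encoder slowly.
        for pq, pk in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            pk.data = self.m * pk.data + (1.0 - self.m) * pq.data

    def forward(self, img_q, img_k):
        q = F.normalize(self.encoder_q(img_q), dim=1)            # queries
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(img_k), dim=1)        # positive keys
        l_pos = (q * k).sum(dim=1, keepdim=True)                 # matching views pulled together
        l_neg = q @ self.queue.t()                               # other images pushed apart
        logits = torch.cat([l_pos, l_neg], dim=1) / self.t
        labels = torch.zeros(logits.size(0), dtype=torch.long)   # the positive sits at index 0
        loss = F.cross_entropy(logits, labels)
        # Refresh the queue with the newest keys (simplified: drop the oldest entries).
        self.queue = torch.cat([k, self.queue[:-k.size(0)]], dim=0)
        return loss

# Usage: two lightly different views of the same batch of 32x32 RGB images.
batch = torch.randn(8, 3, 32, 32)
loss = MoCoSketch()(batch, batch + 0.01 * torch.randn_like(batch))
loss.backward()
```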

Theses / dissertations on the topic "Visual learning"

1

Zhu, Fan. "Visual feature learning." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/8218/.

Abstract:
Categorization is a fundamental problem of many computer vision applications, e.g., image classification, pedestrian detection and face recognition. The robustness of a categorization system heavily relies on the quality of features, by which data are represented. The prior arts of feature extraction can be concluded in different levels, which, in a bottom up order, are low level features (e.g., pixels and gradients) and middle/high-level features (e.g., the BoW model and sparse coding). Low level features can be directly extracted from images or videos, while middle/high-level features are co
2

Goh, Hanlin. "Learning deep visual representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.

Abstract:
Recent advances in deep learning and image processing present an opportunity to unify these two complementary research fields in order to better solve the problem of classifying images into semantic categories. Deep learning brings to image processing the representational power needed to improve the performance of image classification methods. This thesis proposes new methods for learning deep visual representations to address this task. Deep learning has been approached from two
3

Walker, Catherine Livesay. "Visual learning through Hypermedia." CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1148.

4

Owens, Andrew (Andrew Hale). "Learning visual models from paired audio-visual examples." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107352.

Abstract:
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. Cataloged from PDF version of thesis. Includes bibliographical references (pages 93-104). From the clink of a mug placed onto a saucer to the bustle of a busy café, our days are filled with visual experiences that are accompanied by distinctive sounds. In this thesis, we show that these sounds can provide a rich training signal for learning visual models. First, we propose the task of predicting the sound that an object makes when struck as a way of studying physical
5

Peyre, Julia. "Learning to detect visual relations." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE016.

Abstract:
We study the problem of detecting visual relations of the form (subject, predicate, object) in images, which are intermediate entities between objects and complex visual scenes. This thesis tackles two major challenges: (1) the problem of costly annotations for training strongly supervised models, and (2) the variation in the visual appearance of relations. We propose a first weakly supervised visual relation detection model, using only image-level annotations, which, given pre-trained object detectors,
6

Wang, Zhaoqing. "Self-supervised Visual Representation Learning." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29595.

Abstract:
In general, large-scale annotated data are essential to training deep neural networks in order to achieve better performance in visual feature learning for various computer vision applications. Unfortunately, the amount of annotations is challenging to obtain, requiring a high cost of money and human resources. The dependence on large-scale annotated data has become a crucial bottleneck in developing an advanced intelligence perception system. Self-supervised visual representation learning, a subset of unsupervised learning, has gained popularity because of its ability to avoid the high cost
7

Tang-Wright, Kimmy. "Visual topography and perceptual learning in the primate visual system." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:388b9658-dceb-443a-a19b-c960af162819.

Abstract:
The primate visual system is organised and wired in a topological manner. From the eye well into extrastriate visual cortex, a preserved spatial representation of the visual world is maintained across many levels of processing. Diffusion-weighted imaging (DWI), together with probabilistic tractography, is a non-invasive technique for mapping connectivity within the brain. In this thesis I probed the sensitivity and accuracy of DWI and probabilistic tractography by quantifying its capacity to detect topological connectivity in the post mortem macaque brain, between the lateral geniculate
8

Shi, Xiaojin. "Visual learning from small training datasets." Diss., Digital Dissertations Database. Restricted to UC campuses, 2005. http://uclibs.org/PID/11984.

9

Liu, Jingen. "Learning Semantic Features for Visual Recognition." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3358.

Abstract (see the illustrative sketch after this list):
Visual recognition (e.g., object, scene and action recognition) is an active area of research in computer vision due to its increasing number of real-world applications such as video (image) indexing and search, intelligent surveillance, human-machine interaction, robot navigation, etc. Effective modeling of the objects, scenes and actions is critical for visual recognition. Recently, bag of visual words (BoVW) representation, in which the image patches or video cuboids are quantized into visual words (i.e., mid-level features) based on their appearance similarity using clustering, has been wi
10

Beale, Dan. "Autonomous visual learning for robotic systems." Thesis, University of Bath, 2012. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558886.

Abstract:
This thesis investigates the problem of visual learning using a robotic platform. Given a set of objects the robots task is to autonomously manipulate, observe, and learn. This allows the robot to recognise objects in a novel scene and pose, or separate them into distinct visual categories. The main focus of the work is in autonomously acquiring object models using robotic manipulation. Autonomous learning is important for robotic systems. In the context of vision, it allows a robot to adapt to new and uncertain environments, updating its internal model of the world. It also reduces the amount
More sources
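Entry 9 in the list of theses above describes the bag-of-visual-words (BoVW) representation: local patch descriptors are quantized into "visual words" by clustering on appearance, and each image is then represented as a histogram over that vocabulary. As a hedged sketch only (the function names and the random stand-in descriptors are assumptions, not the cited thesis's code), the idea can be written in a few lines with NumPy and scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, vocab_size=64, seed=0):
    """Cluster local descriptors pooled from many images into a visual vocabulary."""
    kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=seed)
    kmeans.fit(np.vstack(descriptor_sets))
    return kmeans

def bovw_histogram(kmeans, descriptors):
    """Quantize one image's descriptors into visual words and count them."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)   # normalize so images with different patch counts compare

# Usage with random stand-ins for local descriptors (e.g. SIFT would give 128-D vectors).
rng = np.random.default_rng(0)
images = [rng.normal(size=(rng.integers(50, 200), 128)) for _ in range(10)]
vocab = build_vocabulary(images)
feature = bovw_histogram(vocab, images[0])   # mid-level feature for one image
print(feature.shape)                         # (64,)
```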

Books on the topic "Visual learning"

1

Ikeuchi, Katsushi, and Manuela M. Veloso, eds. Symbolic Visual Learning. Oxford University Press, 1997.

2

Nayar, Shree K., and Tomaso Poggio, eds. Early Visual Learning. Oxford University Press, 1996.

3

Moore, David M., and Francis M. Dwyer, eds. Visual Literacy: A Spectrum of Visual Learning. Educational Technology Publications, 1994.

4

Erin, Jane N., ed. Visual Handicaps and Learning. 3rd ed. PRO-ED, 1992.

5

Liberty, Jesse. Learning Visual Basic .NET. O'Reilly, 2002.

6

Rourke, Adrianne. Improving visual teaching materials. Nova Science Publishers, 2009.

7

Baratta, Alex. Visual writing. Cambridge Scholars, 2010.

8

Fahle, Manfred, and Tomaso Poggio, eds. Perceptual Learning. MIT Press, 2002.

9

Vakanski, Aleksandar, and Farrokh Janabi-Sharifi. Robot Learning by Visual Observation. John Wiley & Sons, Inc., 2017. http://dx.doi.org/10.1002/9781119091882.

10

Beatty, Grace Joely. PowerPoint: The visual learning guide. Prima Pub., 1994.

More sources

Book chapters on the topic "Visual learning"

1

Burge, M., and W. Burger. "Learning visual ideals." In Image Analysis and Processing. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63508-4_138.

2

Burge, M., and W. Burger. "Learning visual ideals." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0025067.

3

Panciroli, Chiara, Laura Corazza, and Anita Macauda. "Visual-Graphic Learning." In Proceedings of the 2nd International and Interdisciplinary Conference on Image and Imagination. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41018-6_6.

4

Lu, Zhong-Lin, and Barbara Anne Dosher. "Visual Perceptual Learning." In Encyclopedia of the Sciences of Learning. Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_258.

5

Lovegrove, William. "The Visual Deficit Hypothesis." In Learning Disabilities. Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4613-9133-3_8.

6

Golon, Alexandra Shires. "Learning Styles Differentiation." In Visual-Spatial Learners, 2nd ed. Routledge, 2021. http://dx.doi.org/10.4324/9781003239482-1.

7

Wu, Qi, Peng Wang, Xin Wang, Xiaodong He, and Wenwu Zhu. "Video Representation Learning." In Visual Question Answering. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0964-1_7.

8

Wu, Qi, Peng Wang, Xin Wang, Xiaodong He, and Wenwu Zhu. "Deep Learning Basics." In Visual Question Answering. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0964-1_2.

9

Grobstein, Paul, and Kao Liang Chow. "Visual System Development, Plasticity." In Learning and Memory. Birkhäuser Boston, 1989. http://dx.doi.org/10.1007/978-1-4899-6778-7_22.


Conference papers on the topic "Visual learning"

1

Liang, Anthony, Jesse Thomason, and Erdem Bıyık. "ViSaRL: Visual Reinforcement Learning Guided by Human Saliency." In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2024. https://doi.org/10.1109/iros58592.2024.10801388.

2

Tsiamas, Ioannis, Santiago Pascual, Chunghsin Yeh, and Joan Serrà. "Sequential Contrastive Audio-Visual Learning." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10888656.

3

Neuvonen, Heidi. "Visual Learning of Higher Education E-Learning Students." In 17th Annual International Conference of Education, Research and Innovation. IATED, 2024. https://doi.org/10.21125/iceri.2024.0513.

4

de Luna, Robert G., Desiree M. Mendoza, Michaella R. Isada, et al. "VIScial: Visual Classification of Facial Shape Using Deep Transfer Learning." In 2024 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, 2024. https://doi.org/10.1109/ist63414.2024.10759179.

5

Song, Yingjin, Denis Paperno, and Albert Gatt. "Context-aware Visual Storytelling with Visual Prefix Tuning and Contrastive Learning." In Proceedings of the 17th International Natural Language Generation Conference. Association for Computational Linguistics, 2024. https://doi.org/10.18653/v1/2024.inlg-main.32.

6

Buijs, Jean M., and Michael S. Lew. "Learning visual concepts." In the seventh ACM international conference. ACM Press, 1999. http://dx.doi.org/10.1145/319878.319880.

7

Zhao, Qi, and Christof Koch. "Learning visual saliency." In 2011 45th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2011. http://dx.doi.org/10.1109/ciss.2011.5766178.

8

Berardi, Nicoletta, and Adriana Fiorentini. "Visual Perceptual Learning." In Proceedings of the International School of Biophysics. World Scientific, 2001. http://dx.doi.org/10.1142/9789812799975_0034.

9

Ji, Daomin, Hui Luo, and Zhifeng Bao. "Visualization Recommendation Through Visual Relation Learning and Visual Preference Learning." In 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023. http://dx.doi.org/10.1109/icde55515.2023.00145.

10

Chang, Guangming, Chunfen Yuan, and Weiming Hu. "Interclass visual similarity based visual vocabulary learning." In 2011 First Asian Conference on Pattern Recognition (ACPR 2011). IEEE, 2011. http://dx.doi.org/10.1109/acpr.2011.6166597.


Reports by organizations on the topic "Visual learning"

1

Bhanu, Bir. Learning Integrated Visual Database for Image Exploitation. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada413389.

2

Edelman, Shimon, Heinrich H. Buelthoff, and Erik Sklar. Task and Object Learning in Visual Recognition. Defense Technical Information Center, 1991. http://dx.doi.org/10.21236/ada259961.

3

Petrie, Christopher, and Katija Aladin. Spotlight: Visual Arts. HundrED, 2020. http://dx.doi.org/10.58261/azgu5536.

Abstract:
HundrED and Supercell believe that fostering Visual Art skills can be just as important as numeracy and literacy. Furthermore, we also believe that Visual Arts can be integrated into all learning in schools and developed in a diversity of ways. To this end, the purpose of this project is to shine a spotlight, and make globally visible, leading education innovations from around the world doing exceptional work on developing the skill of Visual Arts for all students, teachers, and leaders in schools today.
4

Jiang, Yuhong V. Implicit Learning of Complex Visual Contexts Under Non-Optimal Conditions. Defense Technical Information Center, 2007. http://dx.doi.org/10.21236/ada482119.

5

Poggio, Tomaso, and Stephen Smale. Hierarchical Kernel Machines: The Mathematics of Learning Inspired by Visual Cortex. Defense Technical Information Center, 2013. http://dx.doi.org/10.21236/ada580529.

6

Harmon, Jennifer. Exploring the Efficacy of Active and Authentic Learning in the Visual Merchandising Classroom. Iowa State University, Digital Repository, 2016. http://dx.doi.org/10.31274/itaa_proceedings-180814-1524.

7

Mills, Kathy, Elizabeth Heck, Alinta Brown, Patricia Funnell, and Lesley Friend. Senses together: Multimodal literacy learning in primary education: Final project report. Institute for Learning Sciences and Teacher Education, Australian Catholic University, 2023. http://dx.doi.org/10.24268/acu.8zy8y.

Abstract:
[Executive summary] Literacy studies have traditionally focussed on the seen. The other senses are typically under-recognised in literacy studies and research, where the visual sense has been previously prioritised. However, spoken and written language, images, gestures, touch, movement, and sound are part of everyday literacy practices. Communication is no longer focussed on visual texts but is a multisensory experience. Effective communication depends then on sensory orchestration, which unifies the body and its senses. Understanding sensory orchestration is crucial to literacy learning in t
8

Nahorniak, Maya. Occupation of profession: Methodology of laboratory classes from practically-oriented courses under distance learning (on an example of discipline «Radioproduction»). Ivan Franko National University of Lviv, 2022. http://dx.doi.org/10.30970/vjo.2022.51.11412.

Abstract:
The article deals with the peculiarities of the use of verbal, visual and practical methods in the distance learning of professional practically-oriented discipline «Radioproduction», are offered new techniques for the use of these methods during the presentation of theoretical material and the creation of a media product (audiovisual content), due to the acquisition of a specialty in conditions online. It is proved that in distance learning, this discipline is inadmissible to absolutize the significance of verbal methods (narrative, explanation, conversation, discussion, lecture) and that all
9

Yu, Wanchi. Implicit Learning of Children with and without Developmental Language Disorder across Auditory and Visual Categories. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.7460.

10

Shepiliev, Dmytro S., Yevhenii O. Modlo, Yuliia V. Yechkalo, et al. WebAR development tools: An overview. CEUR Workshop Proceedings, 2021. http://dx.doi.org/10.31812/123456789/4356.

Abstract:
Web augmented reality (WebAR) development tools aimed at improving the visual aspects of learning are far from being visual and available themselves. This causing problems of selecting and testing WebAR development tools for CS undergraduates mastering in web-design basics. The research is aimed at conducting comparative analysis of WebAR tools to select those appropriated for beginners.