To see the other types of publications on this topic, follow the link: Visual model.

Journal articles on the topic 'Visual model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Visual model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Guinibert, Matthew. "Learn from your environment: A visual literacy learning model." Australasian Journal of Educational Technology 36, no. 4 (2020): 173–88. http://dx.doi.org/10.14742/ajet.5200.

Abstract:
Based on the presupposition that visual literacy skills are not usually learned unaided by osmosis, but require targeted learning support, this article explores how everyday encounters with visuals can be leveraged as contingent learning opportunities. The author proposes that a learner’s environment can become a visual learning space if appropriate learning support is provided. This learning support may be delivered via the anytime and anywhere capabilities of mobile learning (m-learning), which facilitates peer learning in informal settings. The study propositioned a rhizomatic m-learning mo
2

Kanodiya, Rishabh, Smriti Mittal, and Shikha Jain. "Image Visual Description Model." IEIE Transactions on Smart Processing & Computing 9, no. 2 (2020): 169–76. http://dx.doi.org/10.5573/ieiespc.2020.9.2.169.

3

Orman, Levent. "A visual data model." Data & Knowledge Engineering 7, no. 3 (1992): 227–38. http://dx.doi.org/10.1016/0169-023x(92)90039-e.

4

Maeref, Mabroka Ali Mayouf, Fatma Alghali, and Khadija Abied. "An Advance Visual Model for Animating Behavior of Cryptographic Protocols." Journal of Computers 10, no. 5 (2015): 336–46. http://dx.doi.org/10.17706/jcp.10.5.336-346.

5

Li, Chenxi, and Nur Zaidi Bin Azraai. "Using Conceptual Blending Theory to help college students design online communication models." Dirasat: Human and Social Sciences 52, no. 2 (2024): 389–408. https://doi.org/10.35516/hum.v52i2.6327.

Abstract:
Objective: This study examines modal social media visuals using Conceptual Blending Theory (CBT) and its theoretical foundation, Conceptual Blending Network, to improve college students' visual communication on social media. Methods: The study explores social media visuals and CBN's mechanism. It goes beyond meme design to explain the key factors and reinforcement strategies needed to use CBN to create social media visuals. This study creates an empirical experimental model for validation, a novel approach. The experimental model teaches college students how to create engaging social media vis
6

Dimitriadi, Ekaterina M. "Visual city model in the context of visual urbanism." Общество: философия, история, культура, no. 4 (2022): 164–68. http://dx.doi.org/10.24158/fik.2022.4.26.

7

Yano, Katsuya, and Takeshi Kohama. "Visual search model considering spatial modification of visual attributes." Neuroscience Research 71 (September 2011): e256. http://dx.doi.org/10.1016/j.neures.2011.07.1116.

8

Xi, Yuling, Yanning Zhang, Songtao Ding, and Shaohua Wan. "Visual question answering model based on visual relationship detection." Signal Processing: Image Communication 80 (February 2020): 115648. http://dx.doi.org/10.1016/j.image.2019.115648.

9

Sonawale, Ruchita. "Visual Mind: Visual Question Answering (VQA) with CLIP Model." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (2024): 3843–49. http://dx.doi.org/10.22214/ijraset.2024.60786.

Abstract:
This paper addresses the Visual Question Answering (VQA) problem using CLIP models. The proposed approach suggests an enhanced VQA-CLIP model with additional layers for better computational performance. VQA is an increasingly important task that aims to answer open-ended questions based on images. This task has numerous applications in various fields such as medicine, education, and surveillance. The VizWiz dataset, specifically designed to assist visually impaired individuals, consists of image/question pairs along with 10 answers per question, recorded by blind participants in a natur
10

Haniah, Haniah, and Jumadil Jumadil. "Visual Technology Development for Arabic Learning Based on ADDIE Development Model." Jurnal Al-Maqayis 9, no. 2 (2022): 147. http://dx.doi.org/10.18592/jams.v9i2.5692.

Abstract:
This article aimed to determine the variety of visual technology and its development in learning Arabic. Visual technologies that could be used by Arabic language teachers were all kinds of media that could be used in general learning, both in the form of non-projected visual media and projected visual media, Non-projected visuals in the form of graphic media, realia (real objects), and models, while visual media that could be projected are LCDs, and digital cameras. In its development, Arabic learning could be developed with various models, one of which is the development of the ADDIE (analyz
11

Wahyuni, Dewi Tri, and Risma Dwi Arisona. "PENGARUH MODEL PEMBELAJARAN TGT (TEAMS GAME TOURNAMENT) TERHADAP HASIL BELAJAR IPS." JIIPSI: Jurnal Ilmiah Ilmu Pengetahuan Sosial Indonesia 4, no. 1 (2024): 76–84. http://dx.doi.org/10.21154/jiipsi.v4i1.2773.

Abstract:
This study discusses the application of the TGT learning model and the use of visual and audio-visual media in developing social studies (IPS) teaching materials on astronomical, geographical, and geological location in class VII A of SMPN 2 Jetis. Both play an important role in today's learning process, whether in the application of the TGT model or in the use of visual media such as photographs and audio-visual media such as video. A qualitative method, using direct observation and documentation, was employed to explore the impact of the TGT model on student engagement and the effectiveness of visual and audio-visual media. The results
12

Zhang, Ke, Xinbo Zhao, and Rong Mo. "A Bioinspired Visual Saliency Model." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 37, no. 3 (2019): 503–8. http://dx.doi.org/10.1051/jnwpu/20193730503.

Abstract:
This paper presents a bioinspired visual saliency model. The end-stopping mechanism in the primary visual cortex is introduced to extract features that represent contour information of latent salient objects such as corners, line intersections and line endpoints, which are combined together with brightness, color and orientation features to form the final saliency map. This model is an analog for the processing mechanism of visual signals from the retina and lateral geniculate nucleus (LGN) to the primary visual cortex V1. Firstly, according to the characteristics of the retina and LGN, an input im
13

Langer, Gerrit G., Saul Hazledine, Tim Wiegels, Ciaran Carolan, and Victor S. Lamzin. "Visual automated macromolecular model building." Acta Crystallographica Section D Biological Crystallography 69, no. 4 (2013): 635–41. http://dx.doi.org/10.1107/s0907444913000565.

14

Bullier, Jean. "Integrated model of visual processing." Brain Research Reviews 36, no. 2-3 (2001): 96–107. http://dx.doi.org/10.1016/s0165-0173(01)00085-6.

15

Shao, Y., J. E. W. Mayhew, and Y. Zheng. "Model-driven active visual tracking." Real-Time Imaging 4, no. 5 (1998): 349–59. http://dx.doi.org/10.1016/s1077-2014(98)90004-3.

16

Ban, Sang-Woo, Inwon Lee, and Minho Lee. "Dynamic visual selective attention model." Neurocomputing 71, no. 4-6 (2008): 853–56. http://dx.doi.org/10.1016/j.neucom.2007.03.003.

17

Gómez-Pedrero, José A., and José Alonso. "Phenomenological model of visual acuity." Journal of Biomedical Optics 21, no. 12 (2016): 125005. http://dx.doi.org/10.1117/1.jbo.21.12.125005.

18

Griggs, Kenneth A. "Visual agents that model organizations." Journal of Organizational Computing 2, no. 2 (1992): 203–24. http://dx.doi.org/10.1080/10919399209540182.

19

Nishimoto, Hiroyuki. "Geometric model in visual space." JSIAM Letters 15 (2023): 105–8. http://dx.doi.org/10.14495/jsiaml.15.105.

20

Zeng, Bohan, Shanglin Li, Xuhui Liu, et al. "Controllable Mind Visual Diffusion Model." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (2024): 6935–43. http://dx.doi.org/10.1609/aaai.v38i7.28519.

Abstract:
Brain signal visualization has emerged as an active research area, serving as a critical interface between the human visual system and computer vision models. Diffusion-based methods have recently shown promise in analyzing functional magnetic resonance imaging (fMRI) data, including the reconstruction of high-quality images consistent with original visual stimuli. Nonetheless, it remains a critical challenge to effectively harness the semantic and silhouette information extracted from brain signals. In this paper, we propose a novel approach, termed as Controllable Mind Visual Diffusion Model
21

Fageria, Om Prakash. "Mathematical Model for Enhancement of Visual Acuity through Electronic System Biofeedback." Journal of Advanced Research in Applied Mathematics and Statistics 07, no. 1&2 (2022): 12–17. http://dx.doi.org/10.24321/2455.7021.202203.

Abstract:
The majority of people across the world have issues with their vision caused by human ocular refractive errors. Myopia affects 51% of adults, hyperopia affects 38% of adults, and astigmatism affects 27% of adults in the United States. This frequency is much higher in the adult population. 42 percent of global causes of visual impairment are attributable to failure to take prophylactic action against these disorders. This percentage includes presbyopia in adulthood. Visual strain brought on by an excessive use of electronic devices has led to the creation of new techniques and the development o
22

Zhang, Shilin, and Xunyuan Zhang. "Pedestrian Density Estimation by a Weighted Bag of Visual Words Model." International Journal of Machine Learning and Computing 5, no. 3 (2015): 214–18. http://dx.doi.org/10.7763/ijmlc.2015.v5.509.

23

Tu, Nguyen Anh, and Young-Koo Lee. "Locality Preserving Vector and Image-Specific Topic Model for Visual Recognition." Journal of Computers 10, no. 2 (2015): 81–89. http://dx.doi.org/10.17706/jcp.10.2.81-89.

24

So, Gil-Ja, Sang-Hyun Kim, and Jeong-Yeop Kim. "Evaluation Model of the Visual Fatigue on the 3D Stereoscopic Video." International Journal of Computer Theory and Engineering 8, no. 4 (2016): 336–42. http://dx.doi.org/10.7763/ijcte.2016.v8.1068.

25

Dou, Yan, Lingfu Kong, and Liufeng Wang. "A Computational Model of Visual Attention Based on Visual Entropy." Acta Optica Sinica 29, no. 9 (2009): 2511–15. http://dx.doi.org/10.3788/aos20092909.2511.

26

Cehovin, Luka, Matej Kristan, and Ales Leonardis. "Robust Visual Tracking Using an Adaptive Coupled-Layer Visual Model." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 4 (2013): 941–53. http://dx.doi.org/10.1109/tpami.2012.145.

27

Pourasad, Yaghoub. "Developing a Modified HMAX Model Based on Combined with the Visual Featured Model." Indonesian Journal of Electrical Engineering and Computer Science 5, no. 1 (2017): 773–85. https://doi.org/10.11591/ijeecs.v7.i3.pp773-785.

Abstract:
Identifying objects by modeling the human visual system, as an effective method in intelligent identification, has attracted the attention of many researchers. Although machines have high computational speed, they are very weak compared to humans in terms of diagnosis. Experience has shown that in many areas of image processing, algorithms with biological backing have more simplicity and better performance. The human visual system first selects the main parts of the image, which is provided by the visual featured model, and then proceeds to object recognition, which is a hierarchical operatio
28

Ke, Ziyi, Ziqiang Chen, Huanlei Wang, and Liang Yin. "A Visual Human-Computer Interaction System Based on Hybrid Visual Model." Security and Communication Networks 2022 (June 30, 2022): 1–13. http://dx.doi.org/10.1155/2022/9562104.

Abstract:
The traditional human-computer interaction is mainly through the mouse, keyboard, remote control, and other peripheral equipment electromagnetic signal transmission. This paper aims to build a visual human-computer interaction system through a series of deep learning and machine vision models, so that people can achieve complete human-computer interaction only through the camera and screen. The established visual human-computer interaction system mainly includes the function modes of three basic peripherals in human-computer interaction: keyboard, mouse (X-Y position indicator), and remote con
29

Jang, Hyunwoong, and Soosun Cho. "Image Classification Using Bag of Visual Words and Visual Saliency Model." KIPS Transactions on Software and Data Engineering 3, no. 12 (2014): 547–52. http://dx.doi.org/10.3745/ktsde.2014.3.12.547.

30

Yu, Jing, Xiaoze Jiang, Zengchang Qin, Weifeng Zhang, Yue Hu, and Qi Wu. "Learning Dual Encoding Model for Adaptive Visual Understanding in Visual Dialogue." IEEE Transactions on Image Processing 30 (2021): 220–33. http://dx.doi.org/10.1109/tip.2020.3034494.

31

Hazen, T. J. "Visual model structures and synchrony constraints for audio-visual speech recognition." IEEE Transactions on Audio, Speech and Language Processing 14, no. 3 (2006): 1082–89. http://dx.doi.org/10.1109/tsa.2005.857572.

32

Chowdhury, Souvik, and Badal Soni. "ENVQA: Improving Visual Question Answering model by enriching the visual feature." Engineering Applications of Artificial Intelligence 142 (February 2025): 109948. https://doi.org/10.1016/j.engappai.2024.109948.

33

Jeannerod, M., and P. Jacob. "Visual cognition: a new look at the two-visual systems model." Neuropsychologia 43, no. 2 (2005): 301–12. http://dx.doi.org/10.1016/j.neuropsychologia.2004.11.016.

34

Wang, Diane D., and Shuzhong Zhang. "Standardized Visual Predictive Check Versus Visual Predictive Check for Model Evaluation." Journal of Clinical Pharmacology 52, no. 1 (2012): 39–54. http://dx.doi.org/10.1177/0091270010390040.

35

Peng, Peixi, Wanshu Fan, Yue Shen, Xin Yang, and Dongsheng Zhou. "A Global Visual Information Intervention Model for Medical Visual Question Answering." Computers in Biology and Medicine 192 (June 2025): 110195. https://doi.org/10.1016/j.compbiomed.2025.110195.

36

Matin, Leonard, and Wenxun Li. "Mislocalizations of Visual Elevation and Visual Vertical Induced by Visual Pitch: the Great Circle Model." Annals of the New York Academy of Sciences 656, no. 1 (1992): 242–65. http://dx.doi.org/10.1111/j.1749-6632.1992.tb25213.x.

37

Cui, Yanqing, Guangjie Han, and Hongbo Zhu. "A Novel Online Teaching Effect Evaluation Model Based on Visual Question Answering." 網際網路技術學刊 (Journal of Internet Technology) 23, no. 1 (2022): 93–100. http://dx.doi.org/10.53106/160792642022012301009.

Abstract:
The paper proposes a novel visual question answering (VQA)-based online teaching effect evaluation model. Based on the text interaction between teacher and students, we give a guide-attention (GA) model to discover the directive clues. Combining the self-attention (SA) models, we reweight the vital feature to locate the critical information on the whiteboard and students’ faces and further recognize their content and facial expressions. Three branches of information are encoded into the feature vectors to be fed into a bidirectional GRU network. With the real labels of the s
38

Senthamizhselvi, S., and A. Saravanan. "Cuckoo Search Algorithm with Deep Learning Driven Robust Visual Place Recognition Model." International Journal of Science and Research (IJSR) 12, no. 10 (2023): 1443–49. http://dx.doi.org/10.21275/sr231018143622.

39

Xue, Wanli, Zhiyong Feng, Chao Xu, Tong Liu, Zhaopeng Meng, and Chengwei Zhang. "Visual tracking via improving motion model and model updater." International Journal of Advanced Robotic Systems 15, no. 1 (2018): 172988141875623. http://dx.doi.org/10.1177/1729881418756238.

Abstract:
Motion model and model updater are two necessary components for online visual tracking. On the one hand, an effective motion model needs to strike the right balance between target processing, to account for the target appearance and scene analysis, and to describe stable background information. Most conventional trackers focus on one aspect out of the two and hence are not able to achieve the correct balance. On the other hand, the admirable model update needs to consider both the tracking speed and the model drift. Most tracking models are updated on every frame or fixed frames, so it cannot
40

Zhou, Lijun, Antoine Ledent, Qintao Hu, Ting Liu, Jianlin Zhang, and Marius Kloft. "Model Uncertainty Guides Visual Object Tracking." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (2021): 3581–89. http://dx.doi.org/10.1609/aaai.v35i4.16473.

Abstract:
Model object trackers largely rely on the online learning of a discriminative classifier from potentially diverse sample frames. However, noisy or insufficient amounts of samples can deteriorate the classifiers' performance and cause tracking drift. Furthermore, alterations such as occlusion and blurring can cause the target to be lost. In this paper, we make several improvements aimed at tackling uncertainty and improving robustness in object tracking. Our first and most important contribution is to propose a sampling method for the online learning of object trackers based on uncertainty adju
41

Ge, Daohui, Ruyi Liu, Yunan Li, and Qiguang Miao. "Reliable Memory Model for Visual Tracking." Electronics 10, no. 20 (2021): 2488. http://dx.doi.org/10.3390/electronics10202488.

Abstract:
Effectively learning the appearance change of a target is the key point of an online tracker. When occlusion and misalignment occur, the tracking results usually contain a great amount of background information, which heavily affects the ability of a tracker to distinguish between targets and backgrounds, eventually leading to tracking failure. To solve this problem, we propose a simple and robust reliable memory model. In particular, an adaptive evaluation strategy (AES) is proposed to assess the reliability of tracking results. AES combines the confidence of the tracker predictions and the s
42

Yang, Yi Huai. "Visual Simulation of Mobile Channel Model." Applied Mechanics and Materials 246-247 (December 2012): 1209–13. http://dx.doi.org/10.4028/www.scientific.net/amm.246-247.1209.

Abstract:
Simulink is the integrated environment of system modelling and simulation, which is being widespread used. This paper describes the MATLAB visual simulation of the propagation path loss model for telecommunication systems. We simulated the whole process of COST231-Walfisch-Ikegami model with high accuracy, built a visual simulation frame and the path loss curves are given. This method can be used in studying other propagation path loss models in propagation environments.
43

Xue, Qing, Xiao Ming Ren, Chang Wei Zheng, and Yong Hong Li. "Research of Driver’s Visual Perception Model." Applied Mechanics and Materials 198-199 (September 2012): 932–35. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.932.

Abstract:
Driving behavior research is the hot spot in the field of transportation nowadays owning to the stern traffic safety. The visual perception model based on bounded rationality has incorporated the attention distribution and concentration; driving fatigue; driving experience and visual information retrieve which provides a vivid simulation of real driver’s visual perception procedure and it lays the foundation for the study of driver’s decision and manipulation behavior.
44

Sacha, Dominik, Andreas Stoffel, Florian Stoffel, Bum Chul Kwon, Geoffrey Ellis, and Daniel A. Keim. "Knowledge Generation Model for Visual Analytics." IEEE Transactions on Visualization and Computer Graphics 20, no. 12 (2014): 1604–13. http://dx.doi.org/10.1109/tvcg.2014.2346481.

45

Demiralp, Cagatay, Carlos E. Scheidegger, Gordon L. Kindlmann, David H. Laidlaw, and Jeffrey Heer. "Visual Embedding: A Model for Visualization." IEEE Computer Graphics and Applications 34, no. 1 (2014): 10–15. http://dx.doi.org/10.1109/mcg.2014.18.

46

Yang, R., and G. Cottrell. "Risk Averse Visual Decision Making Model." Journal of Vision 12, no. 9 (2012): 158. http://dx.doi.org/10.1167/12.9.158.

47

Cantor, Robert M. "A semiotic model of visual perception." Semiotica 2014, no. 200 (2014): 1–20. http://dx.doi.org/10.1515/sem-2014-0008.

48

Spratling, M. W., and M. H. Johnson. "A Feedback Model of Visual Attention." Journal of Cognitive Neuroscience 16, no. 2 (2004): 219–37. http://dx.doi.org/10.1162/089892904322984526.

Abstract:
Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain, our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing.
49

Stringa, Luigi. "A VISUAL MODEL FOR PATTERN RECOGNITION." International Journal of Neural Systems 03, supp01 (1992): 31–39. http://dx.doi.org/10.1142/s0129065792000358.

Abstract:
A general model for an optical recognition system capable of simultaneous recognition of patterns at different resolution levels is outlined. The model is based on two hierarchic stages of processing networks and presents interesting analogies with the human visual system. Illustrative applications and preliminary experimental results are also briefly discussed.
50

Amit, Yali, and Donald Geman. "A Computational Model for Visual Selection." Neural Computation 11, no. 7 (1999): 1691–715. http://dx.doi.org/10.1162/089976699300016197.

Abstract:
We propose a computational model for detecting and localizing instances from an object class in static gray-level images. We divide detection into visual selection and final classification, concentrating on the former: drastically reducing the number of candidate regions that require further, usually more intensive, processing, but with a minimum of computation and missed detections. Bottom-up processing is based on local groupings of edge fragments constrained by loose geometrical relationships. They have no a priori semantic or geometric interpretation. The role of training is to select spec