To see the other types of publications on this topic, follow the link: Attribut visuel.

Journal articles on the topic 'Attribut visuel'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Attribut visuel.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Lazaridou, Angeliki, Georgiana Dinu, Adam Liska, and Marco Baroni. "From Visual Attributes to Adjectives through Decompositional Distributional Semantics." Transactions of the Association for Computational Linguistics 3 (December 2015): 183–96. http://dx.doi.org/10.1162/tacl_a_00132.

Abstract:
As automated image analysis progresses, there is increasing interest in richer linguistic annotation of pictures, with attributes of objects (e.g., furry, brown…) attracting most attention. By building on the recent “zero-shot learning” approach, and paying attention to the linguistic nature of attributes as noun modifiers, and specifically adjectives, we show that it is possible to tag images with attribute-denoting adjectives even when no training data containing the relevant annotation are available. Our approach relies on two key observations. First, objects can be seen as bundles of attri…
2

de Souza, Kelly Rejane, Rogério Melloni, and Gustavo Magno dos Reis Ferreira. "Qualidade Ambiental de Áreas de Pastagem por Meio de Atributos Visuais." Revista Brasileira de Geografia Física 11, no. 5 (2018): 1776–85. http://dx.doi.org/10.26848/rbgf.v11.5.p1776-1785.

3

Schweizer, Tom A., and Mike J. Dixon. "The influence of visual and nonvisual attributes in visual object identification." Journal of the International Neuropsychological Society 12, no. 2 (2006): 176–83. http://dx.doi.org/10.1017/s1355617706060279.

Abstract:
To elucidate the role of visual and nonvisual attribute knowledge on visual object identification, we present data from three patients, each with visual object identification impairments as a result of different etiologies. Patients were shown novel computer-generated shapes paired with different labels referencing known entities. On test trials they were shown the novel shapes alone and had to identify them by generating the label with which they were formerly paired. In all conditions the same triad of computer-generated shapes were used. In one condition, the labels (banjo, guitar, violin)…
4

Li, Qiaozhe, Xin Zhao, Ran He, and Kaiqi Huang. "Visual-Semantic Graph Reasoning for Pedestrian Attribute Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8634–41. http://dx.doi.org/10.1609/aaai.v33i01.33018634.

Abstract:
Pedestrian attribute recognition in surveillance is a challenging task due to poor image quality, significant appearance variations and diverse spatial distribution of different attributes. This paper treats pedestrian attribute recognition as a sequential attribute prediction problem and proposes a novel visual-semantic graph reasoning framework to address this problem. Our framework contains a spatial graph and a directed semantic graph. By performing reasoning using the Graph Convolutional Network (GCN), one graph captures spatial relations between regions and the other learns potential sem…
5

Casey, Elizabeth J. "Visual Display Representation of Multidimensional Systems: The Effect of Information Correlation and Display Integrality." Proceedings of the Human Factors Society Annual Meeting 30, no. 5 (1986): 430–34. http://dx.doi.org/10.1177/154193128603000504.

Abstract:
This study provides data regarding the use of object displays and schematic face displays to present dynamic, multivariate system information. Twelve subjects detected and diagnosed failures in a system whose variables were intercorrelated. Three visual, analog displays (a bar graph display, a pentagon, and a schematic face display) represented the system. These displays differed in the degree of integrality of their component features. Detection performance yielded a speed/accuracy tradeoff with little evidence of superiority for any of the displays. However, diagnosis performance showed a supe…
6

Gulshad, Sadaf, and Arnold Smeulders. "Counterfactual attribute-based visual explanations for classification." International Journal of Multimedia Information Retrieval 10, no. 2 (2021): 127–40. http://dx.doi.org/10.1007/s13735-021-00208-3.

Abstract:
In this paper, our aim is to provide human understandable intuitive factual and counterfactual explanations for the decisions of neural networks. Humans tend to reinforce their decisions by providing attributes and counterattributes. Hence, in this work, we utilize attributes as well as examples to provide explanations. In order to provide counterexplanations we make use of directed perturbations to arrive at the counterclass attribute values; in doing so, we explain what is present and what is absent in the original image. We evaluate our method when images are misclassified into close…
7

Papathomas, T. V., I. Kovács, and A. Feher. "Interocular Grouping of Visual Attributes during Binocular Rivalry." Perception 26, no. 1_suppl (1997): 304. http://dx.doi.org/10.1068/v970377.

Abstract:
The need to revise the eye competition hypothesis of binocular rivalry, and to include the role of stimulus competition, has been demonstrated recently by Kovács, Papathomas, Feher, and Yang (1996, Proceedings of the National Academy of Sciences of the USA, 93, 15508–15511) and Logothetis, Leopold, and Sheinberg (1996, Nature (London), 380, 621–624). Kovács et al showed that observers can obtain one-colour percepts when presented with chromatically rivalrous stimuli, even when there are targets of two different colours in each eye. In this study we investigate whether other attributes, in addition…
8

Poom, Leo. "Visual Inter-Attribute Contour Completion." Perception 30, no. 7 (2001): 855–65. http://dx.doi.org/10.1068/p3222.

Abstract:
A new visual phenomenon, inter-attribute illusory (completed) contours, is demonstrated. Contour completions are perceived between any combination of spatially separate pairs of inducing elements (Kanizsa-like ‘pacman’ figures) defined either by pictorial cues (luminance contrast or offset gratings), temporal contrast (motion, second-order-motion or ‘phantom’ contours), or binocular-disparity contrast. In a first experiment, observers reported the perceived occurrence of contour completion for all pair combinations of inducing elements. In a second experiment they rated the perceived clarity o…
9

Zeki, Semir. "The Ferrier Lecture 1995 Behind the Seen: The functional specialization of the brain in space and time." Philosophical Transactions of the Royal Society B: Biological Sciences 360, no. 1458 (2005): 1145–83. http://dx.doi.org/10.1098/rstb.2005.1666.

Abstract:
The visual brain consists of many different visual areas, which are functionally specialized to process and perceive different attributes of the visual scene. However, the time taken to process different attributes varies; consequently, we see some attributes before others. It follows that there is a perceptual asynchrony and hierarchy in visual perception. Because perceiving an attribute is tantamount to becoming conscious of it, it follows that we become conscious of different attributes at different times. Visual consciousness is therefore distributed in time. Given that we become conscious…
10

Benedetti, Ginevra. "Quando gli attributi travalicano il signum. Riflessioni sull’identità visuale degli dèi a Roma = When attributes go beyond the signum. Remarks on the visual identity of the gods in Rome." ARYS. Antigüedad: Religiones y Sociedades, no. 17 (November 20, 2019): 105. http://dx.doi.org/10.20318/arys.2019.4601.

Abstract:
In this work we analyse, through the pages of the Latin authors, the semiotic construction underlying the visual representation of the gods in Roman culture; each of them possessed some attribute or combination of attributes able to identify them with greater or lesser certainty, what the ancient authors called insignia, "special signs" that guided the interpretation/identification of a signum. In particular, we examine some concrete objects employed by Roman culture to construct divine images in their…
11

Wierenga, Christina E., William M. Perlstein, Michelle Benjamin, et al. "Neural substrates of object identification: Functional magnetic resonance imaging evidence that category and visual attribute contribute to semantic knowledge." Journal of the International Neuropsychological Society 15, no. 2 (2009): 169–81. http://dx.doi.org/10.1017/s1355617709090468.

Abstract:
Recent findings suggest that neural representations of semantic knowledge contain information about category, modality, and attributes. Although an object’s category is defined according to shared attributes that uniquely distinguish it from other category members, a clear dissociation between visual attribute and category representation has not yet been reported. We investigated the contribution of category (living and nonliving) and visual attribute (global form and local details) to semantic representation in the fusiform gyrus. During functional magnetic resonance imaging (fMRI), 4…
12

Satterthwaite, Matthew, and Lesley K. Fellows. "Characterization of a food image stimulus set for the study of multi-attribute decision-making." MNI Open Research 2 (August 28, 2018): 4. http://dx.doi.org/10.12688/mniopenres.12791.1.

Abstract:
Everyday decisions are generally made between options that vary on multiple different attributes. These might vary from basic biological attributes (e.g. caloric density of a food) to higher-order attributes like healthiness or aesthetic appeal. There is a long tradition of studying the processes involved in explicitly multi-attribute decisions, with information presented in a table, for example. However, most naturalistic choices require attribute information to be identified from the stimulus during evaluation or value comparison. Well-characterized stimulus sets are needed to support behavi…
13

Kellenbach, Marion L., Albertus A. Wijers, Marjolijn Hovius, Juul Mulder, and Gijsbertus Mulder. "Neural Differentiation of Lexico-Syntactic Categories or Semantic Features? Event-Related Potential Evidence for Both." Journal of Cognitive Neuroscience 14, no. 4 (2002): 561–77. http://dx.doi.org/10.1162/08989290260045819.

Abstract:
Event-related potentials (ERPs) were used to investigate whether processing differences between nouns and verbs can be accounted for by the differential salience of visual-perceptual and motor attributes in their semantic specifications. Three subclasses of nouns and verbs were selected, which differed in their semantic attribute composition (abstract, high visual, high visual and motor). Single visual word presentation with a recognition memory task was used. While multiple robust and parallel ERP effects were observed for both grammatical class and attribute type, there were no interactions…
14

Pham, D. T., and E. J. Bayro-Corrochano. "Neural Classifiers for Automated Visual Inspection." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 208, no. 2 (1994): 83–89. http://dx.doi.org/10.1243/pime_proc_1994_208_166_02.

Abstract:
This paper discusses the application of a back-propagation multi-layer perceptron and a learning vector quantization network to the classification of defects in valve stem seals for car engines. Both networks were trained with vectors containing descriptive attributes of known flaws. These attribute vectors (‘signatures’) were extracted from images of the seals captured by an industrial vision system. The paper describes the hardware and techniques used and the results obtained.
15

Guan, Weili, Zhaozheng Chen, Fuli Feng, Weifeng Liu, and Liqiang Nie. "Urban Perception: Sensing Cities via a Deep Interactive Multi-task Learning Framework." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (2021): 1–20. http://dx.doi.org/10.1145/3424115.

Abstract:
Social scientists have shown evidence that visual perceptions of urban attributes, such as safe, wealthy, and beautiful perspectives of the given cities, are highly correlated to the residents’ behaviors and quality of life. Despite their significance, measuring visual perceptions of urban attributes is challenging due to the following facts: (1) Visual perceptions are subjectively contradistinctive rather than absolute. (2) Perception comparisons between image pairs are usually conducted region by region, and highly related to the specific urban attributes. And (3) the urban attributes have b…
16

Qi, Yuankai, Shengping Zhang, Weigang Zhang, Li Su, Qingming Huang, and Ming-Hsuan Yang. "Learning Attribute-Specific Representations for Visual Tracking." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8835–42. http://dx.doi.org/10.1609/aaai.v33i01.33018835.

Abstract:
In recent years, convolutional neural networks (CNNs) have achieved great success in visual tracking. Most of existing methods train or fine-tune a binary classifier to distinguish the target from its background. However, they may suffer from the performance degradation due to insufficient training data. In this paper, we show that attribute information (e.g., illumination changes, occlusion and motion) in the context facilitates training an effective classifier for visual tracking. In particular, we design an attribute-based CNN with multiple branches, where each branch is responsible for cla…
17

Greco-Vigorito, Carolyn. "Categorization Based on Attribute versus Relational Similarity in 4-To 10-Month-Old Infants." Perceptual and Motor Skills 82, no. 3 (1996): 915–27. http://dx.doi.org/10.2466/pms.1996.82.3.915.

Abstract:
4- to 10-month-old infants were tested in 2 experiments to determine whether they used a similar attribute or a similar relationship among attributes to make visual judgments of similarity and categorization. In Exp. 1 infants were familiarized with a single stimulus composed of several attributes and a prescribed relationship among the attributes, left wing smaller than right wing. When tested in a novelty-preference procedure with novel stimuli that either preserved a single attribute but violated the relationship (Attribute Test Stimulus) or preserved the relationship with a new set of attr…
18

Li, Yang, Huahu Xu, Minjie Bian, and Junsheng Xiao. "Attention Based CNN-ConvLSTM for Pedestrian Attribute Recognition." Sensors 20, no. 3 (2020): 811. http://dx.doi.org/10.3390/s20030811.

Abstract:
As a result of its important role in video surveillance, pedestrian attribute recognition has become an attractive facet of computer vision research. Because of the changes in viewpoints, illumination, resolution and occlusion, the task is very challenging. In order to resolve the issue of unsatisfactory performance of existing pedestrian attribute recognition methods resulting from ignoring the correlation between pedestrian attributes and spatial information, in this paper, the task is regarded as a spatiotemporal, sequential, multi-label image classification problem. An attention-based neur…
19

Balcombe, Kelvin, Iain Fraser, and Eugene McSorley. "Visual Attention and Attribute Attendance in Multi-Attribute Choice Experiments." Journal of Applied Econometrics 30, no. 3 (2014): 447–67. http://dx.doi.org/10.1002/jae.2383.

20

Spalek, Thomas M., Jun-ichiro Kawahara, and Vincent Di Lollo. "Flicker is a primitive visual attribute in visual search." Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale 63, no. 4 (2009): 319–22. http://dx.doi.org/10.1037/a0015716.

21

Chen, Ke, Kui Jia, Zhaoxiang Zhang, and Joni-Kristian Kämäräinen. "Spectral attribute learning for visual regression." Pattern Recognition 66 (June 2017): 74–81. http://dx.doi.org/10.1016/j.patcog.2017.01.009.

22

Zhang, Jing, Fu-Wu Li, Wei-Zhi Nie, Wen-Hui Li, and Yu-Ting Su. "Visual attribute detection for pedestrian detection." Multimedia Tools and Applications 78, no. 19 (2016): 26833–50. http://dx.doi.org/10.1007/s11042-016-4258-5.

23

Li, Shaoxin, Shiguang Shan, Shuicheng Yan, and Xilin Chen. "Relative Forest for Visual Attribute Prediction." IEEE Transactions on Image Processing 25, no. 9 (2016): 3991–4003. http://dx.doi.org/10.1109/tip.2016.2580939.

24

Zheng, Jingjing, Zhuolin Jiang, and Rama Chellappa. "Submodular Attribute Selection for Visual Recognition." IEEE Transactions on Pattern Analysis and Machine Intelligence 39, no. 11 (2017): 2242–55. http://dx.doi.org/10.1109/tpami.2016.2636827.

25

Mirjalili, Fereshteh, and Jon Yngve Hardeberg. "Appearance perception of textiles: a tactile and visual texture study." Color and Imaging Conference 2019, no. 1 (2019): 43–48. http://dx.doi.org/10.2352/issn.2169-2629.2019.27.9.

Abstract:
Texture analysis and characterization based on human perception has been continuously sought after by psychology and computer vision researchers. However, the fundamental question of how humans truly perceive texture still remains. In the present study, using a series of textile samples, the most important perceptual attributes people use to interpret and evaluate the texture properties of textiles were accumulated through the verbal description of texture by a group of participants. Smooth, soft, homogeneous, geometric variation, random, repeating, regular, color variation, strong, and compli…
26

Saghafi, Mohammadali, Aini Hussain, Mohamad Hanif Md. Saad, Mohd Asyraf Zulkifley, Nooritawati Md Tahir, and Mohd Faisal Ibrahim. "Pose and Illumination Invariance of Attribute Detectors in Person Re-identification." International Journal of Engineering & Technology 7, no. 4.11 (2018): 174. http://dx.doi.org/10.14419/ijet.v7i4.11.20796.

Abstract:
The use of attributes in person re-identification and video surveillance applications has grabbed attentions of many researchers in recent times. Attributes are suitable tools for mid-level representation of a part or a region in an image as it is more similar to human perception as compared to the quantitative nature of the normal visual features description of those parts. Hence, in this paper, the preliminary experimental results to evaluate the robustness of attribute detectors against pose and light variations in contrast to the use of local appearance features is discussed. Results attai…
27

Yu, Qiang, Xinyu Xiao, Chunxia Zhang, Lifei Song, and Chunhong Pan. "Extracting Effective Image Attributes with Refined Universal Detection." Sensors 21, no. 1 (2020): 95. http://dx.doi.org/10.3390/s21010095.

Abstract:
Recently, image attributes containing high-level semantic information have been widely used in computer vision tasks, including visual recognition and image captioning. Existing attribute extraction methods map visual concepts to the probabilities of frequently-used words by directly using Convolutional Neural Networks (CNNs). Typically, two main problems exist in those methods. First, words of different parts of speech (POSs) are handled in the same way, but non-nominal words can hardly be mapped to visual regions through CNNs only. Second, synonymous nominal words are treated as independent…
28

Zhang, Gaochao, Jun Yang, and Jing Jin. "Assessing Relations among Landscape Preference, Informational Variables, and Visual Attributes." Journal of Environmental Engineering and Landscape Management 29, no. 3 (2021): 294–304. http://dx.doi.org/10.3846/jeelm.2021.15584.

Abstract:
The theory of preference matrix proposes coherence and complexity as informational variables to explain landscape preferences. To understand the relationship between the perceived coherence/complexity and the visual attributes of landscape scenes, we constructed multivariate generalized linear models based on a questionnaire study. A total of 488 respondents’ ratings of the preference, the perceived coherence and complexity, and four visual attributes, namely, the openness of visual scale (openness), the richness of composing elements (richness), the orderliness of organization (orderliness)…
29

Carbonaro, Michael. "Making a Connection between Computational Modeling and Educational Research." Journal of Educational Computing Research 28, no. 1 (2003): 63–81. http://dx.doi.org/10.2190/l1th-3v6m-2w5q-8ltj.

Abstract:
Bruner, Goodnow, and Austin's (1956) research on concept development is re-examined from a connectionist perspective. A neural network was constructed which associates positive and negative instances of a concept with their corresponding attribute values. Two methods were used to help preserve the ecological validity of the input: 1) closely mapping the input to the actual visual stimuli; and 2) structuring the output layer based on Gagne's (1962, 1985) work on human concept learning. This resulted in the addition of output units referred to as attribute context constraints. These units requir…
30

Wijaya, Marvin Chandra, Zulisman Maksom, and Muhammad Haziq Lim Abdullah. "A Brief of Review: Multimedia Authoring Tool Attributes." Ingénierie des systèmes d'information 26, no. 1 (2021): 1–11. http://dx.doi.org/10.18280/isi.260101.

Abstract:
Multimedia authoring is the process of assembling various types of media content such as audio, video, text, images, and animation into a multimedia presentation using tools. Multimedia Authoring Tool is a useful tool that helps authors to create multimedia presentations. Multimedia presentations are very widely used in various fields, such as broadcast digital information delivery, digital visual communication in smart cars, and others. The Multimedia Authoring tool attributes are the factors that determine the quality of a multimedia authoring tool. A multimedia authoring tool needs to have…
31

"Vigabatrin-attributed visual field defects." Reactions Weekly, no. 790 (2000): 5. http://dx.doi.org/10.2165/00128415-200007900-00011.

32

Gorea, Andrei, Florent Caetta, and Dov Sagi. "Criteria interactions across visual attributes." Vision Research 45, no. 19 (2005): 2523–32. http://dx.doi.org/10.1016/j.visres.2005.03.018.

33

Marzoli, S. Bianchi, and A. Criscuoli. "Headaches attributed to visual disturbances." Neurological Sciences 36, S1 (2015): 85–88. http://dx.doi.org/10.1007/s10072-015-2167-4.

34

Japar, Nurul, Ven Jyn Kok, and Chee Seng Chan. "Collectiveness analysis with visual attributes." Neurocomputing 463 (November 2021): 77–90. http://dx.doi.org/10.1016/j.neucom.2021.08.038.

35

Zeng, Qiong, Wenzheng Chen, Zhuo Han, et al. "Group optimization for multi-attribute visual embedding." Visual Informatics 2, no. 3 (2018): 181–89. http://dx.doi.org/10.1016/j.visinf.2018.09.004.

36

Kasai, Seito, Naofumi Akimoto, Masaki Hayashi, and Yoshimitsu Aoki. "Visual Attribute Manipulation Using Natural Language Commands." Journal of the Japan Society for Precision Engineering 85, no. 12 (2019): 1102–9. http://dx.doi.org/10.2493/jjspe.85.1102.

37

Morita, Masahiko, Shigemitsu Morokami, and Hiromi Morita. "Attribute Pair-Based Visual Recognition and Memory." PLoS ONE 5, no. 3 (2010): e9571. http://dx.doi.org/10.1371/journal.pone.0009571.

38

Gratzl, Samuel, Alexander Lex, Nils Gehlenborg, Hanspeter Pfister, and Marc Streit. "LineUp: Visual Analysis of Multi-Attribute Rankings." IEEE Transactions on Visualization and Computer Graphics 19, no. 12 (2013): 2277–86. http://dx.doi.org/10.1109/tvcg.2013.173.

39

Liao, Jing, Yuan Yao, Lu Yuan, Gang Hua, and Sing Bing Kang. "Visual attribute transfer through deep image analogy." ACM Transactions on Graphics 36, no. 4 (2017): 1–15. http://dx.doi.org/10.1145/3072959.3073683.

40

Grebitus, Carola, and Jutta Roosen. "Influence of non-attendance on choices with varying complexity." European Journal of Marketing 52, no. 9/10 (2018): 2151–72. http://dx.doi.org/10.1108/ejm-02-2017-0143.

Abstract:
Purpose: The purpose of this research is to test how varying the numbers of attributes and alternatives affects the use of heuristics and selective information processing in discrete choice experiments (DCEs). The effects of visual attribute and alternative non-attendance (NA) on respondent choices are analyzed.
Design/methodology/approach: Two laboratory experiments that combined eye tracking and DCEs were conducted with 109 and 117 participants in the USA. The DCEs varied in task complexity by the number of product attributes and alternatives.
Findings: Results suggest that participants ignore…
41

Jonas, Clare, Mary Jane Spiller, and Paul Hibbard. "Summation of visual attributes in auditory–visual crossmodal correspondences." Psychonomic Bulletin & Review 24, no. 4 (2017): 1104–12. http://dx.doi.org/10.3758/s13423-016-1215-2.

42

Yano, Katsuya, and Takeshi Kohama. "Visual search model considering spatial modification of visual attributes." Neuroscience Research 71 (September 2011): e256. http://dx.doi.org/10.1016/j.neures.2011.07.1116.

43

Oliveira, Nuno, Maria João Varanda Pereira, Pedro Rangel Henriques, Daniela da Cruz, and Bastian Cramer. "VisualLISA: A visual environment to develop attribute grammars." Computer Science and Information Systems 7, no. 2 (2010): 265–89. http://dx.doi.org/10.2298/csis1002265o.

Abstract:
The focus of this paper is on crafting a new visual language for attribute grammars (AGs), and on the development of the associated programming environment. We present a solution for rapid development of VisualLISA editor using DEViL. DEViL uses traditional attribute grammars, to specify the language's syntax and semantics, extended by visual representations to be associated with grammar symbols. From these specifications a visual programming environment is automatically generated. In our case, the environment allows us to edit a visual description of an AG that is automatically translated int…
44

DeLong, Karen L., Konstantinos G. Syrengelas, Carola Grebitus, and Rodolfo M. Nayga. "Visual versus Text Attribute Representation in Choice Experiments." Journal of Behavioral and Experimental Economics 94 (October 2021): 101729. http://dx.doi.org/10.1016/j.socec.2021.101729.

45

Liu, Jie, and Qi Wang. "Study on Building Materials and Building Color Attribute Changes in Cold Regional for Weather Factors and Distance." Advanced Materials Research 1014 (July 2014): 263–66. http://dx.doi.org/10.4028/www.scientific.net/amr.1014.263.

Abstract:
The sensory experience of visual perception and quantification of physical properties of colors are combined in this paper, and with the colors commonly used in buildings materials in the cold region of China as an example, based on the visual perception principle, the changes in such color attributes of buildings as chromaticness, blackness and hue in vision in different weather and observation distance conditions are analyzed. The result shows that the stimulus degree of chromaticness and blackness decreases with the increase in observation distance, directly related to weather changes, whil…
46

Jiang, Yuhong V., Joshua M. Shupe, Khena M. Swallow, and Deborah H. Tan. "Memory for recently accessed visual attributes." Journal of Experimental Psychology: Learning, Memory, and Cognition 42, no. 8 (2016): 1331–37. http://dx.doi.org/10.1037/xlm0000231.

47

Xiao, Tan, Chao Zhang, and Hongbin Zha. "Anomaly Detection via Midlevel Visual Attributes." Mathematical Problems in Engineering 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/343869.

Abstract:
Automatically discovering anomalous events and objects from surveillance videos plays an important role in real-world application and has attracted considerable attention in computer vision community. However it is still a challenging issue. In this paper, a novel approach for automatic anomaly detection is proposed. Our approach is highly efficient; thus it can perform real-time detection. Furthermore, it can also handle multiscale detection and can cope with spatial and temporal anomalies. Specifically, local features capturing both appearance and motion characteristics of videos are extract…
48

Coltheart, M. "A semantic subsystem of visual attributes." Neurocase 4, no. 4 (1998): 353a–370. http://dx.doi.org/10.1093/neucas/4.4.353-a.

49

Coltheart, Max, Lesley Inglis, Linda Cupples, Pat Michie, Andree Bates, and Bill Budd. "A Semantic Subsystem of Visual Attributes." Neurocase 4, no. 4-5 (1998): 353–70. http://dx.doi.org/10.1080/13554799808410632.

50

Kaas, Jon H. "Why Does the Brain Have So Many Visual Areas?" Journal of Cognitive Neuroscience 1, no. 2 (1989): 121–35. http://dx.doi.org/10.1162/jocn.1989.1.2.121.

Abstract:
Mammals vary in number of visual areas from a few to 20 or more as a result of new visual areas being added to the middle levels of processing hierarchies. Having more visual areas probably increases visual abilities, perhaps in part by allowing more stimulus parameters to be considered. Proposals that each visual area computes and thereby “detects” a specific stimulus attribute have so far dealt with attributes that most mammals can detect and thus do not relate to the issue of species differences in numbers of areas. The problem of forming and maintaining complex patterns of interconnections…