Academic literature on the topic 'Attribut visuel'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Attribut visuel.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Attribut visuel"

1

Lazaridou, Angeliki, Georgiana Dinu, Adam Liska, and Marco Baroni. "From Visual Attributes to Adjectives through Decompositional Distributional Semantics." Transactions of the Association for Computational Linguistics 3 (December 2015): 183–96. http://dx.doi.org/10.1162/tacl_a_00132.

Abstract:
As automated image analysis progresses, there is increasing interest in richer linguistic annotation of pictures, with attributes of objects (e.g., furry, brown…) attracting most attention. By building on the recent “zero-shot learning” approach, and paying attention to the linguistic nature of attributes as noun modifiers, and specifically adjectives, we show that it is possible to tag images with attribute-denoting adjectives even when no training data containing the relevant annotation are available. Our approach relies on two key observations. First, objects can be seen as bundles of attri
2

de Souza, Kelly Rejane, Rogério Melloni, and Gustavo Magno dos Reis Ferreira. "Qualidade Ambiental de Áreas de Pastagem por Meio de Atributos Visuais." Revista Brasileira de Geografia Física 11, no. 5 (2018): 1776–85. http://dx.doi.org/10.26848/rbgf.v11.5.p1776-1785.

3

Schweizer, Tom A., and Mike J. Dixon. "The influence of visual and nonvisual attributes in visual object identification." Journal of the International Neuropsychological Society 12, no. 2 (2006): 176–83. http://dx.doi.org/10.1017/s1355617706060279.

Abstract:
To elucidate the role of visual and nonvisual attribute knowledge on visual object identification, we present data from three patients, each with visual object identification impairments as a result of different etiologies. Patients were shown novel computer-generated shapes paired with different labels referencing known entities. On test trials they were shown the novel shapes alone and had to identify them by generating the label with which they were formerly paired. In all conditions the same triad of computer-generated shapes were used. In one condition, the labels (banjo, guitar, violin)
4

Li, Qiaozhe, Xin Zhao, Ran He, and Kaiqi Huang. "Visual-Semantic Graph Reasoning for Pedestrian Attribute Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8634–41. http://dx.doi.org/10.1609/aaai.v33i01.33018634.

Abstract:
Pedestrian attribute recognition in surveillance is a challenging task due to poor image quality, significant appearance variations and diverse spatial distribution of different attributes. This paper treats pedestrian attribute recognition as a sequential attribute prediction problem and proposes a novel visual-semantic graph reasoning framework to address this problem. Our framework contains a spatial graph and a directed semantic graph. By performing reasoning using the Graph Convolutional Network (GCN), one graph captures spatial relations between regions and the other learns potential sem
5

Casey, Elizabeth J. "Visual Display Representation of Multidimensional Systems: The Effect of Information Correlation and Display Integrality." Proceedings of the Human Factors Society Annual Meeting 30, no. 5 (1986): 430–34. http://dx.doi.org/10.1177/154193128603000504.

Abstract:
This study provides data regarding the use of object displays and schematic face displays to present dynamic, multivariate system information. Twelve subjects detected and diagnosed failures in a system whose variables were intercorrelated. Three visual, analog displays–a bar graph display, a pentagon, and a schematic face display–represented the system. These displays differed in the degree of integrality of their component features. Detection performance yielded a speed/accuracy tradeoff with little evidence of superiority for any of the displays. However, diagnosis performance showed a supe
6

Gulshad, Sadaf, and Arnold Smeulders. "Counterfactual attribute-based visual explanations for classification." International Journal of Multimedia Information Retrieval 10, no. 2 (2021): 127–40. http://dx.doi.org/10.1007/s13735-021-00208-3.

Abstract:
In this paper, our aim is to provide human understandable intuitive factual and counterfactual explanations for the decisions of neural networks. Humans tend to reinforce their decisions by providing attributes and counterattributes. Hence, in this work, we utilize attributes as well as examples to provide explanations. In order to provide counterexplanations we make use of directed perturbations to arrive at the counterclass attribute values; in doing so, we explain what is present and what is absent in the original image. We evaluate our method when images are misclassified into close
7

Papathomas, T. V., I. Kovács, and A. Feher. "Interocular Grouping of Visual Attributes during Binocular Rivalry." Perception 26, no. 1_suppl (1997): 304. http://dx.doi.org/10.1068/v970377.

Abstract:
The need to revise the eye competition hypothesis of binocular rivalry, and to include the role of stimulus competition has been demonstrated recently by Kovács, Papathomas, Feher, and Yang (1996 Proceedings of the National Academy of Sciences of the USA 93 15508–15511) and Logothetis, Leopold, and Sheinberg [1996 Nature (London) 380 621–624]. Kovács et al showed that observers can obtain one-colour percepts when presented with chromatically rivalrous stimuli, even when there are targets of two different colours in each eye. In this study we investigate whether other attributes, in addition
8

Poom, Leo. "Visual Inter-Attribute Contour Completion." Perception 30, no. 7 (2001): 855–65. http://dx.doi.org/10.1068/p3222.

Abstract:
A new visual phenomenon, inter-attribute illusory (completed) contours, is demonstrated. Contour completions are perceived between any combination of spatially separate pairs of inducing elements (Kanizsa-like ‘pacman’ figures) defined either by pictorial cues (luminance contrast or offset gratings), temporal contrast (motion, second-order-motion or ‘phantom’ contours), or binocular-disparity contrast. In a first experiment, observers reported the perceived occurrence of contour completion for all pair combinations of inducing elements. In a second experiment they rated the perceived clarity o
9

Zeki, Semir. "The Ferrier Lecture 1995 Behind the Seen: The functional specialization of the brain in space and time." Philosophical Transactions of the Royal Society B: Biological Sciences 360, no. 1458 (2005): 1145–83. http://dx.doi.org/10.1098/rstb.2005.1666.

Abstract:
The visual brain consists of many different visual areas, which are functionally specialized to process and perceive different attributes of the visual scene. However, the time taken to process different attributes varies; consequently, we see some attributes before others. It follows that there is a perceptual asynchrony and hierarchy in visual perception. Because perceiving an attribute is tantamount to becoming conscious of it, it follows that we become conscious of different attributes at different times. Visual consciousness is therefore distributed in time. Given that we become conscious
10

Benedetti, Ginevra. "Quando gli attributi travalicano il signum. Riflessioni sull’identità visuale degli dèi a Roma = When attributes go beyond the signum. Remarks on the visual identity of the gods in Rome." ARYS. Antigüedad: Religiones y Sociedades, no. 17 (November 20, 2019): 105. http://dx.doi.org/10.20318/arys.2019.4601.

Abstract:
In this work we set out to analyse, through the pages of the Latin authors, the semiotic construction underlying the visual representation of the gods in Roman culture; each of them in fact possessed some attribute or combination of attributes capable of identifying them with greater or lesser certainty, what the ancient authors called insignia, "special signs" that guided the interpretation/identification of a signum. In particular, we will examine some concrete objects used by Roman culture to construct divine images in their

Dissertations / Theses on the topic "Attribut visuel"

1

Gast, Alexander. "Identification with Game Characters : Effects of visual attributes on the identification process between players and characters." Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-32341.

Abstract:
The concept of identity within digital games is believed to be a prominent subject, as the bond between the player and the character could potentially enhance the gameplay experience. There is as yet a lack of studies addressing the visual identification of predefined game characters. Therefore, this study aims to examine how identification is established through the visual attributes of a game character. To this end, a qualitative online survey was undertaken, gathering responses from 350 respondents. The responses were analysed using thematic analysis, and the elicited themes indicate that the i
2

Nilsson, Emma. "Menstruella klichéer - En visuell diskursanalys om hur visuella attribut påverkar föreställningen om menstruation." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-21875.

Abstract:
This essay analyses nine packages of sanitary pads from three different brands: Always, Ica and Libresse, in order to see how their visual attributes affect the notion of menstruation and of the menstruating person. Packaging plays several roles, but in this essay the focus is on its role as communicator, that is, the one-way communication that the packages represent. The analysis was carried out in two steps, a visual analysis and a discourse analysis, and is grounded in a gender-studies perspective with theories that for the most part follow that orientation. That certain
3

Nébouy, David. "Printing quality assessment by image processing and color prediction models." Thesis, Saint-Etienne, 2015. http://www.theses.fr/2015STET4018/document.

Abstract:
Printing, although an old technique for colouring surfaces, has made considerable progress in recent years, mainly thanks to the digital revolution. Professionals who want to meet their customers' requirements in terms of the quality of the visual rendering therefore want to know to what extent human observers are sensitive to the degradation of an image. Such questions about the perceived quality of a reproduced image can be separated into two different topics: printing quality, that is, the ability of a printing system to re
4

Eymond, Cécile. "L'attention sélective et les traits visuels dans la correspondance transsaccadique." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB234.

Abstract:
Each eye saccade abruptly shifts the image projected onto the retina. Yet our perception of the world remains stable and uniform, because the visual system matches the information from before and after each saccade. Attentional mechanisms are thought to be fundamental in establishing this correspondence. Until now, this transsaccadic link has been demonstrated by studies focusing mainly on the processing of spatial information, namely how the retinal position of an object is corrected at each saccade to maintain a stable perception of the world. The processing
5

Nilsson, Nathalie. "Stereotyper och Yrkesroller : En undersökning om igenkänning via yrkesrelaterade attribut." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-12457.

Abstract:
This study examined whether a character's clothes or other attributes affect the viewer's perception of the character. Above all, it sought indications of which stereotypical attributes have the greatest impact: clothes, or other objects and accessories? The focus of the study has thus been the character's visually applicable attributes, its clothes and accessories, not its physiognomy, gender or ethnicity. A female figure designed as a centaur was used as a kind of paper doll onto which the various attributes were applied.
6

Grossmann, Jon K. "Competition in multistable vision is attribute-specific." Birmingham, Ala. : University of Alabama at Birmingham, 2007. https://www.mhsl.uab.edu/dt/2007r/grossmann.pdf.

Abstract:
Thesis (Ph. D.)--University of Alabama at Birmingham, 2007. Additional advisors: Timothy Gawne, Richard Gray, Michael Loop, Michael Sloane, Donald Twieg. Description based on contents viewed Mar. 3, 2008; title from title screen. Includes bibliographical references (p. 88-97).
7

Hanwell, David. "Weakly supervised learning of visual semantic attributes." Thesis, University of Bristol, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.687063.

Abstract:
There are at present many billions of images on the internet, only a fraction of which are labelled according to their semantic content. To automatically provide labels for the rest, models of visual semantic concepts must be created. Such models are traditionally trained using images which have been manually acquired, segmented, and labelled. In this thesis, we submit that such models can be learned automatically using those few images which have already been labelled, either directly by their creators, or indirectly by their associated text. Such imagery can be acquired easily, cheaply, and
8

Conway, Miriam. "Investigation into visual defects attributed to Vigabatrin." Thesis, Aston University, 2003. http://publications.aston.ac.uk/14652/.

Abstract:
Vigabatrin (VGB) is a transaminase inhibitor that elicits its antiepileptic effect by increasing GABA concentrations in the brain and retina. - Assess whether certain factors predispose patients to develop severe visual field loss. - Develop a sensitive algorithm for investigating the progression of visual field loss. - Determine the most sensitive clinical regimen for diagnosing VGB-attributed visual field loss. - Investigate whether the reports of central retinal sparing are accurate. The investigations have resulted in a number of significant findings: - The anatomical evidence in combinati
9

Fu, Yanwei. "Attribute learning for image/video understanding." Thesis, Queen Mary, University of London, 2015. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8920.

Abstract:
For the past decade computer vision research has achieved increasing success in visual recognition including object detection and video classification. Nevertheless, these achievements still cannot meet the urgent needs of image and video understanding. The recently rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. In particular, these types of media data usually contain very complex social activities of a group of people (e.g. YouTube video of a wedding reception) and are captured by consumer devices with poor vis
10

Mei, Yuanxun. "Visualization of Wine Attributes." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-6159.

Abstract:
With the development of the Internet and the rapid increase of data, information visualization is becoming more and more popular. Since human eyes receive visual information very quickly and easily, visualization can make complex and large data more understandable. Describing sensory perceptions, such as taste, is a challenging task. For a customer, the visualization of the taste of a specific wine together with the other wine attributes such as color and grape type would help him/her choose the right one. In the thesis, two suitable representations of wine attributes are implemented

Books on the topic "Attribut visuel"

1

Feris, Rogerio Schmidt, Christoph Lampert, and Devi Parikh, eds. Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5.

2

Jörgensen, Corinne. Image attributes: An investigation. UMI, 2000.

3

Price, Andrew John. A study to determine the effect of certain visual attributes on basketball shooting accuracy. S.G.I.H.E., 1986.

4

Ham, Tao Yao. A cross-cultural comparison of preference for visual attributes in interior environments: America and China. UMI, 1998.

5

Parikh, Devi, Christoph Lampert, and Rogerio Schmidt Feris. Visual Attributes. Springer, 2018.

6

Agrawal, Anurag. The expressive power and declarative attributes of exception handling in Forms/3. 1997.

7

Traul, David E. Postoperative Visual Loss in Spine Surgery. Edited by David E. Traul and Irene P. Osborn. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780190850036.003.0026.

Abstract:
Postoperative visual loss (POVL) is a rare but devastating condition associated with many types of nonocular surgery. In spine surgery, the most common causes of POVL are ischemic optic neuropathy (ION), central retinal artery occlusion (CRAO), and cortical blindness. Although the association of POVL with spine surgery has long been recognized, the low incidence of this complication hinders the identification of patient and perioperative risk factors and limits our understanding of the causes of POVL. In adult spine surgery, POVL is most frequently attributed to ION whereas CRAO is more common
8

Hurlbert, Anya. The Chromatic Mach Card. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199794607.003.0049.

Abstract:
The object colors that we see are constructed by the visual brain and may therefore be significantly influenced by other visual attributes we perceive the object to possess. This chapter describes an illusion that illustrates one such interdependence between perceived object shape and color. The Chromatic Mach Card is a folded concave card, one side painted white and the other magenta. When the card is perceived in inverted depth, or convex, the pinkish reflections cast by the magenta side onto the white side appear deeper in saturation and painted thereon. Like the nineteenth-century Mach Car
9

Sperling, George, Son-Hee Lyu, Chia-Huei Tseng, and Zhong-Lin Lu. The Motion Standstill Illusion. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199794607.003.0078.

Abstract:
In the motion standstill illusion, a pattern that is moving quite rapidly is perceived as being absolutely motionless, and yet its details are not blurred but clearly visible. The illusion can be observed in a wide variety of special moving stimuli that either disadvantage or fatigue the motion systems to the point where no motion is perceived but where the shape, texture, color, and depth systems are still able to function sufficiently to extract a stable image from the moving display. It demonstrates that visual processing systems for attributes such as shape, texture, color, and depth extra
10

Stevenson, Alice. Predynastic Egyptian Figurines. Edited by Timothy Insoll. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199675616.013.004.

Abstract:
Anthropomorphic figurines attributed to fourth millennium bc predynastic Egypt are exceptionally rare. This chapter focuses its attention on the even smaller subset of those representations that can be contextualized archaeologically. This more selective treatment is intended to shift the core of the discussion of these artefacts from the usual focus upon visual representation towards consideration of embodiment and the spaces in which these things were made, encountered, and experienced. In particular, it is argued that figurines were affective devices that elicited emotional attention within

Book chapters on the topic "Attribut visuel"

1

Feris, Rogerio Schmidt, Christoph Lampert, and Devi Parikh. "Introduction to Visual Attributes." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_1.

2

Maji, Subhransu. "A Taxonomy of Part and Attribute Discovery Techniques." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_10.

3

Patterson, Genevieve, and James Hays. "The SUN Attribute Database: Organizing Scenes by Affordances, Materials, and Layout." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_11.

4

Rohrbach, Marcus. "Attributes as Semantic Units Between Natural Language and Visual Recognition." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_12.

5

Silberer, Carina. "Grounding the Meaning of Words with Visual Attributes." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_13.

6

Romera-Paredes, Bernardino, and Philip H. S. Torr. "An Embarrassingly Simple Approach to Zero-Shot Learning." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_2.

7

Sharmanska, Viktoriia, and Novi Quadrianto. "In the Era of Deep Convolutional Features: Are Attributes Still Useful Privileged Data?" In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_3.

8

Chen, Chao-Yeh, Dinesh Jayaraman, Fei Sha, and Kristen Grauman. "Divide, Share, and Conquer: Multi-task Attribute Learning with Selective Sharing." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_4.

9

Kovashka, Adriana, and Kristen Grauman. "Attributes for Image Retrieval." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_5.

10

Yu, Aron, and Kristen Grauman. "Fine-Grained Comparisons with Attributes." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_6.


Conference papers on the topic "Attribut visuel"

1

Liang, Kongming, Yuhong Guo, Hong Chang, and Xilin Chen. "Incomplete Attribute Learning with auxiliary labels." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/313.

Abstract:
Visual attribute learning is a fundamental and challenging problem for image understanding. Considering the huge semantic space of attributes, it is economically impossible to annotate all their presence or absence for a natural image via crowd-sourcing. In this paper, we tackle the incompleteness nature of visual attributes by introducing auxiliary labels into a novel transductive learning framework. By jointly predicting the attributes from the input images and modeling the relationship of attributes and auxiliary labels, the missing attributes can be recovered effectively. In addition, the
2

Chhabra, Saheb, Richa Singh, Mayank Vatsa, and Gaurav Gupta. "Anonymizing k Facial Attributes via Adversarial Perturbations." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/91.

Abstract:
A face image not only provides details about the identity of a subject but also reveals several attributes such as gender, race, sexual orientation, and age. Advancements in machine learning algorithms and popularity of sharing images on the World Wide Web, including social media websites, have increased the scope of data analytics and information profiling from photo collections. This poses a serious privacy threat for individuals who do not want to be profiled. This research presents a novel algorithm for anonymizing selective attributes which an individual does not want to share without aff
3

Li, Qiaozhe, Xin Zhao, Ran He, and Kaiqi Huang. "Pedestrian Attribute Recognition by Joint Visual-semantic Reasoning and Knowledge Distillation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/117.

Abstract:
Pedestrian attribute recognition in surveillance is a challenging task in computer vision due to significant pose variation, viewpoint change and poor image quality. To achieve effective recognition, this paper presents a graph-based global reasoning framework to jointly model potential visual-semantic relations of attributes and distill auxiliary human parsing knowledge to guide the relational learning. The reasoning framework models attribute groups on a graph and learns a projection function to adaptively assign local visual features to the nodes of the graph. After feature projection, grap
4

Zhao, Xin, Liufang Sang, Guiguang Ding, Yuchen Guo, and Xiaoming Jin. "Grouping Attribute Recognition for Pedestrian with Joint Recurrent Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/441.

Abstract:
Pedestrian attributes recognition is to predict attribute labels of pedestrian from surveillance images, which is a very challenging task for computer vision due to poor imaging quality and small training dataset. It is observed that semantic pedestrian attributes to be recognised tend to show semantic or visual spatial correlation. Attributes can be grouped by the correlation while previous works mostly ignore this phenomenon. Inspired by Recurrent Neural Network (RNN)'s super capability of learning context correlations, this paper proposes an end-to-end Grouping Recurrent Learning (GRL) mode
5

Chen, Hui, Guiguang Ding, Zijia Lin, Sicheng Zhao, and Jungong Han. "Show, Observe and Tell: Attribute-driven Attention Model for Image Captioning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/84.

Abstract:
Despite the fact that attribute-based approaches and attention-based approaches have been proven to be effective in image captioning, most attribute-based approaches simply predict attributes independently without taking the co-occurrence dependencies among attributes into account. Besides, most attention-based captioning models directly leverage the feature map extracted from CNN, in which many features may be redundant in relation to the image content. In this paper, we focus on training a good attribute-inference model via the recurrent neural network (RNN) for image captioning, where the c
6

Xu, Xiaofeng, Ivor W. Tsang, Xiaofeng Cao, Ruiheng Zhang, and Chuancai Liu. "Learning Image-Specific Attributes by Hyperbolic Neighborhood Graph Propagation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/554.

Abstract:
As a kind of semantic representation of visual object descriptions, attributes are widely used in various computer vision tasks. In most of existing attribute-based research, class-specific attributes (CSA), which are class-level annotations, are usually adopted due to its low annotation cost for each class instead of each individual image. However, class-specific attributes are usually noisy because of annotation errors and diversity of individual images. Therefore, it is desirable to obtain image-specific attributes (ISA), which are image-level annotations, from the original class-specific a
7

Lin, Luojun, Lingyu Liang, Lianwen Jin, and Weijie Chen. "Attribute-Aware Convolutional Neural Networks for Facial Beauty Prediction." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/119.

Abstract:
Facial beauty prediction (FBP) aims to develop a machine that automatically makes facial attractiveness assessment. To a large extent, the perception of facial beauty for a human is involved with the attributes of facial appearance, which provides some significant visual cues for FBP. Deep convolution neural networks (CNNs) have shown its power for FBP, but convolution filters with fixed parameters cannot take full advantage of the facial attributes for FBP. To address this problem, we propose an Attribute-aware Convolutional Neural Network (AaNet) that modulates the filters of the main networ
8

May, Thorsten, James Davey, and Jorn Kohlhammer. "Combining statistical independence testing, visual attribute selection and automated analysis to find relevant attributes for classification." In 2010 IEEE Symposium on Visual Analytics Science and Technology (VAST). IEEE, 2010. http://dx.doi.org/10.1109/vast.2010.5654445.

9

Sylcott, Brian, Jeremy J. Michalek, and Jonathan Cagan. "Towards Understanding the Role of Interaction Effects in Visual Conjoint Analysis." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12622.

Abstract:
We investigate consumer preference interactions in visual choice-based conjoint analysis, where the conjoint attributes are parameters that define shapes shown to the respondent as images. Interaction effects are present when preference for the level of one attribute is dependent on the level of another attribute. When interaction effects are negligible, a main-effects fractional factorial experimental design can be used to reduce data requirements and survey cost. This is particularly important when the presence of many parameters or levels makes full factorial designs intractable. However, i
10

Zhao, Wenqi, Satoshi Oyama, and Masahito Kurihara. "Generating Natural Counterfactual Visual Explanations." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/742.

Abstract:
Counterfactual explanations help users to understand the behaviors of machine learning models by changing the inputs for the existing outputs. For an image classification task, an example counterfactual visual explanation explains: "for an example that belongs to class A, what changes do we need to make to the input so that the output is more inclined to class B." Our research considers changing the attribute description text of class A on the basis of the attributes of class B and generating counterfactual images on the basis of the modified text. We can use the prediction results of the mode