
Journal articles on the topic 'different object task'


Consult the top 50 journal articles for your research on the topic 'different object task.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Barrett, Maeve M., and Fiona N. Newell. "Developmental processes in audiovisual object recognition and object location." Seeing and Perceiving 25 (2012): 38. http://dx.doi.org/10.1163/187847612x646604.

Abstract:
This study investigated whether performance in recognising and locating target objects benefited from the simultaneous presentation of a crossmodal cue. Furthermore, we examined whether these ‘what’ and ‘where’ tasks were affected by developmental processes by testing across different age groups. Using the same set of stimuli, participants conducted either an object recognition task, or object location task. For the recognition task, participants were required to respond to two of four target objects (animals) and withhold response to the remaining two objects. For the location task, participa
2

Tyler, L. K., E. A. Stamatakis, P. Bright, et al. "Processing Objects at Different Levels of Specificity." Journal of Cognitive Neuroscience 16, no. 3 (2004): 351–62. http://dx.doi.org/10.1162/089892904322926692.

Abstract:
How objects are represented and processed in the brain is a central topic in cognitive neuroscience. Previous studies have shown that knowledge of objects is represented in a feature-based distributed neural system primarily involving occipital and temporal cortical regions. Research with nonhuman primates suggests that these features are structured in a hierarchical system with posterior neurons in the inferior temporal cortex representing simple features and anterior neurons in the perirhinal cortex representing complex conjunctions of features (Bussey & Saksida, 2002; Murray & Bussey,
3

Quaney, Barbara M., Randolph J. Nudo, and Kelly J. Cole. "Can Internal Models of Objects be Utilized for Different Prehension Tasks?" Journal of Neurophysiology 93, no. 4 (2005): 2021–27. http://dx.doi.org/10.1152/jn.00599.2004.

Abstract:
We examined if object information obtained during one prehension task is used to produce fingertip forces for handling the same object in a different prehension task. Our observations address the task specificity of the internal models presumed to issue commands for grasping and transporting objects. Two groups participated in a 2-day experiment in which they lifted a novel object (230 g; 1.2 g/cm3). On Day One, the high force group (HFG) lifted the object by applying 10 N of grip force prior to applying vertical lift force. This disrupted the usual coordination of grip and lift forces and rep
4

Mecklinger, A., and N. Müller. "Dissociations in the Processing of “What” and “Where” Information in Working Memory: An Event-Related Potential Analysis." Journal of Cognitive Neuroscience 8, no. 5 (1996): 453–73. http://dx.doi.org/10.1162/jocn.1996.8.5.453.

Abstract:
Based on recent research that suggests that the processing of spatial and object information in the primate brain involves functionally and anatomically different systems, we examined whether the encoding and retention of object and spatial information in working memory are associated with different ERP components. In a study-test procedure subjects were asked to either remember simple geometric objects presented in a 4 by 4 spatial matrix irrespective of their position (object memory task) or to remember spatial positions of the objects irrespective of their forms (spatial memory task). The E
5

Proud, Keaton, James B. Heald, James N. Ingram, Jason P. Gallivan, Daniel M. Wolpert, and J. Randall Flanagan. "Separate motor memories are formed when controlling different implicitly specified locations on a tool." Journal of Neurophysiology 121, no. 4 (2019): 1342–51. http://dx.doi.org/10.1152/jn.00526.2018.

Abstract:
Skillful manipulation requires forming and recalling memories of the dynamics of objects linking applied force to motion. It has been assumed that such memories are associated with entire objects. However, we often control different locations on an object, and these locations may be associated with different dynamics. We have previously demonstrated that multiple memories can be formed when participants are explicitly instructed to control different visual points marked on an object. A key question is whether this novel finding generalizes to more natural situations in which control points are
6

Kitayama, Shinobu, Sean Duffy, Tadashi Kawamura, and Jeff T. Larsen. "Perceiving an Object and Its Context in Different Cultures." Psychological Science 14, no. 3 (2003): 201–6. http://dx.doi.org/10.1111/1467-9280.02432.

Abstract:
In two studies, a newly devised test (framed-line test) was used to examine the hypothesis that individuals engaging in Asian cultures are more capable of incorporating contextual information and those engaging in North American cultures are more capable of ignoring contextual information. On each trial, participants were presented with a square frame, within which was printed a vertical line. Participants were then shown another square frame of the same or different size and asked to draw a line that was identical to the first line in either absolute length (absolute task) or proportion to th
7

Soans, Melisa Andrea. "Review on Different Methods for Real Time Object Detection for Visually Impaired." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (2022): 3414–21. http://dx.doi.org/10.22214/ijraset.2022.41438.

Abstract:
Real-time object detection is the task of doing object detection in real-time with fast inference while maintaining a base level of accuracy. Real time object detection helps the visually impaired detect the objects around them. Object detection can be done using different models such as the yolov3 model and the ssd mobilenet model. This paper aims to review and analyze the implementation and performance of various methodologies for real time object detection which will help the visually impaired. Each technique has its advantages and limitations. This paper helps in the review of
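
The review above compares real-time detectors such as YOLOv3 and SSD MobileNet. As a hedged illustration of the single-image inference loop these models share, the sketch below uses torchvision's SSDlite320 MobileNetV3 detector as a stand-in; the model choice, input file name, and 0.5 confidence threshold are assumptions for illustration, not the paper's implementation.

```python
# Minimal single-image detection sketch (assumed stand-in for the YOLOv3 / SSD
# MobileNet pipelines reviewed above; not the paper's own code).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained SSDlite + MobileNetV3 detector (recent torchvision; downloads weights).
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")  # hypothetical input frame
batch = [to_tensor(image)]                             # list of CHW float tensors in [0, 1]

with torch.no_grad():
    detections = model(batch)[0]                       # dict with boxes, labels, scores

keep = detections["scores"] > 0.5                      # assumed confidence threshold
for box, label, score in zip(detections["boxes"][keep],
                             detections["labels"][keep],
                             detections["scores"][keep]):
    print(f"class {int(label)}: score {score.item():.2f}, box {box.tolist()}")
```
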
8

Şık, Ayhan, Petra van Nieuwehuyzen, Jos Prickaerts, and Arjan Blokland. "Performance of different mouse strains in an object recognition task." Behavioural Brain Research 147, no. 1-2 (2003): 49–54. http://dx.doi.org/10.1016/s0166-4328(03)00117-7.

9

Tinguria, Ajay, and R. Sudhakar. "Extracting Task Designs Using Fuzzy and Neuro-Fuzzy Approaches." International Journal of Computer Science and Mobile Computing 11, no. 7 (2022): 72–82. http://dx.doi.org/10.47760/ijcsmc.2022.v11i07.007.

Abstract:
Several applications generate large volumes of data on movements including vehicle navigation, fleet management, wildlife tracking and in the near future cell phone tracking. Such applications require support to manage the growing volumes of movement data. Understanding how an object moves in space and time is fundamental to the development of an appropriate movement model of the object. Many objects are dynamic and their positions change with time. The ability to reason about the changing positions of moving objects over time thus becomes crucial. Explanations on movements of an object requir
10

Müller, Dagmar, István Winkler, Urte Roeber, Susann Schaffer, István Czigler, and Erich Schröger. "Visual Object Representations Can Be Formed outside the Focus of Voluntary Attention: Evidence from Event-related Brain Potentials." Journal of Cognitive Neuroscience 22, no. 6 (2010): 1179–88. http://dx.doi.org/10.1162/jocn.2009.21271.

Abstract:
There is an ongoing debate whether visual object representations can be formed outside the focus of voluntary attention. Recently, implicit behavioral measures suggested that grouping processes can occur for task-irrelevant visual stimuli, thus supporting theories of preattentive object formation (e.g., Lamy, D., Segal, H., & Ruderman, L. Grouping does not require attention. Perception and Psychophysics, 68, 17–31, 2006; Russell, C., & Driver, J. New indirect measures of “inattentive” visual grouping in a change-detection task. Perception and Psychophysics, 67, 606–623, 2005). We devel
11

Smith, Edward E., John Jonides, Robert A. Koeppe, Edward Awh, Eric H. Schumacher, and Satoshi Minoshima. "Spatial versus Object Working Memory: PET Investigations." Journal of Cognitive Neuroscience 7, no. 3 (1995): 337–56. http://dx.doi.org/10.1162/jocn.1995.7.3.337.

Abstract:
We used positron emission tomography (PET) to answer the following question: Is working memory a unitary storage system, or does it instead include different storage buffers for different kinds of information? In Experiment 1, PET measures were taken while subjects engaged in either a spatial-memory task (retain the position of three dots for 3 sec) or an object-memory task (retain the identity of two objects for 3 sec). The results manifested a striking double dissociation, as the spatial task activated only right-hemisphere regions, whereas the object task activated primarily left-hemisphere
12

Grekov, R., and A. Borisov. "CHARACTERIZATION OF THE EFFICIENCY OF THE FEATURES AGGREGATE IN FUZZY PATTERN RECOGNITION TASK." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 1 (June 27, 1997): 78. http://dx.doi.org/10.17770/etr1997vol1.1858.

Abstract:
Let a set of objects exist, each of which is described by N features X1, ..., XN, where each feature is a real number. So each object is set by an N-dimensional vector (X1, ..., XN) and represents a point in the space of object descriptions, RN. There are also set objects for which degrees of membership in either class are unknown. A decision rule should be determined that could enable estimation of the membership of either object with unknown degrees of membership in the given classes (Ozols and Borisov, 1996). To determine the decision rule, such features should be found which give a possibili
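
As a deliberately simplified illustration of the kind of decision rule discussed above, the sketch below assigns an unlabeled N-dimensional feature vector graded degrees of membership in each class from its distance to per-class prototypes; the inverse-distance rule and the toy data are assumptions for illustration, not the rule derived in the cited paper.

```python
# Toy fuzzy decision rule: membership degrees from distances to class prototypes.
# Illustrative only; the actual rule in the cited paper may differ.
import numpy as np

def membership_degrees(x, prototypes, eps=1e-9):
    """Return graded memberships of feature vector x in each class,
    proportional to inverse distance from each class prototype."""
    dists = np.array([np.linalg.norm(x - p) for p in prototypes])
    inv = 1.0 / (dists + eps)
    return inv / inv.sum()          # degrees sum to 1

# Objects described by N = 2 features (X1, X2); one prototype per class.
class_prototypes = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
unknown_object = np.array([1.0, 0.5])

print(membership_degrees(unknown_object, class_prototypes))
# approximately [0.74, 0.26]: strong membership in class 1, weaker in class 2
```
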
13

Muto, Hiroyuki. "Correlational Evidence for the Role of Spatial Perspective-Taking Ability in the Mental Rotation of Human-Like Objects." Experimental Psychology 68, no. 1 (2021): 41–48. http://dx.doi.org/10.1027/1618-3169/a000505.

Abstract:
Abstract. People can mentally rotate objects that resemble human bodies more efficiently than nonsense objects in the same/different judgment task. Previous studies proposed that this human-body advantage in mental rotation is mediated by one's projections of body axes onto a human-like object, implying that human-like objects elicit a strategy shift, from an object-based to an egocentric mental rotation. To test this idea, we investigated whether mental rotation performance involving a human-like object had a stronger association with spatial perspective-taking, which entails egocentric menta
14

Jeong, Su Keun, and Yaoda Xu. "Task-context-dependent Linear Representation of Multiple Visual Objects in Human Parietal Cortex." Journal of Cognitive Neuroscience 29, no. 10 (2017): 1778–89. http://dx.doi.org/10.1162/jocn_a_01156.

Abstract:
A host of recent studies have reported robust representations of visual object information in the human parietal cortex, similar to those found in ventral visual cortex. In ventral visual cortex, both monkey neurophysiology and human fMRI studies showed that the neural representation of a pair of unrelated objects can be approximated by the averaged neural representation of the constituent objects shown in isolation. In this study, we examined whether such a linear relationship between objects exists for object representations in the human parietal cortex. Using fMRI and multivoxel pattern ana
15

Hu, Jianqiu, Jiazhou He, Pan Jiang, and Yuwei Yin. "SOMC:A Object-Level Data Augmentation for Sea Surface Object Detection." Journal of Physics: Conference Series 2171, no. 1 (2022): 012033. http://dx.doi.org/10.1088/1742-6596/2171/1/012033.

Abstract:
Abstract The deep learning model is a data-driven model and more high-quality data will bring it better results. In the task of Unmanned Surface Vessel’s object detection based on optical images or videos, the object is sparser than the target in the natural scene. The current datasets of sea scenes often have some disadvantages such as high image acquisition costs, wide range of changes in object size, imbalance in the number of different objects and so on, which limit the generalization of the model for the detection of sea surface objects. In order to solve problems of insufficient scene an
16

Marful, Alejandra, Daniela Paolieri, and M. Teresa Bajo. "Is naming faces different from naming objects? Semantic interference in a face- and object-naming task." Memory & Cognition 42, no. 3 (2013): 525–37. http://dx.doi.org/10.3758/s13421-013-0376-8.

17

SALLEH, Ahmad Faizal, Ryojun IKEURA, Soichiro HAYAKAWA, and Hideki SAWAI. "Cooperative Object Transfer: Effect of Observing Different Part of the Object on the Cooperative Task Smoothness." Journal of Biomechanical Science and Engineering 6, no. 4 (2011): 343–60. http://dx.doi.org/10.1299/jbse.6.343.

18

Korjoukov, Ilia, Danique Jeurissen, Niels A. Kloosterman, Josine E. Verhoeven, H. Steven Scholte, and Pieter R. Roelfsema. "The Time Course of Perceptual Grouping in Natural Scenes." Psychological Science 23, no. 12 (2012): 1482–89. http://dx.doi.org/10.1177/0956797612443832.

Abstract:
Visual perception starts with localized filters that subdivide the image into fragments that undergo separate analyses. The visual system has to reconstruct objects by grouping image fragments that belong to the same object. A widely held view is that perceptual grouping occurs in parallel across the visual scene and without attention. To test this idea, we measured the speed of grouping in pictures of animals and vehicles. In a classification task, these pictures were categorized efficiently. In an image-parsing task, participants reported whether two cues fell on the same or different object
19

Chiatti, Agnese, Gianluca Bardaro, Emanuele Bastianelli, Ilaria Tiddi, Prasenjit Mitra, and Enrico Motta. "Task-Agnostic Object Recognition for Mobile Robots through Few-Shot Image Matching." Electronics 9, no. 3 (2020): 380. http://dx.doi.org/10.3390/electronics9030380.

Abstract:
To assist humans with their daily tasks, mobile robots are expected to navigate complex and dynamic environments, presenting unpredictable combinations of known and unknown objects. Most state-of-the-art object recognition methods are unsuitable for this scenario because they require that: (i) all target object classes are known beforehand, and (ii) a vast number of training examples is provided for each class. This evidence calls for novel methods to handle unknown object classes, for which fewer images are initially available (few-shot recognition). One way of tackling the problem is learnin
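
A minimal sketch of few-shot recognition by image matching in the spirit of the abstract above: embed images with a frozen ImageNet backbone and label a query by its most similar class prototype built from a handful of support images. The backbone, file paths, and cosine-similarity rule are assumptions, not the matching pipeline proposed in the paper.

```python
# Few-shot recognition by image matching: embed images with a frozen ImageNet
# backbone and label a query by its nearest class prototype (illustrative sketch;
# not the matching network proposed in the cited paper).
import torch
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image

backbone = torchvision.models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()       # keep the 512-d penultimate features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """L2-normalized embedding of one image file."""
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return F.normalize(backbone(x), dim=1)

# A few support images per (possibly previously unseen) class -- hypothetical paths.
support = {"mug": ["mug_1.jpg", "mug_2.jpg"], "stapler": ["stapler_1.jpg"]}
prototypes = {c: torch.cat([embed(p) for p in paths]).mean(0, keepdim=True)
              for c, paths in support.items()}

query = embed("unknown_object.jpg")                    # hypothetical query image
scores = {c: F.cosine_similarity(query, proto).item() for c, proto in prototypes.items()}
print(max(scores, key=scores.get), scores)
```
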
20

Jeong, Su Keun, and Yaoda Xu. "Neural Representation of Targets and Distractors during Object Individuation and Identification." Journal of Cognitive Neuroscience 25, no. 1 (2013): 117–26. http://dx.doi.org/10.1162/jocn_a_00298.

Abstract:
In many everyday activities, we need to attend and encode multiple target objects among distractor objects. For example, when driving a car on a busy street, we need to simultaneously attend objects such as traffic signs, pedestrians, and other cars, while ignoring colorful and flashing objects in display windows. To explain how multiple visual objects are selected and encoded in visual STM and in perception in general, the neural object file theory argues that, whereas object selection and individuation is supported by inferior intraparietal sulcus (IPS), the encoding of detailed object featu
21

Xu, Yaoda. "Distinctive Neural Mechanisms Supporting Visual Object Individuation and Identification." Journal of Cognitive Neuroscience 21, no. 3 (2009): 511–18. http://dx.doi.org/10.1162/jocn.2008.21024.

Abstract:
Many everyday activities, such as driving on a busy street, require the encoding of distinctive visual objects from crowded scenes. Given resource limitations of our visual system, one solution to this difficult and challenging task is to first select individual objects from a crowded scene (object individuation) and then encode their details (object identification). Using functional magnetic resonance imaging, two distinctive brain mechanisms were recently identified that support these two stages of visual object processing. While the inferior intraparietal sulcus (IPS) selects a fixed number
22

Suzuki, Wendy A., Earl K. Miller, and Robert Desimone. "Object and Place Memory in the Macaque Entorhinal Cortex." Journal of Neurophysiology 78, no. 2 (1997): 1062–81. http://dx.doi.org/10.1152/jn.1997.78.2.1062.

Abstract:
Suzuki, Wendy A., Earl K. Miller, and Robert Desimone. Object and place memory in the macaque entorhinal cortex. J. Neurophysiol. 78: 1062–1081, 1997. Lesions of the entorhinal cortex in humans, monkeys, and rats impair memory for a variety of kinds of information, including memory for objects and places. To begin to understand the contribution of entorhinal cells to different forms of memory, responses of entorhinal cells were recorded as monkeys performed either an object or place memory task. The object memory task was a variation of delayed matching to sample. A sample picture was presente
23

Arshad, Usama. "Object Detection in Last Decade - A Survey." Scientific Journal of Informatics 8, no. 1 (2021): 60–70. http://dx.doi.org/10.15294/sji.v8i1.28956.

Abstract:
In the last decade, object detection is one of the interesting topics that played an important role in revolutionizing the present era. Especially when it comes to computer vision, object detection is a challenging and most fundamental problem. Researchers in the last decade enhanced object detection and made many advance discoveries using the technological advancements. When we talk about object detection, we also must talk about deep learning and its advancements over the time. This research work describes the advancements in object detection over the last 10 years (2010-2020). Different papers publish
24

Lin, Guan-Ting, Vinay Malligere Shivanna, and Jiun-In Guo. "A Deep-Learning Model with Task-Specific Bounding Box Regressors and Conditional Back-Propagation for Moving Object Detection in ADAS Applications." Sensors 20, no. 18 (2020): 5269. http://dx.doi.org/10.3390/s20185269.

Abstract:
This paper proposes a deep-learning model with task-specific bounding box regressors (TSBBRs) and conditional back-propagation mechanisms for detection of objects in motion for advanced driver assistance system (ADAS) applications. The proposed model separates the object detection networks for objects of different sizes and applies the proposed algorithm to achieve better detection results for both larger and tinier objects. For larger objects, a neural network with a larger visual receptive field is used to acquire information from larger areas. For the detection of tinier objects, the networ
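
As a hedged illustration of the general idea of size-specific bounding-box regressors, the sketch below routes each object to one of two regressor heads according to a rough area threshold, so that each object's gradient reaches only the head selected for it. The module, threshold, and head design are assumptions, not the TSBBR architecture or conditional back-propagation mechanism of the cited paper.

```python
# Sketch of size-conditioned bounding-box regressor heads; an illustration of the
# general idea only, not the TSBBR architecture from the cited paper.
import torch
import torch.nn as nn

class SizeConditionedRegressor(nn.Module):
    def __init__(self, feat_dim=256, area_threshold=32 * 32):
        super().__init__()
        self.area_threshold = area_threshold      # assumed small/large split, in pixels
        self.small_head = nn.Linear(feat_dim, 4)  # box regressor for tiny objects
        self.large_head = nn.Linear(feat_dim, 4)  # box regressor for large objects

    def forward(self, feats, rough_areas):
        # feats: (N, feat_dim) per-object features; rough_areas: (N,) pixel areas.
        small = (rough_areas < self.area_threshold).unsqueeze(1)
        # Both heads run here for simplicity, but each object's loss gradient flows
        # only through the head selected for it, a crude form of conditional back-prop.
        return torch.where(small, self.small_head(feats), self.large_head(feats))

model = SizeConditionedRegressor()
boxes = model(torch.randn(5, 256), torch.tensor([100.0, 5000.0, 900.0, 2048.0, 64.0]))
print(boxes.shape)  # torch.Size([5, 4])
```
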
25

Todd, Steven, and Arthur F. Kramer. "Attentional Guidance in Visual Attention." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 19 (1993): 1378–82. http://dx.doi.org/10.1518/107118193784162290.

Abstract:
Earlier research has shown that a task-irrelevant sudden onset of an object will capture or draw an observer's visual attention to that object's location (e.g., Yantis & Jonides, 1984). In the four experiments reported here, we explore the question of whether task-irrelevant properties other than sudden-onset may capture attention. Our results suggest that a uniquely colored or luminous object, as well as an irrelevant boundary, may indeed capture or guide attention, though apparently to a lesser degree than a sudden onset: it appears that the degree of attentional capture is dependent on
26

Srikesavan, Cynthia S., Barbara Shay, and Tony Szturm. "Test-Retest Reliability and Convergent Validity of a Computer Based Hand Function Test Protocol in People with Arthritis." Open Orthopaedics Journal 9, no. 1 (2015): 57–67. http://dx.doi.org/10.2174/1874325001509010057.

Abstract:
Objectives: A computer based hand function assessment tool has been developed to provide a standardized method for quantifying task performance during manipulations of common objects/tools/utensils with diverse physical properties and grip/grasp requirements for handling. The study objectives were to determine test-retest reliability and convergent validity of the test protocol in people with arthritis. Methods: Three different object manipulation tasks were evaluated twice in forty people with rheumatoid arthritis (RA) or hand osteoarthritis (HOA). Each object was instrumented with a motion s
27

Taniguchi, Kosuke, Kana Kuraguchi, and Yukuo Konishi. "Task Difficulty Makes ‘No’ Response Different From ‘Yes’ Response in Detection of Fragmented Object Contours." Perception 47, no. 9 (2018): 943–65. http://dx.doi.org/10.1177/0301006618787395.

Abstract:
Two-alternative forced choice tasks are often used in object detection, which regards detecting an object as a ‘yes’ response and detecting no object as a ‘no’ response. Previous studies have suggested that the processing of yes/no responses arises from identical or similar processing. In this study, we investigated the difference of processing between detecting an object (‘yes’ response) and not detecting any object (‘no’ response) by controlling the task difficulty in terms of fragment length and stimulus duration. The results indicated that a ‘yes’ response depends on accurate and stable de
28

Nassar, Ahmed Samy, Sébastien Lefèvre, and Jan Dirk Wegner. "Multi-View Instance Matching with Learned Geometric Soft-Constraints." ISPRS International Journal of Geo-Information 9, no. 11 (2020): 687. http://dx.doi.org/10.3390/ijgi9110687.

Abstract:
We present a new approach for matching urban object instances across multiple ground-level images for the ultimate goal of city-scale mapping of objects with high positioning accuracy. What makes this task challenging is the strong change in view-point, different lighting conditions, high similarity of neighboring objects, and variability in scale. We propose to turn object instance matching into a learning task, where image-appearance and geometric relationships between views fruitfully interact. Our approach constructs a Siamese convolutional neural network that learns to match two views of
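
A minimal Siamese-matching sketch in the spirit of the approach above: a shared CNN encoder embeds two views and a small head scores whether they show the same object instance. The encoder, embedding size, and scoring head are assumptions, and the paper's learned geometric soft-constraints are omitted.

```python
# Minimal Siamese matching sketch: a shared encoder embeds two views and a small
# head scores whether they show the same object instance. Illustrative only; the
# cited paper additionally injects geometric soft-constraints, omitted here.
import torch
import torch.nn as nn
import torchvision

class SiameseMatcher(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        cnn = torchvision.models.resnet18(weights=None)        # shared backbone
        cnn.fc = nn.Linear(cnn.fc.in_features, embed_dim)
        self.encoder = cnn
        self.head = nn.Sequential(nn.Linear(2 * embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, view_a, view_b):
        za, zb = self.encoder(view_a), self.encoder(view_b)     # same weights for both views
        return torch.sigmoid(self.head(torch.cat([za, zb], dim=1)))  # match probability

matcher = SiameseMatcher()
a, b = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)  # two batches of views
print(matcher(a, b).shape)  # torch.Size([2, 1])
```
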
29

Zhao, Binglei, and Sergio Della Sala. "Different representations and strategies in mental rotation." Quarterly Journal of Experimental Psychology 71, no. 7 (2018): 1574–83. http://dx.doi.org/10.1080/17470218.2017.1342670.

Abstract:
It is still debated whether holistic or piecemeal transformation is applied to carry out mental rotation (MR) as an aspect of visual imagery. It has been recently argued that various mental representations could be flexibly generated to perform MR tasks. To test the hypothesis that imagery ability and types of stimuli interact to affect the format of representation and the choice of strategy in performing MR task, participants, grouped as good or poor imagers, were assessed using four MR tasks, comprising two sets of ‘Standard’ cube figures and two sets of ‘non-Standard’ ones, designed by with
30

Yokoi, Isao, Atsumichi Tachibana, Takafumi Minamimoto, Naokazu Goda, and Hidehiko Komatsu. "Dependence of behavioral performance on material category in an object-grasping task with monkeys." Journal of Neurophysiology 120, no. 2 (2018): 553–63. http://dx.doi.org/10.1152/jn.00748.2017.

Abstract:
Material perception is an essential part of our cognitive function that enables us to properly interact with our complex daily environment. One important aspect of material perception is its multimodal nature. When we see an object, we generally recognize its haptic properties as well as its visual properties. Consequently, one must examine behavior using real objects that are perceived both visually and haptically to fully understand the characteristics of material perception. As a first step, we examined whether there is any difference in the behavioral responses to different materials in mo
31

Ellis, R., D. A. Allport, G. W. Humphreys, and J. Collis. "Varieties of Object Constancy." Quarterly Journal of Experimental Psychology Section A 41, no. 4 (1989): 775–96. http://dx.doi.org/10.1080/14640748908402393.

Abstract:
Three experiments are described in which two pictures of isolated man-made objects were presented in succession. The subjects’ task was to decide, as rapidly as possible, whether the two pictured objects had the same name. With a stimulus-onset asynchrony (SOA) of above 200 msec two types of facilitation were observed: (1) the response latency was reduced if the pictures showed the same object, even though seen from different viewpoints (object benefit); (2) decision time was reduced further if the pictures showed the same object from the same angle of view (viewpoint benefit). These facilitat
32

Sabes, Philip N., Boris Breznen, and Richard A. Andersen. "Parietal Representation of Object-Based Saccades." Journal of Neurophysiology 88, no. 4 (2002): 1815–29. http://dx.doi.org/10.1152/jn.2002.88.4.1815.

Abstract:
When monkeys make saccadic eye movements to simple visual targets, neurons in the lateral intraparietal area (LIP) display a retinotopic, or eye-centered, coding of the target location. However natural saccadic eye movements are often directed at objects or parts of objects in the visual scene. In this paper we investigate whether LIP represents saccadic eye movements differently when the target is specified as part of a visually displayed object. Monkeys were trained to perform an object-based saccade task that required them to make saccades to previously cued parts of an abstract object afte
33

Flittner, Jonathan, John Luksas, and Joseph L. Gabbard. "Predicting User Performance in Augmented Reality User Interfaces with Image Analysis Algorithms." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (2020): 2108–12. http://dx.doi.org/10.1177/1071181320641511.

Abstract:
This study determines how to apply existing image analysis measures of visual clutter to augmented reality user interfaces, in conjunction with other factors that may affect performance such as the percentage of virtual objects compared to real objects in an interface, and the type of object a user is searching for (real or virtual). Image analysis measures of clutter were specifically chosen as they can be applied to complex and naturalistic images as is common to experience while using an AR UI. The end goal of this research is to develop an algorithm capable of predicting user performance f
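
As a hedged placeholder for the image-analysis clutter measures discussed above, the sketch below computes edge density over a captured interface frame, one simple clutter proxy; the measure, OpenCV pipeline, and file name are assumptions and not necessarily those used in the study.

```python
# Edge density as a simple visual-clutter proxy for a captured AR frame.
# One generic clutter measure for illustration; the cited study may rely on
# different measures (e.g., feature congestion). File name is hypothetical.
import cv2

frame = cv2.imread("ar_ui_screenshot.png")          # hypothetical screenshot
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)                   # binary edge map
edge_density = (edges > 0).mean()                   # fraction of edge pixels
print(f"edge density (clutter proxy): {edge_density:.3f}")
```
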
34

Gregorics, Tibor. "Object-oriented backtracking." Acta Universitatis Sapientiae, Informatica 9, no. 2 (2017): 144–61. http://dx.doi.org/10.1515/ausi-2017-0010.

Abstract:
Several versions of the backtracking are known. In this paper, those versions are in focus which solve the problems whose problem space can be described with a special directed tree. The traversal strategies of this tree will be analyzed and they will be implemented in object-oriented style. In this way, the traversal is made by an enumerator object which iterates over all the paths (partial solutions) of the tree. Two different “backtracking enumerators” are going to be presented and the backtracking algorithm will be a linear search over one of these enumerators. Since these algorith
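
A Python analogue of the enumerator idea described above, offered as a hedged sketch rather than the paper's object-oriented design: a generator enumerates the tree of partial solutions depth-first (the N-queens tree is an assumed example), and the backtracking search reduces to a linear scan over that enumerator.

```python
# Backtracking expressed as an enumerator over partial solutions, with the search
# itself reduced to a linear scan of the enumerator (a Python analogue of the
# enumerators discussed above; the N-queens tree is an assumed example).
def partial_solutions(n):
    """Depth-first enumerator over the tree of partial N-queens placements."""
    stack = [()]                                    # root: empty placement
    while stack:
        placement = stack.pop()
        yield placement                             # every node is one partial solution
        row = len(placement)
        if row == n:
            continue
        for col in reversed(range(n)):              # children of this node
            if all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(placement)):
                stack.append(placement + (col,))

def solve(n):
    """Linear search over the enumerator for the first complete solution."""
    return next((p for p in partial_solutions(n) if len(p) == n), None)

print(solve(6))   # e.g. (1, 3, 5, 0, 2, 4)
```
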
35

Grill-Spector, Kalanit, and Nancy Kanwisher. "Visual Recognition." Psychological Science 16, no. 2 (2005): 152–60. http://dx.doi.org/10.1111/j.0956-7976.2005.00796.x.

Abstract:
What is the sequence of processing steps involved in visual object recognition? We varied the exposure duration of natural images and measured subjects' performance on three different tasks, each designed to tap a different candidate component process of object recognition. For each exposure duration, accuracy was lower and reaction time longer on a within-category identification task (e.g., distinguishing pigeons from other birds) than on a perceptual categorization task (e.g., birds vs. cars). However, strikingly, at each exposure duration, subjects performed just as quickly and accurately o
36

GREENWALD, HAL S., and DAVID C. KNILL. "A comparison of visuomotor cue integration strategies for object placement and prehension." Visual Neuroscience 26, no. 1 (2009): 63–72. http://dx.doi.org/10.1017/s0952523808080668.

Abstract:
Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for
37

Kunimatsu, Jun, Shinya Yamamoto, Kazutaka Maeda, and Okihide Hikosaka. "Environment-based object values learned by local network in the striatum tail." Proceedings of the National Academy of Sciences 118, no. 4 (2021): e2013623118. http://dx.doi.org/10.1073/pnas.2013623118.

Abstract:
Basal ganglia contribute to object-value learning, which is critical for survival. The underlying neuronal mechanism is the association of each object with its rewarding outcome. However, object values may change in different environments and we then need to choose different objects accordingly. The mechanism of this environment-based value learning is unknown. To address this question, we created an environment-based value task in which the value of each object was reversed depending on the two scene-environments (X and Y). After experiencing this task repeatedly, the monkeys became able to s
38

Yoon, Eun Young, Glyn W. Humphreys, Sanjay Kumar, and Pia Rotshtein. "The Neural Selection and Integration of Actions and Objects: An fMRI Study." Journal of Cognitive Neuroscience 24, no. 11 (2012): 2268–79. http://dx.doi.org/10.1162/jocn_a_00256.

Abstract:
There is considerable evidence that there are anatomically and functionally distinct pathways for action and object recognition. However, little is known about how information about action and objects is integrated. This study provides fMRI evidence for task-based selection of brain regions associated with action and object processing, and on how the congruency between the action and the object modulates neural response. Participants viewed videos of objects used in congruent or incongruent actions and attended either to the action or the object in a one-back procedure. Attending to the action
39

Hasson, Christopher J., Tian Shen, and Dagmar Sternad. "Energy margins in dynamic object manipulation." Journal of Neurophysiology 108, no. 5 (2012): 1349–65. http://dx.doi.org/10.1152/jn.00019.2012.

Abstract:
Many tasks require humans to manipulate dynamically complex objects and maintain appropriate safety margins, such as placing a cup of coffee on a coaster without spilling. This study examined how humans learn such safety margins and how they are shaped by task constraints and changing variability with improved skill. Eighteen subjects used a manipulandum to transport a shallow virtual cup containing a ball to a target without losing the ball. Half were to complete the cup transit in a comfortable target time of 2 s (a redundant task with infinitely many equivalent solutions), and the other hal
40

Ojemann, Jeffrey G., George A. Ojemann, and Ettore Lettich. "Cortical stimulation mapping of language cortex by using a verb generation task: effects of learning and comparison to mapping based on object naming." Journal of Neurosurgery 97, no. 1 (2002): 33–38. http://dx.doi.org/10.3171/jns.2002.97.1.0033.

Abstract:
Object. Cortical stimulation mapping has traditionally relied on disruption of object naming to define essential language areas. In this study, the authors reviewed the use of a different language task, verb generation, in mapping language. This task has greater use in brain imaging studies and may be used to test aspects of language different from those of object naming. Methods. In 14 patients, cortical stimulation mapping performed using a verb generation task provided a map of language areas in the frontal and temporoparietal cortices. These verb generation maps often overlapped object nam
41

Fornia, Luca, Marco Rossi, Marco Rabuffetti, et al. "Direct Electrical Stimulation of Premotor Areas: Different Effects on Hand Muscle Activity during Object Manipulation." Cerebral Cortex 30, no. 1 (2019): 391–405. http://dx.doi.org/10.1093/cercor/bhz139.

Abstract:
Abstract Dorsal and ventral premotor (dPM and vPM) areas are crucial in control of hand muscles during object manipulation, although their respective role in humans is still debated. In patients undergoing awake surgery for brain tumors, we studied the effect of direct electrical stimulation (DES) of the premotor cortex on the execution of a hand manipulation task (HMt). A quantitative analysis of the activity of extrinsic and intrinsic hand muscles recorded during and in absence of DES was performed. Results showed that DES applied to premotor areas significantly impaired HMt execution, affec
42

Zhang, Fan, Jiaxing Luan, Zhichao Xu, and Wei Chen. "DetReco: Object-Text Detection and Recognition Based on Deep Neural Network." Mathematical Problems in Engineering 2020 (July 14, 2020): 1–15. http://dx.doi.org/10.1155/2020/2365076.

Abstract:
Deep learning-based object detection method has been applied in various fields, such as ITS (intelligent transportation systems) and ADS (autonomous driving systems). Meanwhile, text detection and recognition in different scenes have also attracted much attention and research effort. In this article, we propose a new object-text detection and recognition method termed “DetReco” to detect objects and texts and recognize the text contents. The proposed method is composed of object-text detection network and text recognition network. YOLOv3 is used as the algorithm for the object-text detection t
43

Zhang, Xiaoliang, Kehe Wu, Qi Ma, and Zuge Chen. "Research on Object Detection Model Based on Feature Network Optimization." Processes 9, no. 9 (2021): 1654. http://dx.doi.org/10.3390/pr9091654.

Abstract:
As the object detection dataset scale is smaller than the image recognition dataset ImageNet scale, transfer learning has become a basic training method for deep learning object detection models, which pre-trains the backbone network of the object detection model on an ImageNet dataset to extract features for detection tasks. However, the classification task of detection focuses on the salient region features of an object, while the location task of detection focuses on the edge features, so there is a certain deviation between the features extracted by a pretrained backbone network and those
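
The abstract above describes the standard transfer-learning setup in which an ImageNet-pretrained backbone supplies features for detection. A generic sketch of that setup follows, reusing a pretrained ResNet-50 without its classifier as a feature extractor; the layer split and tensor shapes are illustrative assumptions, not the optimized feature network proposed in the paper.

```python
# Standard detection transfer-learning setup: reuse an ImageNet-pretrained ResNet-50
# (minus its classifier) as the backbone that extracts features for detection heads.
# A generic sketch of the practice described above, not the paper's optimized network.
import torch
import torch.nn as nn
import torchvision

resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")   # ImageNet pre-training
backbone = nn.Sequential(*list(resnet.children())[:-2])         # drop avgpool + fc

images = torch.randn(2, 3, 512, 512)                            # dummy batch
with torch.no_grad():
    feature_maps = backbone(images)                             # (2, 2048, 16, 16)

print(feature_maps.shape)
# Detection heads (classification + box regression) would be trained on top of
# these feature maps, optionally fine-tuning the pretrained backbone.
```
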
44

Martinovic, Jasna, Thomas Gruber, and Matthias Müller. "Priming of object categorization within and across levels of specificity." Psihologija 42, no. 1 (2009): 27–46. http://dx.doi.org/10.2298/psi0901027m.

Abstract:
Identification of objects can occur at different levels of specificity. Depending on task and context, an object can be classified at the superordinate level (as an animal), at the basic level (a bird) or at the subordinate level (a sparrow). What are the interactions between these representational levels and do they rely on the same sequential processes that lead to successful object identification? In this electroencephalogram study, a task-switching paradigm (covert naming or living/non-living judgment) was used. Images of objects were repeated either within the same task, or with a switch
45

Karne, Archana, RadhaKrishna Karne, V. Karthik Kumar, and A. Arunkumar. "Convolutional Neural Networks for Object Detection and Recognition." Journal of Artificial Intelligence, Machine Learning and Neural Network, no. 32 (February 4, 2023): 1–13. http://dx.doi.org/10.55529/jaimlnn.32.1.13.

Abstract:
One of the essential technologies in the fields of target extraction, pattern recognition, and motion measurement is moving object detection. Finding moving objects or a number of moving objects across a series of frames is called object tracking. Basically, object tracking is a difficult task. Unexpected changes in the surroundings, an item's mobility, noise, etc., might make it difficult to follow an object. Different tracking methods have been developed to solve these issues. This paper discusses a number of object tracking and detection approaches. The major methods for identifying objects
46

Rau, Pei-Luen Patrick, Jian Zheng, Lijun Wang, Jingyu Zhao, and Dangxiao Wang. "Haptic and Auditory–Haptic Attentional Blink in Spatial and Object-Based Tasks." Multisensory Research 33, no. 3 (2020): 295–312. http://dx.doi.org/10.1163/22134808-20191483.

Abstract:
Abstract Dual-task performance depends on both modalities (e.g., vision, audition, haptics) and task types (spatial or object-based), and the order by which different task types are organized. Previous studies on haptic and especially auditory–haptic attentional blink (AB) are scarce, and the effect of task types and their order have not been fully explored. In this study, 96 participants, divided into four groups of task type combinations, identified auditory or haptic Target 1 (T1) and haptic Target 2 (T2) in rapid series of sounds and forces. We observed a haptic AB (i.e., the accuracy of i
47

Ramos, Shayenne Elizianne, Luis David Solis Murgas, Monica Rodrigues Ferreira, and Carlos Alberto Mourao Junior. "Learning and Working Memory In Mice Under Different Lighting Conditions." Revista Neurociências 21, no. 3 (2013): 349–55. http://dx.doi.org/10.34024/rnc.2013.v21.8158.

Abstract:
Objective. This study aimed to investigate the effect of different light/dark cycles and light intensity during behavioral tests of learning and working memory in Swiss mice. Method. Fifty-seven Swiss mice were kept in a housing room in either a 12:12h light/dark cycle (LD), constant light (LL), or constant darkness (DD). The animals were then tested in Lashley maze and Object recognition task under either 500 or 0 lux illumination, resulting in six treatments (LD-500, LD-0, LL-500, LL-0, DD-500, and DD-0). Results. There were no significant differences between the conditions of light/dark,
48

Koivisto, Mika, Simone Grassini, Niina Salminen-Vaparanta, and Antti Revonsuo. "Different Electrophysiological Correlates of Visual Awareness for Detection and Identification." Journal of Cognitive Neuroscience 29, no. 9 (2017): 1621–31. http://dx.doi.org/10.1162/jocn_a_01149.

Abstract:
Detecting the presence of an object is a different process than identifying the object as a particular object. This difference has not been taken into account in designing experiments on the neural correlates of consciousness. We compared the electrophysiological correlates of conscious detection and identification directly by measuring ERPs while participants performed either a task only requiring the conscious detection of the stimulus or a higher-level task requiring its conscious identification. Behavioral results showed that, even if the stimulus was consciously detected, it was not neces
49

Ravinder M., Arunima Jaiswal, and Shivani Gulati. "Deep Learning-Based Object Detection in Diverse Weather Conditions." International Journal of Intelligent Information Technologies 18, no. 1 (2022): 1–14. http://dx.doi.org/10.4018/ijiit.296236.

Abstract:
The number of different types of composite images has grown very rapidly in current years, making Object Detection, an extremely critical task that requires a deeper understanding of various deep learning strategies that help to detect objects with higher accuracy in less amount of time. A brief description of object detection strategies under various weather conditions is discussed in this paper with their advantages and disadvantages. So, to overcome this transfer learning has been used and implementation has been done with two Pretrained Models i.e., YOLO and Resnet50 with denoising which d
50

Pérez, Javier, Jose-Luis Guardiola, Alberto J. Perez, and Juan-Carlos Perez-Cortes. "Probabilistic Evaluation of 3D Surfaces Using Statistical Shape Models (SSM)." Sensors 20, no. 22 (2020): 6554. http://dx.doi.org/10.3390/s20226554.

Abstract:
Inspecting a 3D object whose shape has elastic manufacturing tolerances in order to find defects is a challenging and time-consuming task. This task usually involves humans, either in the specification stage followed by some automatic measurements, or at other points along the process. Even when a detailed inspection is performed, the measurements are limited to a few dimensions instead of a complete examination of the object. In this work, a probabilistic method to evaluate 3D surfaces is presented. This algorithm relies on a training stage to learn the shape of the object building a statisti
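
A miniature, hedged sketch of the core statistical shape model (SSM) idea behind the work above: learn a mean shape and principal modes of variation from aligned training shapes, then score how plausibly the model explains a new shape. The toy shapes, PCA machinery, and Mahalanobis-style score are illustrative assumptions, not the paper's probabilistic evaluation algorithm.

```python
# Core statistical-shape-model (SSM) idea in miniature: learn a mean shape and
# principal modes from aligned training shapes, then score how plausibly a new
# shape is explained by the model. Illustrative sketch, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 50 pre-aligned shapes, each 100 surface points in 3D, flattened.
base = rng.normal(size=300)
train = base + 0.05 * rng.normal(size=(50, 300))      # small elastic variations

mean_shape = train.mean(axis=0)
centered = train - mean_shape
# Principal modes of variation via SVD (PCA); keep the first k modes.
_, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 5
modes, variances = vt[:k], (s[:k] ** 2) / (len(train) - 1)

def shape_score(shape):
    """Mahalanobis-style distance of a shape's coefficients in the mode subspace;
    low values mean the SSM considers the shape plausible."""
    coeffs = modes @ (shape - mean_shape)
    return float(np.sum(coeffs ** 2 / variances))

print("training-like shape:", round(shape_score(base + 0.05 * rng.normal(size=300)), 2))
print("defective shape:    ", round(shape_score(base + 0.5 * rng.normal(size=300)), 2))
```
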