
Journal articles on the topic 'Reward value'



Consult the top 50 journal articles for your research on the topic 'Reward value.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Qi, Yan Sun, Ji Zhu, and Xiaohang Zhang. "The impact of uncertain rewards on customers’ recommendation intention in social networks." Internet Research 28, no. 4 (2018): 1029–54. http://dx.doi.org/10.1108/intr-03-2017-0116.

Full text
Abstract:
Purpose The purpose of this paper is to research the effect of uncertain rewards on recommendation intention in referral reward programs (RRPs) and to investigate the interaction of tie strength and reward type on recommendation intention. Design/methodology/approach The research adopts a quantitative exploratory approach through the use of experiments. Study 1 adopted a 2×2 between-participants design (reward type: certain vs uncertain reward) × (tie strength: strong vs weak tie). By manipulating uncertainty probabilities and expected value, respectively, Studies 2 and 3 further explore the effect of uncertain rewards and tie strength on customers’ referral intention. Findings This paper finds the following: compared with certain rewards, uncertain rewards produce higher referral intention, and positive experience mediates the effect of reward type on recommendation intention; when only the recommender is rewarded, the tie strength between the recommender and the receiver moderates the effect of reward type on recommendation intention: for strong ties, recommendation intention is higher under uncertain rewards, whereas for weak ties, willingness to recommend is almost the same under both reward types; when both the recommender and the receiver are rewarded, although certain rewards have a higher expected value than uncertain and random rewards, for strong ties referral intention is highest under random rewards, next highest under uncertain rewards, and lowest under certain rewards, while for weak ties the reverse is true. Originality/value The research has both theoretical implications for research on uncertain rewards and tie strength and practical implications for marketing managers designing and implementing RRPs.
APA, Harvard, Vancouver, ISO, and other styles
2

Pastor-Bernier, Alexandre, Arkadiusz Stasiak, and Wolfram Schultz. "Reward-specific satiety affects subjective value signals in orbitofrontal cortex during multicomponent economic choice." Proceedings of the National Academy of Sciences 118, no. 30 (2021): e2022650118. http://dx.doi.org/10.1073/pnas.2022650118.

Full text
Abstract:
Sensitivity to satiety constitutes a basic requirement for neuronal coding of subjective reward value. Satiety from natural ongoing consumption affects reward functions in learning and approach behavior. More specifically, satiety reduces the subjective economic value of individual rewards during choice between options that typically contain multiple reward components. The unconfounded assessment of economic reward value requires tests at choice indifference between two options, which is difficult to achieve with sated rewards. By conceptualizing choices between options with multiple reward components (“bundles”), Revealed Preference Theory may offer a solution. Despite satiety, choices against an unaltered reference bundle may remain indifferent when the reduced value of a sated bundle reward is compensated by larger amounts of an unsated reward of the same bundle, and then the value loss of the sated reward is indicated by the amount of the added unsated reward. Here, we show psychophysically titrated choice indifference in monkeys between bundles of differently sated rewards. Neuronal chosen value signals in the orbitofrontal cortex (OFC) followed closely the subjective value change within recording periods of individual neurons. A neuronal classifier distinguishing the bundles and predicting choice substantiated the subjective value change. The choice between conventional single rewards confirmed the neuronal changes seen with two-reward bundles. Thus, reward-specific satiety reduces subjective reward value signals in OFC. With satiety being an important factor of subjective reward value, these results extend the notion of subjective economic reward value coding in OFC neurons.
3

Gregorios-Pippas, Lucy, Philippe N. Tobler, and Wolfram Schultz. "Short-Term Temporal Discounting of Reward Value in Human Ventral Striatum." Journal of Neurophysiology 101, no. 3 (2009): 1507–23. http://dx.doi.org/10.1152/jn.90730.2008.

Full text
Abstract:
Delayed rewards lose their value for economic decisions and constitute weaker reinforcers for learning. Temporal discounting of reward value already occurs within a few seconds in animals, which allows investigations of the underlying neurophysiological mechanisms. However, it is difficult to relate these mechanisms to human discounting behavior, which is usually studied over days and months and may engage different brain processes. Our study aimed to bridge the gap by using very short delays and measuring human functional magnetic resonance responses in one of the key reward centers of the brain, the ventral striatum. We used psychometric methods to assess subjective timing and valuation of monetary rewards with delays of 4.0–13.5 s. We demonstrated hyperbolic and exponential decreases of striatal responses to reward predicting stimuli within this time range, irrespective of changes in reward rate. Lower reward magnitudes induced steeper behavioral and striatal discounting. By contrast, striatal responses following the delivery of reward reflected the uncertainty in subjective timing associated with delayed rewards rather than value discounting. These data suggest that delays of a few seconds affect the neural processing of predicted reward value in the ventral striatum and engage the temporal sensitivity of reward responses. Comparisons with electrophysiological animal data suggest that ventral striatal reward discounting may involve dopaminergic and orbitofrontal inputs.
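The hyperbolic and exponential decreases of value described in this abstract can be sketched as follows. This is an illustrative toy model, not the paper's fitting procedure; the discount parameter k and the reward amount are hypothetical placeholders (the study estimated subjective values psychometrically per subject).

```python
# Toy sketch of the two discounting models compared in the study.
# Parameter values are hypothetical, not fitted values from the paper.
import math

def hyperbolic_value(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def exponential_value(amount, delay, k):
    """Exponential discounting: V = A * exp(-k * D)."""
    return amount * math.exp(-k * delay)

# A reward worth 10 units, delayed by 4.0 or 13.5 s as in the experiment:
for delay in (4.0, 13.5):
    print(delay, hyperbolic_value(10, delay, k=0.1),
          exponential_value(10, delay, k=0.1))
```

Both curves decline over the 4.0-13.5 s range used in the study; which family fits better is an empirical question the paper addresses with psychometric and fMRI data.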
4

Leathers, Marvin L., and Carl R. Olson. "In monkeys making value-based decisions, amygdala neurons are sensitive to cue value as distinct from cue salience." Journal of Neurophysiology 117, no. 4 (2017): 1499–511. http://dx.doi.org/10.1152/jn.00564.2016.

Full text
Abstract:
Neurons in the lateral intraparietal (LIP) area of macaque monkey parietal cortex respond to cues predicting rewards and penalties of variable size in a manner that depends on the motivational salience of the predicted outcome (strong for both large reward and large penalty) rather than on its value (positive for large reward and negative for large penalty). This finding suggests that LIP mediates the capture of attention by salient events and does not encode value in the service of value-based decision making. It leaves open the question whether neurons elsewhere in the brain encode value in the identical task. To resolve this issue, we recorded neuronal activity in the amygdala in the context of the task employed in the LIP study. We found that responses to reward-predicting cues were similar between areas, with the majority of reward-sensitive neurons responding more strongly to cues that predicted large reward than to those that predicted small reward. Responses to penalty-predicting cues were, however, markedly different. In the amygdala, unlike LIP, few neurons were sensitive to penalty size, few penalty-sensitive neurons favored large over small penalty, and the dependence of firing rate on penalty size was negatively correlated with its dependence on reward size. These results indicate that amygdala neurons encoded cue value under circumstances in which LIP neurons exhibited sensitivity to motivational salience. However, the representation of negative value, as reflected in sensitivity to penalty size, was weaker than the representation of positive value, as reflected in sensitivity to reward size. NEW & NOTEWORTHY This is the first study to characterize amygdala neuronal responses to cues predicting rewards and penalties of variable size in monkeys making value-based choices. Manipulating reward and penalty size allowed distinguishing activity dependent on motivational salience from activity dependent on value. This approach revealed in a previous study that neurons of the lateral intraparietal (LIP) area encode motivational salience. Here, it reveals that amygdala neurons encode value. The results establish a sharp functional distinction between the two areas.
5

Evans, Simon, Stephen M. Fleming, Raymond J. Dolan, and Bruno B. Averbeck. "Effects of Emotional Preferences on Value-based Decision-making Are Mediated by Mentalizing and Not Reward Networks." Journal of Cognitive Neuroscience 23, no. 9 (2011): 2197–210. http://dx.doi.org/10.1162/jocn.2010.21584.

Full text
Abstract:
Real-world decision-making often involves social considerations. Consequently, the social value of stimuli can induce preferences in choice behavior. However, it is unknown how financial and social values are integrated in the brain. Here, we investigated how smiling and angry face stimuli interacted with financial reward feedback in a stochastically rewarded decision-making task. Subjects reliably preferred the smiling faces despite equivalent reward feedback, demonstrating a socially driven bias. We fit a Bayesian reinforcement learning model to factor the effects of financial rewards and emotion preferences in individual subjects, and regressed model predictions on the trial-by-trial fMRI signal. Activity in the subcallosal cingulate and the ventral striatum, both involved in reward learning, correlated with financial reward feedback, whereas the differential contribution of social value activated dorsal temporo-parietal junction and dorsal anterior cingulate cortex, previously proposed as components of a mentalizing network. We conclude that the impact of social stimuli on value-based decision processes is mediated by effects in brain regions partially separable from classical reward circuitry.
6

Diao, Liuting, Wenping Li, Wenhao Chang, and Qingguo Ma. "Reward Modulates Unconsciously Triggered Adaptive Control Processes." i-Perception 13, no. 1 (2022): 204166952110738. http://dx.doi.org/10.1177/20416695211073819.

Full text
Abstract:
Adaptive control (e.g., conflict adaptation) refers to dynamic adjustments of cognitive control processes in goal-directed behavior, which can be influenced by incentive rewards. Recently, accumulating evidence has shown that adaptive control processes can operate in the absence of conscious awareness, raising the question of whether reward can affect unconsciously triggered adaptive control processes. Two experiments were conducted to address this question. In Experiment 1, participants performed a masked flanker-like priming task with high- and low-value performance-contingent rewards manipulated at the block level: conflict awareness was manipulated by masking the conflict-inducing stimulus, high- or low-value rewards were presented at the beginning of each block, and participants earned the reward contingent upon their responses in each trial. We observed greater conflict adaptation for high-value rewards in both conscious and unconscious conflict tasks, indicating reward-induced enhancements of consciously and unconsciously triggered adaptive control processes. Crucially, this effect persisted when controlling for stimulus-response repetitions in a rewarded masked Stroop-like priming task in Experiment 2. The results endorse the proposition that reward modulates unconsciously triggered adaptive control to conflict, suggesting that individuals may enable rewarding stimuli to dynamically regulate concurrent control processes based on previous conflict experience, regardless of whether the previous conflict was experienced consciously.
7

Yamada, Hiroshi, Hitoshi Inokawa, Naoyuki Matsumoto, Yasumasa Ueda, Kazuki Enomoto, and Minoru Kimura. "Coding of the long-term value of multiple future rewards in the primate striatum." Journal of Neurophysiology 109, no. 4 (2013): 1140–51. http://dx.doi.org/10.1152/jn.00289.2012.

Full text
Abstract:
Decisions maximizing benefits involve a tradeoff between the quantity of a reward and the cost of elapsed time until an animal receives it. The estimation of long-term reward values is critical to attain the most desirable outcomes over a certain period of time. Reinforcement learning theories have established algorithms to estimate the long-term reward values of multiple future rewards in which the values of future rewards are discounted as a function of how many steps of choices are necessary to achieve them. Here, we report that presumed striatal projection neurons represent the long-term values of multiple future rewards estimated by a standard reinforcement learning model while monkeys are engaged in a series of trial-and-error choices and adaptive decisions for multiple rewards. We found that the magnitude of activity of a subset of neurons was positively correlated with the long-term reward values, and that of another subset of neurons was negatively correlated throughout the entire decision-making process in individual trials: from the start of the task trial, estimation of the values and their comparison among alternatives, choice execution, and evaluation of the received rewards. An idiosyncratic finding was that neurons showing negative correlations represented reward values in the near future (high discounting), while neurons showing positive correlations represented reward values not only in the near future, but also in the far future (low discounting). These findings provide a new insight that long-term value signals are embedded in two subsets of striatal neurons as high and low discounting of multiple future rewards.
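The long-term value quantity from the standard reinforcement learning model described above, with future rewards discounted by the number of choice steps needed to reach them, can be sketched as a discounted sum. The reward sequence and the two discount factors below are hypothetical illustrations of the high- vs low-discounting neuron subsets, not values from the paper.

```python
# Sketch of the long-term value of multiple future rewards:
# V = sum over t of gamma^t * r_t, where t counts choice steps ahead.
# Reward sequence and discount factors are hypothetical.

def long_term_value(future_rewards, gamma):
    """Discounted sum of rewards expected t steps in the future."""
    return sum((gamma ** t) * r for t, r in enumerate(future_rewards))

rewards = [0.0, 0.5, 0.0, 1.0]  # rewards expected over upcoming trials

# High discounting weights only near-future rewards; low discounting
# also weights far-future rewards, as the two neuron subsets did.
near_sighted = long_term_value(rewards, gamma=0.3)  # high discounting
far_sighted = long_term_value(rewards, gamma=0.9)   # low discounting
```

With a low gamma, the distant reward of 1.0 contributes almost nothing to the sum; with a high gamma, it dominates, which is the contrast the two striatal subpopulations are reported to encode.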
8

Rigley, Eryn, Adriane Chapman, Christine Evers, and Will McNeill. "ME: Modelling Ethical Values for Value Alignment." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 26 (2025): 27608–16. https://doi.org/10.1609/aaai.v39i26.34974.

Full text
Abstract:
Value alignment, at the intersection of moral philosophy and AI safety, is dedicated to ensuring that artificially intelligent (AI) systems align with a certain set of values. One challenge facing value alignment researchers is accurately translating these values into a machine readable format. In the case of reinforcement learning (RL), a popular method within value alignment, this requires designing a reward function which accurately defines the value of all state-action pairs. It is common for programmers to hand-set and manually tune these values. In this paper, we examine the challenges of hand-programming values into reward functions for value alignment, and propose mathematical models as an alternative grounding for reward function design in ethical scenarios. Experimental results demonstrate that our modelled-ethics approach offers a more consistent alternative and outperforms our hand-programmed reward functions.
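The hand-programming problem the abstract describes, a reward function that must assign a value to every state-action pair, can be sketched as below. The scenario, names, and numbers are invented for illustration; they are not from the paper or its experiments.

```python
# Hypothetical illustration of a hand-set reward function for RL-based
# value alignment: a programmer manually assigns and tunes a value for
# each state-action pair. All names and values here are invented.

HAND_SET_REWARDS = {
    ("pedestrian_near", "brake"): 1.0,
    ("pedestrian_near", "accelerate"): -10.0,  # magnitude tuned by hand
    ("road_clear", "accelerate"): 0.5,
}

def reward(state, action):
    """Return the hand-set value for a state-action pair (0.0 if unlisted)."""
    return HAND_SET_REWARDS.get((state, action), 0.0)
```

The paper's proposal is to replace such hand-tuned constants with values derived from a mathematical model of the relevant ethical considerations, which the authors report is more consistent.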
9

Ostaszewski, Pawel, and Katarzyna Karzel. "Discounting of Delayed and Probabilistic Losses of Different Amounts." European Psychologist 7, no. 4 (2002): 295–301. http://dx.doi.org/10.1027//1016-9040.7.4.295.

Full text
Abstract:
Previous research has shown that the value of a larger future reward is discounted less steeply than the value of a smaller future reward. The value of probabilistic reward has been shown to have either an opposite effect on discounting (when a smaller reward is not certain, its value was discounted less steeply than the value of a larger reward) or no effect on the rate of discounting at all. The present article shows the results for delayed and probabilistic losses: The same hyperbola-like functions describe temporal and probabilistic discounting of both rewards and losses. In the case of losses, larger delayed amounts are discounted less steeply than smaller delayed amounts (as in the case of delayed rewards). No relationship was found between the amount of probabilistic losses and the rate of discounting of their value. In summary, the results show that the description of the discounting process and the effect of amount on the rate of discounting are basically the same in the case of both rewards and losses.
10

Ichikawa, Yoshihiro, and Keiki Takadama. "Designing Internal Reward of Reinforcement Learning Agents in Multi-Step Dilemma Problem." Journal of Advanced Computational Intelligence and Intelligent Informatics 17, no. 6 (2013): 926–31. http://dx.doi.org/10.20965/jaciii.2013.p0926.

Full text
Abstract:
This paper proposes a reinforcement learning agent that estimates internal rewards from external rewards in order to avoid conflict in multi-step dilemma problems. Intensive simulation results revealed that the agent succeeds in avoiding local convergence and obtains a behavior policy that reaches a higher reward by updating the Q-value with a value obtained by subtracting the average reward from the external reward.
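The internal-reward idea in this abstract can be sketched as a standard Q-learning step in which the external reward is replaced by an internal reward, the external reward minus a running average reward. The function, names, and parameter values below are an illustrative reconstruction, not the paper's code.

```python
# Minimal sketch: Q-learning with an internal reward
# r_int = external reward - average reward. Illustrative only.

def q_update(q, state, action, r_ext, avg_reward, next_state,
             alpha=0.1, gamma=0.9):
    """One Q-value update driven by the internal reward r_ext - avg_reward."""
    r_int = r_ext - avg_reward
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q[state][action] += alpha * (r_int + gamma * best_next - q[state][action])
    return q

# Example update: external reward 1.0 against a running average of 0.4,
# so only the above-average portion (0.6) drives the value estimate.
q = {"s0": {"a": 0.0}, "s1": {"a": 1.0}}
q_update(q, "s0", "a", r_ext=1.0, avg_reward=0.4, next_state="s1")
```

Subtracting the average reward means a merely average external reward contributes nothing, which is one way an agent can avoid locking onto a locally convergent policy.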
11

Traner, Michael R., Ethan S. Bromberg-Martin, and Ilya E. Monosov. "How the value of the environment controls persistence in visual search." PLOS Computational Biology 17, no. 12 (2021): e1009662. http://dx.doi.org/10.1371/journal.pcbi.1009662.

Full text
Abstract:
Classic foraging theory predicts that humans and animals aim to gain maximum reward per unit time. However, in standard instrumental conditioning tasks individuals adopt an apparently suboptimal strategy: they respond slowly when the expected value is low. This reward-related bias is often explained as reduced motivation in response to low rewards. Here we present evidence this behavior is associated with a complementary increased motivation to search the environment for alternatives. We trained monkeys to search for reward-related visual targets in environments with different values. We found that the reward-related bias scaled with environment value, was consistent with persistent searching after the target was already found, and was associated with increased exploratory gaze to objects in the environment. A novel computational model of foraging suggests that this search strategy could be adaptive in naturalistic settings where both environments and the objects within them provide partial information about hidden, uncertain rewards.
12

Momanyi, George O., Maureen A. Adoyo, Eunice M. Mwangi, and Dennis O. Mokua. "Strengthening Strategic Reward Framework in Health Systems: A Survey of Narok County, Kenya." Global Journal of Health Science 9, no. 1 (2016): 181. http://dx.doi.org/10.5539/gjhs.v9n1p181.

Full text
Abstract:
BACKGROUND: Rewards are used to strengthen good behavior among employees, based on the general assumption that rewards motivate staff to improve organizational productivity. However, little information useful to human resources (HR) instruments exists on the extent to which rewards influence motivation among health workers (HWs). This study assessed the influence of rewards on motivation among HWs in Narok County, Kenya. METHODS: This was a cross-sectional study conducted in two sub-counties of Narok County. Data on reward availability, reward perceptions, the influence of rewards on performance, and the motivation level of HWs were collected using a self-administered questionnaire. SPSS version 21 was used for descriptive statistics, and factor analysis and multivariate regression using eigenvectors were used to assess the relationship between the reward intervention and HWs’ motivation. RESULTS: A majority of HWs, 175 (73.8%), had not received a reward for good performance. Only 3 (4.8%) of the respondents who had received rewards were not motivated by the reward they received. Overall, reward significantly predicted general motivation (p-value = 0.009). CONCLUSION: In Narok County, HR instruments have not utilized the reward systems known to motivate employees. In the study area, hard work was not acknowledged and rewarded accordingly, and there were insufficient opportunities for promotion in the county. An increased level of reward has the potential to motivate HWs to perform better. Therefore, providing rewards to employees to increase motivation is a strategy that the Narok County health system and its HR management should utilize.
13

Stanek, Jessica K., Kathryn C. Dickerson, Kimberly S. Chiew, Nathaniel J. Clement, and R. Alison Adcock. "Expected Reward Value and Reward Uncertainty Have Temporally Dissociable Effects on Memory Formation." Journal of Cognitive Neuroscience 31, no. 10 (2019): 1443–54. http://dx.doi.org/10.1162/jocn_a_01411.

Full text
Abstract:
Anticipating rewards has been shown to enhance memory formation. Although substantial evidence implicates dopamine in this behavioral effect, the precise mechanisms remain ambiguous. Because dopamine nuclei have been associated with two distinct physiological signatures of reward prediction, we hypothesized two dissociable effects on memory formation. These two signatures are a phasic dopamine response immediately following a reward cue that encodes its expected value and a sustained, ramping response that has been demonstrated during high reward uncertainty [Fiorillo, C. D., Tobler, P. N., & Schultz, W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science, 299, 1898–1902, 2003]. Here, we show in humans that the impact of reward anticipation on memory for an event depends on its timing relative to these physiological signatures. By manipulating reward probability (100%, 50%, or 0%) and the timing of the event to be encoded (just after the reward cue versus just before expected reward outcome), we demonstrated the predicted double dissociation: Early during reward anticipation, memory formation was improved by increased expected reward value, whereas late during reward anticipation, memory formation was enhanced by reward uncertainty. Notably, although the memory benefits of high expected reward in the early interval were consolidation dependent, the memory benefits of high uncertainty in the later interval were not. These findings support the view that expected reward benefits memory consolidation via phasic dopamine release. The novel finding of a distinct memory enhancement, temporally consistent with sustained anticipatory dopamine release, points toward new mechanisms of memory modulation by reward now ripe for further investigation.
14

De Cock, Nathalie, Leentje Vervoort, Patrick Kolsteren, et al. "Adding a reward increases the reinforcing value of fruit." British Journal of Nutrition 117, no. 4 (2017): 611–20. http://dx.doi.org/10.1017/s0007114517000381.

Full text
Abstract:
Adolescents’ snack choices could be altered by increasing the reinforcing value (RV) of healthy snacks compared with unhealthy snacks. This study assessed whether the RV of fruit increased by linking it to a reward and whether this increased RV was comparable with the RV of unhealthy snacks alone. Moderation effects of sex, hunger, BMI z-scores and sensitivity to reward were also explored. The RV of snacks was assessed in a sample of 165 adolescents (15·1 (sd 1·5) years, 39·4 % boys and 17·4 % overweight) using a computerised food reinforcement task. Adolescents obtained points for snacks through mouse clicks (responses) following progressive ratio schedules of increasing response requirements. Participants were (computer) randomised to three experimental groups (1:1:1): fruit (n 53), fruit+reward (n 60) or unhealthy snacks (n 69). The RV was evaluated as the total number of responses and the breakpoint (the schedule at which the food reinforcement task was terminated). Multilevel regression analyses (total number of responses) and Cox’s proportional hazard regression models (breakpoint) were used. The total number of responses did not differ between fruit+reward and fruit (b −473; 95 % CI −1152, 205, P=0·17) or unhealthy snacks (b 410; 95 % CI −222, 1043, P=0·20). The breakpoint was slightly higher for fruit than for fruit+reward (HR 1·34; 95 % CI 1·00, 1·79, P=0·050), whereas no difference was observed between unhealthy snacks and fruit+reward (HR 0·86; 95 % CI 0·62, 1·18, P=0·34). No indication of moderation was found. Offering rewards slightly increases the RV of fruit and may be a promising strategy to increase healthy food choices. Future studies should, however, explore whether other rewards could reach larger effect sizes.
15

Bermudez, Maria A., and Wolfram Schultz. "Timing in reward and decision processes." Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1637 (2014): 20120468. http://dx.doi.org/10.1098/rstb.2012.0468.

Full text
Abstract:
Sensitivity to time, including the time of reward, guides the behaviour of all organisms. Recent research suggests that all major reward structures of the brain process the time of reward occurrence, including midbrain dopamine neurons, striatum, frontal cortex and amygdala. Neuronal reward responses in dopamine neurons, striatum and frontal cortex show temporal discounting of reward value. The prediction error signal of dopamine neurons includes the predicted time of rewards. Neurons in the striatum, frontal cortex and amygdala show responses to reward delivery and activities anticipating rewards that are sensitive to the predicted time of reward and the instantaneous reward probability. Together these data suggest that internal timing processes have several well characterized effects on neuronal reward processing.
16

Bermudez, Maria A., and Wolfram Schultz. "Reward Magnitude Coding in Primate Amygdala Neurons." Journal of Neurophysiology 104, no. 6 (2010): 3424–32. http://dx.doi.org/10.1152/jn.00540.2010.

Full text
Abstract:
Animals assess the values of rewards to learn and choose the best possible outcomes. We studied how single neurons in the primate amygdala coded reward magnitude, an important variable determining the value of rewards. A single, Pavlovian-conditioned visual stimulus predicted fruit juice to be delivered with one of three equiprobable volumes ( P = 1/3). A population of amygdala neurons showed increased activity after reward delivery, and almost one half of these responses covaried with reward magnitude in a monotonically increasing or decreasing fashion. A subset of the reward responding neurons were tested with two different probability distributions of reward magnitude; the reward responses in almost one half of them adapted to the predicted distribution and thus showed reference-dependent coding. These data suggest parametric reward value coding in the amygdala as a characteristic component of its function in reinforcement learning and economic decision making.
17

Smith, Kyle S., and Ann M. Graybiel. "Habit formation coincides with shifts in reinforcement representations in the sensorimotor striatum." Journal of Neurophysiology 115, no. 3 (2016): 1487–98. http://dx.doi.org/10.1152/jn.00925.2015.

Full text
Abstract:
Evaluating outcomes of behavior is a central function of the striatum. In circuits engaging the dorsomedial striatum, sensitivity to goal value is accentuated during learning, whereas outcome sensitivity is thought to be minimal in the dorsolateral striatum and its habit-related corticostriatal circuits. However, a distinct population of projection neurons in the dorsolateral striatum exhibits selective sensitivity to rewards. Here, we evaluated the outcome-related signaling in such neurons as rats performed an instructional T-maze task for two rewards. As the rats formed maze-running habits and then changed behavior after reward devaluation, we detected outcome-related spike activity in 116 units out of 1,479 recorded units. During initial training, nearly equal numbers of these units fired preferentially either after rewarded runs or after unrewarded runs, and the majority were responsive at only one of two reward locations. With overtraining, as habits formed, firing in nonrewarded trials almost disappeared, and reward-specific firing declined. Thus error-related signaling was lost, and reward signaling became generalized. Following reward devaluation, in an extinction test, postgoal activity was nearly undetectable, despite accurate running. Strikingly, when rewards were then returned, postgoal activity reappeared and recapitulated the original early response pattern, with nearly equal numbers responding to rewarded and unrewarded runs and to single rewards. These findings demonstrate that outcome evaluation in the dorsolateral striatum is highly plastic and tracks stages of behavioral exploration and exploitation. These signals could be a new target for understanding compulsive behaviors that involve changes to dorsal striatum function.
18

Peng, Zihan, Jinhui Zhao, and Xiaobin Fan. "Exploring the Influencing Factors of Cooperative Behavior Based on Social Value Orientation and Reward Approach Perspective." Journal of Education, Humanities and Social Sciences 15 (June 13, 2023): 163–67. http://dx.doi.org/10.54097/ehss.v15i.9214.

Full text
Abstract:
This study deploys a public goods game experiment on oTree's online behavioural experimental research platform to investigate the effects of introducing different types of rewards on the cooperative behaviour of two types of groups. There are four experimental categories. In the public goods game experiment, subjects first completed a triple response matrix to determine their individual social attributes, and then conducted four different types of investment decision experiments in turn. The results showed that subjects with different social attributes exhibited different investment decision behaviours in teamwork, and that different types of rewards also influenced the subjects' cooperative behaviour, but no significant evidence was found that the interaction between individual social attributes and reward type affected cooperative behaviour. Further regression analysis found that material rewards had a more significant effect on changing subjects' behaviour than mental rewards, and that both continuous and fixed-interval material rewards had a significant positive effect on subjects' cooperative behaviour. The results point to different reward methods, how team members respond to them, and how to establish a positive link between reward incentives and cooperation to facilitate the achievement of cooperative goals.
APA, Harvard, Vancouver, ISO, and other styles
19

Hori, Yukiko, Takafumi Minamimoto, and Minoru Kimura. "Neuronal Encoding of Reward Value and Direction of Actions in the Primate Putamen." Journal of Neurophysiology 102, no. 6 (2009): 3530–43. http://dx.doi.org/10.1152/jn.00104.2009.

Full text
Abstract:
Decision making and action selection are influenced by the values of benefit, reward, cost, and punishment. Mapping of the positive and negative values of external events and actions occurs mainly via the discharge rates of neurons in the cerebral cortex, the amygdala, and the basal ganglia. However, it remains unclear how the reward values of external events and actions encoded in the basal ganglia are integrated into reward value-based control of limb-movement actions through the corticobasal ganglia loops. To address this issue, we investigated the activities of presumed projection neurons in the putamen of macaque monkeys performing a visually instructed GO–NOGO button-press task for large and small rewards. Regression analyses of neuronal discharge rates, actions, and reward values revealed three major categories of neurons. First, neurons activated during the preinstruction delay period were selective to either the GO(large reward)–NOGO(small reward) or NOGO(large reward)–GO(small reward) combinations, although the actions to be instructed were not predictable. Second, during the postinstruction epoch, GO and NOGO action-related activities were highly selective to reward size. The pre- and postinstruction activities of a large subset of neurons were also selective to cue position or GO-response direction. Third, neurons activated during both the pre- and postinstruction epochs were selective to both action and reward size. The results support the view that putamen neurons encode reward value and direction of actions, which may be a basis for mediating the processes leading from reward-value mapping to guiding ongoing actions toward their expected outcomes and directions.
APA, Harvard, Vancouver, ISO, and other styles
20

Roesch, Matthew R., and Carl R. Olson. "Neuronal Activity in Primate Orbitofrontal Cortex Reflects the Value of Time." Journal of Neurophysiology 94, no. 4 (2005): 2457–71. http://dx.doi.org/10.1152/jn.00373.2005.

Full text
Abstract:
Neurons in monkey orbitofrontal cortex (OF) are known to respond to reward-predicting cues with a strength that depends on the value of the predicted reward as determined 1) by intrinsic attributes including size and quality and 2) by extrinsic factors including the monkey's state of satiation and awareness of what other rewards are currently available. We pose here the question whether another extrinsic factor critical to determining reward value—the delay expected to elapse before delivery—influences neuronal activity in OF. To answer this question, we recorded from OF neurons while monkeys performed a memory-guided saccade task in which a cue presented early in each trial predicted whether the delay before the monkey could respond and receive a reward of fixed size would be short or long. OF neurons tended to fire more strongly in response to a cue predicting a short delay. The tendency to fire more strongly in anticipation of a short delay was correlated across neurons with the tendency to fire more strongly before a large reward. We conclude that neuronal activity in OF represents the time-discounted value of the expected reward.
APA, Harvard, Vancouver, ISO, and other styles
21

Schultz, Wolfram. "Neuronal Reward and Decision Signals: From Theories to Data." Physiological Reviews 95, no. 3 (2015): 853–951. http://dx.doi.org/10.1152/physrev.00023.2014.

Full text
Abstract:
Rewards are crucial objects that induce learning, approach behavior, choices, and emotions. Whereas emotions are difficult to investigate in animals, the learning function is mediated by neuronal reward prediction error signals which implement basic constructs of reinforcement learning theory. These signals are found in dopamine neurons, which emit a global reward signal to striatum and frontal cortex, and in specific neurons in striatum, amygdala, and frontal cortex projecting to select neuronal populations. The approach and choice functions involve subjective value, which is objectively assessed by behavioral choices eliciting internal, subjective reward preferences. Utility is the formal mathematical characterization of subjective value and a prime decision variable in economic choice theory. It is coded as utility prediction error by phasic dopamine responses. Utility can incorporate various influences, including risk, delay, effort, and social interaction. Appropriate for formal decision mechanisms, rewards are coded as object value, action value, difference value, and chosen value by specific neurons. Although all reward, reinforcement, and decision variables are theoretical constructs, their neuronal signals constitute measurable physical implementations and as such confirm the validity of these concepts. The neuronal reward signals provide guidance for behavior while constraining the free will to act.
APA, Harvard, Vancouver, ISO, and other styles
22

Tricomi, Elizabeth, and Karolina M. Lempert. "Value and probability coding in a feedback-based learning task utilizing food rewards." Journal of Neurophysiology 113, no. 1 (2015): 4–13. http://dx.doi.org/10.1152/jn.00086.2014.

Full text
Abstract:
For the consequences of our actions to guide behavior, the brain must represent different types of outcome-related information. For example, an outcome can be construed as negative because an expected reward was not delivered or because an outcome of low value was delivered. Thus behavioral consequences can differ in terms of the information they provide about outcome probability and value. We investigated the role of the striatum in processing probability-based and value-based negative feedback by training participants to associate cues with food rewards and then employing a selective satiety procedure to devalue one food outcome. Using functional magnetic resonance imaging, we examined brain activity related to receipt of expected rewards, receipt of devalued outcomes, omission of expected rewards, omission of devalued outcomes, and expected omissions of an outcome. Nucleus accumbens activation was greater for rewarding outcomes than devalued outcomes, but activity in this region did not correlate with the probability of reward receipt. Activation of the right caudate and putamen, however, was largest in response to rewarding outcomes relative to expected omissions of reward. The dorsal striatum (caudate and putamen) at the time of feedback also showed a parametric increase correlating with the trialwise probability of reward receipt. Our results suggest that the ventral striatum is sensitive to the motivational relevance, or subjective value, of the outcome, while the dorsal striatum codes for a more complex signal that incorporates reward probability. Value and probability information may be integrated in the dorsal striatum, to facilitate action planning and allocation of effort.
APA, Harvard, Vancouver, ISO, and other styles
23

Butterworth, James, Nicholas Kelley, Elaine Boland, and Constantine Sedikides. "085 Effort Exertion and Good Sleep Interactively Increase the Subjective Value of the Future." Sleep 44, Supplement_2 (2021): A36. http://dx.doi.org/10.1093/sleep/zsab072.084.

Full text
Abstract:
Introduction: The present study examined how habitual variation in sleep quality shapes reward responsivity following effort exertion. Behavioural and neuroscientific theory and research suggest that expending effort leads to compensatory increases in reward responsivity. Converging evidence links the preference for larger-but-delayed rewards to increases in reward sensitivity in psychophysiological, psychopharmacological, and animal studies. Accordingly, we hypothesized that exerting mental effort would increase the preference for larger-but-delayed rewards (i.e., the subjective value of the future) insofar as these preferences reflect elevated reward responsivity. Furthermore, given that sleep shapes perceptions of effort and preferences for larger-but-delayed rewards, we hypothesized that this finding would be moderated by habitual variation in sleep quality, with the strongest effects apparent among participants reporting habitually good sleep. Methods: To test these hypotheses, we recruited 79 participants to complete a 10-minute effortful (vs. control) writing task followed by a delay discounting task and the Pittsburgh Sleep Quality Index. Results: As hypothesized, participants in the effortful writing (vs. control) condition demonstrated a greater preference for larger-but-delayed rewards (vs. smaller-but-immediate rewards). This effect was moderated by sleep quality, with those high but not low in sleep quality showing the hypothesized effect. Conclusion: Ultimately, we found that exerting mental effort increases the subjective value of the future, particularly among participants who habitually report good sleep. These results suggest that good sleep quality helps us contend with the effortful demands of daily life in a way that promotes long-term goal pursuit.
APA, Harvard, Vancouver, ISO, and other styles
24

Lindeman, Sander, Xiaochen Fu, Janine Kristin Reinert, and Izumi Fukunaga. "Value-related learning in the olfactory bulb occurs through pathway-dependent perisomatic inhibition of mitral cells." PLOS Biology 22, no. 3 (2024): e3002536. http://dx.doi.org/10.1371/journal.pbio.3002536.

Full text
Abstract:
Associating values to environmental cues is a critical aspect of learning from experiences, allowing animals to predict and maximise future rewards. Value-related signals in the brain were once considered a property of higher sensory regions, but their wide distribution across many brain regions is increasingly recognised. Here, we investigate how reward-related signals begin to be incorporated, mechanistically, at the earliest stage of olfactory processing, namely, in the olfactory bulb. In head-fixed mice performing Go/No-Go discrimination of closely related olfactory mixtures, rewarded odours evoke widespread inhibition in one class of output neurons, that is, in mitral cells but not tufted cells. The temporal characteristics of this reward-related inhibition suggest it is odour-driven, but it is also context-dependent since it is absent during pseudo-conditioning and pharmacological silencing of the piriform cortex. Further, the reward-related modulation is present in the somata but not in the apical dendritic tuft of mitral cells, suggesting an involvement of circuit components located deep in the olfactory bulb. Depth-resolved imaging from granule cell dendritic gemmules suggests that granule cells that target mitral cells receive a reward-related extrinsic drive. Thus, our study supports the notion that value-related modulation of olfactory signals is a characteristic of olfactory processing in the primary olfactory area and narrows down the possible underlying mechanisms to deeper circuit components that contact mitral cells perisomatically.
APA, Harvard, Vancouver, ISO, and other styles
25

Bnaya, Zahy, Alon Palombo, Rami Puzis, and Ariel Felner. "Confidence Backup Updates for Aggregating MDP State Values in Monte-Carlo Tree Search." Proceedings of the International Symposium on Combinatorial Search 6, no. 1 (2021): 156–60. http://dx.doi.org/10.1609/socs.v6i1.18378.

Full text
Abstract:
Monte-Carlo Tree Search (MCTS) algorithms estimate the value of MDP states based on rewards received by performing multiple random simulations. MCTS algorithms can use different strategies to aggregate these rewards and provide an estimation for the states’ values. The most common aggregation method is to store the mean reward of all simulations. Another common approach stores the best observed reward from each state. Both of these methods have complementary benefits and drawbacks. In this paper, we show that both of these methods are biased estimators for the real expected value of MDP states. We propose a hybrid approach that uses the best reward for states with low noise, and otherwise uses the mean. Experimental results on the Sailing MDP domain show that our method has a considerable advantage when the rewards are drawn from a noisy distribution.
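The hybrid backup rule described in this abstract can be sketched as follows. This is not the authors' implementation: the use of sample standard deviation as the noise measure and the threshold value are illustrative assumptions.

```python
import statistics

def aggregate_state_value(rewards, noise_threshold=0.1):
    """Hybrid MCTS backup: return the best observed simulation reward
    when the rewards for a state are low-noise, otherwise the mean.
    Noise measure (sample stdev) and threshold are assumptions."""
    if not rewards:
        return 0.0
    if len(rewards) < 2:
        return rewards[0]
    noise = statistics.stdev(rewards)
    return max(rewards) if noise < noise_threshold else statistics.mean(rewards)
```

With nearly identical rewards the rule backs up the best value observed; with noisy rewards it falls back to the mean, trading optimism for a lower-variance estimate.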
APA, Harvard, Vancouver, ISO, and other styles
26

Blaukopf, Clare L., and Gregory J. DiGirolamo. "Reward, Context, and Human Behaviour." Scientific World JOURNAL 7 (2007): 626–40. http://dx.doi.org/10.1100/tsw.2007.122.

Full text
Abstract:
Animal models of reward processing have revealed an extensive network of brain areas that process different aspects of reward, from expectation and prediction to calculation of relative value. These results have been confirmed and extended in human neuroimaging to encompass secondary rewards more unique to humans, such as money. The majority of the extant literature covers the brain areas associated with rewards whilst neglecting analysis of the actual behaviours that these rewards generate. This review strives to redress this imbalance by illustrating the importance of looking at the behavioural outcome of rewards and the context in which they are produced. Following a brief review of the literature of reward-related activity in the brain, we examine the effect of reward context on actions. These studies reveal how the presence of reward vs. reward-and-punishment, or being conscious vs. unconscious of reward-related actions, differentially influence behaviour. The latter finding is of particular importance given the extent to which animal models are used in understanding the reward systems of the human mind. It is clear that further studies are needed to learn about the human reaction to reward in its entirety, including any distinctions between conscious and unconscious behaviours. We propose that studies of reward entail a measure of the animal's (human or nonhuman) knowledge of the reward and knowledge of its own behavioural outcome to achieve that reward.
APA, Harvard, Vancouver, ISO, and other styles
27

Chuang, Yun-Shiuan, Yu-Shiang Su, and Joshua O. S. Goh. "Neural responses reveal associations between personal values and value-based decisions." Social Cognitive and Affective Neuroscience 15, no. 12 (2020): 1299–309. http://dx.doi.org/10.1093/scan/nsaa150.

Full text
Abstract:
Personal values are thought to modulate value-based decisions, but the neural mechanisms underlying this influence remain unclear. Using a Lottery Choice Task functional brain imaging experiment, we examined the associations between personal value for hedonism and security (based on the Schwartz Value Survey) and subjective neurocognitive processing of reward and loss probability and magnitude objectively coded in stimuli. Hedonistic individuals accepted more losing stakes and showed increased right dorsolateral prefrontal and striatal and left parietal responses with increasing probability of losing. Individuals prioritizing security rejected more stakes and showed reduced right inferior frontal and amygdala responses with increasing stake magnitude, but increased precuneus responses for high-magnitude, high-winning-probability stakes. With higher hedonism, task-related functional connectivity with the whole brain was higher in right insula and lower in bilateral habenula. For those with higher security ratings, whole-brain functional connectivity was higher in bilateral insula, supplementary motor areas, right superior frontal gyrus, and dorsal anterior cingulate cortex, and lower in right middle occipital gyrus. These findings highlight distinct neural engagement across brain systems involved in reward and affective processing and cognitive control, which subserves how individual differences in personal value for gaining rewards or maintaining the status quo modulate value-based decisions.
APA, Harvard, Vancouver, ISO, and other styles
28

Farhan Pratama and Bangun Putra Prasetya. "Pengaruh Reward Terhadap Motivasi Kerja Pada Karyawan Badan Waqaf Al-Qur’an Cabang Yogyakarta." Manajemen Kreatif Jurnal 2, no. 2 (2024): 85–92. http://dx.doi.org/10.55606/makreju.v2i2.3078.

Full text
Abstract:
This research aims to explain the partial influence of the reward variable on employees' work motivation. It uses a quantitative method with purposive sampling (sample size of 20), collecting data through questionnaires, and applies descriptive statistical analysis. Validity testing was used to determine the significance of the influence of rewards on work motivation: the six questions on the influence of rewards on work motivation showed correlation values of 0.738, 0.731, 0.665, 0.853, 0.678, and 0.693, with significance values of 0.00 and 0.01, so they can be declared valid. The reliability statistics for the reward-variable questions yielded a Cronbach's Alpha of 0.654; this exceeds the minimum value of 0.60, so the questions on the reward variable are declared reliable.
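The reliability statistic reported in this abstract (Cronbach's Alpha of 0.654 against a 0.60 minimum) follows the standard formula, sketched below. This is an illustrative implementation, not the study's analysis code, and the 0.60 cutoff is a common rule of thumb rather than a fixed standard.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k questionnaire items.

    `items` is a list of k lists; items[i][j] is respondent j's score
    on item i. Alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items.
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly consistent items yield an alpha of 1.0; weaker inter-item agreement pulls the value down toward (and below) zero.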
APA, Harvard, Vancouver, ISO, and other styles
29

Diao, Feiyu, Xiaoqian Hu, Tingkang Zhang, et al. "The Impact of Reward Object on Object-Based Attention." Behavioral Sciences 14, no. 6 (2024): 505. http://dx.doi.org/10.3390/bs14060505.

Full text
Abstract:
Reward has been shown to influence selective attention, yet previous research has primarily focused on rewards associated with specific locations or features, with limited investigation into the impact of a reward object on object-based attention (OBA). It therefore remains unclear whether objects previously associated with rewards affect OBA. To address this issue, we conducted two experiments using a paradigm that combined a reward training phase with a modified two-rectangle paradigm. The results indicate that a reward object modulates both space-based attention (SBA) and OBA. When cues appear on a reward object, the effects of both SBA and OBA are amplified compared to when cues appear on a no-reward object. This finding supports the value-driven attentional capture (VDAC) theory, which suggests that a reward object gains enhanced saliency to capture attention, thereby providing theoretical support for the treatment of conditions such as drug addiction.
APA, Harvard, Vancouver, ISO, and other styles
30

Mary, Rose. "A STUDY ON THE EFFECTIVENESS OF REWARD MANAGEMENT AS A MOTIVATIONAL TOOL IN THE CONSTRUCTION INDUSTRY IN DUBAI." International Journal of Innovations & Research Analysis 04, no. 03(I) (2024): 176–83. http://dx.doi.org/10.62823/ijira/4.3(i).6912.

Full text
Abstract:
An effective employee reward system is vital for organizations. It provides a structured framework for recognizing employees based on their contributions, skills, and market value, aligning with the organization's reward philosophy. Rewards help communicate values and set performance expectations, encouraging behaviors that support organizational goals. When creating a reward system, consider: “What motivates employees?” “What behaviors do we want to encourage?” and “How can rewards support those behaviors?” Employees should be viewed as stakeholders in the reward system, with a say in the policies that affect them. The system must be fair, consistent, and transparent so that employees understand how rewards affect them. This research explores reward management as a key motivational tool in Dubai's construction industry, collecting primary data from expatriates with zero to four years of experience. The objective is to evaluate the effectiveness of both monetary and non-monetary rewards in enhancing employee motivation and to uncover insights into workforce perceptions. Based on the data analysis, the study presents actionable recommendations for a robust rewards package designed to significantly enhance employee motivation.
APA, Harvard, Vancouver, ISO, and other styles
31

Sari, S.K.M, Nungki Liana. "HUBUNGAN PEMBERIAN PENGHARGAAN (REWARD) DENGAN KINERJA PERAWAT DI RAWAT INAP EKSEKUTIF RUMAH SAKIT UMUM DAERAH KABUPATEN JOMBANG TAHUN 2020." Jurnal Wiyata: Penelitian Sains dan Kesehatan 10, no. 1 (2023): 63. http://dx.doi.org/10.56710/wiyata.v10i1.705.

Full text
Abstract:
Nurses are health professionals who provide services 24 hours a day and play a decisive role in improving quality of care; their performance and the hospital's reward system can affect the quality of hospital services. Purpose: to examine the relationship between the giving of rewards and nurse performance at Jombang District Hospital. Method: a quantitative study with a cross-sectional design and total sampling technique; the population and sample comprised 30 nurses in executive inpatient care at Jombang District Hospital. Results (Spearman's rho at 0.05 significance): salary p-value 0.041; work conditions p-value 0.273; interpersonal relationships p-value 0.995; total extrinsic reward p-value 0.054; achievement p-value 0.018; self-recognition p-value 0.017; the work itself p-value 0.012; responsibility p-value 0.114; potential development p-value 0.054; total intrinsic reward p-value 0.028. The variables correlated with nurse performance at Jombang District Hospital require managerial review and evaluation. Conclusion: there is a relationship between giving rewards and nurse performance. Suggestions: increase nurses' motivation through relevant training and simulations, and pay attention to nurses' remuneration.
APA, Harvard, Vancouver, ISO, and other styles
32

Cwibi, Mzukisi. "What do hotel managers think of employee rewards? An exploration of five-star hotels in Cape Town." International Conference on Tourism Research 6, no. 1 (2023): 462–69. http://dx.doi.org/10.34190/ictr.6.1.1292.

Full text
Abstract:
Reward systems are important tools that management can use to motivate employees; the main objective of organizations in awarding rewards is to attract and retain efficient, productive, and motivated employees. However, there is no evidence available regarding managers' perceptions of employee rewards in five-star hotels in Cape Town. Therefore, this study aims to explore the perceptions of five-star hotel managers about the reward systems offered to employees. Further, this paper attempts to explore the influence and impact of the Covid-19 pandemic on the employee reward systems offered at five-star hotels. A total of 14 interviews were conducted with managers working in four selected five-star hotels. The study used semi-structured interviews to collect qualitative data. The data were analysed using Creswell’s six steps. The study's findings indicate that managers offered distinct types of rewards to their employees, including extrinsic and intrinsic rewards. Managers revealed that extrinsic rewards, specifically money, are the most preferred rewards. The study revealed that the impact of the Covid-19 pandemic led to hotels adjusting their employee reward systems to offer fewer extrinsic rewards and more intrinsic rewards. This paper concludes by recommending strategies to hotel management for enhancing the types of rewards offered to employees and for making effective use of intrinsic rewards, to ensure that employees come to value intrinsic rewards as much as they value extrinsic rewards. Implications for future research are also presented.
APA, Harvard, Vancouver, ISO, and other styles
33

Armus, H. L. "Effect of Response Effort on the Reward Value of Distinctively Flavored Food Pellets." Psychological Reports 88, no. 3_suppl (2001): 1031–34. http://dx.doi.org/10.2466/pr0.2001.88.3c.1031.

Full text
Abstract:
This study was designed to test whether distinctively flavored food pellets, used as rewards for lever-pressing by rats, would acquire different reward values as a function of the differential effort involved in making the lever-pressing response, as would be predicted from the concept of cognitive dissonance. Subjects were seven Long-Evans strain hooded rats, 301–308 days old at the start of the study, whose body weights had been reduced to 80% of their free-feeding weights. Testing was done in a Y-maze, with food pellets associated with the difficult lever-press response serving as the reward for one choice and pellets associated with the easy lever-press response for the other. Analysis showed no preference for either the “easy” or the “difficult” pellets; the choice did not differ significantly from chance. This indicates that the effort involved in making a response did not affect the reward strength of food pellets when they were used to reward a response with a different topography.
APA, Harvard, Vancouver, ISO, and other styles
34

Locey, Matthew L., Bryan A. Jones, and Howard Rachlin. "Real and hypothetical rewards in self-control and social discounting." Judgment and Decision Making 6, no. 6 (2011): 552–64. http://dx.doi.org/10.1017/s1930297500002515.

Full text
Abstract:
Laboratory studies of choice and decision making among real monetary rewards typically use smaller real rewards than those common in real life. When laboratory rewards are large, they are almost always hypothetical. In applying laboratory results meaningfully to real-life situations, it is important to know the extent to which choices among hypothetical rewards correspond to choices among real rewards and whether variation of the magnitude of hypothetical rewards affects behavior in meaningful ways. The present study compared real and hypothetical monetary rewards in two experiments. In Experiment 1, participants played a temporal discounting game that incorporates the logic of a repeated prisoner’s-dilemma (PD) game versus tit-for-tat; choice of one alternative (“defection” in PD terminology) resulted in a small-immediate reward; choice of the other alternative (“cooperation” in PD terminology) resulted in a larger reward delayed until the following trial. The larger-delayed reward was greater for half of the groups than for the other half. Rewards also differed in type across groups: multiples of real nickels, hypothetical nickels, or hypothetical hundred-dollar bills. All groups significantly increased choice of the larger delayed reward over the 40 trials of the experiment. Over the last 10 trials, cooperation was significantly higher when the difference between larger and smaller hypothetical rewards was greater. Reward type (real or hypothetical) made no significant difference in cooperation on most measures. In Experiment 2, real and hypothetical rewards were compared in social discounting—the decrease in value to the giver of a reward as social distance increases to the receiver of the reward. Social discount rates were well described by a hyperbolic function. Discounting rates for real and hypothetical rewards did not significantly differ.
These results add to the evidence that results of experiments with hypothetical rewards validly apply in everyday life.
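The hyperbolic function that described the social discount rates in this study is conventionally written as v = V / (1 + kN), where N is social distance and k is the discount rate. The sketch below uses an illustrative k, not a value estimated from the study's data.

```python
def social_discounted_value(amount, social_distance, k=0.05):
    """Hyperbolic social discounting: the value to the giver of a reward
    of size `amount` for a person at social distance N is V / (1 + k*N).
    k = 0.05 is purely illustrative."""
    return amount / (1.0 + k * social_distance)
```

At distance 0 (oneself) the reward keeps its full value; with k = 0.05 it drops to half its value at a social distance of 20.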
APA, Harvard, Vancouver, ISO, and other styles
35

Pelletier, Gabriel, and Lesley K. Fellows. "Value Neglect: A Critical Role for Ventromedial Frontal Lobe in Learning the Value of Spatial Locations." Cerebral Cortex 30, no. 6 (2020): 3632–43. http://dx.doi.org/10.1093/cercor/bhz331.

Full text
Abstract:
Whether you are a gazelle bounding to the richest tract of grassland or a return customer heading to the freshest farm stand at a crowded market, the ability to learn the value of spatial locations is important in adaptive behavior. The ventromedial frontal lobe (VMF) is implicated in value-based decisions between objects and in flexibly learning to choose between objects based on feedback. However, it is unclear if this region plays a material-general role in reward learning. Here, we tested whether VMF is necessary for learning the value of spatial locations. People with VMF damage were compared with healthy participants and a control group with frontal damage sparing VMF in an incentivized spatial search task. Participants chose among spatial targets distributed among distractors, rewarded with an expected value that varied along the right-left axis of the screen. People with VMF damage showed a weaker tendency to reap reward in contralesional hemispace. In some individuals, this impairment could be dissociated from the ability to make value-based decisions between objects, assessed separately. This is the first evidence that the VMF is critically involved in reward-guided spatial search and offers a novel perspective on the relationships between value, spatial attention, and decision-making.
APA, Harvard, Vancouver, ISO, and other styles
36

Miller, Eric M., Maya U. Shankar, Brian Knutson, and Samuel M. McClure. "Dissociating Motivation from Reward in Human Striatal Activity." Journal of Cognitive Neuroscience 26, no. 5 (2014): 1075–84. http://dx.doi.org/10.1162/jocn_a_00535.

Full text
Abstract:
Neural activity in the striatum has consistently been shown to scale with the value of anticipated rewards. As a result, it is common across a number of neuroscientific subdisciplines to associate activation in the striatum with anticipation of a rewarding outcome or a positive emotional state. However, most studies have failed to dissociate expected value from the motivation associated with seeking a reward. Although motivation generally scales positively with increases in potential reward, there are circumstances in which this linkage does not apply. The current study dissociates value-related activation from that induced by motivation alone by employing a task in which motivation increased as anticipated reward decreased. This design reverses the typical relationship between motivation and reward, allowing us to differentially investigate fMRI BOLD responses that scale with each. We report that activity scaled differently with value and motivation across the striatum. Specifically, responses in the caudate and putamen increased with motivation, whereas nucleus accumbens activity increased with expected reward. Consistent with this, self-report ratings indicated a positive association between caudate and putamen activity and arousal, whereas activity in the nucleus accumbens was more associated with liking. We conclude that there exist regional limits on inferring reward expectation from striatal activation.
APA, Harvard, Vancouver, ISO, and other styles
37

Vakhrushev, Roman, Felicia Pei-Hsin Cheng, Anne Schacht, and Arezoo Pooresmaeili. "Differential effects of intra-modal and cross-modal reward value on perception: ERP evidence." PLOS ONE 18, no. 6 (2023): e0287900. http://dx.doi.org/10.1371/journal.pone.0287900.

Full text
Abstract:
In natural environments objects comprise multiple features from the same or different sensory modalities but it is not known how perception of an object is affected by the value associations of its constituent parts. The present study compares intra- and cross-modal value-driven effects on behavioral and electrophysiological correlates of perception. Human participants first learned the reward associations of visual and auditory cues. Subsequently, they performed a visual discrimination task in the presence of previously rewarded, task-irrelevant visual or auditory cues (intra- and cross-modal cues, respectively). During the conditioning phase, when reward associations were learned and reward cues were the target of the task, high value stimuli of both modalities enhanced the electrophysiological correlates of sensory processing in posterior electrodes. During the post-conditioning phase, when reward delivery was halted and previously rewarded stimuli were task-irrelevant, cross-modal value significantly enhanced the behavioral measures of visual sensitivity, whereas intra-modal value produced only an insignificant decrement. Analysis of the simultaneously recorded event-related potentials (ERPs) of posterior electrodes revealed similar findings. We found an early (90–120 ms) suppression of ERPs evoked by high-value, intra-modal stimuli. Cross-modal stimuli led to a later value-driven modulation, with an enhancement of response positivity for high- compared to low-value stimuli starting at the N1 window (180–250 ms) and extending to the P3 (300–600 ms) responses. These results indicate that sensory processing of a compound stimulus comprising a visual target and task-irrelevant visual or auditory cues is modulated by the reward value of both sensory modalities, but such modulations rely on distinct underlying mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
38

Olsen, Jesse E. "Societal values and individual values in reward allocation preferences." Cross Cultural Management 22, no. 2 (2015): 187–200. http://dx.doi.org/10.1108/ccm-09-2013-0130.

Full text
Abstract:
Purpose – Prior research suggests that cultural values affect individuals’ preferences in whether work rewards (i.e. pay and benefits) are allocated according to rules based on equity, equality, or need. However, this research has focussed primarily on societal-level values or individual-level operationalizations of values originally conceptualized at the societal level. Drawing on equity and social exchange theories, the purpose of this paper is to present a theoretical model and nine propositions that incorporate both individual and societal values as determinants of these reward allocation rule preferences. Design/methodology/approach – The author briefly reviews the relevant literature on values and reward allocation preferences and presents arguments supported by prior research, leading to a model and nine propositions. Findings – The author proposes that societal values and individual values have main and interactive effects on reward allocation preferences and that the effects of societal values are partially mediated by individual values. Research limitations/implications – The model and propositions present relationships that could be tested in future multi-level studies. Future conceptual/theoretical work may also build on the model presented in this paper. Practical implications – The proposed relationships, if supported, would have important implications for organizational reward systems and staffing. Originality/value – Prior research on reward allocation preferences focusses mostly on the effects of societal or individual values. This theoretical paper attempts to clarify and distinguish values at these two levels and to better understand their main and interactive effects on individual reward allocation rule preferences.
APA, Harvard, Vancouver, ISO, and other styles
39

Kwan, Y., J. Lee, S. Hwang, and S. Choi. "Social Hypersensitivity in Bipolar Disorder: An ERP Study." European Psychiatry 65, S1 (2022): S159. http://dx.doi.org/10.1192/j.eurpsy.2022.425.

Full text
Abstract:
Introduction Bipolar Disorder (BD) is a disorder in which cognitive function is relatively preserved but social functioning is markedly impaired. Interestingly, studies on BD show that patients have a strong desire for social rewards. Hypersensitivity to social rewards in BD has not yet been sufficiently examined through experimental methods, although recent studies have pointed out that reward hypersensitivity is a cause of symptoms and dysfunction. Objectives The purpose of this study was to investigate whether patients with BD are hypersensitive to social rewards, using the social value attention capture task. Methods Groups of 25 patients with BD and 25 healthy controls (HC) each completed the social value attention capture task. This task consists of a practice phase, in which associative learning of social rewards with specific stimuli occurs, and a test phase, in which the stimuli associated with rewards appear as distractors while participants perform a selective attention task. We also recorded event-related potentials (ERPs) in the practice phase in order to investigate BD patients’ cortical activity for social reward. Results The BD group showed a significantly decreased accuracy rate and increased reaction time in the high social reward-associated distractor trials of the test phase compared to the HC group. In the analysis of ERP components, P3 amplitude for social reward was significantly greater in the BD group than in the HC group. Conclusions BD patients exhibit behavioral and physiological hypersensitivity to social rewards that might contribute to social dysfunction. Disclosure No significant relationships.
APA, Harvard, Vancouver, ISO, and other styles
40

Schultz, Wolfram. "Predictive Reward Signal of Dopamine Neurons." Journal of Neurophysiology 80, no. 1 (1998): 1–27. http://dx.doi.org/10.1152/jn.1998.80.1.1.

Full text
Abstract:
Schultz, Wolfram. Predictive reward signal of dopamine neurons. J. Neurophysiol. 80: 1–27, 1998. The effects of lesions, receptor blocking, electrical self-stimulation, and drugs of abuse suggest that midbrain dopamine systems are involved in processing reward information and learning approach behavior. Most dopamine neurons show phasic activations after primary liquid and food rewards and conditioned, reward-predicting visual and auditory stimuli. They show biphasic, activation-depression responses after stimuli that resemble reward-predicting stimuli or are novel or particularly salient. However, only a few phasic activations follow aversive stimuli. Thus dopamine neurons label environmental stimuli with appetitive value, predict and detect rewards, and signal alerting and motivating events. By failing to discriminate between different rewards, dopamine neurons appear to emit an alerting message about the surprising presence or absence of rewards. All responses to rewards and reward-predicting stimuli depend on event predictability. Dopamine neurons are activated by rewarding events that are better than predicted, remain uninfluenced by events that are as good as predicted, and are depressed by events that are worse than predicted. By signaling rewards according to a prediction error, dopamine responses have the formal characteristics of a teaching signal postulated by reinforcement learning theories. Dopamine responses transfer during learning from primary rewards to reward-predicting stimuli. This may contribute to neuronal mechanisms underlying the retrograde action of rewards, one of the main puzzles in reinforcement learning. The impulse response releases a short pulse of dopamine onto many dendrites, thus broadcasting a rather global reinforcement signal to postsynaptic neurons. This signal may improve approach behavior by providing advance reward information before the behavior occurs, and may contribute to learning by modifying synaptic transmission.
The dopamine reward signal is supplemented by activity in neurons in striatum, frontal cortex, and amygdala, which process specific reward information but do not emit a global reward prediction error signal. A cooperation between the different reward signals may assure the use of specific rewards for selectively reinforcing behaviors. Among the other projection systems, noradrenaline neurons predominantly serve attentional mechanisms and nucleus basalis neurons code rewards heterogeneously. Cerebellar climbing fibers signal errors in motor performance or errors in the prediction of aversive events to cerebellar Purkinje cells. Most deficits following dopamine-depleting lesions are not easily explained by a defective reward signal but may reflect the absence of a general enabling function of tonic levels of extracellular dopamine. Thus dopamine systems may have two functions, the phasic transmission of reward information and the tonic enabling of postsynaptic neurons.
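The prediction-error coding summarized in this abstract is formally the error term of reinforcement-learning models such as Rescorla-Wagner and temporal-difference learning. A minimal sketch of that update rule (the learning rate and reward values below are illustrative, not taken from the article):

```python
def update_value(v, reward, alpha=0.1):
    """One learning step: return (new predicted value, prediction error).

    rpe > 0: reward better than predicted (phasic activation)
    rpe = 0: reward as predicted (no response change)
    rpe < 0: reward worse than predicted (depression)
    """
    rpe = reward - v
    return v + alpha * rpe, rpe

v = 0.0                       # initial predicted value of a cue
for _ in range(100):          # repeated cue-reward pairings
    v, rpe = update_value(v, reward=1.0)
# the cue's predicted value approaches the reward and the error vanishes,
# mirroring the transfer of dopamine responses from reward to predictor
```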
APA, Harvard, Vancouver, ISO, and other styles
41

Anderson, Brian A., and Haena Kim. "On the representational nature of value-driven spatial attentional biases." Journal of Neurophysiology 120, no. 5 (2018): 2654–58. http://dx.doi.org/10.1152/jn.00489.2018.

Full text
Abstract:
Reward learning biases attention toward both reward-associated objects and reward-associated regions of space. The relationship between objects and space in the value-based control of attention, as well as the contextual specificity of space-reward pairings, remains unclear. In the present study, using a free-viewing task, we provide evidence of overt attentional biases toward previously rewarded regions of texture scenes that lack objects. When scrutinizing a texture scene, participants look more frequently toward, and spend a longer amount of time looking at, regions that they have repeatedly oriented to in the past as a result of performance feedback. These biases were scene specific, such that different spatial contexts produced different patterns of habitual spatial orienting. Our findings indicate that reinforcement learning can modify looking behavior via a representation that is purely spatial in nature in a context-specific manner. NEW & NOTEWORTHY The representational nature of space in the value-driven control of attention remains unclear. Here, we provide evidence for scene-specific overt spatial attentional biases following reinforcement learning, even though the scenes contained no objects. Our findings indicate that reinforcement learning can modify looking behavior via a representation that is purely spatial in nature in a context-specific manner.
APA, Harvard, Vancouver, ISO, and other styles
42

Lak, Armin, William R. Stauffer, and Wolfram Schultz. "Dopamine neurons learn relative chosen value from probabilistic rewards." eLife 5 (October 27, 2016). http://dx.doi.org/10.7554/elife.18044.

Full text
Abstract:
Economic theories posit reward probability as one of the factors defining reward value. Individuals learn the value of cues that predict probabilistic rewards from experienced reward frequencies. Building on the notion that responses of dopamine neurons increase with reward probability and expected value, we asked how dopamine neurons in monkeys acquire this value signal that may represent an economic decision variable. We found in a Pavlovian learning task that reward probability-dependent value signals arose from experienced reward frequencies. We then assessed neuronal response acquisition during choices among probabilistic rewards. Here, dopamine responses became sensitive to the value of both chosen and unchosen options. Both experiments also showed novelty responses of dopamine neurons that decreased as learning advanced. These results show that dopamine neurons acquire predictive value signals from the frequency of experienced rewards. This flexible and fast signal reflects a specific decision variable and could update neuronal decision mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
43

Huang (黃飛揚), Fei-Yang, and Fabian Grabenhorst. "Nutrient-sensitive reinforcement learning in monkeys." Journal of Neuroscience, January 20, 2023, JN-RM-0752-22. http://dx.doi.org/10.1523/jneurosci.0752-22.2022.

Full text
Abstract:
In Reinforcement Learning (RL), animals choose by assigning values to options and learn by updating these values from reward outcomes. This framework has been instrumental in identifying fundamental learning variables and their neuronal implementations. However, canonical RL models do not explain how reward values are constructed from biologically critical intrinsic reward components, such as nutrients. From an ecological perspective, animals should adapt their foraging choices in dynamic environments to acquire nutrients that are essential for survival. Here, to advance the biological and ecological validity of RL models, we investigated how (male) monkeys adapt their choices to obtain preferred nutrient rewards under varying reward probabilities. We found that the rewards’ nutrient composition strongly influenced learning and choices. The animals’ preferences for specific nutrients (sugar, fat) affected how they adapted to changing reward probabilities: the history of recent rewards influenced monkeys’ choices more strongly if these rewards contained the monkey’s preferred nutrients (‘nutrient-specific reward history’). The monkeys also chose preferred nutrients even when they were associated with lower reward probability. A nutrient-sensitive RL model captured these processes: it updated the values of individual sugar and fat components of expected rewards based on experience and integrated them into subjective values that explained the monkeys’ choices. Nutrient-specific reward prediction errors guided this value-updating process. Our results identify nutrients as important reward components that guide learning and choice by influencing the subjective value of choice options. Extending RL models with nutrient-value functions may enhance their biological validity and uncover nutrient-specific learning and decision variables. SIGNIFICANCE STATEMENT: Reinforcement learning (RL) is an influential framework that formalizes how animals learn from experienced rewards.
Although ‘reward’ is a foundational concept in RL theory, canonical RL models cannot explain how learning depends on specific reward properties, such as nutrients. Intuitively, learning should be sensitive to the reward’s nutrient components, to benefit health and survival. Here we show that the nutrient (fat, sugar) composition of rewards affects monkeys’ choices and learning in an RL paradigm, and that key learning variables including ‘reward history’ and ‘reward prediction error’ should be modified with nutrient-specific components to account for monkeys’ behavior in our task. By incorporating biologically critical nutrient rewards into the RL framework, our findings help advance the ecological validity of RL models.
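The nutrient-sensitive value update described in this abstract can be sketched as componentwise Rescorla-Wagner learning followed by a weighted integration step. The learning rate, nutrient weights, and outcome values below are hypothetical illustrations, not the authors' fitted parameters:

```python
def update_nutrient_values(values, outcome, alpha=0.2):
    """Update each nutrient's value from its own prediction error."""
    return {n: v + alpha * (outcome[n] - v) for n, v in values.items()}

def subjective_value(values, weights):
    """Integrate per-nutrient values into one scalar subjective value."""
    return sum(weights[n] * values[n] for n in values)

values = {"sugar": 0.0, "fat": 0.0}     # learned per-nutrient values
weights = {"sugar": 0.7, "fat": 0.3}    # hypothetical sugar preference
for _ in range(50):                     # repeated reward outcomes
    values = update_nutrient_values(values, {"sugar": 1.0, "fat": 0.5})
sv = subjective_value(values, weights)  # drives the modeled choice
```

A stronger weight on a nutrient makes the history of rewards containing that nutrient count more toward the option's subjective value, which is the qualitative pattern the abstract reports.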
APA, Harvard, Vancouver, ISO, and other styles
44

Hill, Daniel F., Robert W. Hickman, Alaa Al-Mohammad, Arkadiusz Stasiak, and Wolfram Schultz. "Dopamine neurons encode trial-by-trial subjective reward value in an auction-like task." Nature Communications 15, no. 1 (2024). http://dx.doi.org/10.1038/s41467-024-52311-8.

Full text
Abstract:
The dopamine reward prediction error signal is known to be subjective but has so far only been assessed in aggregate choices. However, personal choices fluctuate across trials and thus reflect the instantaneous subjective reward value. In the well-established Becker-DeGroot-Marschak (BDM) auction-like mechanism, participants are encouraged to place bids that accurately reveal their instantaneous subjective reward value; inaccurate bidding results in suboptimal reward (“incentive compatibility”). In our experiment, male rhesus monkeys became experienced over several years to place accurate BDM bids for juice rewards without specific external constraints. Their bids for physically identical rewards varied trial by trial and increased overall for larger rewards. In these highly experienced animals, responses of midbrain dopamine neurons followed the trial-by-trial variations of bids despite constant, explicitly predicted reward amounts. Inversely, dopamine responses were similar with similar bids for different physical reward amounts. Support Vector Regression demonstrated accurate prediction of the animals’ bids by as few as twenty dopamine neurons. Thus, the phasic dopamine reward signal reflects instantaneous subjective reward value.
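The incentive compatibility of the BDM mechanism can be illustrated with a small simulation: the bid is compared against a randomly drawn price, and bidding one's true value maximizes expected payoff. The value, bid levels, and uniform price range below are illustrative, not the experiment's actual parameters:

```python
import random

def bdm_payoff(true_value, bid, price):
    """Net payoff of one BDM trial: win the reward and pay the drawn
    price if the bid covers it, otherwise no transaction."""
    return true_value - price if bid >= price else 0.0

def expected_payoff(true_value, bid, n=100_000, seed=0):
    """Monte Carlo estimate over prices drawn uniformly from [0, 1]."""
    rng = random.Random(seed)
    return sum(bdm_payoff(true_value, bid, rng.random())
               for _ in range(n)) / n

honest = expected_payoff(0.6, bid=0.6)   # truthful bid
over = expected_payoff(0.6, bid=0.9)     # overbidding risks overpaying
under = expected_payoff(0.6, bid=0.3)    # underbidding forgoes good deals
# honest bidding yields the highest expected payoff, which is why BDM
# bids can be read as reports of instantaneous subjective value
```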
APA, Harvard, Vancouver, ISO, and other styles
45

McDougle, Samuel D., Ian C. Ballard, Beth Baribault, Sonia J. Bishop, and Anne G. E. Collins. "Executive Function Assigns Value to Novel Goal-Congruent Outcomes." Cerebral Cortex, July 7, 2021. http://dx.doi.org/10.1093/cercor/bhab205.

Full text
Abstract:
People often learn from the outcomes of their actions, even when these outcomes do not involve material rewards or punishments. How does our brain provide this flexibility? We combined behavior, computational modeling, and functional neuroimaging to probe whether learning from abstract novel outcomes harnesses the same circuitry that supports learning from familiar secondary reinforcers. Behavior and neuroimaging revealed that novel images can act as a substitute for rewards during instrumental learning, producing reliable reward-like signals in dopaminergic circuits. Moreover, we found evidence that prefrontal correlates of executive control may play a role in shaping flexible responses in reward circuits. These results suggest that learning from novel outcomes is supported by an interplay between high-level representations in prefrontal cortex and low-level responses in subcortical reward circuits. This interaction may allow for human reinforcement learning over arbitrarily abstract reward functions.
APA, Harvard, Vancouver, ISO, and other styles
46

Ko, Woo Li, and Tae Ho Song. "Nonlinear Reward Gradient Behavior in Customer Reward and Loyalty Programs: Evidence From the Restaurant Industry." Journal of Hospitality & Tourism Research, February 9, 2024. http://dx.doi.org/10.1177/10963480231226083.

Full text
Abstract:
Reward programs offer rewards, and with further progression toward these rewards, customers become more motivated. This acceleration is called a reward gradient. This study proposes prospect theory as the underlying mechanism of the reward gradient and suggests a nonlinear relationship between motivation and progress level (low vs. middle vs. high). A numerical simulation based on the mathematical model and experiments were conducted to see how the expected values of rewards can change as progress is made, and how this further affects motivation. The results identified a stronger acceleration in the expected value after reaching a middle point of the reward achievement, which is due to a larger deviation in the loss value than the gain value. These findings can help restaurants and tourism companies implement a dynamic perspective in reward programs for enhancing customers’ reward gradient behaviors.
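The loss-gain asymmetry this abstract invokes is the loss-aversion property of the prospect-theory value function. A sketch using Tversky and Kahneman's commonly cited parameter estimates (the point magnitude is illustrative, and this is not the paper's fitted model):

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper (by the loss-aversion factor lam) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

gain = pt_value(50.0)    # subjective value of gaining 50 reward points
loss = pt_value(-50.0)   # subjective value of losing 50 reward points
# abs(loss) > gain: the deviation in the loss value exceeds the gain
# value, which can accelerate motivation past a reward's midpoint
```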
APA, Harvard, Vancouver, ISO, and other styles
47

Xu, Shiyang, Senqing Qi, Haijun Duan, et al. "Task Difficulty Regulates How Conscious and Unconscious Monetary Rewards Boost the Performance of Working Memory: An Event-Related Potential Study." Frontiers in Systems Neuroscience 15 (January 13, 2022). http://dx.doi.org/10.3389/fnsys.2021.716961.

Full text
Abstract:
The performance of working memory can be improved by corresponding high-value vs. low-value rewards, whether presented consciously or unconsciously. However, it is unknown whether the boost that conscious and unconscious monetary rewards give to working memory performance is regulated by the difficulty level of the working memory task. In this study, a novel paradigm consisting of a reward-priming procedure and an N-back task with differing levels of difficulty was designed to examine this process. In particular, both high-value and low-value coins were presented consciously or unconsciously as reward cues, followed by the N-back task, during which electroencephalogram signals were recorded. The high-value reward elicited a larger event-related potential (ERP) component P3 over the parietal area (reflecting working memory load) than the low-value reward for the less difficult 1-back task, no matter whether the reward was unconsciously or consciously presented. In contrast, this was not the case for the more difficult 2-back task, in which the difference in P3 amplitude between the high-value and low-value rewards was not significant for the unconscious reward case, yet was significant for conscious reward processing. Interestingly, the behavioral analysis exhibited patterns very similar to the ERP patterns. Therefore, this study demonstrated that the difficulty level of a task can modulate the influence of unconscious reward on the performance of working memory.
APA, Harvard, Vancouver, ISO, and other styles
48

Giuffrida, Valentina, Isabel Beatrice Marc, Surabhi Ramawat, et al. "Reward prospect affects strategic adjustments in stop signal task." Frontiers in Psychology 14 (March 17, 2023). http://dx.doi.org/10.3389/fpsyg.2023.1125066.

Full text
Abstract:
Interaction with the environment requires us to predict the potential reward that will follow our choices. Rewards can change depending on the context, and our behavior adapts accordingly. Previous studies have shown that, depending on reward regimes, actions can be facilitated (i.e., by increasing the reward for responding) or interfered with (i.e., by increasing the reward for suppression). Here we studied how a change in reward perspective can influence subjects’ adaptation strategy. Students were asked to perform a modified version of the Stop-Signal task. Specifically, at the beginning of each trial, a Cue Signal informed subjects of the value of the reward they would receive; in one condition, Go Trials were rewarded more than Stop Trials, in another, Stop Trials were rewarded more than Go Trials, and in the last, both trial types were rewarded equally. Subjects participated in a virtual competition, and the reward consisted of points to be earned to climb the leaderboard and win (as in a video game contest). The sum of points earned was updated with each trial. After a learning phase in which the three conditions were presented separately, each subject performed a 600-trial testing phase in which the three conditions were randomly mixed. Based on previous studies, we hypothesized that subjects could employ different strategies to perform the task, including modulating inhibition efficiency, adjusting response speed, or keeping behavior constant across contexts. We found that subjects preferentially employed a strategy based on adjusting response speed, while the duration of the inhibition process did not change significantly across conditions.
The investigation of strategic motor adjustments to reward prospects is relevant not only for understanding how action control is typically regulated, but also for work on the various groups of patients who exhibit cognitive control deficits, suggesting that the ability to inhibit can be modulated by employing reward prospects as motivational factors.
APA, Harvard, Vancouver, ISO, and other styles
49

Pearson, Daniel, Meihui Piao, and Mike Le Pelley. "EXPRESS: Value-modulated attentional capture is augmented by win-related sensory cues." Quarterly Journal of Experimental Psychology, February 20, 2023, 174702182311603. http://dx.doi.org/10.1177/17470218231160368.

Full text
Abstract:
Attentional prioritization of stimuli in the environment plays an important role in overt choice. Previous research shows that prioritization is influenced by the magnitude of paired rewards, in that stimuli signalling high-value rewards are more likely to capture attention than stimuli signalling low-value rewards; and this attentional bias has been proposed to play a role in addictive and compulsive behaviours. A separate line of research has shown that win-related sensory cues can bias overt choices. However, the role that these cues play in attentional selection is yet to be investigated. Participants in the current study completed a visual search task in which they responded to a target shape in order to earn reward. The colour of a distractor signalled the magnitude of reward and type of feedback on each trial. Participants were slower to respond to the target when the distractor signalled high reward compared to when the distractor signalled low reward, suggesting that the high-reward distractors had increased attentional priority. Critically, the magnitude of this reward-related attentional bias was further increased for a high-reward distractor with post-trial feedback accompanied by win-related sensory cues. Participants also demonstrated an overt choice preference for the distractor that was associated with win-related sensory cues. These findings demonstrate that stimuli paired with win-related sensory cues are prioritized by the attention system over stimuli with equivalent physical salience and learned value. This attentional prioritization may have downstream implications for overt choices, especially in gambling contexts where win-related sensory cues are common.
APA, Harvard, Vancouver, ISO, and other styles
50

Liu, Cuijuan, Zhenxin Xiao, Yu Gao, Maggie Chuoyan Dong, and Shanxing Gao. "Understanding the spillover effects of manufacturer-initiated reward on observers’ compliance: a social learning perspective." Journal of Business & Industrial Marketing, November 28, 2022. http://dx.doi.org/10.1108/jbim-02-2022-0078.

Full text
Abstract:
Purpose Although manufacturer-initiated rewards are widely used to secure distributors’ compliance, the spillover effect on unrewarded distributors (i.e. observers) in the same distribution channel is under-researched. Using insights from social learning theory, this paper aims to investigate how manufacturer-initiated rewards affect observers’ expectation of reward and shape observers’ compliance toward the manufacturer. Furthermore, this paper explores how such effects are contingent upon distributor relationship features. Design/methodology/approach To test the hypotheses, hierarchical multiple regression and bootstrapping analyses were performed using survey data from 280 Chinese distributors. Findings The magnitude of a manufacturer-initiated reward to a distributor stimulates expectation of reward among observers, which enhances compliance; observers’ expectation of reward mediates the impact of reward magnitude on compliance. Moreover, network centrality (of the rewarded peer) negatively moderates the positive impact of reward magnitude on observers’ expectation of reward, whereas observers’ dependence (on the manufacturer) positively moderates this dynamic. Practical implications Manufacturers should pay attention to the spillover effects of rewards. Overall, they should use rewards of appropriate magnitude to show willingness to recognize outstanding distributors. This will inspire unrewarded distributors, which will then be more compliant. Furthermore, manufacturers should know that specific types of distributor relationship features may significantly vary the spillover effects. Originality/value This study illuminates the spillover effects of manufacturer-initiated reward by opening the “black box” of the link between reward magnitude and observers’ compliance and by specifying the effects’ boundary conditions.
APA, Harvard, Vancouver, ISO, and other styles