Academic literature on the topic 'Reinforcement Schedules'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Reinforcement Schedules.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Reinforcement Schedules"

1. Latham, Gary P., and Vandra L. Huber. "Schedules of Reinforcement." Journal of Organizational Behavior Management 12, no. 1 (1991): 125–49. http://dx.doi.org/10.1300/j075v12n01_06.

2. Pérez, Omar D., Michael R. F. Aitken, Peter Zhukovsky, Fabián A. Soto, Gonzalo P. Urcelay, and Anthony Dickinson. "Human instrumental performance in ratio and interval contingencies: A challenge for associative theory." Quarterly Journal of Experimental Psychology 72, no. 2 (2018): 311–21. http://dx.doi.org/10.1080/17470218.2016.1265996.

Abstract:
Associative learning theories regard the probability of reinforcement as the critical factor determining responding. However, the role of this factor in instrumental conditioning is not completely clear. In fact, free-operant experiments show that participants respond at a higher rate on variable ratio than on variable interval schedules even though the reinforcement probability is matched between the schedules. This difference has been attributed to the differential reinforcement of long inter-response times (IRTs) by interval schedules, which acts to slow responding. In the present study, we used a novel experimental design to investigate human responding under random ratio (RR) and regulated probability interval (RPI) schedules, a type of interval schedule that sets a reinforcement probability independently of the IRT duration. Participants responded on each type of schedule before a final choice test in which they distributed responding between two schedules similar to those experienced during training. Although response rates did not differ during training, the participants responded at a lower rate on the RPI schedule than on the matched RR schedule during the choice test. This preference cannot be attributed to a higher probability of reinforcement for long IRTs and questions the idea that similar associative processes underlie classical and instrumental conditioning.
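For readers unfamiliar with the schedule types contrasted in this abstract, the following is a minimal sketch (ours, not the authors' code; the function names and the RPI formula are assumptions based on the abstract's description). A random ratio (RR) schedule reinforces every response with a fixed probability, whereas a regulated probability interval (RPI) schedule targets a programmed reinforcement rate while keeping the per-response reinforcement probability independent of how long the subject paused before responding.

```python
import random

def rr_reinforced(p):
    """Random ratio (RR): each response is reinforced with a fixed probability p,
    so responding faster earns proportionally more reinforcers."""
    return random.random() < p

def rpi_reinforced(target_rate, local_response_rate):
    """Regulated probability interval (RPI), sketched from the abstract's description:
    the schedule aims at a programmed reinforcement rate (reinforcers per second),
    but the probability that any given response pays off does not depend on the
    preceding inter-response time. One simple realisation scales the per-response
    probability by the subject's recent response rate."""
    p = min(1.0, target_rate / max(local_response_rate, 1e-9))
    return random.random() < p
```

Under this sketch, pausing longer before a response raises the payoff probability on an ordinary interval schedule but not on the RPI schedule, which is the property the authors use to rule out differential reinforcement of long IRTs as an explanation.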
3. Mawhinney, Thomas C. "Trigger Pulling for Monetary Reinforcements by a Single Subject during Ninety-Nine Ten-Minute Sessions." Psychological Reports 75, no. 2 (1994): 812–14. http://dx.doi.org/10.2466/pr0.1994.75.2.812.

Abstract:
Reinforcement maximization by identifying and following switching rules that occurred on conFR/VI-10 sec. reinforcement schedules did not occur when the subject experienced conFR/VI-20 sec. reinforcement schedules. Exclusive preference for the schedule with the lower valued N on conFR-N/FR-N schedules occurred as predicted by both matching and maximization theories of operant choice behavior. Additional research is required to assess the reliability of the phenomenon observed and factors upon which its occurrence may depend.
4. Reed, Phil. "Human free-operant performance varies with a concurrent task: Probability learning without a task, and schedule-consistent with a task." Learning & Behavior 48, no. 2 (2020): 254–73. http://dx.doi.org/10.3758/s13420-019-00398-1.

Abstract:
Three experiments examined human rates and patterns of responding during exposure to various schedules of reinforcement with or without a concurrent task. In the presence of the concurrent task, performances were similar to those typically noted for nonhumans. Overall response rates were higher on medium-sized ratio schedules than on smaller or larger ratio schedules (Experiment 1), on interval schedules with shorter than longer values (Experiment 2), and on ratio compared with interval schedules with the same rate of reinforcement (Experiment 3). Moreover, bout-initiation responses were more susceptible to influence by rates of reinforcement than were within-bout responses across all experiments. In contrast, in the absence of a concurrent task, human schedule performance did not always display characteristics of nonhuman performance, but tended to be related to the relationship between rates of responding and reinforcement (feedback function), irrespective of the schedule of reinforcement employed. This was also true of within-bout responding, but not bout-initiations, which were not affected by the presence of a concurrent task. These data suggest the existence of two strategies for human responding on free-operant schedules, relatively mechanistic ones that apply to bout-initiation, and relatively explicit ones, that tend to apply to within-bout responding, and dominate human performance when other demands are not made on resources.
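The 'feedback function' mentioned in this abstract is the mapping from response rate to obtained reinforcement rate. As a rough illustration (our own sketch, using a textbook-style approximation rather than anything from the paper): on a ratio schedule reinforcement rate grows linearly with response rate, while on an interval schedule it saturates near the programmed maximum.

```python
def ratio_feedback(response_rate, ratio_n):
    """Ratio schedule: on average every ratio_n-th response is reinforced,
    so reinforcement rate rises linearly with response rate."""
    return response_rate / ratio_n

def interval_feedback(response_rate, mean_interval):
    """Variable-interval schedule (a common approximation): collecting a reinforcer
    takes roughly the scheduled interval plus one inter-response time, so the
    reinforcement rate levels off near 1 / mean_interval however fast one responds."""
    return 1.0 / (mean_interval + 1.0 / response_rate)
```

Doubling the response rate doubles earnings on the ratio schedule but yields only a marginal gain on the interval schedule, which is why feedback-function accounts predict different response rates on the two schedule types.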
5. Shah, K., C. M. Bradshaw, and E. Szabadi. "Performance of Humans in Concurrent Variable-Ratio Variable-Ratio Schedules of Monetary Reinforcement." Psychological Reports 65, no. 2 (1989): 515–20. http://dx.doi.org/10.2466/pr0.1989.65.2.515.

Abstract:
Four women pressed a button in five two-component concurrent variable-ratio variable-ratio (conc VR VR) schedules of monetary reinforcement. There was no consistent tendency towards “probability matching” (distribution of responses between the two components in proportion to the relative probabilities of reinforcement); three of the four subjects showed exclusive preference for the schedule associated with the higher probability of reinforcement. These results are similar to results previously obtained with pigeons and rats in concurrent VR VR schedules.
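The two choice rules contrasted in this abstract can be stated in a few lines. The sketch below (illustrative only; the function names and the example schedule values are ours) computes the response allocation predicted by "probability matching" and by exclusive preference on a concurrent VR VR schedule.

```python
def matching_prediction(p_left, p_right):
    """Probability matching: allocate responses in proportion to the relative
    reinforcement probabilities of the two components."""
    return p_left / (p_left + p_right)

def exclusive_preference(p_left, p_right):
    """Exclusive preference (maximising): respond only on the richer component."""
    return 1.0 if p_left >= p_right else 0.0

# Example: conc VR 30 VR 60, i.e. per-response reinforcement probabilities 1/30 and 1/60.
print(matching_prediction(1 / 30, 1 / 60))    # ~0.67 of responses on the richer key
print(exclusive_preference(1 / 30, 1 / 60))   # 1.0: all responses on the richer key
```

Three of the four subjects in the study behaved like the second rule rather than the first.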
6. Nuijten, Raoul, Pieter Van Gorp, Alireza Khanshan, et al. "Health Promotion through Monetary Incentives: Evaluating the Impact of Different Reinforcement Schedules on Engagement Levels with a mHealth App." Electronics 10, no. 23 (2021): 2935. http://dx.doi.org/10.3390/electronics10232935.

Abstract:
Background: Financial rewards can be employed in mHealth apps to effectively promote health behaviors. However, the optimal reinforcement schedule—with a high impact, but relatively low costs—remains unclear. Methods: We evaluated the impact of different reinforcement schedules on engagement levels with an mHealth app in a six-week, three-arm randomized intervention trial, while taking into account personality differences. Participants (i.e., university staff and students, N = 61) were awarded virtual points for performing health-related activities. Their performance was displayed via a dashboard, leaderboard, and newsfeed. Additionally, participants could win financial rewards. These rewards were distributed using a fixed schedule in the first study arm, and a variable schedule in the other arms. Furthermore, payouts were immediate in the first two arms, whereas payouts in the third arm were delayed. Results: All three reinforcement schedules had a similar impact on user engagement, although the variable schedule with immediate payouts was reported to have the lowest cost per participant. Additionally, the impact of financial rewards was affected by personal characteristics. In particular, individuals who were triggered by the rewards had a greater ability to defer gratification. Conclusion: When employing financial rewards in mHealth apps, variable reinforcement schedules with immediate payouts are preferred from the perspective of cost and impact.
7. Schuett, Mary Andrews, and J. Michael Leibowitz. "Effects of Divergent Reinforcement Histories upon Differential Reinforcement Effectiveness." Psychological Reports 58, no. 2 (1986): 435–45. http://dx.doi.org/10.2466/pr0.1986.58.2.435.

Abstract:
The effectiveness of differential reinforcement techniques in reducing lever-pressing was studied as a function of natural reinforcement history and prescribed schedule. Based upon a prebaseline, 30 children with natural high rates of responding and 30 children with natural low rates of responding were reinforced for tapping an assigned key for 15 min. on either a differential reinforcement of low rate (drl 5″) or a differential reinforcement of high rate (Conjunctive VR 10-drh 5″) schedule of reinforcement. Responding on the other key was then reinforced for 15 min. on a variable ratio (VR 35) schedule utilizing one of three differential reinforcement techniques to eliminate the previously taught response. Findings indicated that a child's natural history significantly influences subsequent rates of responding. Prescribed divergent schedules effected changes in responding only while the child was being reinforced on that schedule. The differential reinforcement techniques did not produce significant differences between subjects' performance on the new key but did affect responding on the previously reinforced key.
8. Steinhauer, Gene D. "Behavioral Contrast on Mixed Schedules." Psychological Reports 78, no. 2 (1996): 673–74. http://dx.doi.org/10.2466/pr0.1996.78.2.673.

Abstract:
Keypecking by 4 pigeons was studied on mixed schedules of reinforcement. Positive behavioral contrast was found when the schedule was shifted from Mixed VI VI to Mixed VI Extinction only when the VI schedule value was small relative to the component duration.
9. Ferster, C. B. "Schedules of Reinforcement with Skinner." Journal of the Experimental Analysis of Behavior 77, no. 3 (2002): 303–11. http://dx.doi.org/10.1901/jeab.2002.77-303.

10. Morse, W. H., and P. B. Dews. "Foreword to Schedules of Reinforcement." Journal of the Experimental Analysis of Behavior 77, no. 3 (2002): 313–17. http://dx.doi.org/10.1901/jeab.2002.77-313.
