
Journal articles on the topic 'Reinforcement Schedules'


Consult the top 50 journal articles for your research on the topic 'Reinforcement Schedules.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Latham, Gary P., and Vandra L. Huber. "Schedules of Reinforcement." Journal of Organizational Behavior Management 12, no. 1 (January 25, 1991): 125–49. http://dx.doi.org/10.1300/j075v12n01_06.

2

Pérez, Omar D., Michael R. F. Aitken, Peter Zhukovsky, Fabián A. Soto, Gonzalo P. Urcelay, and Anthony Dickinson. "Human instrumental performance in ratio and interval contingencies: A challenge for associative theory." Quarterly Journal of Experimental Psychology 72, no. 2 (January 1, 2018): 311–21. http://dx.doi.org/10.1080/17470218.2016.1265996.

Abstract:
Associative learning theories regard the probability of reinforcement as the critical factor determining responding. However, the role of this factor in instrumental conditioning is not completely clear. In fact, free-operant experiments show that participants respond at a higher rate on variable ratio than on variable interval schedules even though the reinforcement probability is matched between the schedules. This difference has been attributed to the differential reinforcement of long inter-response times (IRTs) by interval schedules, which acts to slow responding. In the present study, we used a novel experimental design to investigate human responding under random ratio (RR) and regulated probability interval (RPI) schedules, a type of interval schedule that sets a reinforcement probability independently of the IRT duration. Participants responded on each type of schedule before a final choice test in which they distributed responding between two schedules similar to those experienced during training. Although response rates did not differ during training, the participants responded at a lower rate on the RPI schedule than on the matched RR schedule during the choice test. This preference cannot be attributed to a higher probability of reinforcement for long IRTs and questions the idea that similar associative processes underlie classical and instrumental conditioning.
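The ratio-versus-interval contrast at issue in this abstract can be made concrete in code. Below is a minimal sketch (our own illustration, not the authors' procedure; function and parameter names are hypothetical) of why ordinary interval schedules differentially reinforce long inter-response times (IRTs) while ratio schedules do not:

```python
import random

def random_ratio(p):
    """Random-ratio (RR) contingency: every response is reinforced with
    a fixed probability p, regardless of the inter-response time (IRT)."""
    return lambda irt: random.random() < p

def variable_interval(mean_interval):
    """Interval contingency: the chance that a response is reinforced
    grows with the IRT that precedes it, because reinforcers 'set up'
    with the passage of time. This is what differentially reinforces
    long IRTs and slows responding."""
    return lambda irt: random.random() < min(1.0, irt / mean_interval)

rr = random_ratio(0.25)       # every press pays off with p = .25
vi = variable_interval(10.0)  # waiting ~10 s makes payoff near-certain
```

A regulated probability interval (RPI) schedule, as studied above, instead fixes the reinforcement probability independently of the IRT, removing the incentive to wait.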
3

Mawhinney, Thomas C. "Trigger Pulling for Monetary Reinforcements by a Single Subject during Ninety-Nine Ten-Minute Sessions." Psychological Reports 75, no. 2 (October 1994): 812–14. http://dx.doi.org/10.2466/pr0.1994.75.2.812.

Abstract:
Reinforcement maximization by identifying and following switching rules that occurred on conc FR/VI 10-sec. reinforcement schedules did not occur when the subject experienced conc FR/VI 20-sec. reinforcement schedules. Exclusive preference for the schedule with the lower valued N on conc FR-N/FR-N schedules occurred as predicted by both matching and maximization theories of operant choice behavior. Additional research is required to assess the reliability of the phenomenon observed and factors upon which its occurrence may depend.
4

Reed, Phil. "Human free-operant performance varies with a concurrent task: Probability learning without a task, and schedule-consistent with a task." Learning & Behavior 48, no. 2 (January 2, 2020): 254–73. http://dx.doi.org/10.3758/s13420-019-00398-1.

Abstract:
Three experiments examined human rates and patterns of responding during exposure to various schedules of reinforcement with or without a concurrent task. In the presence of the concurrent task, performances were similar to those typically noted for nonhumans. Overall response rates were higher on medium-sized ratio schedules than on smaller or larger ratio schedules (Experiment 1), on interval schedules with shorter than longer values (Experiment 2), and on ratio compared with interval schedules with the same rate of reinforcement (Experiment 3). Moreover, bout-initiation responses were more susceptible to influence by rates of reinforcement than were within-bout responses across all experiments. In contrast, in the absence of a concurrent task, human schedule performance did not always display characteristics of nonhuman performance, but tended to be related to the relationship between rates of responding and reinforcement (feedback function), irrespective of the schedule of reinforcement employed. This was also true of within-bout responding, but not bout-initiations, which were not affected by the presence of a concurrent task. These data suggest the existence of two strategies for human responding on free-operant schedules: relatively mechanistic ones that apply to bout-initiation, and relatively explicit ones that tend to apply to within-bout responding and dominate human performance when other demands are not made on resources.
5

Shah, K., C. M. Bradshaw, and E. Szabadi. "Performance of Humans in Concurrent Variable-Ratio Variable-Ratio Schedules of Monetary Reinforcement." Psychological Reports 65, no. 2 (October 1989): 515–20. http://dx.doi.org/10.2466/pr0.1989.65.2.515.

Abstract:
Four women pressed a button in five two-component concurrent variable-ratio variable-ratio (conc VR VR) schedules of monetary reinforcement. There was no consistent tendency towards “probability matching” (distribution of responses between the two components in proportion to the relative probabilities of reinforcement); three of the four subjects showed exclusive preference for the schedule associated with the higher probability of reinforcement. These results are similar to results previously obtained with pigeons and rats in concurrent VR VR schedules.
6

Nuijten, Raoul, Pieter Van Gorp, Alireza Khanshan, Pascale Le Blanc, Astrid Kemperman, Pauline van den Berg, and Monique Simons. "Health Promotion through Monetary Incentives: Evaluating the Impact of Different Reinforcement Schedules on Engagement Levels with a mHealth App." Electronics 10, no. 23 (November 26, 2021): 2935. http://dx.doi.org/10.3390/electronics10232935.

Abstract:
Background: Financial rewards can be employed in mHealth apps to effectively promote health behaviors. However, the optimal reinforcement schedule—with a high impact, but relatively low costs—remains unclear. Methods: We evaluated the impact of different reinforcement schedules on engagement levels with an mHealth app in a six-week, three-arm randomized intervention trial, while taking into account personality differences. Participants (i.e., university staff and students, N = 61) were awarded virtual points for performing health-related activities. Their performance was displayed via a dashboard, leaderboard, and newsfeed. Additionally, participants could win financial rewards. These rewards were distributed using a fixed schedule in the first study arm, and a variable schedule in the other arms. Furthermore, payouts were immediate in the first two arms, whereas payouts in the third arm were delayed. Results: All three reinforcement schedules had a similar impact on user engagement, although the variable schedule with immediate payouts was reported to have the lowest cost per participant. Additionally, the impact of financial rewards was affected by personal characteristics. In particular, individuals who were triggered by the rewards had a greater ability to defer gratification. Conclusion: When employing financial rewards in mHealth apps, variable reinforcement schedules with immediate payouts are preferred from the perspective of cost and impact.
7

Schuett, Mary Andrews, and J. Michael Leibowitz. "Effects of Divergent Reinforcement Histories upon Differential Reinforcement Effectiveness." Psychological Reports 58, no. 2 (April 1986): 435–45. http://dx.doi.org/10.2466/pr0.1986.58.2.435.

Abstract:
The effectiveness of differential reinforcement techniques in reducing lever-pressing was studied as a function of natural reinforcement history and prescribed schedule. Based upon a prebaseline, 30 children with natural high rates of responding and 30 children with natural low rates of responding were reinforced for tapping an assigned key for 15 min. on either a differential-reinforcement-of-low-rate (DRL 5-s) or a differential-reinforcement-of-high-rate (conjunctive VR 10 DRH 5-s) schedule of reinforcement. Responding on the other key was then reinforced for 15 min. on a variable-ratio (VR 35) schedule utilizing one of three differential reinforcement techniques to eliminate the previously taught response. Findings indicated that a child's natural history significantly influences subsequent rates of responding. Prescribed divergent schedules effected changes in responding only while the child was being reinforced on that schedule. The differential reinforcement techniques did not produce significant differences between subjects' performance on the new key but did affect responding on the previously reinforced key.
8

Steinhauer, Gene D. "Behavioral Contrast on Mixed Schedules." Psychological Reports 78, no. 2 (April 1996): 673–74. http://dx.doi.org/10.2466/pr0.1996.78.2.673.

Abstract:
Keypecking by 4 pigeons was studied on mixed schedules of reinforcement. Positive behavioral contrast was found when the schedule was shifted from Mixed VI VI to Mixed VI Extinction only when the VI schedule value was small relative to the component duration.
9

Ferster, C. B. "SCHEDULES OF REINFORCEMENT WITH SKINNER." Journal of the Experimental Analysis of Behavior 77, no. 3 (May 2002): 303–11. http://dx.doi.org/10.1901/jeab.2002.77-303.

10

Morse, W. H., and P. B. Dews. "FOREWORD TO SCHEDULES OF REINFORCEMENT." Journal of the Experimental Analysis of Behavior 77, no. 3 (May 2002): 313–17. http://dx.doi.org/10.1901/jeab.2002.77-313.

11

Killeen, Peter R., Diana Posadas-Sanchez, Espen Borgå Johansen, and Eric A. Thrailkill. "Progressive ratio schedules of reinforcement." Journal of Experimental Psychology: Animal Behavior Processes 35, no. 1 (2009): 35–50. http://dx.doi.org/10.1037/a0012497.

12

Nevin, John A. "Reinforcement schedules and “numerical competence”." Behavioral and Brain Sciences 11, no. 4 (December 1988): 594–95. http://dx.doi.org/10.1017/s0140525x00053619.

13

Doughty, Adam H., and Kennon A. Lattal. "Response Persistence under Variable-Time Schedules following Immediate and Unsignalled Delayed Reinforcement." Quarterly Journal of Experimental Psychology Section B 56, no. 3b (August 2003): 267–77. http://dx.doi.org/10.1080/02724990244000124.

Abstract:
Key pecking of three pigeons was maintained in separate components of a multiple schedule by either immediate reinforcement (i.e., tandem variable-time fixed-interval schedule) or unsignalled delayed reinforcement (i.e., tandem variable-interval fixed-time schedule). The relative rate of food delivery was equal across components, and this absolute rate differed across conditions. Immediate reinforcement always generated higher response rates than did unsignalled delayed reinforcement. Then, variable-time schedules of food delivery replaced the contingencies just described such that food was delivered at the same rate but independently of responding. In most cases, response rates decreased to near-zero levels. In addition, response persistence was not systematically different between multiple-schedule components across pigeons. The implications of the results for the concepts of response strength and the response-reinforcer relation are noted.
14

Williams, Ben A., and Paul Royalty. "CONDITIONED REINFORCEMENT VERSUS TIME TO REINFORCEMENT IN CHAIN SCHEDULES." Journal of the Experimental Analysis of Behavior 53, no. 3 (May 1990): 381–93. http://dx.doi.org/10.1901/jeab.1990.53-381.

15

Ren, Michelle, and Shahrdad Lotfipour. "Antibiotic Knockdown of Gut Bacteria Sex-Dependently Enhances Intravenous Fentanyl Self-Administration in Adult Sprague Dawley Rats." International Journal of Molecular Sciences 24, no. 1 (December 27, 2022): 409. http://dx.doi.org/10.3390/ijms24010409.

Abstract:
Communication between the brain and gut bacteria impacts drug- and addiction-related behaviors. To investigate the role of gut microbiota on fentanyl reinforcement and reward, we depleted gut bacteria in adult Sprague Dawley male and female rats using an oral, nonabsorbable antibiotic cocktail and allowed rats to intravenously self-administer fentanyl on an escalating schedule of reinforcement. We found that antibiotic treatment enhanced fentanyl self-administration in males, but not females, at the lowest schedule of reinforcement (i.e., fixed ratio 1). Both males and females treated with antibiotics self-administered greater amounts of fentanyl at higher schedules of reinforcement. We then repleted microbial metabolites via short-chain fatty acid administration to evaluate a potential mechanism in gut-brain communication and found that restoring metabolites decreased fentanyl self-administration back to control levels at higher fixed ratio schedules of reinforcement. Our findings highlight an important relationship between the knockdown and rescue of gut bacterial metabolites and fentanyl self-administration in adult rats, which provides support for a significant relationship between the gut microbiome and opioid use. Further work in this field may lead to effective, targeted treatment interventions in opioid-related disorders.
16

McMillan, D. E., and Mi Li. "DRUG DISCRIMINATION UNDER CONCURRENT REINFORCEMENT SCHEDULES." Behavioural Pharmacology 9, no. 1 (August 1998): S60. http://dx.doi.org/10.1097/00008877-199808000-00131.

17

McMillan, D. E., and Mi Li. "DRUG DISCRIMINATION UNDER CONCURRENT REINFORCEMENT SCHEDULES." Behavioural Pharmacology 9, Supplement (August 1998): S60. http://dx.doi.org/10.1097/00008877-199808001-00131.

18

McMillan, D. E., and Mi Li. "DRUG DISCRIMINATION UNDER CONCURRENT REINFORCEMENT SCHEDULES." Behavioural Pharmacology 9, no. 1 (August 1998): S60. http://dx.doi.org/10.1097/00008877-199812001-00131.

19

Keenan, Michael, and Liam Toal. "Periodic Reinforcement and Second-Order Schedules." Psychological Record 41, no. 1 (January 1991): 87–115. http://dx.doi.org/10.1007/bf03395096.

20

Burgess, I. S., and J. E. Wright. "Resistance to variable-time schedules produced by spaced-response reinforcement schedules." Behavioural Processes 11, no. 4 (November 1985): 389–404. http://dx.doi.org/10.1016/0376-6357(85)90004-x.

21

Kenworthy, Luke, Siddharth Nayak, Christopher Chin, and Hamsa Balakrishnan. "NICE: Robust Scheduling through Reinforcement Learning-Guided Integer Programming." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9821–29. http://dx.doi.org/10.1609/aaai.v36i9.21218.

Abstract:
Integer programs provide a powerful abstraction for representing a wide range of real-world scheduling problems. Despite their ability to model general scheduling problems, solving large-scale integer programs (IP) remains a computational challenge in practice. The incorporation of more complex objectives such as robustness to disruptions further exacerbates the computational challenge. We present NICE (Neural network IP Coefficient Extraction), a novel technique that combines reinforcement learning and integer programming to tackle the problem of robust scheduling. More specifically, NICE uses reinforcement learning to approximately represent complex objectives in an integer programming formulation. We use NICE to determine assignments of pilots to a flight crew schedule so as to reduce the impact of disruptions. We compare NICE with (1) a baseline integer programming formulation that produces a feasible crew schedule, and (2) a robust integer programming formulation that explicitly tries to minimize the impact of disruptions. Our experiments show that, across a variety of scenarios, NICE produces schedules resulting in 33% to 48% fewer disruptions than the baseline formulation. Moreover, in more severely constrained scheduling scenarios in which the robust integer program fails to produce a schedule within 90 minutes, NICE is able to build robust schedules in less than 2 seconds on average.
22

Huh, N., S. Jo, H. Kim, J. H. Sul, and M. W. Jung. "Model-based reinforcement learning under concurrent schedules of reinforcement in rodents." Learning & Memory 16, no. 5 (April 29, 2009): 315–23. http://dx.doi.org/10.1101/lm.1295509.

23

Jones, B. Max, and Michael Davison. "REPORTING CONTINGENCIES OF REINFORCEMENT IN CONCURRENT SCHEDULES." Journal of the Experimental Analysis of Behavior 69, no. 2 (March 1998): 161–83. http://dx.doi.org/10.1901/jeab.1998.69-161.

24

Haight, Patricia A., and Peter R. Killeen. "Adjunctive behavior in multiple schedules of reinforcement." Animal Learning & Behavior 19, no. 3 (September 1991): 257–63. http://dx.doi.org/10.3758/bf03197884.

25

Alessandri, Jérôme, and Carlos R. X. Cançado. "Human choice under schedules of negative reinforcement." Behavioural Processes 121 (December 2015): 70–73. http://dx.doi.org/10.1016/j.beproc.2015.10.015.

26

Falligant, John M., John T. Rapp, Kristen M. Brogan, and Jonathan W. Pinkston. "Response Force in Conjugate Schedules of Reinforcement." Psychological Record 68, no. 4 (August 15, 2018): 525–36. http://dx.doi.org/10.1007/s40732-018-0298-8.

27

Swiergosz, Matthew J., and Harvard L. Armus. "Secondary reinforcement strength with continuous primary reinforcement: Fixed-ratio and continuous secondary reinforcement schedules." Bulletin of the Psychonomic Society 26, no. 3 (September 1988): 252–53. http://dx.doi.org/10.3758/bf03337302.

28

Bradshaw, C. M., and E. Szabadi. "Herrnstein's Equation: Data from 110 Rats." Psychological Reports 73, no. 3_suppl (December 1993): 1355–61. http://dx.doi.org/10.2466/pr0.1993.73.3f.1355.

Abstract:
110 rats were trained under a series of variable-interval schedules of sucrose reinforcement (0.6 M, 50 μl), covering a wide range of scheduled interreinforcement intervals. Response and reinforcement rates recorded during the last five sessions of exposure to each schedule were used to fit Herrnstein's (1970) hyperbolic ‘response strength’ equation to the data from each rat. The equation accounted for >80% of the data variance in 90%, and >90% of the variance in 60%, of the sample. The distribution of the values of Rmax, the asymptote of the hyperbolic curve, did not depart significantly from normality. However, the distribution of the values of KH, the reinforcement rate needed to maintain the half-maximum response rate, was markedly skewed; logarithmically transformed values of KH conformed to a normal distribution. The data provide further support for the applicability of Herrnstein's equation to variable-interval performance; it is suggested that studies involving comparison of the parameters of the equation between groups of subjects should adopt logarithmic transformation of the values of KH.
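Herrnstein's (1970) hyperbola referred to in this abstract predicts response rate as R = Rmax · r / (r + KH), where r is the obtained reinforcement rate. A minimal sketch of the model (illustrative parameter values only, not data from the study):

```python
def herrnstein(r, r_max, k_h):
    """Herrnstein's (1970) 'response strength' hyperbola: predicted
    response rate at reinforcement rate r. r_max is the asymptotic
    response rate; k_h is the reinforcement rate that sustains half
    of r_max."""
    return r_max * r / (r + k_h)

# Hypothetical parameter values, for illustration only:
rates = [10.0, 30.0, 60.0, 120.0]  # reinforcers per hour
predicted = [herrnstein(r, r_max=80.0, k_h=30.0) for r in rates]
# When r equals k_h (here, 30 reinforcers/hour), the predicted
# response rate is exactly half the asymptote.
```

The skew the authors report in KH is why between-group comparisons are recommended on log KH rather than on KH itself.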
29

Silva, Francisco J., Ruhiyyih Yuille, and Lisa K. Peters. "A Method for Illustrating the Continuity of Behavior during Schedules of Reinforcement." Teaching of Psychology 27, no. 2 (April 2000): 145–48. http://dx.doi.org/10.1207/s15328023top2702_12.

Abstract:
In this article, we present a method for illustrating the continuity of behavior during schedules of reinforcement. Students experienced either a fixed-interval 15-sec schedule in which the first contact after 15 sec of a cursor on a computer screen with a 0.7-cm diameter virtual (invisible) target resulted in reinforcement (a beep) or a fixed-ratio 5 schedule in which every 5th contact with the target produced the reinforcer. In addition to illustrating the continuity of behavior, this method provides a means of exposing students to concepts and methods such as the acquisition of operant behavior, the assignment-of-credit problem, the organization of behavior across time, and the analysis of single-subject data.
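The two schedules used in this classroom demonstration reduce to a counter and a timer. A minimal sketch of that logic (our own code with hypothetical names, not the authors' software):

```python
def fixed_ratio(n):
    """FR-n: reinforce (beep) on every nth target contact."""
    count = 0
    def contact():
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True   # deliver the reinforcer
        return False
    return contact

def fixed_interval(seconds):
    """FI-t: reinforce the first target contact made after t seconds
    have elapsed since the previous reinforcer."""
    last = [0.0]
    def contact(now):
        if now - last[0] >= seconds:
            last[0] = now
            return True
        return False
    return contact

fr5 = fixed_ratio(5)
outcomes = [fr5() for _ in range(10)]   # contacts 5 and 10 are reinforced
```

On the FI 15-s schedule above, only the first contact after the interval elapses produces the beep, however many contacts occur in between.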
30

Killeen, Peter R. "Mathematical principles of reinforcement." Behavioral and Brain Sciences 17, no. 1 (March 1994): 105–35. http://dx.doi.org/10.1017/s0140525x00033628.

Abstract:
Effective conditioning requires a correlation between the experimenter's definition of a response and an organism's, but an animal's perception of its behavior differs from ours. These experiments explore various definitions of the response, using the slopes of learning curves to infer which comes closest to the organism's definition. The resulting exponentially weighted moving average provides a model of memory that is used to ground a quantitative theory of reinforcement. The theory assumes that incentives excite behavior and focus the excitement on responses that are contemporaneous in memory. The correlation between the organism's memory and the behavior measured by the experimenter is given by coupling coefficients, which are derived for various schedules of reinforcement. The coupling coefficients for simple schedules may be concatenated to predict the effects of complex schedules. The coefficients are inserted into a generic model of arousal and temporal constraint to predict response rates under any scheduling arrangement. The theory posits a response-indexed decay of memory, not a time-indexed one. It requires that incentives displace memory for the responses that occur before them, and may truncate the representation of the response that brings them about. As a contiguity-weighted correlation model, it bridges opposing views of the reinforcement process. By placing the short-term memory of behavior in so central a role, it provides a behavioral account of a key cognitive process.
31

Wylie, A. Michael, Michael P. Layng, and Kim A. Meyer. "SCHEDULE-INDUCED DEFECATION BY RATS DURING RATIO AND INTERVAL SCHEDULES OF FOOD REINFORCEMENT." Journal of the Experimental Analysis of Behavior 60, no. 3 (November 1993): 611–20. http://dx.doi.org/10.1901/jeab.1993.60-611.

32

Madden, Gregory J., and Michael Perone. "HUMAN SENSITIVITY TO CONCURRENT SCHEDULES OF REINFORCEMENT: EFFECTS OF OBSERVING SCHEDULE-CORRELATED STIMULI." Journal of the Experimental Analysis of Behavior 71, no. 3 (May 1999): 303–18. http://dx.doi.org/10.1901/jeab.1999.71-303.

33

Marcus, Bethany A., and Timothy R. Vollmer. "COMBINING NONCONTINGENT REINFORCEMENT AND DIFFERENTIAL REINFORCEMENT SCHEDULES AS TREATMENT FOR ABERRANT BEHAVIOR." Journal of Applied Behavior Analysis 29, no. 1 (March 1996): 43–51. http://dx.doi.org/10.1901/jaba.1996.29-43.

34

Cohen, Steven L., Jennifer Pedersen, Gene G. Kinney, and James Myers. "EFFECTS OF REINFORCEMENT HISTORY ON RESPONDING UNDER PROGRESSIVE-RATIO SCHEDULES OF REINFORCEMENT." Journal of the Experimental Analysis of Behavior 61, no. 3 (May 1994): 375–87. http://dx.doi.org/10.1901/jeab.1994.61-375.

35

Leslie, Julian C. "A history of reinforcement: The role of reinforcement schedules in behavior pharmacology." Behavior Analyst Today 4, no. 1 (2003): 98–108. http://dx.doi.org/10.1037/h0100017.

36

Alessandri, Jérôme, Carlos R. X. Cançado, and Josele Abreu-Rodrigues. "Effects of reinforcement value on instruction following under schedules of negative reinforcement." Behavioural Processes 145 (December 2017): 27–30. http://dx.doi.org/10.1016/j.beproc.2017.10.003.

37

Bell, Matthew C., and Ben A. Williams. "CONDITIONED REINFORCEMENT IN CHAIN SCHEDULES WHEN TIME TO REINFORCEMENT IS HELD CONSTANT." Journal of the Experimental Analysis of Behavior 99, no. 2 (January 14, 2013): 179–88. http://dx.doi.org/10.1002/jeab.10.

38

Falcomata, Terry S., Colin S. Muething, Bryant C. Silbaugh, Summer Adami, Katherine Hoffman, Cayenne Shpall, and Joel E. Ringdahl. "Lag Schedules and Functional Communication Training: Persistence of Mands and Relapse of Problem Behavior." Behavior Modification 42, no. 3 (November 24, 2017): 314–34. http://dx.doi.org/10.1177/0145445517741475.

Abstract:
We evaluated the effects of lag schedules of reinforcement and functional communication training (FCT) on mand variability and problem behavior in two children with autism spectrum disorder. Specifically, we implemented FCT with increasing lag schedules and compared its effects on problem behavior with baseline conditions. The results showed that both participants exhibited low rates of problem behavior during treatment relative to baseline during and following schedule thinning (up to a Lag 5 schedule arrangement). Variable and total mands remained high during schedule thinning. With one participant, variable manding persisted when the value of the lag schedule was reduced to zero. The current results are discussed in terms of implications for training multiple mand topographies during FCT for the potential prevention and/or mitigation of clinical relapse during challenges to treatment.
39

Muharib, Reem, and Robert C. Pennington. "My Student Cannot Wait! Teaching Tolerance Following Functional Communication Training." Beyond Behavior 28, no. 2 (June 20, 2019): 99–107. http://dx.doi.org/10.1177/1074295619852106.

Abstract:
Functional communication training (FCT) involves the reinforcement of an appropriate communicative response as an alternative to challenging behavior. The intervention has been identified as an evidence-based practice across multiple populations. Despite its extensive research support, FCT may be impractical in some educational settings because it often requires educators to reinforce alternative responses at high rates. In this discussion article, we describe three procedures (delay to reinforcement, chained schedules of reinforcement, and multiple schedules of reinforcement) that can be used following FCT in educational settings to teach students who exhibit challenging behaviors to tolerate waiting for a reinforcer.
40

Carr, Nicholas, and Janet Carr. "REINFORCEMENT SCHEDULES AND THE MANAGEMENT OF CHILDHOOD BEHAVIOURS." Behavioural and Cognitive Psychotherapy 27, no. 1 (January 1999): 89–96. http://dx.doi.org/10.1017/s1352465899271093.

Abstract:
Where a behaviour has been maintained on a variable schedule of reinforcement, it should theoretically be possible to reduce resistance to extinction by first putting the behaviour onto a continuous schedule of reinforcement. This approach has been employed in animal research but rarely with human participants, and where it has, with little success. This study describes the use of the approach to overcome some minor problems in the behaviour of young children, the problems being sufficiently troublesome for the parents to consult their GP. All the families who used the approach were successful in remediating the behaviour. Some reasons for this success, in contrast with the disappointing outcomes of some of the earlier research, are discussed. Although the study lacks formal controls, it is suggested that the approach could be usefully applied to other common childhood behaviours that have been subjected to variable reinforcement.
41

Wibowo, Agus. "APLIKASI REINFORCEMENT OLEH GURU MATA PELAJARAN DAN IMPLIKASINYA TERHADAP BIMBINGAN DAN KONSELING." GUIDENA: Jurnal Ilmu Pendidikan, Psikologi, Bimbingan dan Konseling 5, no. 2 (December 13, 2015): 16. http://dx.doi.org/10.24127/gdn.v5i2.321.

Abstract:
Abstract: This study originated from the problem that teachers still rarely apply positive reinforcement to behavior indicated by students in the learning process. The research objectives were to describe: 1) the level of reinforcement application, 2) the application of reinforcement schedules, 3) the types of reinforcement, and 4) the way subject teachers grant reinforcement. The study population was all students of class XI SMA Adabiah 2 Padang in the academic year 2012/2013, totalling 325 students. Samples were drawn with the Simple Random Sampling technique, yielding 176 students. The research is descriptive and quantitative, and was conducted in April 2013. The research instrument was a Semantic Differential scale. Analysis of the data using the hypothetical mean showed that the level of reinforcement application by subject teachers was in the high category, and that students interpreted the application of reinforcement schedules, the types of reinforcement, and the way subject teachers granted reinforcement in the learning process positively. Keywords: reinforcement, guidance, and counseling.
42

Baron, Alan, Jeffrey Mikorski, and Michael Schlund. "REINFORCEMENT MAGNITUDE AND PAUSING ON PROGRESSIVE-RATIO SCHEDULES." Journal of the Experimental Analysis of Behavior 58, no. 2 (September 1992): 377–88. http://dx.doi.org/10.1901/jeab.1992.58-377.

43

Leinenweber, Antoinette, Shannon M. Nietzel, and Alan Baron. "TEMPORAL CONTROL BY PROGRESSIVE-INTERVAL SCHEDULES OF REINFORCEMENT." Journal of the Experimental Analysis of Behavior 66, no. 3 (November 1996): 311–26. http://dx.doi.org/10.1901/jeab.1996.66-311.

44

Williams, Ben A. "CONDITIONED REINFORCEMENT DYNAMICS IN THREE-LINK CHAINED SCHEDULES." Journal of the Experimental Analysis of Behavior 67, no. 1 (January 1997): 145–59. http://dx.doi.org/10.1901/jeab.1997.67-145.

45

Lattal, Kennon A., Mark P. Reilly, and James P. Kohn. "RESPONSE PERSISTENCE UNDER RATIO AND INTERVAL REINFORCEMENT SCHEDULES." Journal of the Experimental Analysis of Behavior 70, no. 2 (September 1998): 165–83. http://dx.doi.org/10.1901/jeab.1998.70-165.

46

White, K. Geoffrey, Margaret-Ellen Pipe, Anthony P. McLean, and Selina Redman. "TEMPORAL PROXIMITY AND REINFORCEMENT SENSITIVITY IN MULTIPLE SCHEDULES." Journal of the Experimental Analysis of Behavior 44, no. 2 (September 1985): 207–15. http://dx.doi.org/10.1901/jeab.1985.44-207.

47

Alsop, Brent, and Michael Davison. "PREFERENCE FOR MULTIPLE VERSUS MIXED SCHEDULES OF REINFORCEMENT." Journal of the Experimental Analysis of Behavior 45, no. 1 (January 1986): 33–45. http://dx.doi.org/10.1901/jeab.1986.45-33.

48

Royalty, Paul, Ben A. Williams, and Edmund Fantino. "EFFECTS OF DELAYED CONDITIONED REINFORCEMENT IN CHAIN SCHEDULES." Journal of the Experimental Analysis of Behavior 47, no. 1 (January 1987): 41–56. http://dx.doi.org/10.1901/jeab.1987.47-41.

49

Liang, Yi, Zan Sun, Tianheng Song, Qiang Chou, Wei Fan, Jianping Fan, Yong Rui, et al. "Lenovo Schedules Laptop Manufacturing Using Deep Reinforcement Learning." INFORMS Journal on Applied Analytics 52, no. 1 (January 2022): 56–68. http://dx.doi.org/10.1287/inte.2021.1109.

Abstract:
Lenovo Research teamed with members of the factory operations group at Lenovo’s largest laptop manufacturing facility, LCFC, to replace a manual production scheduling system with a decision-making platform built on a deep reinforcement learning architecture. The system schedules production orders at all of LCFC’s 43 assembly manufacturing lines, balancing the relative priorities of production volume, changeover cost, and order fulfillment. The multiobjective optimization scheduling problem is solved using a deep reinforcement learning model. The approach combines high computing efficiency with a novel masking mechanism that enforces operational constraints to ensure that the machine-learning model does not waste time exploring infeasible solutions. The use of the new model transformed the production management process, enabling a 20% reduction in the backlog of production orders and a 23% improvement in the fulfillment rate. It also reduced the entire scheduling process from six hours to 30 minutes while retaining multiobjective flexibility to allow LCFC to adjust quickly to changing objectives. The work led to increased revenue of US $1.91 billion in 2019 and US $2.69 billion in 2020 for LCFC. The methodology can be applied to other scenarios in the industry.
50

Dickinson, Alyce M., and Alan D. Poling. "Schedules of Monetary Reinforcement in Organizational Behavior Management." Journal of Organizational Behavior Management 16, no. 1 (May 20, 1996): 71–91. http://dx.doi.org/10.1300/j075v16n01_05.

