Journal articles on the topic 'Reinforcement (Psychology) Choice (Psychology) in children'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Reinforcement (Psychology) Choice (Psychology) in children.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Martínez, Lourdes, Angeles F. Estévez, Luis J. Fuentes, and J. Bruce Overmier. "Improving conditional discrimination learning and memory in five-year-old children: Differential outcomes effect using different types of reinforcement." Quarterly Journal of Experimental Psychology 62, no. 8 (August 2009): 1617–30. http://dx.doi.org/10.1080/17470210802557827.

Abstract:
Previous studies have demonstrated that discriminative learning is facilitated when a particular outcome is associated with each relation to be learned. When this training procedure is applied (the differential outcomes procedure; DOP), learning is faster and better than when the typical common outcomes procedure or nondifferential outcomes (NDO) is used. Our primary purpose in the two experiments reported here was to assess the potential advantage of DOP in 5-year-old children using three different strategies of reinforcement in which (a) children received a reinforcer following a correct choice (“+”), (b) children lost a reinforcer following an incorrect choice (“−”), or (c) children received a reinforcer following a correct choice and lost one following an incorrect choice (“+/−”). In Experiment 1, we evaluated the effects of the presence of DOP and different types of reinforcement on learning and memory of a symbolic delayed matching-to-sample task using secondary and primary reinforcers. Experiment 2 was similar to the previous one except that only primary reinforcers were used. The results from these experiments indicated that, in general, children learned the task faster and showed higher performance and persistence of learning whenever differential outcomes were arranged independent of whether it was differential gain, loss, or combinations. A novel finding was that they performed the task better when they lost a reinforcer following an incorrect choice (type of training “−”) in both experiments. A further novel finding was that the advantage of the DOP over the nondifferential outcomes training increased in a retention test.
2

De Meyer, Hasse, Gail Tripp, Tom Beckers, and Saskia van der Oord. "Conditional Learning Deficits in Children with ADHD can be Reduced Through Reward Optimization and Response-Specific Reinforcement." Research on Child and Adolescent Psychopathology 49, no. 9 (April 1, 2021): 1165–78. http://dx.doi.org/10.1007/s10802-021-00781-5.

Abstract:
When children with ADHD are presented with behavioral choices, they struggle more than Typically Developing [TD] children to take into account contextual information necessary for making adaptive choices. The challenge presented by this type of behavioral decision making can be operationalized as a Conditional Discrimination Learning [CDL] task. We previously showed that CDL is impaired in children with ADHD. The present study explores whether this impairment can be remediated by increasing reward for correct responding or by reinforcing correct conditional choice behavior with situationally specific outcomes (Differential Outcomes). An arbitrary Delayed Matching-To-Sample [aDMTS] procedure was used, in which children had to learn to select the correct response given the sample stimulus presented (CDL). We compared children with ADHD (N = 45) and TD children (N = 49) on a baseline aDMTS task and sequentially adapted the aDMTS task so that correct choice behavior was rewarded with a more potent reinforcer (reward manipulation) or with sample-specific (and hence response-specific) reinforcers (Differential Outcomes manipulation). At baseline, children with ADHD performed significantly worse than TD children. Both manipulations (reward optimization and Differential Outcomes) improved performance in the ADHD group, resulting in a similar level of performance to the TD group. Increasing the reward value or the response-specificity of reinforcement enhances Conditional Discrimination Learning in children with ADHD. These behavioral techniques may be effective in promoting the learning of adaptive behavioral choices in children with ADHD.
3

Mokobane, Maria, Basil Pillay, Nicho Thobejane, and Anneke Meyer. "Delay aversion and immediate choice in Sepedi-speaking primary school children with attention-deficit/hyperactivity disorder." South African Journal of Psychology 50, no. 2 (September 27, 2019): 250–61. http://dx.doi.org/10.1177/0081246319876145.

Abstract:
Motivational factors play a significant role in the pathology of attention-deficit/hyperactivity disorder and are associated with altered reinforcement sensitivity. Delay aversion as a motivational style is characterised by a negative emotional reaction to the burden of delay. Children with attention-deficit/hyperactivity disorder have a stronger need to seek smaller immediate rewards rather than larger delayed rewards. This study ascertains whether children with attention-deficit/hyperactivity disorder have different responses when asked to choose between a larger delayed reward and a smaller immediate reward. Furthermore, it determines whether there are differences in response among the attention-deficit/hyperactivity disorder presentations. A sample (N = 188) of attention-deficit/hyperactivity disorder participants (n = 94) was compared with that of a group of children (n = 94) without attention-deficit/hyperactivity disorder. These children attended primary school in Limpopo Province, South Africa. The Two-Choice Impulsivity Paradigm computer task was administered. The results showed that children with attention-deficit/hyperactivity disorder–combined presentation selected significantly smaller immediate rewards over larger delayed rewards in comparison to the control group, whereas children with attention-deficit/hyperactivity disorder–predominantly inattentive and attention-deficit/hyperactivity disorder–hyperactive/impulsive presentations did not demonstrate a significant difference in choice compared to the control group. In addition, no effect for gender was found. Children with attention-deficit/hyperactivity disorder seem to present with impulsive responses, which lead them to complete the concerned task faster and thereby escape delay. The study confirmed that children with attention-deficit/hyperactivity disorder–combined presentation may face problems with waiting for delayed rewards, which could have negative consequences in social and academic situations.
4

Strain, Phillip S., and Frank W. Kohler. "Analyzing Predictors of Daily Social Skill Performance." Behavioral Disorders 21, no. 1 (November 1995): 79–88. http://dx.doi.org/10.1177/019874299502100108.

Abstract:
The purpose of this study was to examine the impact of play activities, teachers’ predictions of children's sociability, and intervention fidelity variables on the level of interaction between three preschoolers with autism and their typical peers. Children participated in daily play activity groups of three, including one youngster with autism and two peers. Following a baseline condition, all children in the class learned to exchange a range of prosocial overtures, including shares, play organizers, and assistance. Teachers then implemented an individual reinforcement contingency to maintain children's newly taught exchanges. Results indicated that social reciprocity and peer effort correlated most highly with target children's level of social interaction. Conversely, teachers’ choice of activity materials and predictions about sociability did not correlate with children's interactions during either experimental phase. These findings are discussed with regard to their implications for future social skills research and intervention.
5

Fantino, Edmund. "Choice, conditioned reinforcement, and the Prius effect." Behavior Analyst 31, no. 2 (October 2008): 95–111. http://dx.doi.org/10.1007/bf03392164.

6

Fantino, Edmund, Debra Freed, Ray A. Preston, and Wendy A. Williams. "CHOICE AND CONDITIONED REINFORCEMENT." Journal of the Experimental Analysis of Behavior 55, no. 2 (March 1991): 177–88. http://dx.doi.org/10.1901/jeab.1991.55-177.

7

Flora, Stephen R., and Matthew D. Workman. "Distributed Reinforcement during Delay to Large Reinforcement May Increase “Self-Control” in Rats." Psychological Reports 76, no. 3_suppl (June 1995): 1355–61. http://dx.doi.org/10.2466/pr0.1995.76.3c.1355.

Abstract:
Two groups of rats were tested for self-control. In Exp. 1 all rats were impulsive. In Exp. 2, when subjects entered one goal box, the rats would receive 3 pellets immediately, the impulsive choice. When Standard Group rats entered the other goal box, they received 7 pellets after a delay of 10 sec., the self-control choice. When Distributed Group rats made a self-control choice in Phase 1 they received 1 pellet immediately, 3 after 3 sec., and 3 pellets 7 sec. later (10 sec. total); in Phase 2 they received 1 pellet immediately and 6 after 10 sec.; and in Phase 3 they received 7 pellets after a delay of 10 sec. Rats in the Distributed Group, but not rats in the Standard Group, tended to be self-controlled throughout the experiment.
8

Corcoran, Kevin J. "Efficacy, "skills," reinforcement, and choice behavior." American Psychologist 46, no. 2 (1991): 155–57. http://dx.doi.org/10.1037/0003-066x.46.2.155.

9

Mazur, James E. "Choice, delay, probability, and conditioned reinforcement." Animal Learning & Behavior 25, no. 2 (June 1997): 131–47. http://dx.doi.org/10.3758/bf03199051.

10

Preston, Ray A., and Edmund Fantino. "CONDITIONED REINFORCEMENT VALUE AND CHOICE." Journal of the Experimental Analysis of Behavior 55, no. 2 (March 1991): 155–75. http://dx.doi.org/10.1901/jeab.1991.55-155.

11

Ito, Masato. "Choice and amount of reinforcement in rats." Learning and Motivation 16, no. 1 (February 1985): 95–108. http://dx.doi.org/10.1016/0023-9690(85)90006-2.

12

Baker, L. J. V., and Yvonne Milner. "Sensory Reinforcement with Autistic Children." Behavioural and Cognitive Psychotherapy 13, no. 4 (October 1985): 328–41. http://dx.doi.org/10.1017/s0141347300012076.

Abstract:
Three non-verbal, autistic boys of 9, 12 and 16 years served as subjects in two experiments and a nurse play-therapist acted as the agent. The first experiment compared the effects upon a motor coordination task of each subject's preferred sensory reinforcer with those of the sensory reinforcer preferred by the other two subjects. On-task performances were maintained by prompting and by contingent presentation of each reinforcer in a multiple-baseline design across subjects. All subjects showed higher levels of on-task performance for their preferred sensory activity. In the second experiment a multielement-baseline design compared the effects of the preferred sensory reinforcer with those of a preferred edible reinforcer. All subjects showed higher levels of on-task performance for their preferred sensory activity. Inter-observer reliability remained above 90%. A role for sensory reinforcement in training autistic children is suggested.
13

Jacob, Teresa C., and Edmund Fantino. "EFFECTS OF REINFORCEMENT CONTEXT ON CHOICE." Journal of the Experimental Analysis of Behavior 49, no. 3 (May 1988): 367–81. http://dx.doi.org/10.1901/jeab.1988.49-367.

14

Shahan, Timothy A., and Kennon A. Lattal. "CHOICE, CHANGING OVER, AND REINFORCEMENT DELAYS." Journal of the Experimental Analysis of Behavior 74, no. 3 (November 2000): 311–30. http://dx.doi.org/10.1901/jeab.2000.74-311.

15

Schuett, Mary Andrews, and J. Michael Leibowitz. "Effects of Divergent Reinforcement Histories upon Differential Reinforcement Effectiveness." Psychological Reports 58, no. 2 (April 1986): 435–45. http://dx.doi.org/10.2466/pr0.1986.58.2.435.

Abstract:
The effectiveness of differential reinforcement techniques in reducing lever-pressing was studied as a function of natural reinforcement history and prescribed schedule. Based upon a prebaseline, 30 children with natural high rates of responding and 30 children with natural low rates of responding were reinforced for tapping an assigned key for 15 min. on either a differential reinforcement of low rate (drl 5 sec.) or a differential reinforcement of high rate (Conjunctive VR 10-drh 5 sec.) schedule of reinforcement. Responding on the other key was then reinforced for 15 min. on a variable ratio (VR 35) schedule utilizing one of three differential reinforcement techniques to eliminate the previously taught response. Findings indicated that a child's natural history significantly influences subsequent rates of responding. Prescribed divergent schedules effected changes in responding only while the child was being reinforced on that schedule. The differential reinforcement techniques did not produce significant differences between subjects' performance on the new key but did affect responding on the previously reinforced key.
16

Rehfeldt, Ruth Anne, and Lisette Randich. "Establishing Preference for Unreliable Reinforcement in Adults with Dual Diagnoses." Psychological Reports 93, no. 1 (August 2003): 161–74. http://dx.doi.org/10.2466/pr0.2003.93.1.161.

Abstract:
We evaluated the choice responding of three adults dually diagnosed with developmental and psychiatric disabilities using concurrent schedules of reinforcement. Specifically, participants were given a choice between a response option resulting in reliable reinforcement and a response option resulting in unreliable reinforcement. Our primary purpose was to shift preference from reliable to unreliable reinforcement via the systematic presentation of stimuli during delay intervals. A second purpose was to evaluate the effectiveness of intervening stimuli in shifting preference at differing delay-to-reinforcement intervals. Preference for unreliable reinforcement was first examined in the absence of stimulus presentations during delays, at three different delay values. Next, we aimed to establish preference for unreliable reinforcement by presenting pictures of reinforcers during delays preceding unreliable reinforcement. Preference was again examined at three different delay values. In the absence of stimulus presentations during delays, participants were shown to prefer reliable reinforcement, particularly at the longer delay value. When stimuli were presented during the delays, two of the three participants preferred unreliable reinforcement, particularly the longer the delay value. These results suggest that the presentation of intervening stimuli during delays may help facilitate tolerance for unreliable reinforcement.
17

Williams, Ben A. "The role of probability of reinforcement in models of choice." Psychological Review 101, no. 4 (1994): 704–7. http://dx.doi.org/10.1037/0033-295x.101.4.704.

18

McDevitt, Margaret A., and Ben A. Williams. "DUAL EFFECTS ON CHOICE OF CONDITIONED REINFORCEMENT FREQUENCY AND CONDITIONED REINFORCEMENT VALUE." Journal of the Experimental Analysis of Behavior 93, no. 2 (March 2010): 147–55. http://dx.doi.org/10.1901/jeab.2010.93-147.

19

Dunn, Roger, and Marcia L. Spetch. "CHOICE WITH UNCERTAIN OUTCOMES: CONDITIONED REINFORCEMENT EFFECTS." Journal of the Experimental Analysis of Behavior 53, no. 2 (March 1990): 201–18. http://dx.doi.org/10.1901/jeab.1990.53-201.

20

Forehand, Rex. "Parental Positive Reinforcement With Deviant Children." Child & Family Behavior Therapy 8, no. 3 (December 29, 1986): 19–26. http://dx.doi.org/10.1300/j019v08n03_02.

21

Fernández, Thalía, Jorge Bosch-Bayard, Thalía Harmony, María I. Caballero, Lourdes Díaz-Comas, Lídice Galán, Josefina Ricardo-Garcell, Eduardo Aubert, and Gloria Otero-Ojeda. "Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement." Applied Psychophysiology and Biofeedback 41, no. 1 (August 21, 2015): 27–37. http://dx.doi.org/10.1007/s10484-015-9309-6.

22

Mawhinney, Thomas C. "Trigger Pulling for Monetary Reinforcements by a Single Subject during Ninety-Nine Ten-Minute Sessions." Psychological Reports 75, no. 2 (October 1994): 812–14. http://dx.doi.org/10.2466/pr0.1994.75.2.812.

Abstract:
Reinforcement maximization by identifying and following switching rules that occurred on conFR/VI-10 sec. reinforcement schedules did not occur when the subject experienced conFR/VI-20 sec. reinforcement schedules. Exclusive preference for the schedule with the lower valued N on conFR-N/FR-N schedules occurred as predicted by both matching and maximization theories of operant choice behavior. Additional research is required to assess the reliability of the phenomenon observed and factors upon which its occurrence may depend.
23

Jackson, Kevin, and Timothy D. Hackenberg. "TOKEN REINFORCEMENT, CHOICE, AND SELF-CONTROL IN PIGEONS." Journal of the Experimental Analysis of Behavior 66, no. 1 (July 1996): 29–49. http://dx.doi.org/10.1901/jeab.1996.66-29.

24

Rasmussen, Erin B., and M. Christopher Newland. "ASYMMETRY OF REINFORCEMENT AND PUNISHMENT IN HUMAN CHOICE." Journal of the Experimental Analysis of Behavior 89, no. 2 (March 2008): 157–67. http://dx.doi.org/10.1901/jeab.2008.89-157.

25

Haw, John. "The Relationship Between Reinforcement and Gaming Machine Choice." Journal of Gambling Studies 24, no. 1 (July 20, 2007): 55–61. http://dx.doi.org/10.1007/s10899-007-9073-5.

26

Rost, Kristen A. "Reinforcement uncertainty enhances preference for choice in humans." Journal of the Experimental Analysis of Behavior 110, no. 2 (July 1, 2018): 201–12. http://dx.doi.org/10.1002/jeab.449.

27

Gaynor, Scott T., Ashley P. Thomas, and P. Scott Lawrence. "Dysphoric Mood and Preference for Immediate versus Delayed Monetary Reinforcement." Psychological Reports 84, no. 3_suppl (June 1999): 1281–93. http://dx.doi.org/10.2466/pr0.1999.84.3c.1281.

Abstract:
It has been proposed that depression is the product of deficits in self-management skills: self-monitoring, self-evaluation, and self-reinforcement. While interventions based on this theory have shown promise, some of the basic tenets upon which the theory is based lack empirical support. The present experiment tested one such tenet—the claim that depressed individuals select smaller more immediate reinforcers (an impulsive choice) at the expense of larger more delayed reinforcers (a self-control choice). Currently, empirical support for this notion is sparse and contradictory. This study addressed several methodological problems in earlier studies by creating divergent groups based on Beck Depression Inventory scores, employing a task requiring multiple responses and applying a quantitative model to determine reinforcer value. Analyses indicated no systematic difference between participants in the dysphoric and nondysphoric groups in ability to delay reinforcement. Thus, the current results provide no support for the hypothesis that the 36 dysphoric individuals were unable to delay reinforcement relative to the 21 nondysphoric individuals. Because respondents across the sample as a whole showed a self-control preference, however, the data are consistent with findings in the experimental study of choice responding with adult human subjects. Interpretations in terms of sensitivity and pseudosensitivity to the experimental contingencies are explored.
28

Judson, D. H., and L. N. Gray. "Modifying Power Asymmetry in Dyads via Environmental Reinforcement Contingencies." Small Group Research 21, no. 4 (November 1990): 492–506. http://dx.doi.org/10.1177/1046496490214004.

Abstract:
Previous behavioral research has demonstrated the ability to modify individuals' behavior in simple choice situations, and more complex models have been shown to predict the behavior of groups. This article uses an equity-based balancing model to show how reinforcement schedules can be used to modify the equality of power in two-person groups. Using a behavioral measure of power asymmetry, it is shown how to derive predictions about equality from an equity-based model. The ability to modify power asymmetry using environmental rewards is demonstrated with 18 dyads that participated in a pattern-guessing game. The results are in accord with predictions. The theoretical and therapeutic implications of this finding are discussed.
29

Rudski, Jeffrey. "Naloxone Decreases Responding for Conditioned Reinforcement in Rats." Psychological Reports 100, no. 1 (February 2007): 263–69. http://dx.doi.org/10.2466/pr0.100.1.263-269.

Abstract:
That opioids can mediate unconditioned reinforcement is well established, but there is little evidence indicating whether they modify conditioned reinforcement. Here, a tone which initially served as a discriminative stimulus for the availability of water reinforcement was established as a conditioned stimulus. When later given a choice between pressing a lever producing the tone (but not water) or one which produced no effect, rats chose the tone-producing lever 66% of the time. Naloxone (3.0 mg/kg) reduced overall responding and completely eliminated the preference for the tone-producing lever. Results are briefly discussed in terms of the importance of understanding mechanisms serving conditioned reinforcement.
30

Savastano, Hernán I., and Edmund Fantino. "HUMAN CHOICE IN CONCURRENT RATIO-INTERVAL SCHEDULES OF REINFORCEMENT." Journal of the Experimental Analysis of Behavior 61, no. 3 (May 1994): 453–63. http://dx.doi.org/10.1901/jeab.1994.61-453.

31

Foster, Theresa A., and Timothy D. Hackenberg. "UNIT PRICE AND CHOICE IN A TOKEN-REINFORCEMENT CONTEXT." Journal of the Experimental Analysis of Behavior 81, no. 1 (January 2004): 5–25. http://dx.doi.org/10.1901/jeab.2004.81-5.

32

Cooper, Andrew J., Sarah Stirling, Sharon Dawe, Giulia Pugnaghi, and Philip J. Corr. "The reinforcement sensitivity theory of personality in children: A new questionnaire." Personality and Individual Differences 115 (September 2017): 65–69. http://dx.doi.org/10.1016/j.paid.2016.06.028.

33

Colder, Craig R., Elisa M. Trucco, Hector I. Lopez, Larry W. Hawk, Jennifer P. Read, Liliana J. Lengua, William F. Weiczorek, and Rina D. Eiden. "Revised reinforcement sensitivity theory and laboratory assessment of BIS and BAS in children." Journal of Research in Personality 45, no. 2 (April 2011): 198–207. http://dx.doi.org/10.1016/j.jrp.2011.01.005.

34

Graff, Richard B., and Myrna E. Libby. "A COMPARISON OF PRESESSION AND WITHIN-SESSION REINFORCEMENT CHOICE." Journal of Applied Behavior Analysis 32, no. 2 (June 1999): 161–73. http://dx.doi.org/10.1901/jaba.1999.32-161.

35

Mazur, James E. "CHOICE WITH PROBABILISTIC REINFORCEMENT: EFFECTS OF DELAY AND CONDITIONED REINFORCERS." Journal of the Experimental Analysis of Behavior 55, no. 1 (January 1991): 63–77. http://dx.doi.org/10.1901/jeab.1991.55-63.

36

Williams, Ben A. "CHOICE AS A FUNCTION OF LOCAL VERSUS MOLAR REINFORCEMENT CONTINGENCIES." Journal of the Experimental Analysis of Behavior 56, no. 3 (November 1991): 455–73. http://dx.doi.org/10.1901/jeab.1991.56-455.

37

McDevitt, Margaret A., and Ben A. Williams. "EFFECTS OF SIGNALED VERSUS UNSIGNALED DELAY OF REINFORCEMENT ON CHOICE." Journal of the Experimental Analysis of Behavior 75, no. 2 (March 2001): 165–82. http://dx.doi.org/10.1901/jeab.2001.75-165.

38

Dunn, Roger, Ben Williams, and Paul Royalty. "DEVALUATION OF STIMULI CONTINGENT ON CHOICE: EVIDENCE FOR CONDITIONED REINFORCEMENT." Journal of the Experimental Analysis of Behavior 48, no. 1 (July 1987): 117–31. http://dx.doi.org/10.1901/jeab.1987.48-117.

39

Moore, Jay, and Karen E. Friedlen. "Choice Behavior in Pigeons Maintained With Probabilistic Schedules of Reinforcement." Psychological Record 57, no. 3 (July 2007): 313–38. http://dx.doi.org/10.1007/bf03395580.

40

Zentall, Thomas R. "An Animal Model of Human Gambling." International Journal of Psychological Research 9, no. 2 (July 1, 2016): 96–112. http://dx.doi.org/10.21500/20112084.2284.

Abstract:
Human gambling generally involves taking a risk on a low probability high outcome alternative over the more economically optimal high probability low outcome alternative (not gambling). Surprisingly, although optimal foraging theory suggests that animals should be sensitive to the overall probability of reinforcement, the results of many experiments suggest otherwise. For example, they do not prefer an alternative that 100% of the time provides them with a stimulus that always predicts reinforcement over an alternative that provides them with a stimulus that predicts reinforcement 50% of the time. This line of research leads to the conclusion that preference depends on the predictive value of the stimulus that follows and surprisingly, not on its frequency. A similar mechanism likely accounts for the suboptimal choice that humans have to engage in commercial gambling.
41

Bjørnebekk, Gunnar. "Reinforcement sensitivity theory and major motivational and self-regulatory processes in children." Personality and Individual Differences 43, no. 8 (December 2007): 1980–90. http://dx.doi.org/10.1016/j.paid.2007.06.010.

42

Huang, I.-Ning, and Jeffrey N. Melvin. "Effects of Ratio Reinforcement Schedules on Choice Behavior." Journal of General Psychology 117, no. 1 (January 1990): 99–106. http://dx.doi.org/10.1080/00221309.1990.9917777.

43

Pérez, Omar D., Michael R. F. Aitken, Peter Zhukovsky, Fabián A. Soto, Gonzalo P. Urcelay, and Anthony Dickinson. "Human instrumental performance in ratio and interval contingencies: A challenge for associative theory." Quarterly Journal of Experimental Psychology 72, no. 2 (January 1, 2018): 311–21. http://dx.doi.org/10.1080/17470218.2016.1265996.

Abstract:
Associative learning theories regard the probability of reinforcement as the critical factor determining responding. However, the role of this factor in instrumental conditioning is not completely clear. In fact, free-operant experiments show that participants respond at a higher rate on variable ratio than on variable interval schedules even though the reinforcement probability is matched between the schedules. This difference has been attributed to the differential reinforcement of long inter-response times (IRTs) by interval schedules, which acts to slow responding. In the present study, we used a novel experimental design to investigate human responding under random ratio (RR) and regulated probability interval (RPI) schedules, a type of interval schedule that sets a reinforcement probability independently of the IRT duration. Participants responded on each type of schedule before a final choice test in which they distributed responding between two schedules similar to those experienced during training. Although response rates did not differ during training, the participants responded at a lower rate on the RPI schedule than on the matched RR schedule during the choice test. This preference cannot be attributed to a higher probability of reinforcement for long IRTs and questions the idea that similar associative processes underlie classical and instrumental conditioning.
44

Pedersen, Mads Lund, Michael J. Frank, and Guido Biele. "The drift diffusion model as the choice rule in reinforcement learning." Psychonomic Bulletin & Review 24, no. 4 (December 13, 2016): 1234–51. http://dx.doi.org/10.3758/s13423-016-1199-y.

45

Barrett, Thomas E. "Clinical Application of Behavioral Social Skills Training with Children." Psychological Reports 57, no. 3_suppl (December 1985): 1183–86. http://dx.doi.org/10.2466/pr0.1985.57.3f.1183.

Abstract:
28 normal children of early primary school age, referred to a private clinic for deficits in social skills, were trained using the behavioral techniques of cognitive behavior modification, modeling, role-playing, and token reinforcement. Self-reports using the Children's Self-concept Scale and parents' reports using the Child Behavior Rating Scale indicated changes in behavior limited to the social skills response categories of those instruments.
46

Jones, Robert N., Donnajean Mandler-Provin, Mark E. Latkowski, and William M. McMahon. "Development of a Reinforcement Survey for Inpatient Psychiatric Children." Child & Family Behavior Therapy 9, no. 3-4 (April 29, 1988): 73–77. http://dx.doi.org/10.1300/j019v09n03_06.

47

Heyman, Gene M. "Which behavioral consequences matter? The importance of frame of reference in explaining addiction." Behavioral and Brain Sciences 19, no. 4 (December 1996): 599–610. http://dx.doi.org/10.1017/s0140525x00043284.

Abstract:
The target article emphasizes the relationship between a matching law-based theory of addiction and the disease model of addiction. In contrast, this response emphasizes the relationship between the matching law theory and other behavioral approaches to addiction. The basic difference, I argue, is that the matching law specifies that choice is governed by local reinforcement rates. In contrast, economics says that overall reinforcement rate controls choice, and for other approaches there are other measures or no clear prediction at all. The response also differs from the target article in that there is more emphasis on the finding that stimulus conditions determine whether choice is under local or overall reinforcement rate control.
48

Kodak, Tiffany, Dorothea C. Lerman, and Nathan Call. "EVALUATING THE INFLUENCE OF POSTSESSION REINFORCEMENT ON CHOICE OF REINFORCERS." Journal of Applied Behavior Analysis 40, no. 3 (September 2007): 515–27. http://dx.doi.org/10.1901/jaba.2007.40-515.

49

Smith, J. David, Brooke N. Jackson, and Barbara A. Church. "Monkeys (Macaca mulatta) learn two-choice discriminations under displaced reinforcement." Journal of Comparative Psychology 134, no. 4 (November 2020): 423–34. http://dx.doi.org/10.1037/com0000227.

50

Williamson, Donald A., Stephanie H. Williamson, Philip C. Watkins, and Halane H. Hughes. "Increasing Cooperation among Children Using Dependent Group-Oriented Reinforcement Contingencies." Behavior Modification 16, no. 3 (July 1992): 414–25. http://dx.doi.org/10.1177/01454455920163007.

