Reinforcement: the process by which a response is strengthened. (Cardwell, 205) In classical conditioning, the procedure that increases the likelihood of a response. (Collin, 343) The occurrence of a stimulus or event following a response that increases the likelihood of that response being repeated. (Hockenbury, 191)

Reinforcement is not essential for "learning" to occur. Rather, the expectation of reinforcement affects the performance of what has been learned. (Hockenbury, 208) "Positive reinforcement" can stimulate particular patterns of behavior, as (Harvard psychologist) B.F. Skinner demonstrated by placing a rat in one of his specially designed boxes, fitted with a lever or bar. Pellets of food appeared every time the animal pressed the bar, encouraging it to perform this action again and again. (Collin, 82)


Accidental Reinforcement: when a reinforcer coincidentally follows a response; "superstitions" are a common result. (Hockenbury, 199)

Negative Reinforcement: occurs when a response is followed by an end to discomfort or by the removal of an unpleasant event. (Coon, 273) A situation in which a response results in the removal of, avoidance of, or escape from a punishing stimulus. Increases the likelihood that the response will be repeated in similar situations. Behaviors are said to be negatively reinforced when they let you either escape "aversive stimuli" that are already present or avoid aversive stimuli before they occur. (Hockenbury, 191-192) A response followed by the removal of something unpleasant. (Cardwell, 205)

Aversive Stimulus: a stimulus that is painful or uncomfortable. (Coon) Physical or psychological discomfort that an organism seeks to escape or avoid. For example, you dab some hydrocortisone cream on an insect bite (the operant) to escape the itching (the aversive stimulus). Another example: to avoid getting the flu (the aversive stimulus) in the winter, you get a flu shot (the operant) in November. (Hockenbury, 192)

Partial Reinforcement: situation in which the occurrence of a particular response is only sometimes followed by a reinforcer. (Hockenbury, 198)

Positive Reinforcement: occurs when a response is followed by a reward or other positive event. (Coon, 273) A situation in which a response is followed by the addition of a reinforcing stimulus. Increases the likelihood that the response will be repeated in similar situations. (Hockenbury, 191) When an event follows a response and, as a direct result of that connection, causes the response to be repeated more often in the future. (Cardwell, 205)

Reinforcer: a reinforcing stimulus. (Hockenbury, 192) Literally, anything that reinforces. Reinforcers do not have to be things or events; they can also be activities that are themselves rewarding for the organism performing them. A more desirable activity (such as watching TV or playing a video game) can be used to reinforce a less desirable one (such as finishing homework). (Cardwell, 206)

Conditioned Reinforcer: a reinforcer that has acquired reinforcing value by being associated with a “primary reinforcer.” The classic example is money. Awards, frequent-flyer points, and college degrees are just a few other examples. (Hockenbury, 192)

Primary Reinforcer: a stimulus or event that is naturally or inherently reinforcing for a given species, such as food, water, and other biological necessities, including adequate warmth and sexual contact. (Hockenbury, 192-193)

Schedules of Reinforcement: specific preset arrangements of partial reinforcement. There are four basic schedules of reinforcement. (Hockenbury, 199)

Continuous Reinforcement: a schedule of reinforcement in which every occurrence of a particular response is reinforced. (Hockenbury, 198) Individuals acquiring a new behavior typically receive a reinforcer every time they perform the right response. This ensures rapid acquisition of the behavior being reinforced. Once the rate of response reaches a certain level, it is more appropriate to switch to a schedule that presents the reinforcer only some of the time. Full reinforcement is most influential in the acquisition stage of a response, partial reinforcement in the maintenance stage. (Cardwell, 213)

Fixed-Interval Schedule: a reinforcer is delivered at fixed time intervals provided a response occurs during that time. (Cardwell, 214) A reinforcer is delivered for the first response that occurs after a preset time interval has elapsed. A rat on a two-minute fixed-interval schedule would receive no food pellets for any bar presses made during the first two minutes. But the first bar press after the two-minute interval had elapsed would be reinforced. Typically produces a pattern of responding in which the number of responses tends to increase as the time for the next reinforcer draws near. For example, if your instructor gives you a test every four weeks, your studying behavior would probably follow the same pattern: as the end of the four-week interval draws near, studying increases. (Hockenbury, 200)

Variable-Interval Schedule: a reinforcer is given, on average, every so many seconds (minutes, hours, days), but the interval is varied so that it is not predictable. (Cardwell, 214) A reinforcer is delivered for the first response that occurs after an average time interval, which varies unpredictably from trial to trial. A rat on a variable-interval 30-second schedule might be reinforced:

(1) for the first bar press after only 10 seconds have elapsed on the first trial.
(2) for the first bar press after 50 seconds have elapsed on the second trial.
(3) for the first bar press after 30 seconds have elapsed on the third trial.

This works out to an average of one reinforcer every 30 seconds. The unpredictable nature of variable-interval schedules tends to produce moderate but steady rates of responding. In daily life, we experience variable-interval schedules when we have to wait for events that follow an approximate, rather than a precise schedule. (Hockenbury, 200)
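The interval-schedule logic described above can be sketched as a small simulation. This is an illustrative sketch only, not anything from the cited texts; the function name and the press times are made up for the example.

```python
def interval_schedule(press_times, interval):
    """Times at which a reinforcer is delivered under a fixed-interval
    schedule: only the first bar press after each interval elapses is
    reinforced, and the interval timer then restarts."""
    reinforced = []
    timer_start = 0
    for t in sorted(press_times):
        if t - timer_start >= interval:
            reinforced.append(t)
            timer_start = t  # timer restarts after each reinforcer
    return reinforced

# A rat on a two-minute (120 s) fixed-interval schedule: presses during
# the first 120 seconds earn nothing; the first press afterward is reinforced.
print(interval_schedule([30, 60, 90, 125, 130, 250], 120))  # -> [125, 250]

# A variable-interval schedule works the same way except the interval is
# redrawn each trial; the 10-, 50-, and 30-second trials above do average
# to one reinforcer every 30 seconds:
print(sum([10, 50, 30]) / 3)  # -> 30.0
```

The restart of the timer after each reinforcer is what produces the characteristic "scalloped" pattern of responding, with responses clustering near the end of each interval.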

Fixed-Ratio Schedule: a reinforcer is given every so many responses, regardless of time intervals. (Cardwell, 214) A reinforcer is delivered after a fixed number of responses has occurred. A rat on a 10-to-1 fixed-ratio schedule would have to press the bar 10 times in order to receive one food pellet. Typically produces a high rate of responding. An example is 'piecework' - work for which you are paid for producing a specific number of items, such as being paid $1 for every 100 envelopes you stuff. (Hockenbury, 200)

Variable-Ratio Schedule: a reinforcer is given on average every so many responses, but the actual number varies for each presentation of the reinforcer. (Cardwell, 214) A reinforcer is delivered after an average number of responses, which varies unpredictably from trial to trial. A rat on a variable-ratio-20 schedule might have to press the bar 25 times on the first trial before being reinforced and only 15 times on the second trial before reinforcement. Produces high, steady rates of responding with hardly any pausing between trials. An example is gambling: each spin of the roulette wheel could be the big one, and the more often you gamble, the more opportunities you have to win. (Hockenbury, 200)
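Ratio schedules count responses rather than time. As a rough sketch of both ratio types (the function names and figures are illustrative, not taken from the cited texts):

```python
import random

def fixed_ratio_reinforcers(responses, ratio):
    """Reinforcers earned under a fixed-ratio schedule: one reinforcer
    for every `ratio` responses, regardless of how long they take."""
    return responses // ratio

# Piecework as a fixed-ratio-100 schedule: $1 per 100 envelopes stuffed.
print(fixed_ratio_reinforcers(550, 100))  # -> 5 dollars for 550 envelopes

def variable_ratio_requirements(mean_ratio, trials, rng):
    """Draw the response requirement for each trial of a variable-ratio
    schedule; requirements vary unpredictably but center on `mean_ratio`."""
    return [rng.randint(1, 2 * mean_ratio - 1) for _ in range(trials)]

# The variable-ratio-20 example: 25 responses on one trial and 15 on the
# next still average 20 responses per reinforcer.
print((25 + 15) / 2)  # -> 20.0
```

Because the requirement on any one trial is unpredictable, the organism has no cue that a pause would be "safe," which is consistent with the high, steady response rates described above.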