Monday, April 6, 2009

Conditioning

When we talk of learning we usually think of something related to the classroom, such as English or Maths. However, psychologists define learning as 'a relatively permanent change in behaviour as a result of experience'. Learning is a fundamental process in all animals, and the higher up the evolutionary scale an animal is, the more important its ability to learn becomes. All animals need to adapt their behaviour to fit in with their environment and to cope with changing circumstances in order to survive.

Much of our behaviour consists of learned responses to simple signals. Can all behaviour be analysed in the same way? Some psychologists believe that behaviour is the sum of many simple stimulus-response connections. However, other psychologists think that stimulus-response is too simplistic and that even simple responses to stimuli require the processing of a vast amount of information.

The Behaviourists are a group of psychologists who focus on these stimulus-response connections, the two most famous being Watson and Skinner. Behaviourism arose out of dissatisfaction with approaches in psychology that involved 'unscientific' techniques such as introspection and dealt with unmeasurable aspects of behaviour such as the role of the unconscious mind. Behaviourists try to explain the causes of behaviour by studying only those behaviours that can be observed and measured. They have focused their efforts on two types of learning processes known as classical conditioning and operant conditioning.

Classical Conditioning

This is learning by association. A Russian physiologist called Ivan Pavlov studied salivation in dogs as part of his research programme. Normally, dogs will salivate when food is presented, but Pavlov was interested in why the dogs had started to salivate when they saw the people who usually fed them (they also responded to the sound of the dishes being used for their meals). Pavlov set up an experiment to find out whether the dogs could be trained to salivate to other stimuli, such as the sound of a bell or a light. At feeding times, Pavlov would ring a bell, and the amount of saliva produced by the dog was measured. After several 'trials', Pavlov rang the bell without presenting the food and found that the dogs salivated in the same way as if food were being presented.

You will note that the conditioned response is the same as the unconditioned response; the only difference is that the response is evoked by a different stimulus.

The Classical Conditioning Procedure:

In scientific terms, the procedure for this is as follows.

1. Food is the unconditioned stimulus, or UCS. By this, Pavlov meant a stimulus that elicits the response naturally.

2. The salivation to the food is an unconditioned response (UCR), that is, a response which occurs naturally.

3. The bell is the conditioned stimulus (CS), because it will only produce salivation on condition that it is presented with the food.

4. Salivation to the bell alone is the conditioned response (CR), a response to the conditioned stimulus.

Classical conditioning involves learning by association, that is associating two events which happen at the same time.

Nearly all automatic, involuntary responses can become conditioned responses: heartbeat, stomach secretion, blood pressure, brain waves, etc. For the conditioning to be effective, the conditioned stimulus should occur before the unconditioned stimulus, not after. This is because, in classical conditioning, the conditioned stimulus becomes a kind of signal for the unconditioned stimulus.

The following are some of the important principles of classical conditioning:

Extinction

If a conditioned stimulus is repeatedly presented without the unconditioned stimulus, then the conditioned response will disappear. This is known as extinction. If a dog learns to associate the sound of a bell with food, and the bell is then rung repeatedly but no food is presented, the dog will soon stop salivating at the sound of the bell.
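Acquisition and extinction of a conditioned response can be illustrated with a short simulation. The sketch below is only a minimal model, not anything from Pavlov's own work: it assumes the Rescorla-Wagner learning rule, in which associative strength V moves toward an asymptote on each CS-US pairing and back toward zero on CS-alone trials; all parameter values are illustrative.

```python
# Minimal sketch of acquisition and extinction using the
# Rescorla-Wagner rule: delta_V = alpha * beta * (lambda - V).
# Parameter values are illustrative assumptions, not measured data.

def rescorla_wagner_trial(v, us_present, alpha=0.3, beta=1.0, lam=1.0):
    """Update associative strength v after one presentation of the CS."""
    target = lam if us_present else 0.0  # no US -> strength decays to zero
    return v + alpha * beta * (target - v)

v = 0.0  # the bell starts out as a neutral stimulus
for _ in range(10):  # acquisition: bell repeatedly paired with food
    v = rescorla_wagner_trial(v, us_present=True)
print(f"after pairing: V = {v:.2f}")      # approaches 1.0 (strong CR)

for _ in range(10):  # extinction: bell presented alone, no food
    v = rescorla_wagner_trial(v, us_present=False)
print(f"after extinction: V = {v:.2f}")   # decays back toward 0.0
```

In this model extinction is simply learning toward a zero asymptote, matching the observation above that the CR fades when the bell is rung repeatedly without food.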

Stimulus Generalisation

A dog that has been conditioned to salivate to the sound of a bell of one tone may well salivate to a similar-sounding bell or a buzzer. Stimulus generalisation is the extension of the conditioned response from the original stimulus to similar stimuli.

Discrimination

An animal or person can be taught to discriminate between different stimuli. For example, if a dog is shown a red circle every time he is fed, then he will salivate at the sight of the red circle alone. But initially, the dog may generalise and salivate at circles of any colour. If the dog is only fed when the red circle is presented and not when other colours are shown, he will learn to discriminate between red and the other colours.

Higher Order Conditioning

This is where more than one stimulus is paired and presented; there can be a chain of events linked to the same stimulus. It is thought that words may acquire their emotional meaning through higher order conditioning: for example, by pairing a word with something that causes emotion, eventually the word alone will carry the emotional meaning.


Classical conditioning


Classical Conditioning (also Pavlovian or Respondent Conditioning) is a form of associative learning that was first demonstrated by Ivan Pavlov. The typical procedure for inducing classical conditioning involves presentations of a neutral stimulus along with a stimulus of some significance. The neutral stimulus could be any event that does not result in an overt behavioral response from the organism under investigation. Pavlov referred to this as a Conditioned Stimulus (CS). Conversely, presentation of the significant stimulus necessarily evokes an innate, often reflexive, response. Pavlov called these the Unconditioned Stimulus (US) and Unconditioned Response (UR), respectively. If the CS and the US are repeatedly paired, eventually the two stimuli become associated and the organism begins to produce a behavioral response to the CS. Pavlov called this the Conditioned Response (CR).

Popular forms of classical conditioning that are used to study neural structures and functions that underlie learning and memory include fear conditioning, eyeblink conditioning, and the foot contraction conditioning of Hermissenda crassicornis.

History

Pavlov's experiment

The original and most famous example of classical conditioning involved the salivary conditioning of Pavlov's dogs. During his research on the physiology of digestion in dogs, Pavlov noticed that, rather than simply salivating in the presence of meat powder (an innate response to food that he called the unconditioned response), the dogs began to salivate in the presence of the lab technician who normally fed them. Pavlov called these psychic secretions. From this observation he predicted that, if a particular stimulus in the dog’s surroundings were present when the dog was presented with meat powder, then this stimulus would become associated with food and cause salivation on its own. In his initial experiment, Pavlov used a metronome to call the dogs to their food and, after a few repetitions, the dogs started to salivate in response to the metronome. Thus, a neutral stimulus (metronome) became a conditioned stimulus (CS) as a result of consistent pairing with the unconditioned stimulus (US - meat powder in this example). Pavlov referred to this learned relationship as a conditional reflex (now called Conditioned Response).

Forward conditioning



[Diagram: forward conditioning, with the time interval increasing from left to right.]

During forward conditioning the onset of the CS precedes the onset of the US. Two common forms of forward conditioning are delay and trace conditioning.


Trace conditioning

During trace conditioning the CS and US do not overlap. Instead, the CS is presented, a period of time is allowed to elapse during which no stimuli are presented, and then the US is presented. The stimulus-free period is called the trace interval. It may also be called the 'conditioning interval'.

Delay conditioning

In delay conditioning the CS is presented and is overlapped by the presentation of the US.
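The difference between delay and trace conditioning is purely one of timing, which a small sketch can make concrete. The helper below is hypothetical; it just returns (onset, offset) times in seconds for each stimulus, with illustrative values.

```python
# Hypothetical timelines for one forward-conditioning trial.
# Times are (onset, offset) pairs in seconds; values are illustrative.

def forward_trial(kind):
    if kind == "delay":
        # The CS stays on and overlaps the presentation of the US.
        return {"CS": (0.0, 1.5), "US": (1.0, 1.5)}
    if kind == "trace":
        # The CS ends first; the stimulus-free gap is the trace interval.
        return {"CS": (0.0, 0.5), "US": (1.0, 1.5)}
    raise ValueError(kind)

for kind in ("delay", "trace"):
    trial = forward_trial(kind)
    cs, us = trial["CS"], trial["US"]
    overlap = cs[1] > us[0]
    trace_interval = max(0.0, us[0] - cs[1])
    print(f"{kind}: CS/US overlap={overlap}, trace interval={trace_interval:.1f}s")
```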

Simultaneous conditioning

During simultaneous conditioning, the CS and US are presented and terminate at the same time.

Backward conditioning

Backward conditioning occurs when the onset of the US precedes the onset of the CS. Rather than being a reliable predictor of an impending US (as in forward conditioning), the CS serves as a signal that the US has ended. As a result, the conditioned response tends to be inhibitory.

Temporal conditioning

The US is presented at regularly timed intervals, and CR acquisition is dependent upon correct timing of the interval between US presentations. The background, or context, can serve as the CS in this example.

Unpaired conditioning

The CS and US are not presented together. Usually they are presented as independent trials that are separated by a variable, or pseudo-random, interval. This procedure is used to study non-associative behavioral responses, such as sensitization.

CS-alone extinction

The CS is presented in the absence of the US. This procedure is usually done after the CR has been acquired through Forward conditioning training. Eventually, the CR frequency is reduced to pre-training levels.

Procedure variations

In addition to the simple procedures described above, some classical conditioning studies are designed to tap into more complex learning processes. Some common variations are discussed below.

Classical discrimination/reversal conditioning

In this procedure, two CSs and one US are typically used. The CSs may be of the same modality (such as lights of different intensity), or of different modalities (such as an auditory CS and a visual CS). One of the CSs is designated CS+ and its presentation is always followed by the US. The other CS is designated CS- and its presentation is never followed by the US. After a number of trials, the organism learns to discriminate between CS+ trials and CS- trials, such that CRs are only observed on CS+ trials.

During Reversal Training, the CS+ and CS- are reversed and subjects learn to suppress responding to the previous CS+ and show CRs to the previous CS-.
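Discrimination and reversal fall out naturally if the Rescorla-Wagner sketch from the classical-conditioning section is extended to two stimuli; again, this is an assumed model with illustrative numbers, not a description of any particular experiment.

```python
# Sketch of classical discrimination/reversal training with two CSs,
# reusing the illustrative Rescorla-Wagner update from the earlier sketch.

def update(v, us_present, alpha=0.3, lam=1.0):
    return v + alpha * ((lam if us_present else 0.0) - v)

strength = {"CS+": 0.0, "CS-": 0.0}
for _ in range(20):  # discrimination: CS+ always paired with the US, CS- never
    strength["CS+"] = update(strength["CS+"], us_present=True)
    strength["CS-"] = update(strength["CS-"], us_present=False)
print(strength)  # CS+ near 1.0, CS- near 0.0: CRs only on CS+ trials

for _ in range(20):  # reversal: the contingencies are swapped
    strength["CS+"] = update(strength["CS+"], us_present=False)
    strength["CS-"] = update(strength["CS-"], us_present=True)
print(strength)  # the strengths cross over, mirroring reversal training
```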

Classical ISI discrimination conditioning

This is a discrimination procedure in which two different CSs are used to signal two different interstimulus intervals. For example, a dim light may be presented 30 seconds before a US, while a very bright light is presented 2 minutes before the US. Using this technique, organisms can learn to perform CRs that are appropriately timed for the two distinct CSs.

Latent inhibition conditioning

In this procedure, a CS is presented several times before paired CS-US training commences. The pre-exposure of the subject to the CS before paired training slows the rate of CR acquisition relative to organisms that are not CS pre-exposed. Also see Latent inhibition for applications.

Conditioned inhibition conditioning

Three phases of conditioning are typically used:

Phase 1:
A CS (CS+) is paired with a US until asymptotic CR levels are reached.
Phase 2:
CS+/US trials are continued, but interspersed with trials on which the CS+ is presented in compound with a second CS, and not followed by the US (i.e., CS+/CS- trials). Typically, organisms show CRs on CS+/US trials, but suppress responding on CS+/CS- trials.
Phase 3:
In this retention test, the previous CS- is paired with the US. If conditioned inhibition has occurred, the rate of acquisition to the previous CS- should be impaired relative to organisms that did not experience Phase 2.

Little Albert

John B. Watson, founder of behaviourism, demonstrated classical conditioning empirically in the Little Albert experiment, in which a child ("Albert") was presented with a white rat that was later paired with a loud noise. As the trials progressed, the child began showing signs of distress at the sight of the rat and other white objects, demonstrating that conditioning had taken place. Little Albert also came to show fear of other furry objects, such as a stuffed animal and even a white coat.

Behavioral therapies

In human psychology, implications for therapies and treatments using classical conditioning differ from those of operant conditioning. Therapies associated with classical conditioning include aversion therapy, flooding, and systematic desensitization.

Classical conditioning is short-term, usually requiring less time with therapists and less effort from patients, unlike humanistic therapies. The therapies mentioned are designed either to create aversive feelings toward something or to reduce unwanted fear and aversion. Classical conditioning is based on learning by association.

Theories of classical conditioning

There are two competing theories of how classical conditioning works. The first, stimulus-response theory, suggests that an association to the unconditioned stimulus is made with the conditioned stimulus within the brain, but without involving conscious thought. The second, stimulus-stimulus theory, involves cognitive activity, in which the conditioned stimulus is associated with the concept of the unconditioned stimulus, a subtle but important distinction.

Stimulus-response theory, referred to as S-R theory, is a theoretical model of behavioral psychology that suggests humans and other animals can learn to associate a new stimulus, the conditioned stimulus (CS), with a pre-existing stimulus, the unconditioned stimulus (US), and can think, feel, or respond to the CS as if it were actually the US.

The opposing theory, put forward by cognitive behaviorists, is stimulus-stimulus theory (S-S theory). It holds that a cognitive component is required to understand classical conditioning and that stimulus-response theory is an inadequate model. S-R theory suggests that an animal can learn to associate a conditioned stimulus (CS), such as a bell, with the impending arrival of food, termed the unconditioned stimulus, resulting in an observable behavior such as salivation. Stimulus-stimulus theory suggests instead that the animal salivates to the bell because the bell is associated with the concept of food.

To test this theory, the psychologist Robert Rescorla undertook the following experiment. Rats were trained with a loud noise as the unconditioned stimulus and a light as the conditioned stimulus. The response of the rats was to freeze and cease movement. What would happen, then, if the rats were habituated to the US? S-R theory would suggest that the rats would continue to respond to the CS, but if S-S theory is correct, they would be habituated to the concept of a loud sound (danger), and so would not freeze to the CS. The experimental results supported S-S theory, as the rats no longer froze when exposed to the signal light. The S-S account remains in use and is applied in everyday contexts.

In Popular Culture

One of the earliest literary references to classical conditioning can be found in the comic novel The Life and Opinions of Tristram Shandy, Gentleman (1759) by Laurence Sterne. The narrator Tristram Shandy explains how his mother was conditioned by his father's habit of winding up a clock before having sex with his wife:

My father, [...], was, I believe, one of the most regular men in every thing he did [...] [H]e had made it a rule for many years of his life,--on the first Sunday-night of every month throughout the whole year,--as certain as ever the Sunday-night came,--to wind up a large house-clock, which we had standing on the back-stairs head, with his own hands:--And being somewhere between fifty and sixty years of age at the time I have been speaking of,--he had likewise gradually brought some other little family concernments to the same period, in order, as he would often say to my uncle Toby, to get them all out of the way at one time, and be no more plagued and pestered with them the rest of the month. [...] [F]rom an unhappy association of ideas, which have no connection in nature, it so fell out at length, that my poor mother could never hear the said clock wound up,--but the thoughts of some other things unavoidably popped into her head--& vice versa:--Which strange combination of ideas, the sagacious Locke, who certainly understood the nature of these things better than most men, affirms to have produced more wry actions than all other sources of prejudice whatsoever.



Operant conditioning


Operant conditioning is the use of consequences to modify the occurrence and form of behavior. Operant conditioning is distinguished from classical conditioning (also called respondent conditioning, or Pavlovian conditioning) in that operant conditioning deals with the modification of "voluntary behavior", or operant behavior. Operant behavior "operates" on the environment and is maintained by its consequences, while classical conditioning deals with the conditioning of respondent behaviors, which are elicited by antecedent conditions. Behaviors conditioned via a classical conditioning procedure are not maintained by consequences.

Reinforcement, punishment, and extinction

Reinforcement and punishment, the core tools of operant conditioning, are either positive (delivered following a response) or negative (withdrawn following a response). This creates a total of four basic consequences, with the addition of a fifth procedure known as extinction (i.e. no change in consequences following a response).

It's important to note that organisms are not spoken of as being reinforced, punished, or extinguished; it is the response that is reinforced, punished, or extinguished. Additionally, reinforcement, punishment, and extinction are not terms whose use is restricted to the laboratory. Naturally occurring consequences can also be said to reinforce, punish, or extinguish behavior and are not always delivered by people.

  • Reinforcement is a consequence that causes a behavior to occur with greater frequency.
  • Punishment is a consequence that causes a behavior to occur with less frequency.
  • Extinction is the lack of any consequence following a behavior. When a behavior is inconsequential, producing neither favorable nor unfavorable consequences, it will occur with less frequency. When a previously reinforced behavior is no longer reinforced with either positive or negative reinforcement, it leads to a decline in the response.

Four contexts of operant conditioning: Here the terms "positive" and "negative" are not used in their popular sense, but rather: "positive" refers to addition, and "negative" refers to subtraction. What is added or subtracted may be either reinforcement or punishment. Hence positive punishment is sometimes a confusing term, as it denotes the addition of punishment (such as spanking or an electric shock), a context that may seem very negative in the lay sense. The four procedures are:

  1. Positive reinforcement occurs when a behavior (response) is followed by a favorable stimulus (commonly seen as pleasant) that increases the frequency of that behavior. In the Skinner box experiment, a stimulus such as food or sugar solution can be delivered when the rat engages in a target behavior, such as pressing a lever.
  2. Negative reinforcement occurs when a behavior (response) is followed by the removal of an aversive stimulus (commonly seen as unpleasant) thereby increasing that behavior's frequency. In the Skinner box experiment, negative reinforcement can be a loud noise continuously sounding inside the rat's cage until it engages in the target behavior, such as pressing a lever, upon which the loud noise is removed.
  3. Positive punishment (also called "Punishment by contingent stimulation") occurs when a behavior (response) is followed by an aversive stimulus, such as introducing a shock or loud noise, resulting in a decrease in that behavior.
  4. Negative punishment (also called "Punishment by contingent withdrawal") occurs when a behavior (response) is followed by the removal of a favorable stimulus, such as taking away a child's toy following an undesired behavior, resulting in a decrease in that behavior.
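The two-by-two structure of the four procedures above can be captured in a few lines of code. The sketch below is just a lookup with hypothetical names: it classifies a consequence by whether a stimulus is added or removed, and by whether the behavior subsequently increases or decreases.

```python
# Hypothetical classifier for the four operant procedures:
# "positive"/"negative" = stimulus added/removed;
# reinforcement/punishment = behavior becomes more/less frequent.

def classify(stimulus_added: bool, behavior_increases: bool) -> str:
    sign = "positive" if stimulus_added else "negative"
    effect = "reinforcement" if behavior_increases else "punishment"
    return f"{sign} {effect}"

print(classify(True, True))    # food pellet for a lever press
print(classify(False, True))   # loud noise stops on a lever press
print(classify(True, False))   # shock follows the response
print(classify(False, False))  # toy taken away after the response
```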

Also:

  • Avoidance learning is a type of learning in which a certain behavior results in the cessation of an aversive stimulus. For example, performing the behavior of shielding one's eyes when in the sunlight (or going indoors) will help avoid the aversive stimulation of having light in one's eyes.
  • Extinction occurs when a behavior (response) that had previously been reinforced is no longer effective. In the Skinner box experiment, this is the rat pushing the lever and being rewarded with a food pellet several times, and then pushing the lever again and never receiving a food pellet again. Eventually the rat would cease pushing the lever.
  • Noncontingent reinforcement refers to delivery of reinforcing stimuli regardless of the organism's (aberrant) behavior. The idea is that the target behavior decreases because it is no longer necessary to receive the reinforcement. This typically entails time-based delivery of stimuli identified as maintaining aberrant behavior, which serves to decrease the rate of the target behavior. As no measured behavior is identified as being strengthened, there is controversy surrounding the use of the term noncontingent "reinforcement".

Thorndike's law of effect

Operant conditioning, sometimes called instrumental conditioning or instrumental learning, was first extensively studied by Edward L. Thorndike (1874-1949), who observed the behavior of cats trying to escape from home-made puzzle boxes. When first constrained in the boxes, the cats took a long time to escape. With experience, ineffective responses occurred less frequently and successful responses occurred more frequently, enabling the cats to escape in less time over successive trials. In his Law of Effect, Thorndike theorized that successful responses, those producing satisfying consequences, were "stamped in" by the experience and thus occurred more frequently. Unsuccessful responses, those producing annoying consequences, were "stamped out" and subsequently occurred less frequently. In short, some consequences strengthened behavior and some consequences weakened behavior. Thorndike produced the first known learning curves through this procedure. B.F. Skinner (1904-1990) formulated a more detailed analysis of operant conditioning based on reinforcement, punishment, and extinction. Following the ideas of Ernst Mach, Skinner rejected the mediating structures required by Thorndike's "satisfaction" and constructed a new conceptualization of behavior without any such references. While experimenting with homemade feeding mechanisms, Skinner invented the operant conditioning chamber, which allowed him to measure rate of response as a key dependent variable using a cumulative record of lever presses or key pecks.

Operant Conditioning vs Fixed Action Patterns

Skinner's construct of instrumental learning is contrasted with what Nobel Prize winning biologist Konrad Lorenz termed "fixed action patterns," or reflexive, impulsive, or instinctive behaviors. These behaviors were said by Skinner and others to exist outside the parameters of operant conditioning but were considered essential to a comprehensive analysis of behavior.

Fixed action patterns have their origin in the genetic makeup of the animal in question. Examples include ducklings that will follow any moving object they see during the critical period in which the behaviour is released, or the dance that a bee performs. Characteristically, fixed action patterns do not need to be learned or acquired; these behaviours are performed correctly the first time they are performed.

Within operant conditioning, Fixed Action Patterns can be used as reinforcers for learned behaviours. Often, fixed action patterns such as predatory grabbing in dogs can be used as a reinforcer. In police and military dog training, the desire to engage in the predatory bite is often used as a reinforcement for successful completion of a search or an obedience exercise. The amount of desire that a dog might have to engage in the fixed action pattern is also known as "prey drive" although this may well be a misnomer as there is no quantification for how much a dog wants to engage in the predatory sequence.

Fixed action patterns can also get in the way of successful learning. Bailey and Breland note in the paper "The Misbehavior of Organisms" that raccoons cannot be taught to place an item in a jar, due to the fixed action pattern that is released when they begin to place the item in the jar. When a component of a learned sequence triggers the beginning of a fixed action pattern, it is difficult and sometimes impossible to interrupt that sequence before it is completed. In this way, teaching raccoons to place items in jars, pigs to fetch (fetching triggers rooting behaviours), or young ducklings to sit and stay becomes difficult or impossible.

Criticisms

Thorndike's law of effect specifically requires that a behavior be followed by satisfying consequences for learning to occur. There are, however, cases in which learning can be shown to occur without good or bad effects following the behavior. For instance, a number of experiments examining the phenomenon of latent learning showed that a rat needn't receive a satisfying reward (food, if hungry; water, if thirsty) in order to learn a maze; the learning becomes apparent immediately after the desired reward is introduced. However, views claiming such research invalidates theories of operant conditioning are molecular to a fault. If the rat has a history of "searching behavior" being reinforced in novel environments, the behavior will occur in new environments. This is especially plausible in a species which scavenges for food and has thus likely inherited a propensity for searching behavior to be sensitive to reinforcement. Behaving during initial extinction trials as the organism did during reinforcement trials is not proof of latent learning, as behavior is a function of the history of the individual organism and its genetic endowment and is never controlled by future consequences. That an organism continues to respond during unreinforced trials is well established in the study of intermittent schedules of reinforcement.

A different experiment, in humans, showed that "punishing" the correct behavior may actually cause it to be more frequently taken (i.e. stamp it in). Subjects are given a number of pairs of holes on a large board and required to learn which hole to poke a stylus through for each pair. If the subjects receive an electric shock for punching the correct hole, they learn which hole is correct more quickly than subjects who receive an electric shock for punching the incorrect hole. This cannot, however, be accurately described as punishment if it is increasing the probability of the behavior.

Biological correlates of operant conditioning

The first scientific studies identifying neurons that responded in ways that suggested they encode for conditioned stimuli came from work by Rusty Richardson and Mahlon deLong. They showed that nucleus basalis neurons, which release acetylcholine broadly throughout the cerebral cortex, are activated shortly after a conditioned stimulus, or after a primary reward if no conditioned stimulus exists. These neurons are equally active for positive and negative reinforcers, and have been demonstrated to cause plasticity in many cortical regions. Evidence also exists that dopamine is activated at similar times. The dopamine pathways encode positive reward only, not aversive reinforcement, and they project much more densely onto frontal cortex regions. Cholinergic projections, in contrast, are dense even in posterior cortical regions like the primary visual cortex. A study of patients with Parkinson's disease, a condition attributed to the insufficient action of dopamine, further illustrates the role of dopamine in positive reinforcement. It showed that while off their medication, patients learned more readily with aversive consequences than with positive reinforcement. Patients who were on their medication showed the opposite to be the case, positive reinforcement proving to be the more effective form of learning when the action of dopamine is high.

Factors that alter the effectiveness of consequences

When using consequences to modify a response, the effectiveness of a consequence can be increased or decreased by various factors. These factors can apply to either reinforcing or punishing consequences.

  1. Satiation: The effectiveness of a consequence will be reduced if the individual's "appetite" for that source of stimulation has been satisfied. Inversely, the effectiveness of a consequence will increase as the individual becomes deprived of that stimulus. If someone is not hungry, food will not be an effective reinforcer for behavior. Satiation is generally only a potential problem with primary reinforcers, those that do not need to be learned such as food and water.
  2. Immediacy: After a response, how immediately a consequence is then felt determines the effectiveness of the consequence. More immediate feedback will be more effective than less immediate feedback. If someone's license plate is caught by a traffic camera for speeding and they receive a speeding ticket in the mail a week later, this consequence will not be very effective against speeding. But if someone is speeding and is caught in the act by an officer who pulls them over, then their speeding behavior is more likely to be affected.
  3. Contingency: If a consequence does not contingently (reliably, or consistently) follow the target response, its effectiveness upon the response is reduced. But if a consequence follows the response consistently after successive instances, its ability to modify the response is increased. A consistent schedule of reinforcement leads to faster learning, while a variable schedule leads to slower learning. Behavior learned under intermittent reinforcement is more difficult to extinguish than behavior learned under a highly consistent schedule (a small sketch of two such schedules follows this list).
  4. Size: This is a "cost-benefit" determinant of whether a consequence will be effective. If the size, or amount, of the consequence is large enough to be worth the effort, the consequence will be more effective upon the behavior. An unusually large lottery jackpot, for example, might be enough to get someone to buy a one-dollar lottery ticket (or even to buy multiple tickets). But if a lottery jackpot is small, the same person might not feel it to be worth the effort of driving out and finding a place to buy a ticket. In this example, it's also useful to note that "effort" is a punishing consequence. How these opposing expected consequences (reinforcing and punishing) balance out will determine whether the behavior is performed or not.
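As promised above, here is a small sketch of the contingency point. The class names and values are hypothetical; the sketch contrasts a fixed-ratio schedule, which reinforces every nth response, with a variable-ratio schedule, which reinforces unpredictably at the same average rate.

```python
import random

# Illustrative reinforcement schedules (hypothetical class names).

class FixedRatio:
    """Reinforces every nth response - a highly consistent schedule."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self) -> bool:  # True means the response was reinforced
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """Reinforces each response with probability 1/n - an intermittent schedule."""
    def __init__(self, n):
        self.p = 1.0 / n
    def respond(self) -> bool:
        return random.random() < self.p

fr, vr = FixedRatio(5), VariableRatio(5)
print(sum(fr.respond() for _ in range(100)))  # exactly 20 reinforcements
print(sum(vr.respond() for _ in range(100)))  # about 20, but unpredictable
```

The unpredictability of the variable schedule is one way to see why behavior learned under intermittent reinforcement is harder to extinguish: the early trials of extinction look no different to the organism than an ordinary unreinforced run.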

Most of these factors exist for biological reasons. The biological purpose of the Principle of Satiation is to maintain the organism's homeostasis. When an organism has been deprived of sugar, for example, the effectiveness of the taste of sugar as a reinforcer is high. However, as the organism reaches or exceeds its optimum blood-sugar level, the taste of sugar becomes less effective, perhaps even aversive.

The principles of Immediacy and Contingency exist for neurochemical reasons. When an organism experiences a reinforcing stimulus, dopamine pathways in the brain are activated. This network of pathways "releases a short pulse of dopamine onto many dendrites, thus broadcasting a rather global reinforcement signal to postsynaptic neurons." This results in the plasticity of these synapses allowing recently activated synapses to increase their sensitivity to efferent signals, hence increasing the probability of occurrence for the recent responses preceding the reinforcement. These responses are, statistically, the most likely to have been the behavior responsible for successfully achieving reinforcement. But when the application of reinforcement is either less immediate or less contingent (less consistent), the ability of dopamine to act upon the appropriate synapses is reduced.

Operant variability

Operant variability is what allows a response to adapt to new situations. Operant behavior is distinguished from reflexes in that its response topography (the form of the response) is subject to slight variations from one performance to another. These slight variations can include small differences in the specific motions involved, differences in the amount of force applied, and small changes in the timing of the response. If a subject's history of reinforcement is consistent, such variations will remain stable because the same successful variations are more likely to be reinforced than less successful variations. However, behavioral variability can also be altered when subjected to certain controlling variables.

An extinction burst will often occur when an extinction procedure has just begun. This consists of a sudden and temporary increase in the response's frequency, followed by the eventual decline and extinction of the behavior targeted for elimination. Take, as an example, a pigeon that has been reinforced to peck an electronic button. During its training history, every time the pigeon pecked the button, it received a small amount of bird seed as a reinforcer. So, whenever the bird is hungry, it will peck the button to receive food. However, if the button were to be turned off, the hungry pigeon will first try pecking the button just as it has in the past. When no food is forthcoming, the bird will likely try again... and again, and again. After a period of frantic activity, in which its pecking behavior yields no result, the pigeon's pecking will decrease in frequency.

The evolutionary advantage of this extinction burst is clear. In a natural environment, an animal that persists in a learned behavior, despite its not producing immediate reinforcement, might still have a chance of producing reinforcing consequences if it tries again. This animal would be at an advantage over another animal that gives up too easily.

Extinction-induced variability serves a similar adaptive role. When extinction begins, and if the environment allows for it, an initial increase in the response rate is not the only thing that can happen. Imagine a bell curve. The horizontal axis would represent the different variations possible for a given behavior. The vertical axis would represent the response's probability in a given situation. Response variants in the middle of the bell curve, at its highest point, are the most likely because those responses, according to the organism's experience, have been the most effective at producing reinforcement. The more extreme forms of the behavior would lie at the lower ends of the curve, to the left and to the right of the peak, where their probability for expression is low.

A simple example would be a person inside a room opening a door to exit. The response would be the opening of the door, and the reinforcer would be the freedom to exit. For each time that same person opens that same door, they do not open the door in the exact same way every time. Rather, each time they open the door a little differently: sometimes with less force, sometimes with more force; sometimes with one hand, sometimes with the other hand; sometimes more quickly, sometimes more slowly. Because of the physical properties of the door and its handle, there is a certain range of successful responses which are reinforced.

Now imagine in our example that the subject tries to open the door and it won't budge. This is when extinction-induced variability occurs. The bell curve of probable responses will begin to broaden, with more extreme forms of behavior becoming more likely. The person might now try opening the door with extra force, repeatedly twist the knob, try to hit the door with their shoulder, maybe even call for help or climb out a window. This is how extinction causes variability in behavior, in the hope that these new variations might be successful. For this reason, extinction-induced variability is an important part of the operant procedure of shaping.
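The bell-curve picture maps directly onto sampling from a normal distribution whose spread widens once reinforcement stops. The sketch below is purely illustrative; the force values and spreads are assumptions.

```python
import random

# Illustrative model of extinction-induced variability: the force of a
# door-opening response is drawn from a bell curve whose spread widens
# when the usual reinforcement (the door opening) stops.

def door_push_force(under_extinction: bool) -> float:
    typical_force, spread = 10.0, 1.0   # reinforced responses cluster tightly
    if under_extinction:
        spread = 4.0                    # curve broadens: extreme variants likelier
    return random.gauss(typical_force, spread)

reinforced = [door_push_force(False) for _ in range(1000)]
extinction = [door_push_force(True) for _ in range(1000)]
print(f"hardest reinforced push: {max(reinforced):.1f}")
print(f"hardest extinction push: {max(extinction):.1f}")  # notably larger
```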

Avoidance learning

Avoidance training belongs to negative reinforcement schedules. The subject learns that a certain response will result in the termination or prevention of an aversive stimulus. There are two kinds of commonly used experimental settings: discriminated and free-operant avoidance learning.

Discriminated avoidance learning

In discriminated avoidance learning, a novel stimulus such as a light or a tone is followed by an aversive stimulus such as a shock (CS-US, similar to classical conditioning). During the first trials (called escape trials) the animal usually experiences both the CS and the US, showing the operant response to terminate the aversive US. Over time, the animal learns to perform the response already during the presentation of the CS, thus preventing the aversive US from occurring. Such trials are called avoidance trials.

Free-operant avoidance learning

In this experimental setting, no discrete stimulus is used to signal the occurrence of the aversive stimulus. Rather, the aversive stimulus (usually a shock) is presented without explicit warning stimuli.
There are two crucial time intervals determining the rate of avoidance learning. The first one is called the S-S interval (shock-shock interval). This is the amount of time which passes between successive presentations of the shock (unless the operant response is performed). The other one is called the R-S interval (response-shock interval), which specifies the length of the time interval following an operant response during which no shocks will be delivered. Note that each time the organism performs the operant response, the R-S interval without shocks begins anew.
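The interaction of the two intervals can be shown with a short discrete-time simulation. This is a sketch under assumed values (a 5-second S-S interval, a 20-second R-S interval, and a fixed response pattern), not an established experimental protocol.

```python
# Sketch of free-operant avoidance timing, with illustrative values.
# A shock is scheduled every SS seconds; each operant response postpones
# the next shock to RS seconds after the response.

SS, RS = 5, 20        # shock-shock and response-shock intervals (seconds)
next_shock = SS
shocks = 0

for t in range(1, 61):            # one simulated minute, 1-second steps
    responded = (t % 15 == 0)     # assume the animal responds every 15 s
    if responded:
        next_shock = t + RS       # the R-S interval begins anew
    if t == next_shock:
        shocks += 1
        next_shock = t + SS       # without responses, shocks recur every S-S
print("shocks received:", shocks) # 2 here, versus 12 with no responding
```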

Two-process theory of avoidance

This theory was originally established to explain learning in discriminated avoidance learning. It assumes that two processes take place. (a) Classical conditioning of fear. During the first trials of the training, the organism experiences both the CS and the aversive US (escape trials). The theory assumes that during those trials classical conditioning takes place by pairing the CS with the US. Because of the aversive nature of the US, the CS comes to elicit a conditioned emotional reaction (CER) - fear. In classical conditioning, presenting a CS conditioned with an aversive US disrupts the organism's ongoing behavior. (b) Reinforcement of the operant response by fear reduction. Because during the first process the CS signaling the aversive US has itself become aversive by eliciting fear in the organism, reducing this unpleasant emotional reaction serves to motivate the operant response. The organism learns to make the response during the CS, thus terminating the aversive internal reaction elicited by the CS. An important aspect of this theory is that the term "avoidance" does not really describe what the organism is doing. The organism does not "avoid" the aversive US in the sense of anticipating it; rather, it escapes an aversive internal state caused by the CS.

  • One of the practical aspects of operant conditioning in relation to animal training is the use of shaping (reinforcing successive approximations to a target behavior), as well as chaining.

Verbal Behavior

In 1957 Skinner published Verbal Behavior, a theoretical extension of the work he had pioneered since 1938. This work extended the theory of operant conditioning to human behavior previously assigned to language, linguistics, and related fields. Verbal Behavior is the logical extension of Skinner's ideas, in which he introduced new functional relationship categories such as intraverbals, autoclitics, mands, tacts, and the controlling relationship of the audience. All of these relationships were based on operant conditioning and relied on no new mechanisms, despite the introduction of new functional categories.

Four term contingency

Modern behavior analysis, which is the name of the discipline directly descended from Skinner's work, holds that behavior is explained in four terms: an establishing operation (EO), a discriminative stimulus (Sd), a response (R), and a reinforcing stimulus (Srein or Sr for reinforcers, sometimes Save for aversive stimuli).
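One way to picture the four-term contingency is as a simple record. The sketch below uses hypothetical field names to show how a single analyzed episode of behavior might be represented.

```python
from dataclasses import dataclass

# Hypothetical record of one episode under the four-term contingency:
# EO -> Sd -> R -> Sr.

@dataclass
class Contingency:
    establishing_operation: str   # EO: makes the consequence valuable
    discriminative_stimulus: str  # Sd: signals that reinforcement is available
    response: str                 # R: the behavior itself
    reinforcing_stimulus: str     # Sr: the consequence that maintains R

lever_press = Contingency(
    establishing_operation="food deprivation",
    discriminative_stimulus="light on in the chamber",
    response="lever press",
    reinforcing_stimulus="food pellet",
)
print(lever_press)
```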

Operant Hoarding

Operant Hoarding is a term referring to the choice made by a rat, on a compound schedule called a multiple schedule, that maximizes its rate of reinforcement in an operant conditioning context. More specifically, rats were shown to have allowed food pellets to accumulate in a food tray by continuing to press a lever on a continuous reinforcement schedule instead of retrieving those pellets. Retrieval of the pellets always instituted a one-minute period of extinction during which no additional food pellets were available but those that had been accumulated earlier could be consumed. This finding appears to contradict the usual finding that rats behave impulsively in situations in which there is a choice between a smaller food object right away and a larger food object after some delay. See schedules of reinforcement.
