Tuesday, October 22, 2019
Training and selective stimulus control in rats Essay Example
The aim of the experiment was to show that rats demonstrate stimulus discrimination and selective stimulus control during operant conditioning. The first hypothesis was that the subject would learn to discriminate between the VR16 conditions, which signalled reinforcement, and the EXT conditions. It was also hypothesised that the stimulus used to discriminate between VR16 and EXT would be either the light or the tone, not a combination of the two. The subject in this experiment was a 16-month-old female Sprague Dawley albino rat, randomly selected from a group of 20. The apparatus was an operant chamber that presented two stimuli (a light and a tone), with diluted condensed milk as the reinforcer. During the first week of experimentation the subject underwent discrimination training; this was followed by a series of probe trials in the second week. The first week's results showed that the subject learned that no reinforcement was given during EXT, because its rate of responding decreased. The second week's results showed that the high tone was the stimulus the subject used to discriminate between the conditions. These results supported both hypotheses, and it was concluded that rats do demonstrate stimulus discrimination and selective stimulus control.

The major theorists in the development of operant conditioning were Edward Thorndike (1910), John Watson (1914), and Burrhus Skinner (1938) (Huitt and Hummel, 1997). They proposed that learning is the result of the application of consequences following overt behaviour; that is, subjects begin to connect certain responses with certain stimuli. This led Thorndike to conclude that the probability of a specific response recurring changes according to the consequences that follow it, and he labelled this learning conditioning (Carlson and Buskist, 1997; Huitt and Hummel, 1997).

In 1910, Thorndike used the notion of consequences to teach cats and dogs to manipulate a latch in a puzzle-box, opening a door so that they could escape (Huitt and Hummel, 1997). The consequence was either punishment or reward (Carlson and Buskist, 1997). Thorndike measured the time it took each animal to escape over successive trials and noted that the animal's latency to escape decreased consistently, until it would operate the latch immediately after being placed in the box (Huitt and Hummel, 1997). The reward of being freed from the box strengthened the association between the stimulus of being in the box and the appropriate action; Thorndike concluded that the reward strengthened stimulus-response associations (Carlson and Buskist, 1997). He then went on to formulate his law of effect, which can be summarised by saying that an animal is more likely to repeat a response if its result is favourable, and less likely to repeat it if the consequences are not (Carlson and Buskist, 1997).

There were two possible consequences of a behaviour: reinforcement or punishment. Each could be divided into two sub-categories, positive and negative, according to whether a stimulus was added to or taken away from the environment in order to change the probability of a given response occurring again (Carlson and Buskist, 1997; Werzburg University). A short illustrative sketch of this classification is given below.
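The following Python snippet is a minimal sketch of that two-by-two classification; it is not part of the original study, and the function name and examples are assumptions introduced here for illustration.

```python
# Illustrative sketch only (not from the study): classify an operant
# consequence by (a) whether a stimulus is added to or removed from the
# environment and (b) whether the target behaviour becomes more likely.

def classify_consequence(stimulus_added: bool, behaviour_increases: bool) -> str:
    valence = "positive" if stimulus_added else "negative"
    effect = "reinforcement" if behaviour_increases else "punishment"
    return f"{valence} {effect}"

# Food delivered after a lever press, and pressing becomes more frequent:
print(classify_consequence(True, True))    # positive reinforcement
# A stimulus removed after a response, and the response becomes rarer:
print(classify_consequence(False, False))  # negative punishment
```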
Punishment decreases the repetition of a behaviour, while reinforcement usually increases the likelihood of a response being repeated. A stimulus that acts as an indicator to the subject that a reinforcer is available is said to be a discriminative stimulus (Gleitman, 1995). A discriminative stimulus affects the subject's behaviour considerably (Gleitman, 1995), as it influences the likelihood of a response occurring (Carlson and Buskist, 1997).

Reynolds (1961) conducted experiments in which two pigeons learned to peck a red key bearing a white triangle. To determine which element was the discriminative stimulus, he tested the two birds with either a plain red key or a plain key showing only a white triangle. Reynolds (1961) found that the first bird used the red key as the discriminative stimulus, while the second bird used the white triangle to discriminate between stimuli. This experiment is also an example of selective stimulus control, in which each pigeon selected the stimulus it treated as responsible for producing the reinforcer.

To study effectively how a subject behaves in a given environment and in response to particular stimuli, it was necessary to establish a schedule of reinforcement: a set of guidelines specifying how often the subject is reinforced (Gleitman, 1995). Reinforcement could be delivered according to one of two categories of schedule, continuous or intermittent (Gleitman, 1995), or withheld entirely under extinction. Continuous reinforcement simply means that the behaviour is followed by a consequence each time it occurs. Intermittent schedules are based either on the passage of time (interval schedules) or on the number of correct responses emitted (ratio schedules). The consequence can be delivered after the same amount of time or the same number of correct responses on every occasion (fixed), or after an amount of time or number of responses that varies around a particular mean (variable). This yields four classes of intermittent schedule: fixed interval (FI), fixed ratio (FR), variable interval (VI), and variable ratio (VR) (Gleitman, 1995). (Continuous reinforcement is in fact a special case of a fixed ratio schedule, requiring only one response before each consequence.) The final schedule was extinction, during which the subject is no longer reinforced for producing a previously reinforced response; because responding is no longer rewarded, its frequency decreases until it stops altogether (Carlson and Buskist, 1997; Huitt and Hummel, 1997; Gleitman, 1995).

For the purposes of this experiment, two alternating schedules of consequence were used (Lab Manual Psychology 111/112, 2002): a variable ratio of 16 (VR16), in which a reinforcer was given after an average of 16 responses, and extinction (EXT). A VR schedule was chosen because a variable ratio was thought to be the best schedule for maintaining behaviour (Werzburg University). The aim of the experiment was to demonstrate stimulus discrimination and selective stimulus control in rats and, in turn, to support past research indicating that learning comes from experience. The subject for this experiment was a female albino rat, approximately 18 months old. The rat was placed in the operant chamber and presented with two stimuli, a light and a tone: VR16 was paired with a dull light and a high tone (1000 Hz), while EXT was paired with a bright light and a low tone (500 Hz) (Lab Manual Psychology 111/112, 2002). A minimal simulation sketch of these two schedule components follows below.
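The sketch below, in Python, illustrates how the two alternating components differ; it is not the procedure used in the study. The geometric approximation of VR16 (reinforcing each response with probability 1/16, so the requirement varies around an average of 16), the response counts, and all names are assumptions introduced for illustration.

```python
import random

random.seed(1)  # reproducible illustration only

def run_component(schedule: str, n_responses: int = 200,
                  p_reinforce: float = 1.0 / 16) -> int:
    """Count reinforcers earned over a burst of responses in one component.

    "VR16": each response is reinforced with probability 1/16, so the
            requirement varies around an average of 16 responses.
    "EXT":  extinction; responding is never reinforced.
    """
    reinforcers = 0
    for _ in range(n_responses):
        if schedule == "VR16" and random.random() < p_reinforce:
            reinforcers += 1
    return reinforcers

# Alternating components, each signalled by a compound stimulus, as in
# the discrimination-training phase described above (stimulus pairings
# are from the text; everything else is assumed).
components = [("VR16", "dull light + 1000 Hz tone"),
              ("EXT", "bright light + 500 Hz tone")]

for schedule, stimulus in components:
    print(f"{schedule} ({stimulus}): "
          f"{run_component(schedule)} reinforcers per 200 responses")
```

Under this sketch, the VR16 component yields roughly a dozen reinforcers per 200 responses while the EXT component yields none, which is exactly the contingency the rat must learn to use the light and tone to discriminate.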
From a review of past research, two hypotheses were formulated. The first was that the subject would learn to discriminate between the VR16 conditions, which signalled reinforcement, and the EXT conditions, so that rates of responding during VR16 would be higher than during EXT. The second was that the stimulus the rat used to discriminate would be either the light or the tone, not a combination of the two (selective stimulus control).