
Science of Race

This module looks at race science, the study of racial groups from the perspective of the social sciences. We will investigate three types of race science. The first is research carried out by scientists to discover something about members of a given racial group. In this case, race science is a straightforward investigation of differences among races; that is, it is legitimate and ethical research on racial groups.

Race science, however, can also have a second, less attractive meaning: it can be research in which scientists use—or misuse—data to assign negative value to particular racial groups; we call this pseudoscience, or false science. This type of race science confirmed racist views and was common in the nineteenth and early twentieth centuries. The study of variation among groups of humans is real science. However, when researchers made (and a few continue to make) assumptions about the inferiority of certain groups compared with other groups, those scientists were engaged in pseudoscience. Genuine science looks for group differences, but it becomes pseudoscience when researchers assign positive or negative value to those differences.

Race science has a third meaning: the use of vulnerable populations as subjects of research. Vulnerable populations can include members of minority racial groups (inside and outside the United States), inmates of correctional and/or mental health institutions, and orphans (minors bereft of parental protection). Members of vulnerable groups might not understand the risks involved in research, or they might not have had the risks of participation explained to them. They might not even have been asked for their permission to be used as subjects. Slaves were a vulnerable group because they were sometimes exploited as subjects of medical research. Their permission was not always sought, the purposes of the research were not always revealed to them, and the dangers were not always explained to them—but many were forced to participate in such experimentation nonetheless.

In this module, we present several examples of pseudoscience and discuss ways it has contributed to racism. We open with eugenics, the social philosophy that underlay the science of better breeding and the eugenics programs in the United States and Germany from about 1900 to the 1930s. We then explore the development of the first American test of intelligence (1916) and its cousins, standardized achievement tests (1920s). We investigate the tests themselves, the testing processes, and the scoring of tests because the tests had biases built into them, and they remain demonstrably biased today, in spite of considerable effort to make them less so. Results from tests during the eugenics era—as well as results from some contemporary tests—have occasionally been used to support racism.

We then turn to another area where race is important: the use of vulnerable populations as subjects of medical research in the years before codes of ethics were developed. You may be surprised to learn how recently such research was still being conducted.

We conclude the module with a discussion of how legitimate and ethical race science is now employed to examine human genetics and human variation. Scientific research, particularly in the area of human genetics, has revealed data that dramatically challenge the traditional ideas about race.

Science of Race
Topics
I. Cornerstones of Race Science

II. Philosophy and Practices of Eugenics

III. Measurement of Character and Ability

IV. Exploitation of Vulnerable Populations

V. Race Science and Modern Genetics

I. Cornerstones of Race Science
Two cornerstones of race science in early-twentieth-century America were race as biology and the racial hierarchy. Both concepts were elements of racial pseudoscience, meaning they helped to confirm the beliefs scientists held about the inferiority of people who belong to nonwhite racial groups.

Race as Biology
In the nineteenth century, Americans in general—including this nation’s social scientists, medical scientists, and natural scientists—believed some races were genetically and physically superior to others. They also believed that cultural differences, economic differences, and historical differences between groups were not as important in defining race as were the inheritable, biological differences they could observe with their own eyes. Reasoning from what you see—different skin colors, hair textures, and eye folds, for example—is an idea that appears again and again in this module.

But today we hold a different view. We believe that the differences we can see are only one facet of race, and an unimportant one, at that. We find that race has important historical and cultural determinants as well. Human geneticists today identify genetic mutations and variants that cluster by race, but these differences are no longer interpreted as indicators of the inferiority or superiority of any racial group.

Relying on visible differences among races to draw conclusions about individuals is like sorting all the books in a library by the colors of their covers, to borrow and build upon a metaphor from philosopher Kwame Anthony Appiah. “Only a tiny portion of an individual’s genetic inheritance, like a tiny fraction of a book’s character, is taken into account by such a system,” and no librarian would believe that deep knowledge about people would be revealed by such a system of classification (Appiah, 1992, p. 38). However, for centuries, we believed that what we see is the most important source of information for categorizing people into groups.

The Racial Hierarchy
There are two components to the racial hierarchy, both of which are examples of Eurocentric thinking. A hierarchy by definition assigns order to items or concepts: from top to bottom, from best to worst—in this case, from most advanced to least. First, the racial hierarchy placed a value on each racial group: white racial groups (that is, people of European ancestry) were considered better, more advanced, and more civilized than others, in terms of genetic inheritance.

Second, the racial hierarchy also marked the progress each group was making toward civilization. In this way of thinking, the groups deemed most primitive had shown the least capability of advancement and change. These Eurocentric values, held by white people and “confirmed” by white researchers engaged in the pseudoscience of race science, placed white racial groups at the apex of civilization and found nonwhite groups to be more primitive and incapable of improvement.

II. Philosophy and Practices of Eugenics
The Philosophy
The quest for racial purity and human perfection formed the foundation of the eugenics movement in America around the end of the nineteenth century (Black, 2003). Historians John Jackson and Nadine Weidman describe eugenics as “the idea that ‘good people’ should be encouraged to reproduce and ‘bad people’ should be discouraged from it” (Jackson & Weidman, 2004, p. 109). After World War II, it became widely known that Nazi Germany had patterned its programs of eugenics after those in the United States. At that point, American eugenic organizations dropped the name eugenics and began to use the term genetics (Black, 2003). Figure 2.1 reproduces the logo from the Second International Congress of Eugenics in 1921. The logo illustrates the wide range of disciplines that were associated with the racial science of the eugenics movement.

Figure 2.1 Tree of Eugenics

Source: International Eugenics Discussion, 2007

Eugenics became a familiar concept among Americans, and its proponents used elements of popular culture to extol its virtues. A popular film titled The Black Stork, referred to as “the eugenics love story,” played in theatres from 1916 to the 1940s and promoted the forced sterilization of defectives and the withholding of necessary medical treatment from defective newborns (Lombardo, 1997). Also, during the height of the eugenics era (the early 1900s), “Fitter Family” contests were popular at state and local fairs throughout the country; these contests used such criteria as health, family pedigree or genealogy, race, intelligence, and even cleanliness (Black, 2003) to determine which families most closely embodied the tenets of eugenics.

Better Breeding
When English researcher Sir Francis Galton coined the term eugenics in 1883, he defined the concept in terms of being “well-born,” implying similar outcomes for humans as for cattle. Borrowing from the science of animal husbandry, which focused on the selective breeding of domesticated animals like cattle, sheep, horses, cats, and dogs, eugenicists advocated the same methods to develop a racially pure and fit stock of humans.

Animal husbandry, also known as animal science, has long been preoccupied with the selective mating of animals to produce a certain pedigree or stock with superior characteristics. For example, animal scientists were—and continue to be—interested in breeding cattle with increased resistance to disease. Much as a selected pairing of stock animals might be deemed fit or unfit, and evidence of “taint” in a breeding animal would be frowned upon, eugenicists similarly judged the levels of purity and fitness required for a fit marriage, often using the same terminology.

The same language had been used for centuries to define the levels of the racial hierarchy. The emphasis on fitness and breeding was also a preoccupation of slave owners in the South. Slaves who were strong and who were perceived as good workers were often purchased as “breeders” to produce children with similar characteristics (Black, 2003).

The American Breeders’ Association
The American Breeders’ Association (ABA) was an early and ardent supporter of race science research and eugenics in the United States (Black, 2003). Created in 1903, the ABA was made up of agricultural societies that supported research in animal husbandry and hereditary science. Agricultural science, now known as agronomy, focuses on food and plant production using the latest tools and technology of science. Charles Davenport, a Harvard-trained zoologist and a leading American eugenicist, convinced the ABA at its first annual meeting in St. Louis in 1903 to include a focus on eugenics. The Eugenics Committee “would devise methods of recording the values of the blood of individuals, families, people, and races” (Black, 2003, p. 39).

It did not seem out of place to combine research on the breeding of superior plants and animals with the breeding of superior humans in the early 1900s. An animal breeder stated that “every race-horse, every straight back bull, every premium pig, tells what we can do [for animals] and what we must do for man” (Black, 2003, p. 39). The ABA undertook studies in genetics and encouraged better breeding in humans—work that contributed to a biased form of race science.

The ideas spread by the ABA became popular because they seemed to address the problems of the day. During the nineteenth and early twentieth centuries, people believed that an individual’s propensity toward criminality, poverty, deviousness, or feeblemindedness was inherited (Black, 2003; Cravens, 1978). In other words, people who were poor or mentally ill and people in prisons, reformatories, orphanages, and poorhouses (institutions for the poor) were believed to have come from “poor native stock” (Barr, 1992). Environmental conditions, such as poverty, malnutrition, mistreatment, a lack of opportunities, racism, unemployment, and lack of schooling, were not regarded as factors that might affect a child’s development or a family’s ability to pay their bills. Instead, many believed that an individual’s behaviors were incorporated into the genome, so that sin, sickness, drunkenness, laziness, and criminal tendencies were passed on as genetic heredity to the next generation (Barr, 1992; Cravens, 1978).

Conditions in Society
Around the time eugenics peaked in the United States, our nation was experiencing the largest waves of immigration it had ever seen. Beginning in about the 1880s, immigrants were increasingly the poor, unskilled, and uneducated people from southern and eastern Europe. Many immigrants came from lives of desperate poverty and filled the slums of New York and other port cities. Sentiment against ethnic minorities rapidly increased as cities and their services became overwhelmed serving families of destitute immigrants. Discrimination against racial minorities continued as well.

Think about it

Some researchers have found that when the number of immigrants to a particular region greatly increases, anti-immigrant sentiment there becomes stronger. What might be a few of the reasons?

Figure 2.2 Immigrants Admitted into the United States: Fiscal Years 1900–1999

Source: U.S. Immigration and Naturalization Service, 2002, p. 15

Note: The spike around 1990 resulted from the passing of the Immigration Reform and Control Act (IRCA) of 1986, which provided legalization for illegal immigrants living in the United States. Following the passage of IRCA, 2.7 million illegal immigrants in the United States were able to become legal residents (Marger, 2006).

The Remedies
Eugenicists hoped to “cleanse” society of the unfit and of those whose genes were considered impure. Remedies consisted of forced sterilization of the unfit, denial of medical treatment that would prolong the lives of the unfit, mandatory segregation, restrictive marriage laws, and withholding of medical treatment that could save the lives of newborns with serious birth defects. Paul Popenoe, California’s leading advocate for eugenics and the author of the textbook Applied Eugenics, argued, “The easiest way to counteract feeblemindedness was simple execution” (as quoted in Black, 2003, p. 251). Eugenics societies focused on ways to rid society of the unfit. Swiss psychiatrist and eugenics supporter August Forel issued a dire warning of the dangers of bad breeding when he stated, “The law of heredity winds like a red thread through the family history of every criminal, every epileptic, eccentric, and insane person. Shall we sit still…without applying the remedy?” (as quoted in Black, 2003, p. 257).

Sterilization
Some states passed mandatory sterilization laws for the unfit and legislated restrictive marriage clauses. Virginia, for example, made it illegal for a white person to marry a person who had “one drop” of black blood (Black, 2003, p. 165). These laws were intended to prevent “mongrels and mental defectives” (terms the 1924 Virginia legislature used) from reproducing and spreading their damaged genes through society. Mandatory sterilization laws were applied in the United States, where many individuals in mental institutions and in institutions for the feebleminded were sterilized without their consent. In California, over 11,000 inmates of institutions were sterilized.

The United States was not alone in this respect: Switzerland passed its own sterilization laws in 1928. Denmark soon followed in 1929. Many other countries passed similar legislation in the ensuing years.

Not Mixing the Races
The Richmond Times-Dispatch published an editorial in 1924 supporting the measure to prohibit marriages across racial lines. The editorialist wrote, “Once a drop of inferior black blood gets in his [that is, a white person’s] veins, he descends lower and lower in the mongrel scale” (Black, 2003, p. 167). (Mongrelization is a term borrowed from animal husbandry that designates a mixed-breed animal, specifically a dog.) Clearly, the racial hierarchy was the underlying foundation for eugenics policies in America.

Valuing Nordic People
At the opening of the twentieth century, many believed that the Nordic race was the most superior of the European races. Historians John Jackson and Nadine Weidman reported that so-called “Nordicists” looked for Nordic roots in culture, democracy, and social order and believed that even the “capacity for civilization was racial in nature” (Jackson & Weidman, 2004, p. 105). Nordic people were “long-headed, blond, blue eyed, creative, strong, natural leaders” (Jackson & Weidman, 2004, p. 108). There were calls to discard “all moral sentiment that would stand in the way of a massive breeding program that would eliminate racial inferiors” (Jackson & Weidman, 2004, p. 108) and to help increase the numbers of Nordics. The idea of Nordic superiority was a familiar theme in Nazi propaganda before and during WWII.

German Eugenics
The Nazis borrowed heavily from the race science ideas developed by American eugenicists. Race science in Germany was aimed at creating a master Nordic race and was used to justify Jewish persecution and extermination that resulted in the Holocaust (Black, 2003). The Nazi remedy for the unfit and the Jews was referred to as the Final Solution (Black, 2003).

With money from the United States as well as other international funding, Germany founded the Institute for Anthropology, Human Heredity, and Eugenics in 1927 to conduct research on eugenics. One of the contributors of ideas to this enterprise was American eugenicist Charles Davenport, the head of the Station for Experimental Evolution at Cold Spring Harbor (New York), where he founded the Eugenics Record Office. In Heredity in Relation to Eugenics (1911), Davenport examined how race and disease were biologically based and determined that the “racially robust” were destined to rule the earth (Black, 2003, p. 386). Davenport’s subsequent book Race Crossing in Jamaica (1929) was of special interest to the Germans. His contribution enabled the Nazis to use “pedigrees,” or family trees, to identify Mischlinge (mixed-race Jews). Davenport had ties to several German journals and institutions in the prewar era.

Eugenics laws came quickly under the Nazi government. The first was the Law for the Prevention of Genetically Diseased Offspring, which provided for the involuntary sterilization of people with “hereditary defects” (Jackson & Weidman, 2004, p. 123). In 1935, Hitler signed the Nuremberg Laws to “strip Jews of citizenship,” to prohibit “marriage between Jews and non-Jews,” and “to require all couples wishing to marry to take medical examinations to ensure the purity of the race” (Jackson & Weidman, 2004, p. 124). (Similar racial purity laws were in effect in the United States in the late nineteenth and early twentieth centuries.) The second part of the Nuremberg Laws was the Reich Citizenship Law, which defined anyone with three Jewish grandparents as a Jew and therefore a noncitizen. Anyone who had two Jewish grandparents was considered a Mischling and was also denied citizenship. The Nuremberg Laws proved critical in the oppression of the Jews, stripping them of their citizenship and making them targets for extermination (Bergen, 2003).

American eugenicists were delighted with the German laws. Harry Laughlin, an American geneticist who ran the Eugenics Record Office at Cold Spring Harbor for Davenport, was proud that the German law relating to sterilization was based on the law he had written. Jackson and Weidman report that Laughlin received “an honorary doctorate from the University of Heidelberg in 1936” for his work on the law of mandatory sterilization (2004, p. 123).

The American form of eugenics was similar to that practiced in Germany but a bit less extreme. This American version of race science was focused on developing the tools and technology to identify, manage, manipulate, or eliminate from the breeding pool people who were considered undesirable or defective. The way to improve the human breeding stock, eugenicists reasoned, was to prevent the survival of defective newborns and to remove defective individuals from society, relocating them into institutions to prevent their breeding.

Research in the Era of Eugenics
In the early twentieth century, anthropologists and physicians studied the growth of children of different immigrant and racial groups. They found that on average, these youngsters were considerably shorter (by several inches in middle childhood) and weighed considerably less (tens of pounds less in middle childhood) than the children of white, middle-class American families to whom they were compared. Children of immigrants, children of minority racial groups, and inmates of orphanages (who tended to come from poor families) experienced delayed puberty in comparison to children from white, middle-class families. Yet, scientists assessing such information interpreted the data to infer “poor native stock” and “inferior heredity” of these racial groups, rather than pointing out the evident poverty and malnutrition that characterized the lives of immigrant families and families of the poor (Barr, 1992, p. 159).

III. Measurement of Character and Ability
Phrenology
Race scientists had long been interested in human ability and in ways they might measure it. We begin with an idea from the nineteenth century: visible and measurable attributes of the human skull and their relationship to character, ability, and temperament. Phrenology—reading and interpreting the bumps on the skull—was one method used to assess an individual’s abilities and character.

Think about it

How do you measure up?

Feel your own skull for especially well-developed bumps, and determine the corresponding areas on the phrenology chart in figure 2.3. What would a phrenologist have determined about your character and personality in the late 1800s?

Figure 2.3 A Typical Nineteenth-Century Phrenology Chart

Source: Adapted from Fowler, 1856

Anthropometry
In the nineteenth century, scientists used anthropometric measurements (body and skull proportions) and psychophysical measurements (strength of grip and speed of reaction), believing these to be related to general mental ability. Galton, a scientist and a cousin of Charles Darwin, was trained in statistics. He began collecting such information and developing averages and other statistics for each skill he measured, in an effort to discover more about human ability. Galton ran an anthropometric laboratory in London and also collected data at the immensely popular world’s fairs. Remember from our discussions in module 1 that there was an American anthropometric laboratory at the Louisiana Purchase Exposition in St. Louis in 1904, where measurements of “primitive groups” were compared to those of European white people (Rydell, 1984).

It is important to recognize that measuring skulls and bodies and gauging physical reaction speeds and muscular strength are not racist activities, but are genuine science, part of an attempt to measure the variability of human beings, even of racial or ethnic groups of human beings. A bias exists when scientists interpret these data by using them to support the idea that one racial or ethnic group is inferior to another. That value judgment is what makes real science into pseudoscience.

Think about it

How might you be measured and tested if you had entered Galton’s anthropometric laboratory in London? Explore the online Galton Collection to see photos of the curious instruments Galton used to measure his subjects’ heads and bodies.

Viewing “Types”
In the 1800s, lithography was used to sketch “types”—that is, the faces of individuals who exemplified certain personality attributes. By the end of the century, photography began to be used to record these types. It was believed that by looking closely at facial features, you would be able to discover greatness or ordinariness, or even criminality. Furthermore, by examining and interpreting the features of criminals, it was believed, you could even perceive the particular type of crime committed by a given type of criminal, as defined by certain facial features.

As well as taking anthropometric measurements, Galton experimented with composite photographs to construct “racial types.” He used multiple negatives superimposed one over another to create an “ideal” type of face from many facial photographs. Such a composite might have been used to gather “a group of persons resembling one another in some mental quality” in order to determine “the external characteristics and features most commonly associated with that mental quality” (Study, 1877, p. 573). Galton proposed combining measurements of height and weight with age, hair color, and temperament to determine a “personal equation” that could be correlated with “mental characteristics” (Study, 1877, p. 573). Each of these projects, and Galton’s later system of classification of fingerprints—which we still use today—was based on what you see and what it could indicate about character and mental ability (Kenna, 1964; Study, 1877).
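
Galton’s composite technique can be thought of, in modern terms, as simple averaging of images. Below is a minimal, purely illustrative sketch of that idea in Python; it assumes same-sized digital grayscale photographs (the file names are hypothetical) and is a digital analogue, not a reconstruction of Galton’s photographic method.

```python
# Illustrative sketch only: a digital analogue of Galton's composite portraits.
# Assumes same-sized grayscale images; the file names are hypothetical.
import numpy as np
from PIL import Image

def composite(paths):
    """Average several equally sized grayscale face images into one composite."""
    arrays = [np.asarray(Image.open(p).convert("L"), dtype=np.float64) for p in paths]
    mean_image = np.mean(np.stack(arrays), axis=0)
    return Image.fromarray(mean_image.astype(np.uint8))

# Example (hypothetical files):
# composite(["face1.png", "face2.png", "face3.png"]).save("composite.png")
```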

Screening Immigrants
In 1912 and 1913, researchers from the Training School (an institute for the feebleminded) in Vineland, New Jersey, carried out a series of experiments. Working alongside physicians at Ellis Island, where immigrants entered the United States, they sought “to see whether or not persons of considerable experience in institutions for the feebleminded can and do have the ability to pick [mental] defectives at sight.” They hoped to identify and turn away defective individuals attempting to enter the United States, and also to discover which countries “supply the largest amounts of defectives” (“Notes and News,” 1913, p. 245). While physicians examined individual immigrants as they entered the United States, the Training School researchers observed youngsters to see if they could determine intelligence and character by looking at children’s faces.

One characteristic observed in some children that was imbued with artificial meaning was a tendency to breathe through the mouth, rather than through the nose with the lips closed. “Mouth breathers” were children whose mouths hung open because chronically swollen adenoids made it difficult to breathe through the nose. In the early 1900s, it was thought that children who had problems with adenoids were also deficient mentally. Once again, we see a belief that came about because of what you see. Today, we realize that a child with chronically infected tonsils and adenoids might do poorly in school because respiratory infections can cause a student to miss school.

The idea that you could see intelligence in human faces was very slow to disappear, even decades after tests of intelligence began to appear and were in common use. Psychologist Florence Goodenough felt it still needed to be discussed when she wrote her 1934 textbook on developmental psychology. She displayed six head-and-shoulders photographs of preschool children and asked readers to rank the intelligence of the children from high to low. Here is how she defined the task:

Look over the photographs of these six children and decide which you think is the brightest. Write the corresponding letter on a sheet of paper. Then examine the remaining photographs and decide which child you think is next in order of intelligence. Continue until all have been ranked. Then turn to the list of IQs and see how closely your ranking agrees with the test results. (p. 318)

Measuring Mental Ability
Around 1900, efforts at assessing character and mental ability were under way in medicine, criminology, and psychology, using physical measures and photography. But no one had developed a test that seemed satisfactory to characterize intelligence, then called mental ability or mental capacity. For decades, the focus had been on what you see in the human face that could be a valid indicator of intelligence.

The Binet Scales
We now focus on young children, for the first successful test of intelligence was developed for use with school-age children. In 1904, the minister of public instruction in Paris appointed a commission to make recommendations for the education of children in the slums. Psychologist Alfred Binet and his physician colleague Theophile Simon were tasked with developing a way to separate children with subnormal mental ability, who needed a simpler, slower program, from children who were normal and could benefit from regular schooling.

At the time, the Parisian school system categorized “subnormal” children into two groups: “those of backward intelligence, and those who are unstable.” Unstable children were “turbulent, vicious, rebellious to all discipline; they lack sequence of ideas, and probably power of attention.” Unstable youngsters were not considered in this sorting of students; instead, Binet concentrated on separating the “normal” from those who were “backward” in intelligence (Binet, 1916, p. 191).

There was a great need to organize and systematize elementary schooling at this time. Providing an adequate education—sometimes even a desk—to the crowds of children who came to already overcrowded, free public schools in France and in the United States was very difficult. Classrooms in both countries were filled with students of all different ages and sizes, and teachers struggled to manage. Students entered school at very different ages. In the United States, attendance was not compulsory, so children attended when their parents could spare them. Students were often ill for months on end and thus made very poor progress through school. By one estimate, between one-third and one-half of schoolchildren in the United States failed to progress through the grades as they should have. Teachers were often unwilling to promote students who scored poorly, so the lower grades of schools became overcrowded with older students who failed to progress (Barr, 2002).

To meet the challenge of sorting Parisian slum children by ability, Binet and Simon borrowed experimental tasks from many sources and studied strategies in use for detecting subnormal mental ability. They reported studying and then discarding the psychologists’ techniques of reaction time and speed of counting dots. They tried and likewise discarded the old ideas of phrenology and palmistry, and they discounted studies of handwriting and measures of fluency of speech. Finally, Binet borrowed questions used by French neurologists who had the responsibility to identify children who should be admitted into institutions for the insane or the feebleminded.

Binet’s 1908 scale was successfully used in Paris. The questions Binet and Simon used were chosen to reflect the ordinary items in a child’s life (remember, these were children living in the slums of Paris). By testing many children, they were able to discover how many and what types of questions an ordinary six-year-old could answer, and how many and what types of questions seven-, eight-, nine-, and ten-year-olds should be able to answer. Using this information, they constructed average scores that represented the ordinary knowledge of typical children from age six to age ten. If a child could answer the questions expected of an average six-year-old, that child received a designation of mental age of six. If the child’s chronological age corresponded to his or her mental age, then he or she was deemed normal. If, however, a child of ten was able to answer only the types of questions a seven-year-old would be expected to answer, then that discrepancy marked the child as subnormal. An especially bright child might have been found to have a mental age two or three years beyond the child’s chronological age.
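
The arithmetic behind this designation is straightforward: compare the mental age a child earns on the scale with the child’s chronological age. The minimal sketch below illustrates that comparison; the two-year threshold and the labels are hypothetical simplifications, not Binet’s actual decision rule, which also drew on the medical and pedagogical reports described in the next paragraph.

```python
# Illustrative sketch only: the mental-age comparison described above.
# The two-year threshold and the labels are hypothetical simplifications;
# Binet's actual procedure also weighed medical and pedagogical reports.
def classify_child(mental_age: int, chronological_age: int) -> str:
    """Compare mental age (from the 1908 scale) with chronological age."""
    gap = mental_age - chronological_age
    if gap <= -2:   # e.g., a ten-year-old answering only a seven-year-old's questions
        return "subnormal"
    if gap >= 2:    # e.g., two or three years beyond chronological age
        return "especially bright"
    return "normal"

print(classify_child(7, 10))  # subnormal
print(classify_child(6, 6))   # normal
print(classify_child(9, 6))   # especially bright
```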

Binet and Simon’s test succeeded primarily because they intentionally chose what Binet called the “questions of practical life,” rather than material learned in school (Binet & Simon, 1916, p. 72). The men believed their 1908 scale was just one part of an important investigation of a child and also requested that a physician examine each child for health problems (“the medical method”) and a teacher report on school problems (“the pedagogical method”). The Binet-Simon Scale was, in their words, “the psychological method” (Binet, 1916, p. 40).

Information from the child’s physician and teacher would be combined with the results of the Binet-Simon scale to arrive at a decision as to whether a child could benefit from normal instruction in school. Thus, a child of six who had a mental age of six would be placed into a normal classroom (assuming no indication to the contrary from the other reports). A ten-year-old who scored a mental age of seven, if evidence from the physician, teacher, and parents were considered to confirm the “diagnosis,” would be placed in a classroom of students who were subnormal mentally.

The Stanford-Binet Test
The American version of the Binet Scale was developed by Lewis M. Terman, an American psychologist and professor who had a very different agenda from that of Binet. Terman greatly admired Alfred Binet. Unlike Binet, however, Terman was a committed eugenicist and an enthusiastic fan of the work of Galton and other European and American eugenicists. Terman believed mental ability was inherited. He also embraced the racial hierarchy, and he strongly believed that environment made no difference; heredity was, to his mind, the whole story. Reflecting on intelligence test scores, Terman asserted, “It would hardly be reasonable…to expect that a little incidental instruction in the home would weigh very heavily against these…native [genetic] differences in endowment” that his test illustrated (Terman, 1916, p. 116).

Because Terman was a committed eugenicist, his goal was not to sort, but to sift. Sorting would be arranging children into different classes by ability level; to sift would be to get rid of the least capable children and not allow them to attend school at all. To prevent wasted resources, Terman wished to remove from school all students who were incapable of learning (Barr, 2003). Terman was adamant about removing subnormal children from schools, for they “wasted the resources of the state” (Terman, 1914, pp. 15, 16). He claimed, “When we know more about the physical basis of mental life we shall quit teaching grammar to feeble-minded children who cannot learn to count money” (Terman, 1914, pp. 15, 16).

Terman had graduate students in a seminar course help him adapt the Binet test to use with American schoolchildren. However, by the time he, his students, and several statisticians had constructed the American test, the intention, the assumptions, the administration process, and even the questions had changed.

In using Binet’s test with American children, Terman was proud to tell others that American children did much better than the children of Paris, neglecting to mention that the Paris test had been designed for children living in the slums, whose opportunities for learning were far scarcer than those of Terman’s subjects. When the American test—now a very different test from its French precursor—was completed and published in 1916, Terman named it the Stanford-Binet test.

There are many differences between the Binet-Simon Scale and Terman’s Stanford-Binet test, leading to divergences in the ways each researcher interpreted his test’s findings. Four of these distinctions are critical in the context of our discussion:

· First, Terman’s Stanford-Binet test focused on school-learned abilities, in contrast to Binet’s focus on practical knowledge gained in everyday life.

· Second, Terman’s version, unlike Binet’s original, was timed. The Stanford-Binet Test of Intelligence was not a leisurely activity, nor was it part of a long, careful investigation by physicians and teachers. The Stanford-Binet test took about an hour. Group tests of intelligence had been developed during WWI, so thereafter, tests could be given to many students at once. For Terman and his colleagues, time was money, and the advent of group tests made it possible to assess intelligence for many individuals in comparatively little time.

· Third, Terman attached much greater significance to the results of the Stanford-Binet test than Binet did with his test. Terman believed his test indicated the level of ability with which one was born, a claim not made by Binet. Terman believed a very young child (a baby awaiting adoption, for example) could be tested and by that test, it would be possible to tell whether that child would grow up to be capable of going to graduate school. Further, Terman believed the test assessed much more than intelligence: he thought his test assessed moral character and physical excellence as well. Binet believed children had different paths and speeds of mental growth; Terman believed that children could be tested once and that from the results, parents and teachers could predict that child’s accomplishments in life.

· Fourth, Terman believed the results of his test reflected the entire value of an individual child. One could imagine lining children up and pinning a number (an IQ score) onto their shirts, and ranking the children from the most inferior to the most superior.

The Stanford-Binet Test of Intelligence immediately became a best-seller in schools, clinics, orphanages, and asylums for the feebleminded. It became the most widely used instrument in psychological research and school placement in the United States from the early 1920s until after WWII.

In a typical application of the test that occurred in the early 1920s, Terman and a graduate student used the Stanford-Binet test to classify first graders in Oakland, California. There were no kindergartens in Oakland, so children began schooling at age six. Within a month of entering school, students were tested and placed into classes of children with homogeneous mental ability. This was a great help to teachers, for they had as many as 40 to 60 students in a class. Following Terman’s advice, teachers slowed down and watered down the curriculum for the subnormal children, provided a normal curriculum for the average children, and accelerated the curriculum for the brightest youngsters (Barr, 1992).

Now, let’s shed some light on some additional relevant facts that seem to have been ignored at the time. In Oakland, a port city, many of the dock workers who loaded and unloaded ships were Portuguese-speaking immigrants. Nearly every child in the “subnormal” classroom was a Portuguese-speaking child from a dock worker’s family. Children in the normal and accelerated classes were, for the most part, children born to English-speaking American parents. Why did the results fall along these lines? Quite simply, the test was administered in English (Dickson, 1920).

Such bias was not confined to the West Coast. Researchers in Delaware used the Stanford-Binet test to measure the intelligence of African American children who lived three or four miles from the only school they were allowed to attend. It was too far for most to walk, and no transportation was provided. Many of the children researchers tested had never attended school (Mental Defect, 1919). You probably can guess the conclusions drawn by researchers based on the test results.

There was another element to the bias, as well. Although the test was used principally to study normal children, Terman’s particular, lifelong interest was in the extremes of the scale: those children with the lowest mental abilities, and those with the highest. While developing the test, Terman reported in his personal correspondence to others that children living on farms, minority children, children of immigrants, and children in orphanages performed more poorly on the test of intelligence than did white, middle-class, city-dwelling children (Terman, 1919, pp. 67–68).

Based on the performance discrepancies he observed between the city-dwelling, middle-class, white children and those without the same advantages, Terman concluded that children from farms and orphanages, and children of minority and immigrant populations, were subnormal. That is, he believed they had inherited lower levels of mental capability than white children in cities. This idea becomes especially interesting when you consider that Terman grew up on a farm in a rural area of Indiana.

Today, we would say that Terman’s Stanford-Binet Test of Intelligence assessed, with fair accuracy, the aptitude of white, middle-class children who had attended very good schools. We also would point out that the test was biased against all those other groups of children. We now recognize that environment, culture, and—in particular—language play important roles and are as important as heredity in determining a person’s mental ability.

Standardized Achievement Tests
After group tests of intelligence were developed to assess and sort soldiers during WWI, Terman and others developed standardized achievement tests in the 1920s. Standardized achievement tests (the Stanford Achievement Test, for example) even today are highly correlated with tests of intelligence. That means that much of what the achievement test measures is the same as that which a test of intelligence measures: both types of tests measure the quality and quantity of education. Test makers assumed children taking either test had access to fine schools, capable teachers, quality textbooks, and families with resources to enrich their lives.
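
To make “highly correlated” concrete, here is a small illustrative sketch that computes a Pearson correlation coefficient for two sets of invented scores; the numbers are made up purely for demonstration and do not come from any real test data.

```python
# Illustrative sketch only: what a high correlation between intelligence-test
# and achievement-test scores looks like numerically. The scores are invented.
import statistics

intelligence_scores = [85, 92, 100, 104, 110, 118, 125]
achievement_scores  = [78, 88, 95, 101, 108, 115, 121]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(intelligence_scores, achievement_scores), 3))  # close to 1.0
```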

Studying Race Using Scores from Tests of Intelligence
In the decades after Terman’s first Stanford-Binet Test of Intelligence appeared, it rapidly became the leading outcome measure used in a variety of educational and psychological studies. From about 1920 to 1945, a number of studies were organized around racial or ethnic groups. During these years, researchers were describing intelligence (and achievement) and the differences in these measures between racial and immigrant groups and groups of white, middle-class children born in the United States. During this era, as immigrants settled in and prospered, and especially as Americans emerged from the Great Depression and WWII, the differences among groups of children began to diminish. Goodenough—incidentally, one of Terman’s students—claimed that race was the most important variable of investigation, but findings became less and less interesting as poor and immigrant children became assimilated into middle-class life. Additionally, rural children began to be brought to consolidated village schools, and African American and Native American children began to have access to schooling, or to improved (though still not quality) schooling. As more groups of children gained access to adequate schooling, the gap in intelligence and achievement test scores between white, middle-class students and racial minorities diminished somewhat. But gains have stagnated during the past couple of decades, owing to a contemporary movement toward resegregation of schools (Barr, 1992).

Minority students often come from poorly funded schools with fewer facilities and older textbooks, and they frequently live in homes where there are fewer opportunities for academic stimulation than you would find in a middle-class, white household. In the past, children of minority races attended schools that received only a fraction of the funding that schools for white children received, which means that the parents of today’s minority students, who attended those schools, sometimes have less education than the parents of white American children. Furthermore, in families who immigrate to the United States, children may be just learning English, far later than their classmates whose first (or only) language is English. These factors profoundly influence the results of tests of intelligence and achievement, but they have not generally been assigned sufficient weight when results are interpreted.

The Legacy of Testing
Tests of intelligence and standardized achievement tests were developed in the era of eugenics, when racism was rampant and the racial hierarchy was accepted as an explanation for different levels of performance among individuals from different backgrounds. Because these tests were constructed using white, middle-class youngsters as the standard of performance, children who did not fit that description, or who did not have access to excellent schools, were not accurately assessed. Those criticisms still stand today.

The Stanford-Binet test and its cousin the Stanford Achievement Test could have provided opportunities for researchers to investigate the lives of poverty lived by so many of the children who performed poorly on the test. The tests could have thrown a spotlight on the inequalities in school funding and teacher preparation and textbooks that made schools unequal and resulted in unequal student outcomes—and continue to do so today. Instead, in the eugenics era, all outcomes were interpreted as the results of heredity. The inferior performance of children of different races was attributed to poor heredity and to real differences among the races, rather than to the unequal distribution of resources and advantages in American society.

Today, with the implementation of the No Child Left Behind program, the effects of standardized tests are even more evident in classrooms than in the past. The same criticisms still hold: children who are not fluent in English, children who come from homes where no adult has the time, education, or resources to help the student learn, and children of racial groups for which poverty is commonplace perform, on the whole, more poorly on these tests than children with more advantages.

Where does the bias in testing originate? It comes from a false assumption that all children are on a level playing field—that every child taking the test has a good home life, enough food to eat, restful sleep, kind parents with time and interest to help their young scholar, a well-trained and sympathetic teacher, a classroom with up-to-date books, and a well-funded school. Test developers believe all children have equal opportunities in life and are treated equally fairly in the classroom. We know those assumptions are often not true.

IV. Exploitation of Vulnerable Populations
From around 1900, when physicians and scientists began in earnest to carry out formal research studies, they were always short of test subjects—people they could test or on whom they could perform experiments or surgeries. Physicians often found subjects among those living in institutions such as jails, hospitals, mental institutions, homes for the feebleminded, and orphanages. They also found subjects among the uninformed, rural poor; among nonwhite racial groups; and, in the past, among slaves (Lederer, 1997).

Today we call these people vulnerable individuals and these groups vulnerable populations because they do not have the usual protections that we expect individuals in our society to enjoy. They are available to be used by ethical and humane researchers, and, unfortunately, also by unethical and inhumane researchers.

Children
Research on children has always been difficult because many parents refuse to grant permission for experiments using their children as subjects. Physicians struggled to obtain the cases they needed. One example of such a situation was the difficulty researchers had in obtaining blood samples from children for the study of antitoxin treatment for diphtheria, a dreaded childhood disease. Around 1915, a physician named Abraham Zinger ran clattering down a wooden stairway of a hospital in New York City, the hospital superintendent in angry pursuit. With permission to take a few blood samples from hospitalized children, Zinger had instead taken blood samples from nearly all the children on the ward before he was discovered (Barr, 1992).

Sometimes physicians even performed surgeries on schoolchildren without parental permission. In 1906, Jewish parents rioted outside a school in New York where physicians were surgically removing their children’s adenoids without parental permission (Tyack, 1992).

Children in orphanages often were subjects of scientific experimentation. Physicians who attended to ill children in orphanages frequently were granted permission by orphanage superintendents to use the children in scientific studies and experiments. Though most children living in orphanages at that time had at least one living parent, orphanage superintendents assumed parental responsibilities for their charges. Believing they owed some favor to physicians who attended ill orphans, the superintendents permitted these experiments.

In 1921, after learning that children in an orphanage had been allowed to repeatedly develop rickets and scurvy (medical conditions that are caused by vitamin deficiencies) in order to test cures, social worker Konrad Bercovici spoke out against those studies. In an article in the magazine The Nation, he wrote, “A child is placed in an infant asylum because it is left ‘without proper guardianship’ because its parents ‘are too destitute to care for it properly.’ It is never intended to take the place of a guinea pig in a…laboratory” (Bercovici, 1921, p. 913).

Though the researchers’ intent was to learn about the normal course of disease and the development of normal children, their efforts were based on false premises. Children in orphanages were most definitely not normal (i.e., typical) in many ways. Most came from destitute families where they had been poorly fed, and many were considerably underweight. The orphans had received poor medical care, if any, all their lives. They lived in crowded, lonely, stressful conditions that affected their health. Their immune systems were compromised from the stress of institutional life; in an epidemic of measles in an orphanage for infants, for example, over 30 percent of infants would die, even though few if any children in nearby middle-class families would have died from the same infection. Thus, using experiments conducted on orphanage children to draw conclusions regarding typical children, although intended to make medical research easier, produced some findings that were very difficult to explain (Barr, 1992).

Adults
Children were not the only vulnerable population used for medical research. Many adults were vulnerable to medical experimentation because of age, disease, disability, class, or circumstance. For example, prisoners were once considered acceptable subjects, even if they did not grant permission, owing to their stigma as prisoners and the relative ease of access to them. In 1910, in fact, the Journal of the National Medical Association suggested in an editorial that prisoners could help pay their debt to society by offering themselves for scientific experimentation (as cited in Washington, 2006).

Other vulnerable adult populations included individuals who had been deemed feebleminded. Feeblemindedness was a broad label that was used to refer to the poor, the mentally ill, and the sexually promiscuous, especially unwed mothers (Barr, 1992). Incidentally, more women than men were labeled as feebleminded by their physicians, by mental health institutions, and occasionally by their families. Women who were considered feebleminded were prime subjects for medical experimentation and abuse (Washington, 2006).

African Americans
Even now, African Americans continue to be considered one of the most vulnerable populations for medical research (Washington, 2006). In 1910, Dr. Thomas Murrell of the U.S. Public Health Service stated that “the future of the Negro lies more in the research laboratory than in schools” (Washington, 2006, p. 157)—a statement that seems unimaginable to us today, but that reflected a common perspective of the era in which it was made.

Some reasons for the increased vulnerability of African Americans are a low level of awareness of the risks of medical research; high incarceration rates (prisoners are still a prime population for medical research); and high levels of poverty, causing high use of the public areas of the health care system where researchers can reach them. Furthermore, stigma, discrimination, and racism are thought to enter into some researchers’ decisions regarding which groups to subject to risky medical procedures.

Today, because of codes of ethics, much medical research has been moved from American soil to “offshore” locations: in other words, into the third world. Recently, there has been an increase in the participation of Africans in medical research that cannot take place in the United States because it would violate accepted codes of ethics (Washington, 2006). In the past, the most common purposes of medical experimentation were observation (especially to see if a disease process was the same in the black human as in the white human), the search for cures for various diseases, curiosity about the black body, perfection of surgical techniques, and development of treatments that promoted a political or racist agenda.

From the time African Americans arrived in Jamestown, Virginia, in 1619, African American men, women, and children have been consistently targeted for medical experimentation and research in the United States (Washington, 2006). A slave named John Brown is known to have fallen prey to the experimentation of a physician named Thomas Hamilton. Using heat, Hamilton blistered Brown’s legs, arms, and hands to find out how deep the blackness penetrated his skin (Washington, 2006). James Sims, a gynecological surgeon who is commemorated with a statue at New York’s Central Park, near the New York Academy of Medicine, conducted painful vaginal surgeries on slave women without anesthesia in the 1850s. Sims was looking for a cure for vaginal infections that were routinely caused by the distressingly poor conditions that slaves and other poor people lived under (Washington, 2006).

Case Study: The Tuskegee Syphilis Study
Observing Syphilis
One of the most egregious abuses was the federal government’s “Tuskegee Study of Untreated Syphilis in the Negro Male,” commonly referred to as the Tuskegee Syphilis Study. It began in Macon County, Alabama, in 1932 and did not end until its aims and abuses were exposed in 1972. Though the study was portrayed as a therapeutic activity that would help poor black sharecroppers, the U.S. Public Health Service (USPHS) carried out the study only to observe the course of untreated syphilis in black men. Here was their justification:

…Such individuals seemed to offer an unusual opportunity to study the untreated syphilitic patients from the beginning of the disease to the death of the infected person. An opportunity was also offered to compare the syphilitic process uninfluenced by modern treatment, with the results attained when treatment had been given. (Vonderlehr et al., 1936)

Race science in the early 1900s held that African Americans were more sexually promiscuous and had higher levels of venereal disease than white people. Following a very old and common belief that black people and white people must be different subspecies of humans, physicians believed that venereal diseases in black populations were different from those in white populations (Lombardo, 2006).

Enticing Participants
Researchers did not explain to participants that they would be untreated and that even when effective medications became available, participants would be prevented from obtaining them. In other words, there was no “informed consent” (Washington, 2006; Lombardo, 2006). Men were encouraged to participate with the promise of free medicine, meals, and burial insurance. They did not know the purpose of the experiment; some, in fact, may not have known that they had syphilis. Participants were told that they would be treated for “bad blood” (Lombardo, 2006; Washington, 2006)—a term that was commonly used in the African American community to designate ailments that were not understood. Figure 2.4 shows a copy of the letter that was sent to potential participants in the study inviting them to be treated for their “bad blood.”

Figure 2.4 Letter to Potential Participants in Tuskegee Syphilis Study

Source: Tuskegee Study, 2007

The “special treatment” offered in the letter was, in reality, a diagnostic lumbar puncture (Washington, 2006), an extremely painful procedure that had no possible benefit to the subject. Often, subjects experienced problems such as severe headaches after the lumbar puncture, and there was a risk of serious infection.

Not all the men in the study actually had syphilis. Out of 600 participants, 201 healthy men made up the control group; the rest were in the “experimental” group. The existence of an experimental, or treatment, group in a medical study usually implies that an attempt is being made to help cure some illness or to try out some promising new treatment or cure. This treatment group, however, was a sham, because the study intended only observation, not treatment. When men in the control group contracted syphilis, they were switched to the experimental group.

In some cases, men tested positive for syphilis erroneously. A disease called yaws, common in the South in the 1930s, had symptoms similar to those of syphilis and was caused by a closely related bacterium. Thus, a man infected with yaws might have tested positive for syphilis.

Figure 2.5 A USPHS Doctor Collecting a Sample of a Participant’s Blood (date unknown)

Source: Tuskegee Timeline, 2007

The Sham Treatment
Instead of effective medicine, the men were given vitamins and older, less effective treatments such as low-dose arsenic and mercury (Washington, 2006). Over time, many of these essentially untreated subjects died. Even when penicillin was found to be an effective agent for curing syphilis in the mid-1940s, the men were denied its use. Participants who attempted to obtain the drug from local doctors or from the military generally could not, because USPHS physicians regularly forbade its use with study participants (Washington, 2006). Approximately 30 men did manage to receive treatment from doctors outside the study, even though USPHS doctors had discouraged black physicians and military physicians from administering treatment to study participants (Washington, 2006).

The key researchers in the Tuskegee Syphilis Study—Hugh Cumming, Taliaferro Clark, and Raymond A. Vonderlehr, all senior USPHS physicians—oversaw the study from 1932 to 1942. All three were graduates of the medical school at the University of Virginia, which at that time used eugenics as a medical model for understanding venereal disease among the different racial populations (Lombardo, 2006).

Exposing the Abuse
In 1965, a white physician named Irwin Schatz read about the study in a medical journal and was outraged. In a letter to the USPHS, he wrote, “I am utterly astonished by the fact that physicians allow patients with potentially fatal disease to remain untreated when effective therapy is available” (Washington, 2006, p. 168).

For seven more years, the study continued, until Peter Buxtun, a young immigrant who worked as an interviewer for the USPHS in the 1960s, became aware of the details of the study. Buxtun began to write letters to the USPHS requesting that the unethical study be discontinued. Frustrated with his lack of progress, Buxtun ultimately shared the information with the Associated Press, and in 1972 reporter Jean Heller broke the story, which ran in the New York Times under the headline “Syphilis Victims in U.S. Study Went Untreated for 40 Years.”

Following the publication of the article, the study was promptly discontinued, and the government began to examine its own medical practices. The investigation led to congressional hearings, a class-action lawsuit, a multimillion-dollar out-of-court settlement, and, in 1997, a public apology from President Bill Clinton on behalf of the U.S. government to the study participants and their families.

Looking back at the Tuskegee Syphilis Study, we see that it is one of the most atrocious examples of racism and perverted race science in the annals of American medicine. Race science flourished in an era when there were not yet codes for the ethical conduct of research. Today, we recognize the vulnerability of certain populations and the need for special protection for them. Medical codes of ethics have been established, and adherence to them is closely monitored, to prevent any human rights abuses of this type in the future.

Think about it

Read the Tuskegee Syphilis Study page of the Web site of the Centers for Disease Control and Prevention, where you’ll find a timeline of events and the full text of President Clinton’s 1997 apology to participants. Also, learn the origins and mission of Tuskegee University’s Legacy Committee and its enduring impact on the biomedical profession.

Worldwide codes of medical ethics were not in place when the Tuskegee study began, but they were put into practice during the study’s 40-year duration. After WWII, the Nuremberg Code, a worldwide code of ethics for the medical profession, was written in response to the discovery of Nazi medical experimentation on captive Jews during the war. The World Medical Association Declaration of Helsinki, originally adopted in 1964, is the international code of medical ethics in force at present.

V. Race Science and Modern Genetics
A New Idea
In the 1970s, American statistician and geneticist Richard Lewontin, who was studying variability in fruit flies, decided to study the variability of proteins for different blood groups among people. He was attempting to answer the age-old question: Are there different subspecies of human beings? And if so, do those differences correspond to the varying races of humans?

He was surprised to discover that 85 percent of the protein variation in blood types found in the whole human race was present within each racial group. Between racial groups, he found far less variation—a mere 8 percent. This result demolished the old idea of race as something real and biological. Dividing people into racial groups made no sense in light of his study, because most of the variability in humans across the world was found inside every racial group, and only small differences were noted between racial groups. Lewontin’s research findings, though subject to some criticism on statistical grounds, suggested that the picture of genetic diversity in humans did not follow the borders that separate races (Wells, 2006, pp. 20–21).

Dr. Spencer Wells
Today, some of the most encouraging research on race is being carried out by geneticist Spencer Wells. Director of National Geographic’s Genographic Project and author of the 2002 book The Journey of Man: A Genetic Odyssey, Wells addresses the questions of how groups of humans came to settle where they did and what routes brought them there. He uses genetic information from people around the globe to trace human migration patterns from Africa to various locations all over the world. Using tools and technology developed through years of genetic research, Wells discovered that “humans were still living in Africa until 50,000–60,000 years ago, and only after this time did they leave the continent to populate the rest of the world” (Wells, 2006, p. 160).

Think about it

The Genographic Project has led to some fascinating conclusions about the origins of humans of all nationalities and ethnicities. Visit the Genographic Project’s Web site to learn more about the project. While you’re there, be sure to click on the tabs marked Genetics Overview and Atlas of the Human Journey.

Dr. Wells’s research has provided the information we needed to trace the roots of modern humans out of Africa tens of thousands of years ago. He used DNA to discover that people all over the world are genetically related—in other words, we all share a common ancestor.

One large group of Africans migrated south and east—in many cases, crossing land bridges that no longer exist—populating India, southern Asia, and Australia along the way. The other major group migrated north to populate most of North Africa, Europe, Asia, and the Americas. Wells’s research found that this latter group accounts for approximately 90 percent of the world’s current population (Wells, 2006). The Journey of Man, a video produced by Wells, explores this research. Wells’s work, building on previous research by many other geneticists, added to our knowledge of human history; the analysis of DNA samples collected from around the world enabled the development of a history and a map of how our ancestors populated the planet. Wells’s work helps confirm Lewontin’s earlier finding that we are all members of a single extended family that can be traced back to a common ancestor in Africa.

Think about it

Are you interested in your genetic ancestry? Your personal DNA can be analyzed to determine your genetic history. You can find out who your ancient ancestors were and learn about their genetic journey across the globe.

Another Paradigm Shift
These fairly recent discoveries have brought about another shift in the meaning of the word race. Earlier in this module, we discussed the discarding of racist views that prevailed during the opening decades of the twentieth century, when anthropologists began to reject the view that nonwhite people were inferior to white people. That change—still not completely embraced in all corners of American society—is now being supplanted by another change in the meaning of the word race.

In the past, race was a name for visible characteristics: skin color, eye folds, hair texture. Then, geneticists discovered that we are all members of one race—the human race—and that we are all descended from a small group of African people, making race an attribute that is nothing more than skin deep. Still wary of the painful legacy of eugenics, geneticists have dismissed the term race as meaningless (Royal & Dunston, 2004).

Modern geneticists now believe that visible racial indicators and each person’s own description of his or her race are imperfect but moderately useful proxies for the path one’s ancestors took out of Africa. Such information provides clues about genetic mutations and susceptibilities to disease that arose in a given group as it migrated into other parts of the world. Francis Collins, director of the National Human Genome Research Institute at the National Institutes of Health, finds that

‘race’ and ‘ethnicity’ are poorly defined terms that serve as flawed surrogates for multiple environmental and genetic factors in disease causation, including ancestral geographic origins, socioeconomic status, education, and access to health care. Research must move beyond these weak and imperfect proxy relationships to define the more proximate factors that influence health. (2004, p. 1)

So, race once again has meaning, but this time, it is not used as a way to disparage any particular groups of people. The “more proximate factors” that Collins references arise out of individual genetic heritage. For scientists, race and racial categories now take a backseat to the individual variability of the human genome, for each of us is genetically unique.

Think about it

Can race be a reliable indicator of any genetic or biological feature? In some cases, the answer is yes, and that is why race is still noted in medical research. Tay-Sachs disease is a fatal condition that affects infants, causing serious problems that lead to death around age 4. Carriers of the genetic mutation that causes this disease are often found among Eastern Europeans and Jews of Ashkenazi descent. A great deal of research has been undertaken to better our understanding of the disease. Researchers have also investigated how race is involved, because members of a group of humans who share a common ancestral migration pattern are more likely than other individuals to carry this specific genetic mutation.

Individual Variation
To geneticists, race is completely overshadowed by the individual variations found in our genes (Royal & Dunston, 2004). There is so much genetic variation among us that grouping people by traditional ideas of race helps only in relation to conditions like Tay-Sachs disease, where a genetic mutation follows an ancestral migration that has been defined as a racial group. Geneticists more often examine the genes of an individual to find maximally useful treatments for disease, or to find patterns that suggest greater susceptibility to a disease that is found within a patient’s immediate family.

Summary
We have looked at several types of race science in the nineteenth and twentieth centuries. After examining some critical issues underlying race science in topic I, we examined eugenics, a detrimental version of race science, in topic II. Eugenics is the belief that heredity is the only factor that determines a person’s value, and that through careful control of reproduction, the human race can be “improved,” just as strains of pigs, cows, sheep, and horses were improved by good breeding and bettering the stock. Eugenicists advocated the control of reproduction through sterilization of “unfit” people (those with mental or physical ailments or defects, and those with low levels of intelligence), often without their permission.

Like prize livestock contests, Fitter Families contests were held at state fairs, bestowing awards upon families that met certain criteria, such as intelligence, healthiness, and racial purity. These contests popularized and promoted the values of “good breeding” among human beings.

Eugenicists believed that even a person’s level of morality and tendency toward criminality were coded into his or her genes, and that the environment a person lived in and the varying opportunities and advantages of life made little difference. Eugenicists turned a blind eye to the ways extreme poverty, malnutrition, poor schools, child abuse, discrimination, and lack of opportunity derailed the normal development of children and adults in different racial groups.

Eugenicists celebrated the Nordic type of person—blond and blue-eyed—as the most desirable type of individual. Immigration laws were passed in our nation to favor people from Germany and Scandinavia and to greatly limit the entry of people from southern and eastern Europe. Only when the Nazi agenda was shown after WWII to have been built directly on American eugenics ideas did Americans finally turn away from their preoccupation with “breeding” and their fascination with the Nordic type.

In topic III, we followed the development of tests of intelligence created in the age of eugenics. We looked at the study of ability and character in the common practices of observing and photographing the faces of people. There is a long history of searching for criminal types, and an equally long history of searching in the facial features of children for evidence of mental inferiority or superiority. Surely, it was popularly believed, something—bumps on the head, speed of reaction time, skull measurements, facial features (“types”), or a mixture of these (as in Galton’s “personal equation”)—would reveal the secrets of the human character to investigators. Finally, Alfred Binet found a way to assess everyday, practical thinking and reasoning in children and was thus able to develop a scale for measuring mental ability.

But American researchers, it seems, were not content with the examination Binet developed to sort children in the Paris slums. They disregarded Binet’s insistence on a careful, leisurely investigation of the home life and school progress of children before decisions were made about mental abilities. Stanford professor and psychologist Lewis Terman, a committed eugenicist, modified Binet’s scales for America and changed the nature of the test entirely.

Terman’s test, the Stanford-Binet Test of Intelligence, measured school learning and was standardized on white, middle-class children. It was initially a quick one-hour screening, administered one on one, but after WWI, when group tests were developed, group testing of hundreds of children at a time became possible.

Because children in poor families and children of minority groups do not always have access to schools as good as those attended by middle-class white children, there have always been visible gaps among the test scores of these groups. Terman’s test was biased by its assumption that each child taking the test enjoys the opportunities and advantages of the white middle-class child. The Stanford-Binet test and its very close relatives, standardized achievement tests (also related to the tests used in the No Child Left Behind program today), ostensibly confirmed the racist views of social scientists of the day—that nonwhite racial groups had inferior ability levels. Critics argue that these same biases still hold for today’s tests.

Another type of objectionable race science, as we discussed in topic IV, involved the use of members of vulnerable populations as involuntary subjects in medical research. Groups that were deemed inferior were of great interest to scientists seeking subjects for medical experimentation. When parents refused to allow their children to become subjects, children without parents to stand in the way—that is, children in orphanages and other institutions—were used instead. Similarly, adult prisoners, patients in mental hospitals, and poor, uneducated black people in rural areas were used as guinea pigs in medical studies that withheld treatments that could have saved or prolonged their lives. The most infamous of these studies, the Tuskegee Syphilis Study, was conducted by an agency of the U.S. government. Starting after WWII, an ever-evolving series of medical codes of ethics has been written, enforced, and continually updated to protect all subjects of research from being used in ways that we now recognize as inhumane.

In topic V, we looked at some of the responsible and legitimate science going on today in an effort to answer such questions as, “Where did we come from?” and “How did we get here?” The work of Spencer Wells and the Genographic Project has used the latest scientific tools and technologies to discover that all humans share a common ancestor that originated in Africa some 50,000 to 60,000 years ago. The differences we observe among individuals in different racial groups—skin color, hair texture, facial features—resulted from migration and climatic adaptation to the environment, as we discussed in module 1. The migrations branched out like the limbs of a tree, as descendants of the original African group fanned out to people the rest of the earth.

Toward the end of the module, we examined a new view of race that is emerging in light of modern discoveries about human genetics. Studying blood groups in the 1970s, statistician and geneticist Richard Lewontin was amazed to discover more blood-type variability within a single racial group than between different racial groups. This suggested that our familiar definitions of race made no sense in the context of blood types. Lewontin’s discovery has been confirmed as modern race scientists have become able to read our genetic history by analyzing the mutations we all carry in our DNA.

An upbeat, empowering message has emerged from the most recent research: Separate races are not biologically based; they are merely social constructions, a part of our cultural history. Racial groupings are considered meaningless by contemporary geneticists, for they find that at the molecular level, we are all members of one great extended family—one single race, the human race.

After discarding the term race, however, geneticists recently have begun to use the word again. It now stands as a proxy to suggest the branching paths that a person’s ancestors took as they migrated out of Africa. This notion of race permits the investigation of genetic mutations related to diseases that appear only on certain branches of the human tree. Thus, the term race once again has meaning, but not the derogatory valuation given to racial groups by white scientists over a century ago. In the twenty-first century, the word loosely correlates to strands of human migration, and no value judgments are attached.

Race science is a term that has been used to describe both admirable and dubious science: genuine investigation of differences among groups, and biased interpretation of those differences to demonstrate inferiority. Some race science, as we have seen, helped to further racist ideas about the inferiority of nonwhite groups. However, we find that race science today can be a tool for helping us understand that our differences are only skin deep, that we as human beings have much more in common than we ever believed.

References
Appiah, K. A. (1992). In my father’s house: Africa in the philosophy of culture. New York: Oxford.

Barr, B. (1992). Spare Children, 1900–1945: Inmates of orphanages as subjects of research in medicine and in the social sciences in America. Unpublished dissertation, Stanford University.

Barr, B. (2002). Thinking pathology, discovering normality: Lewis M. Terman and the Stanford-Binet test of intelligence. Paper presented at Cheiron: The International Society for the History of Behavioral and Social Sciences, June, 2002.

Binet, A., & Simon, T. (1916). (Trans: E. S. Kite). The development of intelligence in children. Publications of the Training School at Vineland. Baltimore: Williams and Wilkins.

Binet, A., & Simon, T., with marginal notes by Lewis M. Terman. (1980/1916). (Trans: E. S. Kite). The Development of Intelligence in Children: The Binet-Simon Scale. The Training School at Vineland. Nashville, TN: Williams Printing Company.

Black, E. (2003). War against the weak: Eugenics and America’s campaign to create a master race. New York: Thunder’s Mouth Press.

Collins, F. S. (2004). What we do and don’t know about ‘race,’ ‘ethnicity,’ genetics and health at the dawn of the genome era. Nature Genetics, 36, S13–S15.

Cornwell, J. (2003). Hitler’s scientists. Science, war, and the devil’s pact. New York: The Penguin Group.

Cravens, H. (1978). The triumph of evolution: American scientists and the heredity-environment controversy, 1900–1940. Philadelphia: University of Pennsylvania Press.

Dickson, V. (1920). What first-grade children can do in school as related to what is shown by mental tests. Journal of Educational Research, 2 (1).

Dorr, G. M. (2000). Assuring America’s place in the sun: Ivy Foreman Lewis and the teaching of eugenics at the University of Virginia from 1915–1933. Journal of Southern History (66), 2, 257–296.

Fowler, O. S. (1856). The illustrated self-instructor in phrenology and physiology. New York: Fowler, O. S. and Wells.

Goodenough, F. (1934). Developmental Psychology. New York: D. Appleton-Century.

Gould, S. J. (1996). The mismeasure of man. New York: W. W. Norton & Company.

Graves, J. L. (2004). The race myth: Why we pretend that race exists in America. New York: Dutton.

Gulick, L. P., & Ayres, L. H. (1913). Medical inspection of schools. New York: Russell Sage Foundation; Survey Associates.

International Eugenics Discussion. (2007, September 28). In Wikipedia, The Free Encyclopedia. Retrieved September 28, 2007, from http://en.wikipedia.org/w/index.php?title=International_Eugenics_Conference&oldid=160937227

Jackson, J. P., and Weidman, N. M. (2004). Race, racism, and science: Social impact and interaction. Santa Barbara, CA: ABC-CLIO.

Kenna, J. C. (1964). Sir Francis Galton’s contribution to anthropology. Journal of the Royal Anthropological Institute of Great Britain and Ireland, 94 (2), 80–93.

Lederer, S. E. (1997). Subjected to science: Human experimentation in America before the second world war. Baltimore, MD: Johns Hopkins Press.

Lombardo, P. A. (1997). Eugenics at the movies (book review). The Hastings Center Report, 27 (2), 43.

Lombardo, P. A. (2006). Eugenics, Medical Education, and the Public Health Service: Another Perspective on the Tuskegee Syphilis Experiment. Bulletin of the History of Medicine, 80, 291–316.

Marger, M. M. (2006). Race and ethnic relations: American and global perspectives, 7th ed. Belmont, CA: Thomson/Wadsworth.

Mental defect in a rural county: A medico-psychological and social study of mentally defective children in Sussex County, Delaware. (1919). U.S. Department of Labor: Children’s Bureau. Washington, DC: Government Printing Office.

“Nuremberg Code.” Directives for Human Experimentation, Office of Human Subjects Research, National Institutes of Health. Retrieved September 28, 2007, from http://ohsr.od.nih.gov/guidelines/nuremberg.html

“Notes and News,” Journal of Educational Psychology, 4 (3), (March, 1913), 245.

Royal, C. D., and Dunston, G. M. (2004). Changing the paradigm from ‘race’ to human genome variation. Nature Genetics, 36, S5–S7.

Rydell, R. W. (1984) All the world’s a fair. Chicago: University of Chicago Press.

Stubblefield, A. (2007). Beyond the pale: Tainted whiteness, cognitive disability, and eugenic sterilization. Hypatia, 22 (2), 162.

Study of Types of Character. (1877). Mind, 2 (8), 573–574.

Terman, L. M. (1914). The hygiene of the school child. New York: Houghton Mifflin.

Terman, L. M. (1916). The measurement of intelligence in children. New York: Houghton Mifflin.

Terman, L. M. (1919). Typescript of a meeting on school [intelligence test] scale. Stanford University Special Collections, Lewis M. Terman collection (SC 38), box 12, folder 14. Stanford, CA: Stanford University Libraries.

Tuskegee Study of Untreated Syphilis in the Negro Male. (2007, September 27). In Wikipedia, The Free Encyclopedia. Retrieved September 28, 2007, from http://en.wikipedia.org/w/index.php?title=Tuskegee_Study_of_Untreated_Syphilis_in_the_Negro_Male&oldid=160704442

“The Tuskegee Timeline.” U.S. Public Health Service Syphilis Study at Tuskegee. Centers for Disease Control and Prevention. August 9, 2007. Retrieved September 28, 2007, from http://www.cdc.gov/tuskegee/timeline.htm

Tyack, D. (1992). Health and social services in public schools: Historical perspectives. The Future of Children, Spring 1992.

U.S. Immigration and Naturalization Service. (2002). Statistical Yearbook of the Immigration and Naturalization Service, 1999. Washington, DC: U.S. Government Printing Office. Retrieved July 2, 2007, from http://www.dhs.gov/xlibrary/assets/statistics/yearbook/1999/FY99Yearbook.pdf

Vonderlehr, R. A., Wegner, C. T., et al. (1936). Untreated syphilis in the male Negro. Venereal Disease Information, 17, 260–265 as cited in “Tuskegee Syphilis Study,” retrieved July 2, 2007, from http://www.brown.edu/Courses/Bio_160/Projects2000/Ethics/TUSKEGEESYPHILISSTUDY.html

Washington, H. (2006). Medical apartheid: The dark history of medical experimentation on black Americans from colonial times to the present. New York: Doubleday.

Wells, Spencer. (2006). Deep ancestry: Inside the Genographic Project. Washington, DC: National Geographic.

Immigration
Origins
Great numbers of immigrants came to the United States as indentured household servants, indentured workers, or slaves. The ancient customs and rules of indenture and slavery came from England (Youcha, 1995).

Slavery is unique for its appalling violations of human rights, yet the institutions of indenture and slavery shared some common features. Both were economic systems that brought people to live in America. Both helped establish and maintain inequalities in society. Both helped define and populate the lower strata of American society. Thus, both contributed to creating and maintaining poverty, that big elephant in the corner.

Indenture and slavery will be presented, as they should be, as separate topics in our discussion, but you will notice that at times elements of one slide imperceptibly into the other. For example, following the Civil War, when slavery was “long gone,” the legal structure of indenture was used in the Midwest until the 1950s as a way to continue de facto (by custom) slavery when the laws no longer provided for it (Barr, 1994). We begin with indenture.

Indenture
From colonial times, indenture was a way for children to be trained in occupations before formal schooling was widespread in the United States. Parents would place their son of age 12 or so in the home of a master who would teach him a trade. Parents would place girls around the same age into a household to learn “housewifery.” Parents and the master signed an elaborate contract, keeping one copy and filing the other in the colony, town, or city records.

Indentured children were under contract from around age 12 until around age 18 to 21. During these years, the master was charged to adequately feed, clothe, and house the child and to teach him a trade or teach her household tasks. The master also was required to teach the child to read well enough to read the catechism. Some contracts also stipulated that the master would teach the child to “cipher,” a word for the operations of basic arithmetic. At the end of the period of training and unpaid service, an apprentice was usually given a Bible and a new suit of clothes and was turned out into the working world. An indentured youth was “bound out,” the phrase reflecting the binding nature of the indenture contract.

Indenture Contracts
Indenture contracts are attractive documents that chart social changes from colonial times to the nineteenth century, when factory and farm labor replaced indenture. Indenture contracts from the colonial era are written in a fine, flowing hand. The father of the child often signed with a large X, indicating he was illiterate. One, from the Massachusetts Bay Colony in 1731, bound a boy to a “housewright,” or carpenter. The contract stipulated that the apprentice would not leave his master without permission and would obey his master and faithfully keep his secrets. From the Middle Ages on, every skilled occupation had its secrets, and those of a “housewright” were the skills of a fine carpenter and builder (Newton Archives, doc. no. 1).

By the mid-eighteenth century, some indenture documents were typeset with blanks left in for names and other particulars. These typeset contracts had a section in which the errors of apprentices were laid out in cautionary prose, giving us a catalog of the ways in which adolescents misbehaved over two hundred years ago. “Goods he shall not waste, embezel [sic] purloine [steal], or lend unto others, nor suffer the same to be wasted or purloined. Taverns or Ale-houses he shall not frequent, at cards, dice or any other unlawful game he shall not play, fornication he shall not commit, nor matrimony contract with any Person during the said term” (Newton Archives, doc. no. 2).

In the same contract, the responsibilities of the master were to teach the apprentice the “trade, art, or mystery of husbandry” (farming). The contract contained the qualifier, “if the apprentice be capable to learn.” The master was also to offer “good and sufficient meat, drink, washing and lodging, both in sickness and health.” Written in by hand was the addition, “and to learn the apprentice to read, wright [sic], and cipher” (Newton Archives, doc. no. 2).

Each indenture document, whether written by hand or typeset, was cut in some very distinctive and decorative way across the top. When the years of service were completed, the young adult and his parents brought their copy of the contract back and filed it with the other copy from the master. To avoid fraud and to assure both parties that all aspects of the original contract were fulfilled, the two pieces of the contract were placed head to head, and the decorative cuts were expected to match exactly.

A contract alone did not ensure apprentices were treated well. Some youth, treated well in their years of indenture, later became paid workers in the shop or on the farm where they trained. Others, mistreated or starved, overworked or beaten, suffering from homesickness or turned out in the winter when there was insufficient farm work to do, ran away before their contracts were fulfilled. Authorities struggled to find these youngsters and to return them to their placements.

Over time, contracts became more detailed in their requirements of the master and of the apprentice, but the existence of standards always depended on a system of monitoring that simply did not exist. Thus, the outcomes of indenture depended on the personal goodwill and concern of masters and the vigilance of kin to discover whether youngsters were being educated and treated humanely.

Adolescents
Indenture was a thoughtful way to socialize adolescents. Social scientists today recognize that indenture removed teens from their homes during the most stressful years of childhood and raised them under the supervision of others. Anyone who remembers adolescence or has raised teenagers will attest to the fact that teens are much more polite and responsible with people who are not their parents! Sometimes parents tried to place children with relatives or with grandparents, but even in Plymouth Colony it was recognized that one should not place children with close relatives, especially grandparents, because grandparents were especially likely to indulge grandchildren (Youcha, 1995). Thus, there was flexibility in this system, but it was not designed to coddle adolescents.

Apprenticeship
The words apprenticeship and indenture originally reflected the inequality of training across social classes. In the United States, the terms were used interchangeably, though originally a distinction was intended. Apprenticeship was an ancient practice derived from the guild system of the Middle Ages. It properly referred to lengthy training for very highly skilled professional occupations such as goldsmith, silversmith, or lawyer.

However, in the United States, the word apprentice was used in indenture contracts for placement in all levels of work. In contrast to apprenticeship, indenture was intended to train youngsters from the middle and lower social strata for more ordinary occupations. Also, in the United States, the indenture contract was used to regulate all types of training, from the most highly skilled apprentice learning to be a shipwright to the least skilled child learning housewifery. And, apprentice was the word used for all youth under contract.

Apprenticeship and the Wealthy
The indenture or apprenticeship system was stratified based on wealth and social connections. Children of the well-to-do and children of merchants were apprenticed for training to become lawyers, doctors, goldsmiths, silversmiths, shipbuilders, printers, or other highly skilled workers. The apprentice left his parents and moved in to live and be educated at the home of the person doing the training.

Some types of training were very detailed and exacting and prepared youth for trades that required extensive formal learning. If the training was very prestigious (silversmithing, law, or medicine, for example), parents paid the master to teach their son. Such training assured that a youngster entered a lucrative profession. Apprenticeship was often a route to upward mobility, or, for upper-class families who fell on hard times, a way to educate and care for a child they could not afford to support.

Apprenticeship and the Middle Class
Youngsters in the middle and lower classes were often indentured to learn husbandry, the word for farming. From colonial times until the Industrial Revolution in the nineteenth century, the farm was the unit of production. That is, it was the factory; families produced most of the things they needed on their land, in their homes, and by their labor. Thus, training in farming involved many skills, as did housewifery, the work of girls.

Girls and Apprenticeship
The “housewifery” work assigned to girls needs a share of our attention and illustrates the point. On farms, the family grew, processed, and preserved food; made candles; cut firewood; raised sheep, spun wool, and wove cloth; sewed clothing; slaughtered and butchered farm animals for food; and built barns, houses, and furniture. This work was in addition to regular household tasks such as cooking, laundry, and cleaning. Women were especially knowledgeable about medicinal herbs and medical care.

Each farm or plantation was a factory, and each member of the family had many skills. Though the father was the moral preceptor, the mother was often the teacher; the father, however, sometimes taught children and apprentices to cipher (do arithmetic). A young woman hoping to move up in the world benefited from being indentured in a grander home than that of her parents. She learned essential skills and prepared herself for a life better than that of her family.

As the American colonies grew and towns became cities, new opportunities arose outside farming. In towns, young men could learn specialized occupations such as printing, rope making, and shipbuilding, and young women could work in grander homes than those in the country.

Journeymen
When indenture ended, a young person began adult life with job skills. An apprentice who had finished his term could work as a “journeyman” anywhere. A journeyman is someone who has learned a trade and works for another by the day. Jour is the French word for “day”; thus, a journeyman is a day laborer, someone who does not live in. Webster’s Third New International Dictionary of the English Language, Unabridged (1995) defines journeyman as “An experienced or reliable worker.” Interestingly, the word today has a negative meaning as well: journeyman is also a term for someone who is just plodding along.

Immigrants and Indenture
Many thousands of immigrants from Europe indentured themselves to come to America. Indenture provided a way for penniless young people in England, Ireland, and Scotland to make the journey. A young woman starving in the potato famine in Ireland in the mid-nineteenth century, for example, could sign an indenture contract with the captain of an immigrant ship heading for Philadelphia. When the ship docked in Philadelphia, the ship’s captain would sell these indenture contracts to individuals for enough money to pay the costs of the passage and to provide him with an income.

The indentured individuals would then serve their American masters for seven to nine years of unpaid labor, or whatever the contract stipulated. In this way, poor individuals were able to come to the United States. Many young Irish women found employment as indentured house servants through this social arrangement, occasionally referred to as “white slavery” (Herrick, 1969).

Indenture and the Poor
Indenture also was used at the lowest levels of society, where it served a different clientele in a different way: it was a child welfare system for the poor. The indenture certificate for the boy being bound out to a housewright, which we described above, was written for a child not yet two years of age. He was taken from his widowed mother or relinquished by her to the authorities charged with the “binding out” of the poor. His contract was for 19 years.

The boy who was apprenticed to learn husbandry was but four years old, bound out by the justices of the peace in 1742 for 14 years. Why would someone take a child of two years to be an apprentice? In this preindustrial economy, and in the lower social levels, children were viewed as an economic resource, and poor children were put to work as soon as they were able to follow commands.

Indenture and the Rights of Poor Families
In indenture, the rights of families of the poor were not always respected; unmarried mothers, widows, and widowers often had no say in the placement of their own children. From the viewpoint of town officials in Massachusetts, binding out provided a home, food, and training to children who were abandoned or orphaned or whose parents could not or would not feed them.

Perhaps you are wondering about orphanages. Orphanages began to be built after 1800, but even when they existed, some superintendents indentured children out so that they could accept additional needy youngsters into the institution. Parents placing children in such institutions were surprised to find their child indentured out without parental permission within a week or two of entering an orphanage (State of Connecticut Archives, 1907).

When a poor child was indentured by town officials or by a superintendent of an orphanage, the child’s whereabouts were kept secret. This policy often prevented children from reuniting with parents and siblings in the future. Into the twentieth century, as infants began to be adopted, secrecy in adoption records gradually became the rule as it continues to be in many states today.

Perhaps you are wondering what the difference was between then and now. Into the twentieth century, officials frequently bound out or adopted out children without seeking permission from parents. Siblings were separated, lost track of each other, and were permanently lost to their families.

Until the 1920s, an unwed mother did not have legal custody of her own child, so children born out of wedlock routinely were removed from their mothers (with or without permission) and bound out (in the nineteenth century) or adopted out (in the early twentieth century) by officials of the town, county, or nearby orphanage (Barr, 1994).

Vendue
In the eighteenth century, indenture coexisted with a kind of slave auction in the Massachusetts Bay Colony. Homeless people who entered villages were “warned off” by the officials, who did not wish to assume additional burdens of caring for the poor, elderly, indigent, mentally ill, or unwell from other communities.

But within each community the poor and indigent were handled in another way, by being auctioned off and their youngest children indentured out. The process was called vendue, a French word, meaning “sale.” At a public auction, families or individuals who were poor were auctioned off to the lowest bidder. Whoever offered to care for the Widow Jones, for example, for the least money provided by the town, would be authorized to care for her for the year of 1794 (Newton Archives, doc. no. 3).

In this system, husbands and wives could be split from one another, sibling groups were split apart, and infants and children were taken from their parents and auctioned off or placed in indenture contracts.

Three features of this system bear notice for our discussion of slavery and our ongoing concern with poverty. First, vendue enabled the village to care for the poor at the lowest possible cost. Second, it separated parents from children and siblings from each other. Third, bidders carefully examined individuals on display, felt their muscles, and evaluated their potential as workers, making vendue indistinguishable from a slave auction.

Slavery
Whereas indenture was a way for the poor in the British Isles willingly to migrate to the United States, slavery was the way for blacks from Africa unwillingly to be brought to the United States. Prior to 1780, three times as many Africans as Europeans came to America (Smith, 1999). In looking at indenture, we found there was little freedom for young apprentices. You are about to discover, however, that there was vastly more freedom in the harshest indenture contract than there ever was in the day-to-day experiences of slaves living in the American colonies.

Definitions
Slave is a word with an ancient lineage. A slave is an individual who has no freedom, (often) no representation as a human being in the legal system, and no right to marry or own property. A slave is held in servitude or bondage. Through the practices of slavery, the individual is forced to submit to a dominating authority. Sometimes masters were humane, but brutal masters were also common.

There is a verb, to slave, that means, “to employ at hard labor.” The verb also indicates “to wear out with hard labor,” which seems an understatement when we discover the long hours slaves worked in eighteenth- and nineteenth-century America (Webster’s Third New International Dictionary of the English Language, Unabridged, 1995). It seems that there always have been slaves, often individuals taken as captives during wartime.

Slaves in the distant past, for example, those in the Roman Empire, did nearly all the manufacturing, farming, and household work. In some societies, they had opportunities to improve their lot by education, or even to free themselves from bondage. For example, slaves captured by Muslim overlords could convert to Islam and rise in both rank and stature. Most slaves hoped for freedom; formal release from slavery was called manumission. In ancient Rome and a few other societies, manumission was permitted (Franklin & Moss, 2000). Manumission was permitted in early colonial Virginia as well, but it ended with the passage of stricter laws governing slavery.

American Slavery
Slavery in the ancient past differed from slavery in the American colonies. However dire the situation of slaves in the past, it was worse in the American colonies. Over time, laws were passed that codified slavery as a much more inhumane practice than it was at first. A 1705 law in Virginia stipulated that “all slaves shall be held to be real estate” and that a master whose punishment killed a slave would not be held responsible (Sambol-Tosco, 2004, p. 1).

Much later, the Dred Scott case, argued in 1857, held that “blacks were not citizens and therefore were not guaranteed rights under the U.S. Constitution” (Sambol-Tosco, 2004, p. 2). These laws suggest the level of dehumanization that prevailed under slavery. Slave children were not exempt from long hours of grueling work or from cruel punishment.

A seven-year-old slave boy was required to stand all day for weeks to fan flies off a bedridden master. Only when the man died was the child free to go outside. Very young children used miniature hoes to work long hours in the fields alongside adults. On one plantation, slave children who did not pick all the worms off tobacco plants faced a grim choice: bite the missed worms in half or suffer a savage lashing.

An adolescent girl, who was a relative of Frederick Douglass, was charged with watching a baby but fell asleep while the baby was crying. She died of the injuries she suffered when beaten by her master for failing to do her job (King, 1995).

Categories of Slaves
There were three categories of slaves: industrial slaves, field slaves, and house slaves. Plantation owners rented out industrial slaves to perform factory work, but most slaves worked in the fields. Although tobacco and sugarcane continued to be important crops, cotton soon became king on southern plantations. Of the 2.8 million slaves laboring on farms and plantations in the South in 1850, most could be found picking cotton (Franklin & Moss, 2000, p. 144). The average adult slave was expected to pick at least 150 pounds of cotton per day. During harvest time, this amount could increase (p. 146).

The size of the farm or plantation and the type of crop harvested would often determine the nature of the work assigned to slaves (Franklin & Moss, 2000). Slaves on rice plantations had very individual and specific duties, whereas slaves on cotton plantations worked in groups supervised by an overseer. Other considerations, such as age, trustworthiness, skill level, whether the slave lived in a city or town, and the needs of the master and his family, also helped define the daily work of slaves. For example, Frederick Douglass lived as a small boy for two years on a plantation he described as more like a business than a farm. He remembered there were quite a few slaves on the plantation, where tobacco, wheat, and corn were grown. Close to the house, slaves performed essential tasks such as blacksmithing and shoemaking (Douglass, 1987).

Too young for field labor, Douglass herded cattle, shooed chickens, swept the yard, and attended to the whims and wishes of Colonel Lloyd’s (the master’s) daughter. Thus, Douglass would be classified as a “house slave.” A house slave is defined as any slave who performs work that requires close proximity to the main house or the master’s family or work that did not require going to the field. A house slave might serve as a cook, nurse, butler, or personal attendant in the master’s household. Typically the position of house slave was coveted, for it brought some privileges that field hands lacked.

House slaves were as much at risk as field slaves, however, when the master’s wrath was kindled. In his autobiography, Douglass described being selected to go to Great House Farm, a nearby large plantation, as one of the most sought-after privileges on Colonel Lloyd’s plantation. Tending the master’s horses was another privilege, except when the master became angry. Douglass remembered that Colonel Lloyd gave old Barney, an older house slave, 30 lashes for a misstep with his prized mules. Slaves served a variety of roles and functions in the plantation economy that often revealed a stratification of labor based upon gender, age, and the needs of the master and the plantation.

Slave life was always precarious, but there was a special hazard to slaves living on small plantations where only a family or two of slaves served the master. In these settings, it was common for owners to sell away “excess” or unwanted slaves, separating husbands from wives and siblings from parents and each other. In addition to inhumane treatment, cruel punishment, and grueling work, the selling away of family members contributed markedly to the heartbreak of slavery (King, 1995).

Oral Histories of Slaves
Oral histories from former slaves were not always accurate windows into the experience of slavery because they were often biased in several ways. Federal workers in the Works Progress Administration interviewed former slaves in the 1930s, but some historians believed many of these individuals had been too young when slavery ended to appreciate its full impact on their lives.

Others were too polite to recount to interviewers, who invariably were white, the horrifying treatment they or their parents and siblings received as slaves. Even when former slaves gave accurate accounts, the interviewers could not quite believe the sheer brutality of what they heard. Eudora Ramsey Richardson, director of the WPA’s Negro Studies Project in Virginia, believed the ex-slaves’ narratives were too biased to be correct. She recounted how she barricaded herself in a cottage for two weeks, without even the use of a phone, and totally rewrote the material (Writer’s Program of the Works Project Administration in the State of Virginia, 1940). One wonders what she changed and what she removed!

Other former slaves were unwilling to disclose their true feelings or experiences to an interviewer, for masking one’s true feelings was an important strategy for surviving slavery (King, 1995).

Reconstructed memory is also blamed for some distortions. To give an account of something in the past, one’s brain pulls together fragments of memory and reorganizes them with the present context in mind. Thus, the white schoolteacher conducting interviews in the 1930s created a context that may have led the former slave to recall what he or she believed the interviewer wanted to hear.

In addition, federal interviewers such as Eudora Ramsey Richardson struggled with decisions about how to capture the speech in typed transcripts. Should the dialect be taken down as spoken, at the risk of leading readers to believe slaves to be ignorant and uneducated, perhaps childish? Or, should the language be edited into Standard English for readability, resulting in the loss of the voice of the individual?

King (1995) discussed an interview of an individual who, when young, had been a slave in Virginia. The former slave remembered his childhood pleasure at keeping his master’s horses beautifully groomed and healthy, a job that kept him from the fields and a responsibility that had benefits for both slave and master. This account had all the elements just mentioned: he was very young when slavery ended, he reported only the pleasurable aspects of slavery to the interviewer, and we have no window into his true feelings. The account is transcribed in dialect, giving a hint of the speaker’s voice, but when read it seems somewhat patronizing in tone. Thus, we see slavery through a variety of lenses that distort meaning and make it more difficult to capture the full, unbiased picture.

Slavery Elsewhere
Slavery in Latin America took a different course than it did in the United States. On islands such as Jamaica and in Latin America, slaves outnumbered their masters, and following slave uprisings, including a successful one in Barbados, laws were passed restricting slaves’ movement. Spanish colonies enacted laws to protect slaves from brutal masters, but these laws were rarely enforced.

Slaves in Brazil were allowed to learn to read and write, a practice that was forbidden by law in the American South during the uneasy years just before the Civil War. In Brazil, literacy enhanced the social mobility of blacks and encouraged assimilation. Assimilation is said to occur when ethnic and racial minorities give up aspects of their language and culture and intermarry with other groups to blend in with the dominant population. Portuguese and Spanish colonists freely intermarried with African women (Reichmann, 1995). This relative lack of stigma attached to racial mixing in Brazil created many mulattos, alongside a relatively large African-Brazilian population.

Stratification persists in Brazil today, however, where at least 45 percent of the population self-identify as black (Reichmann, 1995, p. 36). Although the government prohibited racial discrimination in 1951, contemporary Brazilian society remains highly stratified along racial lines. African Brazilians with former slave ancestry suffer many of the inequities found among African Americans in the United States. Sociologist Luiz Claudio Barcelos found that many African Brazilians are denied advancement in all levels of Brazilian society, and Brazil has developed achievement gaps in education comparable to those between African Americans and whites in the United States (Reichmann, 1995). Both groups have a legacy of slavery and carry the stigma of biological racial inferiority largely developed as a rationalization for slavery.

Slavery and the Law
A different course was followed in the United States, where strict racial codes and prohibitions against slaves intermarrying or having sexual relations with whites were rigidly enforced in efforts to preserve racial “purity” and maintain strict caste lines. Despite these strict codes of sexual conduct in the American colonies, white plantation owners and white and black slave overseers frequently violated women slaves. Although little distinction was made in terms of the hard labor extracted from both sexes, women slaves faced this special vulnerability because of their gender.

Slavery in the American colonies began with a need for labor and was quickly codified into law. This pattern began shortly after a Dutch frigate landed in Virginia in 1619 with 20 Africans aboard (Franklin & Moss, 2000, p. 65). The Africans were not slaves in the traditional sense but were treated more like indentured servants. They were accorded land and other rights similar to those of white indentured servants once their period of service was successfully completed (Franklin & Moss, 2000). According to Virginia records, there were many indentured and free blacks in Virginia during this early period. However, with the increasing need for labor, this period did not last long.

Free blacks were those legally freed either by their masters or by purchasing their freedom in a process called manumission. Blacks who were free lived in very tightly knit communities to protect themselves from being kidnapped and sold into slavery. Their security came from the fact that everyone in the free black community, and most of the whites in the adjacent town, knew them.

As involuntary immigrants stolen from the shores of their homeland, Africans came ashore with no common language, culture, or intimate knowledge of the land. In addition, as black Africans they were easily recognizable, and their numbers, because of the slave trade and the growth of the American-born African population, were deemed plentiful.

Percentage of Black Population and United States Population from 1790–1910


Source: Franklin, J., & Moss, A. (2000). From slavery to freedom: A history of African Americans. New York: McGraw-Hill, p. 623.

Corn, sugar, cotton, indigo, and tobacco fields needed tilling, forests needed clearing, and homes needed to be built. Who would do this work on a continual basis? Indians were not deemed suitable laborers for agriculture because of their reverence for the land, their knowledge of escape routes, and their connections with other tribes. Africans, however, were deemed a “suitable” choice. Following the example of their Caribbean and British neighbors, American colonists looked to British and Dutch slave traders to solve their need for labor. Though the legal importation of slaves ended in 1808, the illegal slave trade continued into the 1850s (King, 1995).

The move to full legalized slavery was made incrementally, through customary practices and the passage of laws. In 1661, Virginia blacks who ran away were made to serve a lifetime of indenture, the same punishment meted out for runaway whites. This was a gradual but significant move toward “perpetual servitude” (Franklin & Moss, 2000). One year later, in 1662, Virginia stepped up its efforts to legalize slave status through legislation establishing a rule: if one’s mother was a slave, one was a slave. In 1663, Maryland enacted a law that made slaves of anyone black, even free blacks at the time, regardless of the status of their mothers (Franklin & Moss, 2000).

This law was eventually moderated in 1681 by a clause granting free status to black children born of free black women or of white women. Slavery, unlike indentured servitude, was an inheritable condition in America. Laws passed in the colonies legalized slaves' status as servants with few rights; even free blacks were subject to search and seizure and could lose their freedom.

Chattel Slavery
Mothers passed down the yoke of bondage to their children, creating what sociologists refer to as a chattel system of slavery. Frederick Douglass, born in Talbot County, Maryland, is an example. Because his mother was a slave, Douglass was a slave as well, in spite of the fact that his father was a white slave owner; mother and child were separated shortly after his birth.

Fear-Based Laws
King (1995) reports that after the Nat Turner insurrection of 1831, and in the uneasy years leading up to the Civil War, conditions became harsher for slaves. Among the laws tightening the hold on slaves were ones forbidding anyone to teach slaves to read or write. White slave owners became extremely fearful of their slaves rising up against them after the sometime preacher Nat Turner went on a killing spree in the summer of 1831 in Southampton County, Virginia, that left 59 white men, women, and children dead (Franklin & Moss, 2000, p. 164). The backlash was fierce.

Turner hid in the swamps for six weeks before he was caught and executed, and during those six weeks many slaves were killed on suspicion of involvement or merely for suspicious behavior. Almost immediately, in 1832, the Virginia General Assembly passed “Black Laws” that severely restricted the movement and activities of slaves. Slavery in the Americas had taken on a distinct character and all the essential elements of caste.

Perpetual Slavery
Perpetual servitude was binding regardless of religious conversion. In 1667, Virginia passed a law that gave slaves the right to be baptized but no right to freedom. The New York Colony enacted similar laws denying slaves who converted to Christianity any claim to freedom or special privileges; by 1790, there were 21,193 slaves and 4,682 free blacks living in New York (Franklin & Moss, 2000, p. 98).

By 1706, four colonies—Maryland, North Carolina, South Carolina, and New Jersey—had joined Virginia and New York in denying freedom upon conversion to Christianity. Each colony had institutionalized the principle that baptism freed an individual from sin but not from slavery. Favorite scriptural passages focusing on the duty of slaves to obey their masters were used to justify slavery. Thus, slaves were converted to Christianity, but conversion brought them nothing in the way of increased privileges, more humane treatment, or freedom.

Ethnocentrism
Racism was not the only force at work in America; ethnocentrism also underlay the unequal treatment of slaves. In module 1, we described English ethnocentrism, which led to the view that all things not English were inferior. This belief was imported directly into American colonial society. Thus, Africans were viewed as biologically inferior to Europeans. This racial ideology, reinforced by “science,” helped prohibit social contact and upward mobility for blacks.

A “Peculiar System”
Slavery in the American colonies came to be called “peculiar.” Great distances separated the legal rights and customary treatment of whites from the absence of rights and frequent mistreatment of slaves. Yet blacks and whites lived together in the great plantation houses, where white masters were often nursed, tenderly cared for, and raised by black slaves, and white children played with slave children. In the best of conditions, intimacy, interdependence, and reciprocity characterized the relationship, yet an unbridgeable distance still separated the two races.

Despite this social distance, some white masters fathered children by female slaves. These children were often denied by their fathers, and they and their mothers were sold as a result. This was the case with Louisa Picquet in the early 1830s. Louisa was born a slave in Columbia, South Carolina, fathered by slave owner John Randolph (Mattison, 1988). She recalled that her mother told her never to reveal this secret. Louisa so closely resembled her half-brothers and half-sisters, Randolph's children by his wife, that both she and her mother were sold to a slave owner in Georgia.

Slavery in the Americas came to be called peculiar because the social construction of slavery contributed to continued inequities and hostilities across racial lines, despite nearly equally great pressures toward assimilation. Voluntary immigrants from any part of the world found it much easier and more desirable to assimilate into American society than did African Americans.

The Caste System and Its Legacy
When slavery was abolished, a rigid caste system lay in its wake. This racial stratification placed African Americans at the bottom of the hierarchy. Caste results when strict lines or boundaries are drawn between racial groups, permitting little or no movement across them. Caste lines are usually, but not always, drawn based upon visible characteristics such as skin color.

The Burakumin, a stigmatized group in Japanese society, show few physical distinctions that separate them from the larger Japanese population. Yet the Burakumin are believed to be inferior and less human than other Japanese. As a result, they suffer much the same racism and discrimination evident between blacks and whites in America (Neary, 2003).

In studying American life, the social economist Gunnar Myrdal found caste even more intractable than slavery itself (Myrdal, 1944). He reported that, after emancipation, southern whites were willing to do without resources and liberties in order to maintain caste boundaries between themselves and newly freed slaves.

The slave tradition, racism, ethnocentrism, and laws passed in the late nineteenth century worked together to solidify caste boundaries and ethnic stratification that placed and kept African Americans at the bottom of American society and that denied them equal rights.

Summary
The story of coming to America is a story of voluntary and involuntary immigration. One group of voluntary immigrants came desperately searching for a new life, and another group of involuntary immigrants came desperately clinging to an old one. Both indenture and slavery facilitated immigration to the United States. But immigrants coming under indenture chose to come, whereas Africans were brought here against their will.

Indenture helped, in part, to train young people at a time when there was no universal schooling and when disadvantaged groups and individuals lacked access to schooling.

Although slaves did gain some skills, in the main they were exploited as laborers. Under slavery, there were no basic human rights and seldom any effort to educate slaves.

The slavery and indenture systems both provided nearly free labor to facilitate the industrial development of the United States. Abuses occurred in both systems, but the slave system was vastly more inhumane in its treatment of individuals.

Clearly, the American economy benefited from unpaid labor in the past, but today it continues to suffer from the caste system left in the wake of slavery. Both indenture and slavery positioned immigrants to enter the lowest strata of American society. Indentured individuals, usually white, were ultimately absorbed into some level of the Anglo core society hierarchy.

In contrast, African Americans continue today to face a rigid caste structure that allows only limited access to the advantages of American society.

References
Barr, B. (1994). A hive of industry: The curriculum of the Iowa soldiers’ orphans’ home, 1900–1945. Unpublished manuscript.

Douglass, F. (1987). Narrative of the life of Frederick Douglass, an American slave. In H. L. Gates (Ed.), The classic slave narratives (pp. 248–331). New York: Penguin Books.

Franklin, J., & Moss, A. (2000). From slavery to freedom: A history of African Americans. New York: McGraw-Hill.

Herrick, C. (1969). White servitude in Pennsylvania: Indentured and redemption labor in colony and commonwealth. New York: Negro Universities Press. (Original work published 1926)

King, W. (1995). Stolen childhood: Slave youth in nineteenth-century America. Bloomington, IN: Indiana University Press.

Mattison, H. (1988). Louisa Picquet, the octoroon: A tale of southern slave life. In H. L. Gates (Ed.), Collected black women's narratives (pp. 1–60). Oxford: Oxford University Press.

Morgan, P. (1999). Rethinking early American slavery. In C. G. Pestana & S. V. Salinger (Eds.), Inequality in early America (pp. 239–266). Hanover, NH: University Press of New England.

Myrdal, G. (1944). An American dilemma. New York: Harper and Row.

Neary, I. (2003). Burakumin at the end of history: History of social class in Japan. Social Research, 70, 269–296.

Newton, Massachusetts, Town Archives:

1. Indenture of Peter Cary, son of Mary Cary late of Plimouth, October 5, 1731.

2. Indenture of Joseph Emmory, a minor, a poor child about the age of four years, June 16, 1742.

3. The poor of Newton set up and to be struct off to the lowest bidder per week for one year from April the 14, 1794.

Reichmann, R. (1995). Brazil's denial of race. NACLA Report on the Americas, 28(6), 35.

Sambol-Tosco, K. (2004). Slavery and the making of America. [Online]. Available: http://www.pbs.org/wnet/slavery/experience/legal/history2.html

Smith, B. (1999). Black women who stole themselves in eighteenth-century America. In C. G. Pestana & S. V. Salinger (Eds.), Inequality in early America (pp. 134–159). Hanover, NH: University Press of New England.

State of Connecticut, Archives, Hartford, Connecticut. (1907). Connecticut Temporary Homes. Agreement for placing children from County Temporary Homes. Chapter 108, Public Acts of 1907 and correspondence filed with agreements.

Youcha, G. (1995). Minding the children: Child care in America from colonial times to the present. New York: Scribner.

Webster's third new international dictionary of the English language, unabridged. (1995). Springfield, MA: Merriam-Webster, s.v. “journeyman,” “slave.”

Writer’s Program of the Works Project Administration in the State of Virginia. (1940). The Negro in Virginia. New York: John F. Blair.
