
Retrolental Fibroplasia: A Modern Parable – Chapter 10

Why did the RLF epidemic begin in the United States in the 1940s? Clues to the answer to this question are to be found, I believe, in some developments which took place immediately after World War II and in the material circumstances of the country during this period. It was an optimistic time in the history of American medicine. When peace was restored, this country, unlike most others, was in a position to direct its attention and considerable resources to deal with medical issues of national importance. Prominent among these was the frequent loss of life among newly born infants. The attack mounted to respond to this challenge in the arena of public health was similar in many ways to the quantitative stratagem which brought American victory in the war: mobilization of enormous material assets and rapid increase in technologic development. For the moment I will postpone a discussion about limited strategies in medical “warfare”: the slower and safer approaches used in the face of spotty “intelligence”. In this chapter I will indicate some of the consequences of the “mass action” approach — how sheer expansion of activities influenced events in American nurseries in the 1940s and 1950s, and I will suggest that the momentum of these actions has continued to the present time. Some of the elements in this inflation were the increased visibility of premature infants; affluence; proliferation of programs, facilities and equipment; publicity; and the increased influence of authoritative opinion. Some of the disastrous “events” (which I will describe in this chapter) bore a striking resemblance to the RLF prototype.

Innovative effort usually follows closely on the heels of newly visible problems. This relationship was seen in developments which took place in the United States when there was a sharp change in perception of the scope of the topic of premature birth: in 1949, the underdeveloped newborn infant suddenly achieved numerical prominence. Before this time, there was a limited amount of statistical information on prematurity from local areas or from individual hospitals. The Standard Certificate of Live Birth, prior to 1939, did not call for a statement either on the duration of pregnancy or on birth weight. An item on “number of months of pregnancy” was carried on the revised certificate for 1939, but the data obtained were not satisfactory for tabulation. On the 1949 revision of the certificate, the items “Length of pregnancy-weeks” and “Weight at birth” were added. At intervals beginning in 1949, the National Office of Vital Statistics published special reports which indicated that infants weighing less than 2.5 kg (5 lb 8 oz) accounted for a higher toll of infant life than any other condition. The U.S. Children’s Bureau presented these data in its publication Statistical Series and emphasized the need for concerted action at local, state and national levels. In 1949 the Committee for the Study of Child Health Services of the American Academy of Pediatrics pointed out that in medical schools in the United States training in the care of newborn infants was the weakest feature in the pediatric course and that many teaching hospitals provided very little experience in the care of premature infants. During this same year the U.S. Children’s Bureau recommended to the New York Department of Health that the Department of Pediatrics of the New York Hospital-Cornell Medical Center in New York City offer a series of institutes on premature infant care for physicians and nurses. This program of instruction trained physician-nurse teams from many parts of the United States and the instruction model stimulated teaching programs in other areas of the country.

Small babies in the United States also became visible in a literal sense — their clothes were removed. Before the late 1940s, these infants were effectively hidden under layers of swaddling clothes. Only their faces could be seen through the single glass doors of incubators with opaque walls (Chapter 7). When the post-war transparent incubators became available (p 48), the new American practice of nursing infants naked began. (It was theorized that the respiratory movements of a very small infant with a soft rib cage were hampered by heavy clothes and blankets.) Nurses and doctors stared at the naked babies as if they were seeing them for the first time. The variations in breathing rhythms recorded with a spirometer in 1942 (Chapter 7) were now visible to the unaided eye. In addition, the outward signs of distressed breathing (in-drawing of the chest wall) became evident for the first time. It was found that the respiratory complications of premature infants were very common in the first days of life [especially the respiratory distress syndrome, previously recognized only at post-mortem as hyaline membrane disease (Chapter 8)]. The innocent measure of unwrapping babies exposed them as targets for increased medical actions. The naked infants were examined more completely, observed more closely, and treated more actively than ever before.

Visitors to the United States in the days of the RLF epidemic often voiced the suspicion that American affluence was somehow responsible for the fact that the mysterious affliction occurred most frequently here. In retrospect, the guesses were not far off. American expansion of facilities, on a scale which was beyond the means of many countries, was related to the spread of the disorder. Organized programs for specialized care of small babies were developed in some areas of this country before World War II (Chapter 2), but proliferation of specialized centers began during the 1940s. Federal aid for construction of hospital facilities began with the passage of the Hill-Burton Act in 1946. New construction reached a peak in 1949 with completion of the first 1000 projects which received this aid: Hill-Burton funds enabled communities to plan construction of expensive premature infant centers. In 1947, a specialized nursery was opened in Denver at the University of Colorado. It was the focal point of a state-wide program to provide specialized hospital care for premature infants. Another activity of the center was the training of physicians and nurses so that they would be qualified to organize similar projects in other hospitals and communities. In New York City, ten premature centers were established between 1948 and 1953 (the development cost was $2914 per infant bed space). Specialized-bed capacity grew from 52 in 1948 to 303 five years later. An ambulance transport service was established, under the joint administration of the Department of Health and the Department of Hospitals, to transfer babies from the place of birth to the specialized centers (560 infants were transported in the first year of the service; 897 were moved in the fifth year of operation). On July 1, 1950, a payment program was instituted in New York City: costs of care were met by municipal and state funds when parents were unable to pay for specialized care. In 1953, of 2348 babies in centers, 83 percent required full financial assistance. (The average total cost of care for a premature infant in a center, based on an average stay of one month, was $350 during the first 5 years of the program.) North Carolina initiated a program in 1948, and five centers for care of premature infants were opened. Similar programs were developed in other areas; by 1951, three-fourths of the states provided some special facilities for the care of premature infants.

Many of the new facilities were designed to provide oxygen from outlets in the wall next to each incubator (piped in from a central source in the hospital). This convenience did away with the frequent and cumbersome task of changing bulky oxygen tanks. Oxygen was now administered continuously, with more ease, and with less conscious attention given to the amount of oxygen consumed, and to its cost. The very fact that oxygen was “built-in” provided tacit approval for its free use in American centers.

The spread of costly centers was matched by proliferation of expensive equipment. There was an upsurge of interest in “hardware,” now made practical by available funds. In addition to the new Chapple-type incubator (Chapter 7), a number of mechanical contrivances were introduced. One of the earliest was developed in Houston, Texas; the device, called the “Positive Pressure Oxygen-Air Lock” (later, simply “air-lock”), was described in 1950. It was a complicated and expensive machine for resuscitating and oxygenating asphyxiated newborn infants. The apparatus was developed on the basis of the inventor’s hypothesis that the newborn infant with respiratory difficulty is managed best by continuing, in so far as possible, the mechanical effects of labor (rhythmic compression of the infant by uterine contractions). The device consisted of a closed chamber in which increased pressures of an oxygen and air mixture were applied to the whole body of the infant occupant. In operation, the “lock” was heated and humidified, the infant was placed in the chamber ventilated by a 60-percent oxygen-air mix, the compartment was sealed and the pressure raised to 2 lb per square inch above atmospheric. The positive pressure was then cycled to 3 lb per square inch over a period of 30-40 seconds; following the “build-up”, it was reduced to 1 lb per square inch over a period varying from 15 seconds to as short as 5 seconds. (The pressures used were chosen to mimic the forces on the infant during the most active stage of uterine contractions during labor.) The cycles were repeated at 1-minute intervals until the infant was breathing regularly and skin color was satisfactory; then a steady pressure of 2 lb per square inch of the 60-percent oxygen-air mix was maintained until the infant was removed from the chamber. The initial experience with the device involved 55 infants with respiratory problems who were born in St. Joseph’s Maternity Hospital between January 4 and March 26, 1950. Babies remained in the chamber for periods ranging from 1 1/2 to 24 hours; 3 premature infants were in the “lock” for periods of 4-10 days. The death rate of newborn infants in this hospital fell from 1.9 percent the previous year (6324 births) to 1.5 percent (1372 births) in the first 3 months of 1950. The improvement was attributed to the new apparatus and the report concluded that the oxygen-air lock was indicated in any situation requiring oxygenation of newborn infants. Despite many voiced objections concerning the validity of the underlying principles and doubts about the claimed physiologic effects, the device was soon adopted for use in many hospitals throughout the country. Before long, small babies were placed in the lock before symptoms appeared if they were considered to be at risk of developing respiratory distress. Oxygen concentrations above 60 percent were used frequently. When the association between oxygen exposure and RLF was established, there was a strong suspicion that the risk of eye complications was increased at above-atmospheric oxygen pressures. The fear was never substantiated, but use of the air-lock quickly declined. It was largely abandoned even before results of a formal evaluation of the purported lifesaving effectiveness of the device were announced in 1956. The early claims of success were not supported: among 72 distressed infants placed in the air-lock for 48 hours, 33 percent succumbed; among 71 in a group of concurrent controls who were not treated with the apparatus, 25 percent died.
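
The two sets of figures in this account repay a closer look. Reconstructing the death counts by rounding the reported percentages (the rounding is mine, so the counts are approximate), a standard significance test shows how weak the before-and-after evidence was, and how the controlled comparison pointed, if anywhere, against the device. A minimal sketch in Python:

```python
# Comparing the air-lock's before-and-after hospital figures with the 1956
# controlled trial. Death counts are reconstructed (approximately) by
# rounding the percentages reported in the text.
from scipy.stats import fisher_exact

# Before-and-after: 1.9 percent of 6324 births vs 1.5 percent of 1372 births.
d49, n49 = round(0.019 * 6324), 6324   # ~120 deaths in 1949
d50, n50 = round(0.015 * 1372), 1372   # ~21 deaths in early 1950
_, p_before_after = fisher_exact([[d49, n49 - d49], [d50, n50 - d50]])

# Controlled trial: 33 percent of 72 treated vs 25 percent of 71 controls died.
dt, nt = round(0.33 * 72), 72          # ~24 deaths, air-lock group
dc, nc = round(0.25 * 71), 71          # ~18 deaths, concurrent controls
_, p_trial = fisher_exact([[dt, nt - dt], [dc, nc - dc]])

print(f"before/after drop in mortality: p = {p_before_after:.2f}")  # within chance
print(f"controlled trial difference:    p = {p_trial:.2f}")  # trend against treatment
```

On these reconstructed counts neither comparison reaches conventional significance; the point is that the celebrated drop in hospital mortality was well within chance variation, while the concurrent controls of 1956 removed any suggestion of benefit.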

The air-lock was followed by a succession of treatment devices (see Table 10-2), but none endured as long as the mist-nebulizer. There were a number of configurations, but all depended on oxygen under pressure (or, less frequently, compressed air) as the propellant force to generate small droplets of water which were blown into infant incubators. Water mist was used to reduce insensible water loss from the body during a period of imposed thirsting after birth (a practice which I will describe shortly). Additionally, nebulized mists were used in the hope of relieving respiratory difficulty by thinning the secretions in the respiratory tract and lung. This approach was given a sharp boost by a startling report from North Carolina, published in 1953. A nebulized mist (using a detergent-like preparation called Alevaire) was claimed as a cure for the respiratory problems of newborn infants! The author cited the favorable experience of others who used the new mist for treatment of various lung conditions in adults and related his own encouraging results in easing the symptoms of older children with respiratory difficulty. This background led him to undertake the treatment of newborn patients. Beginning in April 1951, he placed every infant encountered with symptoms of “asphyxia neonatorum” (a catch-all label for newborn infants with respiratory difficulty) in an incubator filled with detergent-mist. Eighteen small patients were treated: all recovered. This encouraging result was contrasted with a 64-percent mortality among 45 babies with similar symptoms who were treated by other methods in five nearby hospitals. The author concluded the 1953 report of his experience with these words:

It is my considered opinion, after a year’s experience, that this is an almost infallible weapon for combatting neonatal asphyxia . . . it enables one to attack this previously discouraging problem with vigor, enthusiasm and confidence . . . one might consider rational, the treatment with [detergent-mist] of all premature babies
The unrestrained language used in this paragraph reveals a good deal about the general outlook of the time: desire and optimism rolled into one.

In earlier chapters, I traced the origin of the proposals for routine oxygen treatment and in the past few pages I have suggested that the expansion of activities played a role in the increased use of new intensive measures (including the liberal use of oxygen) for the care of premature infants. Now, I wish to emphasize that the changes in caretaking procedures took place in a disjointed manner. The vagaries were related, I believe, to the time-honored empiric approach to clinical problems and to social pressures which encouraged the impatient application of innovations by physicians in the course of their everyday practice. For example, when it became known that “premature infants breathed in a more normal manner in an oxygen-enriched environment” (Chapter 7) and that low oxygen in the blood of these babies could not be detected from outward appearance (Chapter 7), it seemed entirely reasonable to consider the use of routine oxygen treatment as a corrective measure. The reasoning was sound and physicians responded in predictable fashion. Some said, in effect, “Let’s try it and see”; others indicated, “Let’s wait and see.” I must make it clear that there was no organized campaign to establish a national policy of administering oxygen to all premature infants. Individual activists, and I was among them, jumped from consideration of reasonable theory to application in everyday practice. We crossed the boundary into the unknown quite unconsciously. The move seemed a minor departure from established practice (supplying oxygen only to sick infants) supported by good results reported in Chicago in 1931-33 (Chapter 7). The need for a formal “field” test of the shift to a liberal policy was never considered. This form of optimistic, informal clinical experimentation was customary. Physicians frequently conducted empirical tests of novel and wholly untested treatments in their offices and in hospitals. The results were tabulated after a period of time and reported in statements which began, “In my experience . . . ” Moreover, in the immediate post-war period, the spectacular results of newly available treatments (especially penicillin and other new antibiotics for serious infections) encouraged bold explorations. The hope for quick cures was kept alive by news coverage of medical breakthroughs which used the terms “miracle drugs” and “wonder cures.” And the spotlight of publicity played an important role in determining the subsequent play of events. This was seen following wide newspaper and magazine coverage of the original article announcing detergent-mist cures; both use and belief spread quickly. Physicians and parents throughout the country were soon clamoring for the new treatment. Interest, hope, and belief were raised to new heights by an article describing the flight of a mercy plane which delivered the detergent preparation to a small hospital for treatment of a desperately ill baby. Following all of the high drama and hyperbole, it was almost impossible to get a fair hearing for the mundane question: Does it really work? In May 1953 we began an evaluation of the new treatment by means of a controlled clinical trial. At the conclusion of the test (which involved 200 infants at Babies Hospital), we were unable to find a beneficial effect of detergent mist. When these results were published on March 26, 1955, a representative of the company which manufactured the preparation said to me, “It won’t hurt our sales.” He was right.
The negative report had very little effect on the widespread practice. Another formal evaluation of detergent mist (conducted in Canada) and a trial of the purported beneficial effects of plain water-mist both concluded with similar negative findings. Nonetheless, mist treatment of babies with respiratory symptoms continued for years. Refutations which later appeared inconspicuously in medical journals did little to change the initial judgment made in widely circulated newspaper headlines. Another example of the power of the press to persuade occurred on September 28, 1953. Time Magazine published an article entitled “Too Little and Too Much,” which reviewed the subject of RLF. In addition to reporting Campbell’s experience (Chapter 4), Time repeated Szewczyk’s opinion that “. . . sudden removal [from oxygen] to normal air may cause retrolental fibroplasia.” As a result of this publicity, the “sudden removal” theory achieved a credence which endured for years after evidence to the contrary was published in medical journals.

Following the dramatic events of the first 12 years of the RLF episode, it became obvious that the potential for harm as the result of unrestrained therapeutic exuberance in premature nurseries had become magnified. The expansion of activities and organization of programs throughout the country now involved literally thousands of babies who were treated in new and untested ways; the stakes had been raised considerably. And the script for potential disaster was outlined in one ordered pattern of action: new proposal — wide application — belated recognition of the possibility of disastrous complications — formal evaluation. The depressing scenario was reenacted in a number of instances which were strikingly similar to that of the RLF incident.

One episode began in 1949, following studies conducted in Boston. It was suggested that many premature infants are born with a surfeit of water and electrolytes in the body. The suggestion was supported by a common observation: the tissues underlying the skin (especially hands and feet) of small newborn babies are often edematous. When fluids and feedings were withheld in a set of planned observations, edematous babies excreted more urine than nonedematous controls. The outward signs of well-being seemed to be satisfactory during the period of thirsting and fasting. Although the blood of these infants became somewhat concentrated, it was shown later that the hemoconcentration could be prevented by placing the babies in incubators filled with water vapor. These observations had important practical implications. The accepted practice of feeding small babies as soon as possible after birth (first with sugar water and, in a few hours, with milk) was always threatened by a feared complication — vomiting and inhalation of the feed into the lungs. If feedings could be withheld safely in the first days, it seemed reasonable to hope that the risk of lung complications from inhalation would be reduced. This reasoning formed the basis for a change in feeding practice which began in the 1950s and quickly spread throughout the United States and, to some extent, abroad. Small infants were placed in incubators saturated with water vapor and all fluids and feedings were withheld until edema in the tissues was no longer evident. The period of initial thirsting and fasting varied from 12 hours to as long as 4 days after birth. The new practice was challenged by Ylppö in Finland (a pioneer in the field of premature infant studies) who argued that premature infants must receive fluid in amounts totalling at least 5 percent of body weight on the first day of life. A controlled trial conducted in Germany in 1955 indicated that premature infants who received early first feedings had a higher survival rate than controls whose feeding was delayed. The European criticism had no influence on the American practice; it continued without serious challenge for more than ten years. In the 1960s, several analytic surveys of past records indicated that brain damage (especially spastic diplegia; see Chapter 8) occurred most frequently among premature infants who had been fasted in the first days of life. By the late 1960s, the thirsting and starving era was over.

Another dramatic shift in feeding practice occurred in the United States in the 1940s. The change was influenced by studies conducted in New York in 1941 when the premature infant’s difficulty in absorbing milk-fat from the intestine was described in quantitative terms. It was found that when human milk or a cow’s-milk mixture was fed, a significant fraction of the ingested calories was lost in the stools as unabsorbed fat (Table 10-1). The feeding of half-skimmed milk mixtures resulted in a reduction of fat in the stool. These observations led to a reasonable proposal: if premature infants failed to gain weight on human milk, the calories lost in unabsorbed fat could be reduced by changing the composition of the milk. The proportion of poorly absorbed fat could be reduced, it was suggested, yet the total caloric value of the milk could be maintained by increasing the concentration of well-absorbed protein and carbohydrate. The suggestion was adopted quickly and more generally than originally advised. A half-skimmed cow’s-milk mixture was offered from the first feeding; it was not reserved as an alternative for use in infants who failed to gain weight after a trial on human milk. Soon a significant proportion of the (approximately) one-quarter million infants of low birthweight born in the United States each year were receiving a commercial version of the artificial mixture instead of human milk. Not only was the amount of milk-protein increased, but the predominant kind of protein was changed. In cow’s milk the principal protein is casein; in human milk whey protein predominates.

Table 10-1

Milk Feedings for Premature Infants —
Type and Composition of Feeding, Loss of Calories in Stools

                                                 Human     Unskimmed     Half-skimmed
                                                 Milk      Cow’s-Milk    Cow’s-Milk
Type of Milk Fed                                           Mixtures      Mixtures

Fat intake (as percent of caloric intake)*       30-55     30-55         15-20
Protein intake (as percent of caloric intake)*   7         10-20         14-20
Principal type of protein                        whey      casein        casein
Loss of calories in stool**
  (as calories per kg of body weight per day)    ca. 20    ca. 20        ca. 10
* Balance of caloric intake made up of carbohydrate (sugar).
** Unabsorbed milk-fat.
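
The rationale for the change is easiest to see by running the numbers in Table 10-1. A brief sketch, assuming a hypothetical intake of 120 calories per kg of body weight per day (the intake figure is my illustrative assumption, not the table's), applies the table's approximate stool losses:

```python
# Net retained calories implied by Table 10-1. The intake figure of
# 120 cal per kg per day is an illustrative assumption (not from the table);
# the stool losses are the table's approximate values.
INTAKE = 120  # cal per kg of body weight per day (hypothetical)

stool_loss = {  # cal per kg per day lost as unabsorbed fat (Table 10-1)
    "human milk": 20,
    "unskimmed cow's-milk mixture": 20,
    "half-skimmed cow's-milk mixture": 10,
}

for feeding, loss in stool_loss.items():
    retained = INTAKE - loss
    print(f"{feeding:32s} retains ~{retained} of {INTAKE} cal/kg/day "
          f"({100 * retained / INTAKE:.0f} percent)")
```

On this arithmetic the half-skimmed mixture appears to retain roughly 10 additional calories per kg per day; that apparent gain was the whole appeal of the proposal, while the metabolic price of the added casein, taken up below, did not enter the calculation.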

There was ongoing debate about this major shift in feeding practice. Although infants fed the artificial-milk mixtures did gain weight relatively rapidly, there was some evidence that much of the increase was accounted for by storage of water and minerals (rather than a primary increase in body tissue). Questions were raised about the burden imposed on body metabolism if more protein was absorbed than could be utilized in the formation of tissues of fixed protein composition. The speculation was not idle. Small babies who received high-protein feedings had elevated concentrations of a protein metabolite (the amino acid tyrosine) in the blood during the first days of life. Since this amino acid was thought to be capable of injuring the developing brain, fears arose, but the practice continued. Debate about the possibility of brain damage went on for years. It was not silenced by follow-up surveys conducted in the 1960s which were unable to demonstrate an increased frequency of neurologic complications. In the past few years, additional suspicions concerning the safety of feeding cow’s milk have been raised. The intensity of the debate has increased as a number of metabolic changes have been measured which appear to be related to the quantity and to the quality of protein fed to premature infants.

In the 1970s studies conducted in Finland in collaboration with an American group suggested that prematurely born infants have a limited capacity to make taurine (a free amino acid which is not incorporated into protein; there is evidence that it is important to normal function of the retina and to normal development of the brain). In carefully designed randomized feeding trials, the Finnish-American study demonstrated that infants fed casein-predominant milk preparations (which contain virtually no taurine) had very much lower amounts of this substance in their blood and urine than was found in concurrent controls who received human milk (a rich source of taurine). The findings suggested that the rapidly growing premature infant may be dependent on a dietary source for this essential material. Although the effects of the long-standing practice of feeding premature infants low-taurine milk mixtures are unknown, studies of taurine deficiency in other species have revealed some disturbing results. For example, in the cat, taurine deficiency is associated with degeneration of the retina; blindness results if dietary taurine is not supplied. These dire speculations and other studies of non-nutritive advantages of human milk for premature infants have been responsible for recent increased interest in a return to use of mother’s milk in this country. The order of events which I have just described resembles the sequence associated with shifts in oxygen treatment practices in the same American nurseries. Widespread change took place quickly; “field” testing (if it occurred at all) followed far behind.

The hapless scenario was not confined to slowly evolving events in which unexpected complications were difficult to detect (the changes in the eyes seen weeks after oxygen treatment was stopped, or signs of neurologic handicap detected years after a period of initial fasting in the first days of life). During the 1950s, there were several treatment catastrophes in which the outward signs of serious complications occurred immediately. The time scale was compressed, but the script resembled that of the RLF saga. The first of these incidents originated in a change in opinion and practice which occurred between 1947 and 1949. For a few years prior to this time, bacterial infections, when they were identified in premature infants, were treated with the post-World War II “miracle” drugs (sulfonamides and penicillin), but results were quite poor. The principal difficulty seemed to be the vague nature of early signs of serious infection in small babies. Diagnosis was often delayed or missed entirely. In 1947, a new approach to this problem was proposed in Edinburgh, Scotland: penicillin was administered for the first week of life to all infants who weighed less than 1.6 kg (3 lb 9 oz). An improvement in survival rate was reported. A similar experience was reported in 1949 from Malmö, Sweden following the use of penicillin and sulfanilamides. The practice of routine administration of antibacterial drugs was not widely used in Europe, but it was quickly adopted in the United States. At the annual meeting of the American Pediatric Society in 1949, a leading authority from Boston recommended a number of measures for the prevention and control of infections in newborn infants. Among the suggestions, he advised, “In nurseries for premature infants outside [of] obstetric hospitals, all infants, with the possible exception of healthy newborn infants, should be given a prophylactic course of treatment with sulfadiazine and penicillin.” He noted that, “The vital statistics from this nursery for 1948 show a dramatic reduction in mortality for all infants weighing 2 kg (4 lb 7 oz) or less.” In the discussion of this report, a commentator from Denver said, “… we use sulfadiazine and antibiotics freely.” When the premature center at Babies Hospital in New York opened in late 1949, the new practice suggested at the annual meeting was adopted. Antibacterial drugs (penicillin plus oxytetracycline or chloramphenicol) were administered to all infants transferred in from other hospitals in the hope of reducing the risk of outbreaks of infection which might be “imported” from other nurseries. This routine continued for 3 1/2 years. In 1953, a new combination of drugs for prevention was considered: penicillin plus a newly available sulfonamide drug (sulfisoxazole). The new agent had a practical advantage over previous preparations; it could be administered by injection at infrequent intervals (once or twice each day to maintain satisfactory levels in the blood). The new regimen was prescribed at Babies Hospital for 1 1/2 years with no recognized hint of difficulty. The mortality rate of very small infants was quite high in the early 1950s. The deaths were associated with a number of fatal conditions (especially hyaline membrane disease and kernicterus, a serious, often fatal form of brain damage which is a complication of jaundice in newborn infants). But, “in our experience . . .,” the frequency of fatal infections was relatively low in association with the preventive treatment.

When the results of the RLF Cooperative Trial became known in 1954, our skepticism concerning all unevaluated innovations in premature care began to grow. A recommendation for a new antibacterial treatment program was made in that year, and we seized on the opportunity to begin a long-delayed systematic examination of this element of care. At this time, we thought the grounds for recommending preventive treatment were quite reasonable; only the ideal agent(s) seemed in doubt. Consequently, we decided to compare the results of the proposed new treatment (subcutaneous oxytetracycline) with those of the “established” drugs (penicillin plus sulfisoxazole). We were completely unprepared for the denouement at the end of this exercise. We anticipated the controlled trial would be the first in a series of exploratory attempts to find an ideal preventive regimen. It seemed unlikely the differences would be striking and we thought we were in for a long search. Much to our amazement, the first trial gave a definitive result. To our horror, the mortality rate was highest (and strikingly so) in infants who received the “established” treatment! Infants who received penicillin plus sulfisoxazole had fewer infections, as compared with the babies who received the new treatment, but this was irrelevant. Kernicterus was found nine times more often among infants who succumbed after receiving the standard treatment. There was little doubt that this unexpected (and, at the time, completely inexplicable) complication accounted for the increased number of fatalities. We took little comfort in the undeniable fact that the formal trial “saved” half of the infants from exposure to the unsuspected hazards of a treatment which had been used so confidently for the previous 1 1/2 years. If a controlled trial had been carried out at the time of the original shift in practice, the “saving” in lives would have been truly impressive. It was not until 1959 that the mechanism underlying the disaster was uncovered. It was found that sulfonamide drugs (especially sulfisoxazole) “released” the yellow pigment bilirubin from its binding to albumin in the blood of jaundiced infants; the toxic pigment was then free to enter the brain. Since sulfonamide drugs were used widely in the treatment of newborn infants (many of whom were jaundiced), this fatal complication must have occurred often. However, the national dimensions of the kernicterus outbreak were never reported.
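
The displacement mechanism uncovered in 1959 is easy to make concrete. Below is a minimal mass-action sketch of competition for albumin binding sites; the function and every constant in it are hypothetical and purely illustrative (none of the values come from this chapter or from measured pharmacology), but they show why a competing sulfonamide raises the free, brain-toxic fraction of bilirubin:

```python
# Hypothetical single-site competition model: a sulfonamide occupying part of
# the albumin binding pool leaves less capacity for bilirubin, so the free
# (brain-toxic) fraction rises. All constants are illustrative, in arbitrary
# concentration units; this is a sketch of the mechanism, not measured data.

def free_fraction(ligand_total, albumin_sites, k_ligand,
                  competitor=0.0, k_comp=0.0):
    """Approximate free fraction of a ligand competing for albumin sites."""
    # Sites occupied by the competitor (simple saturation estimate).
    occupied = albumin_sites * (k_comp * competitor) / (1.0 + k_comp * competitor)
    available = max(albumin_sites - occupied, 0.0)
    # Ligand bound to the remaining sites, capped by the ligand present.
    bound = min(available * (k_ligand * ligand_total) / (1.0 + k_ligand * ligand_total),
                ligand_total)
    return (ligand_total - bound) / ligand_total

# No sulfonamide: ample binding capacity, essentially no free bilirubin.
print(free_fraction(ligand_total=200, albumin_sites=600, k_ligand=10.0))
# Competitor present: most sites occupied, free fraction rises sharply.
print(free_fraction(ligand_total=200, albumin_sites=600, k_ligand=10.0,
                    competitor=50.0, k_comp=1.0))
```

The sulfonamide need not be toxic itself; by crowding the carrier it converts safely bound pigment into free pigment, which is the sense in which the drug “released” bilirubin.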

The deep-seated perseverance of the custom of informal experimentation in medicine is illustrated vividly by an incident which occurred while the lessons of the RLF and the kernicterus outbreaks were still ringing in the ears of pediatricians. The episode began one quiet Saturday afternoon in 1956. Ethel Dunham, who was revising her textbook (Premature Infants), came to New York to ask my colleague, Hattie Alexander of Columbia University, about the subject of antibacterial treatment. Doctor Alexander was a renowned authority on the subject of infectious disease. She was disturbed by the very poor experience in treating the class of infections caused by coliform bacteria. These infections were becoming prominent in newborn infants, as others caused by organisms which responded to penicillin were subsiding. Mindful of the recent kernicterus disaster in our nursery, she told Doctor Dunham that new steps must be taken cautiously. Doctor Alexander proposed that a controlled clinical trial be conducted to explore the effectiveness of a combination of agents in infants at highest risk [those weighing under 2 kg (4 lb 7 oz) at birth]. Shortly afterward, at a seminar on premature infants which Doctor Richard L. Day and I coordinated, Doctor Alexander gave a short talk on antibacterial therapy and announced her proposal in these words:

“… the following combination of antibiotic agents would be worthy of trial . . . [in] all infants whose birth weights are less than 2000 grams:
1. Chloramphenicol
2. Erythromycin
3. Sulfadiazine”

(The dose of chloramphenicol recommended was 100 mg per kilogram per day, the dose used in older infants and children). This talk, in a small parlor room of the Biltmore Hotel in midtown Manhattan on October 7, 1956, was the only time that the proposal was ever made publicly (the report of the seminar did not appear in print until 9 months later). Despite the express recommendation for an evaluative trial, the caution was not heeded. Instead, uncontrolled use of the suggested treatment regimen (and other variations using chloramphenicol) spread quickly to all of the states of the Union. The sorrowful consequences slowly became evident in the next few years. In all parts of the country, nurses and physicians observed a strange new disorder of premature infants. It came to be known as the “gray syndrome”: on the third or fourth day of life, the babies developed distention of the abdomen, vomiting, irregular respirations, and pallor. Cyanosis and poor circulation of blood to the peripheral tissues quickly followed and the victims developed a ghastly gray color of the skin. In a few hours, they were dead. During these years there were several severe influenza epidemics in the United States; deaths from pneumonia occurred frequently in the nurseries. As a result, it was some time before the association was made between the new drug treatment of infections and the horrendous gray syndrome. All doubt ended in 1959, when the results of a controlled clinical trial, conducted in Los Angeles, were reported: the mortality rate among infants who received chloramphenicol was substantially higher than in untreated concurrent controls. It was found, belatedly, that premature infants have a limited ability to transform and to excrete chloramphenicol. The relatively high doses administered resulted in fatally high levels of drug in the blood.

What stands out in this review of calamities (and there were other similar episodes) is unrestrained medical behavior: a double standard of evidence was used to guide actions. In preclinical investigations involving animals or in pilot observations in infants, the rules of scientific evidence were carefully observed. At the next step in investigation, the first application of new treatment in the “field,” the cautious rules were abandoned. The safeguards inherent in the hedging strategy of formal evaluation (limited exposure of infants to unknown risks through use of controls) were invoked only as an afterthought. Over and over, the barn door was locked after the horse had escaped!

On one occasion, the “door” was locked just as the “horse” was escaping. This incident began with a drug company application to the Food and Drug Administration on September 16, 1960, for distribution of a new drug in the United States. The drug, thalidomide, had been in general use in West Germany since 1957: it was well regarded there as a safe and useful medication, especially in the treatment of nausea in pregnancy. The company was required to present the results of clinical testing in this country before F.D.A. approval was granted. To meet this requirement, the firm sent 2.5 million tablets to 1267 “investigators” in the United States. However, it was clear that no formal studies were expected. A manual issued to salesmen employed by the company stated, “. . . the main purpose is to establish local studies whose results will be spread among hospital members. You can assure your doctors that they need not report results if they don’t want to . . .” At this time, and quite by chance, Doctor Frances Kelsey, of the F.D.A., read a short letter to the editor in the British Medical Journal of December 31, 1960. The letter mentioned the possibility that use of thalidomide might result in certain neurologic symptoms in the feet and hands (peripheral neuritis) of adult users. Alerted by this suggestion, Doctor Kelsey began to request more information concerning the complication from the company. On February 23, 1961, she requested a complete list of investigators to whom the drug had been furnished, hoping to check on possible neurologic effects. The company sent a list of only 56 investigators who had used the drug for a period of 4 months or longer. Little by little it was learned that thalidomide taken by pregnant women was the suspected cause of congenital malformations. As early as October 1960, the first two cases of grossly deformed babies (with seal-like deformities of the limbs) were presented at a medical exhibit in West Germany. In late 1961, the West German Minister of Health issued a statement warning women not to take the drug. Finally, thalidomide was withdrawn from the market in West Germany on November 25, 1961. Following this action, reports from country after country indicated that thousands of malformed babies had been affected.

The drug never emerged from investigational status in the United States, thanks to the actions of Doctor Kelsey. Later the F.D.A. conducted an inquiry which revealed that hundreds of the “investigators” who received the drug for study in this country failed to keep adequate records. They did not know which patients received the drug, nor when it was prescribed and at what dosage. When the hazards became known, the F.D.A. investigators were unable to contact many of the patients. More than half of 1258 physicians interviewed had no record of the quantities of the drug returned or destroyed, pursuant to the manufacturer’s instructions.

Legislative recognition of this situation came with the passage of the Kefauver-Harris Amendment to the U.S. Pure Food and Drug Laws in 1962. For the first time, there were legal regulations governing the formal and limited steps taken in testing the safety and efficacy of a new drug for use in human patients. Unfortunately, this law, and subsequent F.D.A. regulations, have not provided adequate protection for the fetus and newborn infant. Only rarely are the controlled testing programs carried out in these immature patients (whose metabolism of drugs is frequently quite different from that seen in older individuals). Moreover, many informal clinical investigations (involving drugs which are not “approved” for use in the fetus and newborn, and physical maneuvers which do not involve drugs) are still carried out in the casual style which characterized thalidomide “investigations” in this country. As late as 1972, one leader in perinatal medicine said:

Therapeutic programs which evolve are difficult to subject to controlled studies, particularly when they appear to be successful … I think in some instances, gradual change in therapy often leads to sounder practice . . .

Unfortunately, the results of most “gradual changes in therapy” that have taken place in the evolvement of modern perinatal medicine do not bear out this optimism (Table 10-2).

Table 10-2

Results of some “Proclaimed” Therapies
in the Development of Perinatal Medicine

                                                        Consequences*
                                                   Led to
                                                   Sounder    Led to      Misled into
Gradual Changes in Therapy                         Practice   Disaster    Fruitless Byways

Testosterone to stimulate growth of prematures                ?
Thyroid hormone . . . ibid . . .                                          x
DES to prevent miscarriage                                    x
Progestins to prevent miscarriage                             x
Exchange transfusion                               x
Supplemental oxygen for periodic breathing                    x
Initial thirsting and starving                                x (?)
Synthetic vitamin K prophylaxis                               x
Low-fat, high-protein feedings                                ?
Sulfisoxazole prophylaxis                                     x
Chloramphenicol prophylaxis                                   x
Gastric emptying to prevent RDS**                                         x
Sternal traction for RDS                                                  x
Epsom salt enemas for RDS                                     x
Rocking-bed for RDS                                                       x
Alevaire for RDS                                                          x
Water mist for RDS                                                        x
Acetylcholine for RDS                                                     x
Respirator support in RDS                          x
Continuous positive airway pressure for RDS        x (?)
Feeding gastrostomy for prematures                            ?           x
Ice water resuscitation for asphyxia                                      x
Sodium bicarbonate bolus infusions in asphyxia     ?
Lowered thermal environment                                   x
Routine hexachlorophene bathing                               ?
Phototherapy for hyperbilirubinemia                x (?)
* Most of these judgments rest on as infirm a base as the original claims of benefit (see chapter notes).
** Respiratory distress syndrome (see Chapter 8).

The phrase “gradual change in therapy” conjures up an image of slow, cautious exploration, but is this accurate? I said earlier that there was a Let’s-wait-and-see response from some physicians when they first learned about the reasoned suggestions for routine oxygen treatment. At the 1949 meeting, when routine antibacterial drug treatment was advised, one commentator said, “I am disturbed at . . . [the] advocacy of routine treatment with penicillin and sulfadiazine for all babies. Would not more difficulty arise from that procedure in the long run?” What converted the doubters? It was not, as I have shown, the presentation of reliable evidence from formal trials; there were few die-hards who waited that long. Moreover, the conversions took place so quickly that it seems likely some social forces were at work. A sociologic study, reported in 1957, of the diffusion of an innovation among physicians provides a number of insights into the dynamics of the conversion process: the propagation of “fashions” in therapy.

Coleman and co-workers examined the social influences which intervened between the initial prescription of a new drug by a few innovators and its final use by virtually an entire medical community. Data were collected in four American cities for 15 months after a new antibiotic drug with wide potential applicability was released for general use. The researchers conducted sociometric interviews and classified 125 physicians, on the basis of individual attributes, into two mutually exclusive classes: primarily physician-oriented (principally on the basis of recognition given by colleagues) or primarily patient-oriented (respect by patients and general standing in the community). In addition, they traced the social structure which linked the doctors together: professional relationships (advisors and discussion partners) and friendship ties. The month when each physician first prescribed the drug was determined by systematic monitoring of the prescription records of pharmacies in the cities over the period of study. Coleman’s group found that physician-oriented doctors generally used the drug earlier than their patient-oriented colleagues. However, a plot of the curve of cumulative proportion of new users had the same shape in both groups (the curve for the patient-oriented physicians was merely displaced to later starting times). In both groups the movement resembled a “snowball” process: the number of recruits each month increased in proportion to those who were already converted. (The mathematical equation of this curve characterizes rates of population growth, certain chemical reactions, and other phenomena which obey a chain-reaction process.) Additionally, there appeared to be successive stages in which interpersonal influences played a role in diffusion of the innovation through the community of physicians. The first social networks which appeared to be influential were those which connect doctors in professional relationships of advisors and discussion partners. A little later the friendship ties seemed to exert a persuasive influence. Finally, by about 6 months after the drug was released, the social networks seemed completely inoperative as chains of influence. Early, when a minority of physicians were prescribing the drug, intellectual assurance from esteemed colleagues and emotional support from friends seemed to be needed by those who were uncertain. As the “snowball” process gathered momentum, usage was no longer novel and late recruits turned less and less to individuals for validation and approval.
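
The chain-reaction curve Coleman's group described is the logistic curve: each month's recruits are proportional to the product of those already converted and those not yet converted. A minimal sketch (with illustrative parameters of my own choosing, not values fitted to the 1957 data) shows the snowball shape and how a later-starting group traces the same curve displaced in time:

```python
# A sketch of the "snowball" (logistic) diffusion Coleman's group described:
# monthly recruits proportional to adopters times remaining non-adopters.
# Parameters are illustrative, not fitted to the 1957 study.

def diffusion(n_doctors=125, seed_adopters=3, rate=0.030, months=15):
    """Cumulative adopters per month under a chain-reaction (logistic) rule."""
    adopted = float(seed_adopters)
    curve = [adopted]
    for _ in range(months):
        new_recruits = rate * adopted * (n_doctors - adopted)
        adopted = min(n_doctors, adopted + new_recruits)
        curve.append(adopted)
    return curve

physician_oriented = diffusion(seed_adopters=3)  # earlier start
patient_oriented = diffusion(seed_adopters=1)    # same shape, displaced later

for month, (a, b) in enumerate(zip(physician_oriented, patient_oriented)):
    print(f"month {month:2d}: physician-oriented {a:5.1f}  patient-oriented {b:5.1f}")
```

Printed side by side, the two runs rise along the same S-shaped path, with the smaller seed group merely lagging behind, which mirrors the displaced-but-identical curves reported for the patient-oriented physicians.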

The Coleman group observations suggest that advisors play a pivotal role in the initial phase of the diffusion of an innovation. This was borne out in the shift to routine oxygen treatment in the 1940s. Early adoption of the new practice took place in the leading (physician-oriented) university hospitals in the United States. I can recall enactment of the advisor role when the routine was just beginning in New York. Doctors from small hospitals visited Columbia University and asked not only about the rationale for the shift, but they also wanted to know if this was occurring in other prestigious institutions. Before long, routine oxygen was used so widely that the questions stopped. Acceptance of this “fashion” in treatment was complete.

The speed of propagation of information about treatment “fashions” has increased considerably since the days of mist treatment in 1953 and the Coleman study in the mid-1950s. Physicians and the public are bombarded with medical news in print media (and, of course, television). Physicians receive additional information in the form of digests and summaries of original reports and of lectures. These appear in medical newspapers and magazines which are sent free of charge to doctors (the number of these drug-advertisement-supported publications has increased considerably since the 1950s). As a result of the profusion of medical news reporting, it is increasingly probable that a physician will first learn about a new treatment from some secondary source: an abbreviated account that presents neither all of the evidence nor the details of the design of studies on which the conclusions are based. Furthermore, encouraging, positive results are reported in the news more frequently than failures. For example, an article appeared on page 1 of the New York Times on October 22, 1964 under the headline “Fatal Baby Disease is Reported Cured.” The information was obtained from a talk delivered the day before in Miami, Florida to an audience of physicians attending the annual meeting of the College of American Pathologists and the American Society of Clinical Pathologists. The story began, “A cure for the mysterious hyaline membrane disease, which takes the lives of up to 25,000 infants in the United States each year, was reported here today [October 21] at a medical meeting. A little more than a year ago the disease was fatal to Patrick Bouvier Kennedy, infant son of President Kennedy.” The reporter explained that a new treatment had been devised based on the idea that newborns have too much water in their bodies; in prematures, in particular, excretion of water by the immature kidney is impaired. Water tries to escape through the incompletely developed lungs, he continued; this chokes off the air supply, leading to the symptoms of respiratory distress and death. The simple new theory led to a simple new treatment: enemas of saturated epsom salts immediately after birth to draw off water from the body tissues. “It is not yet certain that the theory is correct,” the reporter noted, “but it is certain that the treatment works.” The concentrated enemas were given to 28 sick infants in five hospitals in Louisville, Kentucky; all 28 improved dramatically. Babies who were suffocating became “normal” in an hour or less. The New York Times reporter interviewed the originator of the enema treatment and he relayed the suggestion that “. . . all premature infants be given the treatment prophylactically [since] it cannot be determined which infants will develop the disease . . . and the treatment itself appears to be without hazard. Prophylactic use of epsom-salt enemas will prevent the development of the disease and save many lives.” This news was reported extensively in general news media (including Time Magazine on Oct. 30, 1964) and in the medical news publications (including Medical Tribune on November 11, 1964). Even the highly respected Lancet carried an annotation (November 21, 1964) which began, “A new treatment for premature infants with hyaline-membrane disease was suggested at a meeting . . . last month . . . the full details have not yet reached us . . .,” and the editor went on to describe the available facts gleaned from the New York Times article.
The news stories were much more efficient than the doctor-to-doctor grapevine: use of enemas spread more quickly than did prescriptions of chloramphenicol 8 years earlier, and the denouement came more quickly. On July 5, 1965, Andrews and his associates reported the results of magnesium sulfate (epsom salt) enemas given to 10 newborn lambs: concentrated solutions were uniformly fatal (5 lambs that received 50-percent-solution enemas developed elevated levels of magnesium in the blood and signs of magnesium intoxication, and all died 23-46 minutes later), and less concentrated solutions also produced disturbing results (2 of 5 lambs receiving 25-percent-solution enemas developed signs of magnesium intoxication; 1 died 2 hours later). Needless to say, these alarming observations had a chilling effect. Use of the outlandish treatment fell off sharply, but not completely. Seven years after the results in lambs were reported, it was still found necessary to publish a warning against the dangers of the epsom-salt enemas (an instance had been encountered of fatal magnesium intoxication following an enema treatment for hyaline membrane disease). Unfortunately, the story of the incredible epsom-salt enemas is not unusual. Once brought to life by press attention, even the most bizarre treatment (Laetrile treatment for cancer is a notorious example; see chapter notes) may lead a hydra-like existence.

Roger Bacon, in the 13th century, warned against uncritical acceptance of authoritative opinion, but it was still a major stumbling block in medicine during the years which I have recalled in this chapter. Unhappily, skepticism has fallen even lower in the years since. Physicians depend, more than ever, on the judgments and opinions of authorities because of an exponential increase in scientific information and an increase in the complexity of medicine. At the same time the voice of authority is louder and more broadly cast than ever: the electronic revolution carries the voice of experts and the jet airplane carries the experts themselves into every corner of the country. The potential for misinterpretation of the kind which took place in that small hotel room in 1956 (concerning chloramphenicol) is now enormous. Moreover, more attention is paid to the speaker than to the content of his remarks. Objective evidence of this “personalization” phenomenon was obtained by Naftulin and co-workers. They framed the following hypothesis: given an impressive lecturer and lecture format, even an experienced group of listeners (professional medical educators) will be seduced into feeling they have learned something, even when the content of the lecture has been irrelevant, conflicting, and meaningless. A distinguished-appearing, authoritative-sounding professional actor was dubbed “Doctor Myron L. Fox” and given impressive but fictitious credentials. He spoke to several important audiences on “Mathematical Game Theory as Applied to Physician Education.” The lectures and the question-and-answer sessions were filled with double-talk, neologisms, non sequiturs, contradictory statements, meaningless references to unrelated topics — and some good jokes. Satisfaction questionnaires returned by the audiences gave the lecture a high rating. The phenomenon has been labeled “The Doctor Fox Effect” and it deserves special recognition. It raises an issue concerning the responsibility of authorities who speak to practicing physicians, and more so when their remarks are reported in the press. If authoritative statements are accepted completely uncritically, lecturers have an incommutable obligation to use restraint and self-criticism. The duty is particularly great when speaking about unevaluated innovations to physicians who are pressed to find solutions to their everyday medical problems. Authoritative lecturers should stimulate their listeners to responsible contemplation of incomplete evidence, instead of irresponsible, unrestrained action. The disturbing consequences of impatient action which I have reviewed recall an apocryphal saying in factories which manufacture fireworks:

“It is better to curse the darkness, than to light the wrong candle.”
