jackmccallum.com/History of Medicine.docx


History of Medicine

Egyptian Medicine

Medicine in Ancient Egypt suffered from the same disabilities as the rest of Egyptian science—the separation between reality based on observation and that based on belief had not been made. There was literally no separation between science and religion, and, in medicine, the situation was even more obvious since physicians were almost all priests as well.

The Nile River Valley, along with the other three great alluvial systems that gave rise to civilizations and religions—the Tigris/Euphrates, the Yellow and Yangtze Rivers, and the Indus/Ganges system—provided a unique setting for the evolution of mankind from nomadic hunter-gatherers to settled farmers and herdsmen. The climate was warm and the growing season long, but the soil provided the magic. In each river system, people learned to domesticate grains (wheat, barley, rice) and animals (cows, sheep, goats, pigs). The wonder of the four valleys was the regular floods that brought new nitrogen-rich soil and kept them capable of supporting populations that would otherwise have been unthinkable. The Nile, with its length and floods so regular they defined the calendar, was the best of all, and humans have lived beside it for 250,000 years. The self-renewing river made the narrow valley capable of supporting 450 people in each square mile, and, by 3000 BCE, Egypt had a population in excess of 1 million, virtually all of whom clustered on the river’s banks.

The other defining geographic factor in ancient Egypt was the serpentine area bordering the river. Above the delta, whose multiple outlets reach for the Mediterranean, the Nile flows as a single strand bounded by high bluffs. The valley is almost never as much as 20 miles wide, and in places the bluffs press right against the banks of the river itself. Beyond the narrow fertile plain to both the east and west are formidable deserts; Nilotic Egypt is functionally an island.

However, the size of the population, the fertility of the land, and effective farming and herding changed how people functioned. Instead of males of a single family or small tribal group going off to hunt while the women scavenged edible plants, a single farmer could feed twenty people. The surplus he produced made it possible for others in his social group to undertake activities beyond those directly tied to feeding themselves—religion, government, warfare, and understanding how the world works. Science, technology, and medicine became possible.

Stepwise, the settlements along the river organized themselves into administrative districts called nomes, which eventually coalesced into Upper and Lower Egypt and, finally, in 3200 BCE, united into a single empire that would last more or less continuously until the beginning of the Christian era. For perspective, for the United States to have survived that long, it would have to have been founded five hundred years before Socrates and Plato were born.

Productive land and a surplus of goods define wealth, and wealth almost inevitably generates acquisitiveness and conflict. There is clear evidence that ancient Egyptians fought among themselves from the earliest recorded times, and that they fought in fairly large actions—one mass grave dating to about 2000 BCE is filled with skeletal remains, the bones smashed and penetrated by flint arrowheads.

The practice of medicine was important in Egyptian society as far back as we can trace a record. One interesting though speculative tradition dates to around 3000 BCE: the god Osiris is said to have regained the use of his legs after the mother god Isis and Thoth (the inventor of medicine) manipulated his injured neck. Tradition also holds that Imhotep was the architect of the pyramid of Pharaoh Djoser as well as his chief physician; that semi-mythical figure eventually morphed under the Greeks into Aesculapius, a god of healing. The first named physician of whom we have a definite record was Hesy-Re, Chief of Dentists and Physicians to the Third Dynasty pyramid builders of approximately 2600 BCE. Public works employees (and, by implication, the military) had been cared for by the state for a very long time.

Rameses II, who ruled from 1279 to 1213 BCE, divided Egypt into 34 districts, each of which had its own barracks and its own physician, but the idea of state-employed medicine had a long tradition antedating Rameses’ military. Workers hired or conscripted by the state to build pyramids and other state projects were subject to injury and disease, and the state provided care for them. Rameses used his standing army to expand his empire well beyond the narrow confines of the Nile Valley. He campaigned successfully and left garrisons as far west as Libya, as far south as Nubia, and as far north as modern-day Lebanon. He built an army of 100,000 and armed it with hardened bronze. He fought the 17,000-man Hittite army in what is now Syria at the Battle of Kadesh with 20,000 soldiers of his own. The battle was probably a draw, but monuments erected on the Pharaoh’s return unabashedly proclaimed it a victory. When Rameses’s immense armies went on campaign, they were accompanied and cared for by a state-employed medical corps.

So what sort of physicians were these? To understand that, it is necessary to consider briefly the ancient Egyptians’ concepts of nature and reality. With the possible exception of physical pain, humans seem to dislike uncertainty above all else. Occurrences, especially unfortunate ones, that come without explanation are profoundly discomforting, and that applies especially to maladies and injuries of the human body. But the Egyptians lacked almost every tool we currently use to extend our senses and decipher nature—telescopes, microscopes, magnetic resonance imaging, and the like. There was very little outside direct trauma that the Egyptians could explain based on their own observations. In order to restore a sense of order, they defaulted to explanations that did not rely on observed reality. Science can be differentiated from faith by the fact that the former explains observed reality entirely on the basis of rules intrinsic to nature itself—Euclidean geometry, Newtonian mechanics, Einsteinian relativity. Faith, on the other hand, invokes the intervention of forces outside of nature—miracles and divine intervention. For the Egyptians, explanations from within nature were sparse indeed. Explanations that resorted to the supernatural were, of necessity, many.

That was especially true of medicine, and Egyptian medical practice and its practitioners were divided accordingly. The higher-status practitioners, the w’bw, were the priests of Sekhmet, the “Lady of Pestilence” responsible for infection and epidemics. We now understand that many of those infectious diseases arose from the concentration and proximity of humans with animals (measles, influenza, smallpox) or with other humans (the various dysenteries and upper respiratory illnesses) that were the unavoidable concomitants of settling into a defined area. The ancient Egyptians, lacking any understanding of infection, assumed they were due to divine mischief that could be ameliorated with proper intervention. Sekhmet caused disease; Re, Thoth, and Isis could heal, and someone needed to speak to them as well. Of the handful of surviving Egyptian medical manuscripts, all but one are collections of remedies made from various minerals, plants, and animals, together with prayers, the two usually used in combination.

The London Papyrus is a palimpsest that is largely illegible, but part of it describes treatment of burns that may have resulted from the Santorini volcanic eruption. The Lesser Berlin Papyrus deals predominantly with relations between mothers and children and with charms to be used by them and by midwives. The Greater Berlin Papyrus describes medicines to be used for worms, breast diseases, heart disease, hematuria, bellyache, and infertility. All are relatively short, partially illegible, and not terribly informative. The Kahun Papyrus deals partly with the diseases of women and partly with those of animals. The Chester Beatty Papyrus is devoted to ano-rectal disease.

The two most interesting medical documents from ancient Egypt both passed through the hands of American antiquities speculator Edwin Smith in the 1870s and may even have come from the same Egyptian dealer, possibly even from the same site.

The Ebers Papyrus, dating to about 1550 BCE, was bought from Smith by Georg Ebers in the winter of 1873-4 and is currently in the library of the University of Leipzig. The scroll is only 14” wide but fully 65’ long when unrolled. It is a compilation of case reports and treatments clearly drawn from a variety of sources that probably date as far back as 3400 BCE. Although there are a few surgical cases at the end of the papyrus that deal with such things as removal of fatty tumors, excision of a thrombosed vein, or drainage of abscesses, most of it is taken up with medications and related spells. It includes treatments for worms, gray hair, lumbago, burns, gangrene, scabies, fleas, various wounds (including those caused by flogging), and witchcraft. There are over 700 prescriptions in the document, including a number that would still be considered somewhat rational—castor oil for constipation; aloe for rashes; honey, copper, and possibly mercury and arsenic (all of which have some antiseptic effectiveness) for various infections. Malachite, a copper-containing powder also used as a cosmetic, is prescribed for blindness most likely caused by trachoma, which is still endemic along the Nile. The document also describes the medication “mumia,” said to drip from rocks, to be very valuable, and to be useful in healing broken bones and sword, spear, and arrow wounds. The papyrus recommends making incisions with hot knives that would have acted as cauteries. Wounds were bound with fresh meat on the homeopathic “like heals like” hypothesis, a practice possibly useful since meat contains proteins that could at least theoretically aid hemostasis. The Hearst Papyrus contains many of the same prescriptions and descriptions as the Ebers, although often in abbreviated fashion, suggesting that it might have been a “working copy” to be used on site by a practitioner while the longer papyrus served as a library or research volume.

Although not mentioned in the Ebers Papyrus, excavations in Egypt have unearthed small clay jars in the shape of capsules from the poppy which were likely imported from Cyprus and almost certainly contained opium. There is also archaeological evidence of fractures splinted with papyrus impregnated and stiffened with acacia gum from as early as 2750 BCE.

Although the Ebers Papyrus is longer, the Edwin Smith Papyrus is far more interesting. Acquired in the 1870s at Luxor (Thebes) by the same collector and from the same seller as the Ebers, the Smith was subsequently translated by University of Chicago Egyptologist James Breasted, who published the manuscript, its glosses, and its translation in two magnificent volumes. The original dates to about 1600 BCE but, based on the vocabulary and hieroglyphics used, is a transcription of work going back to about 2600 BCE. Unlike the Ebers, the Smith deals exclusively with surgical and traumatic cases and is rationally organized by site of injury, starting with the head and working down the body in 48 illustrative cases. In each area, there are descriptions of findings followed by a diagnosis, a prognosis, and suggested treatments. Within each body area, there are gradations of injury from least to most severe, along with the admonitions “A disease which I will treat,” “A disease with which I will contend,” and “A disease not to be treated.” An example would be a blow to the head with a non-depressed fracture, which would be treatable. A sunken fracture with no violation of the skin would be one with which to contend. A depressed fracture with an open wound and exposed brain, accompanied by bleeding from the nose and ears and a stiff neck, should only be bandaged and observed. If, after two to four days, the patient was febrile and delirious and the wound had a smell like the feces of “small cattle” (a term for goats or sheep), he should no longer be treated. If, on re-examination, the patient was found to be pale but conscious and not febrile, he was back in the category of problems amenable to treatment.

Egyptian surgeons saw enough of the latter to know that the surface of the brain was corrugated; they described it as like the ripples on molten metal, with which they were obviously familiar. They also knew how to drill a hole in the skull next to a depressed fracture so they could lever up the fragments. Other cases include paralysis from a cervical spine fracture and probable loss of speech from a brain injury. Two types of wounds are described: healthy and sick. The former can be sutured or held together with adhesive immediately. The latter are hot and feverish (the hieroglyph for fever being a brazier) and can only be taped loosely so they can drain as they heal.

The cases progress down the body to the mid back at which point the papyrus abruptly ends in the middle of a sentence and the middle of a word. Evidently the scribe reached the end of his day and never came back to finish.

The nature of the cases and their organization bring up two points. First, they are all trauma cases. Second, they are all clinical and lack the magical accoutrements of the Ebers and other papyri. It is entirely possible that the Smith Papyrus was a manual of military medicine, since the injuries are, in large part, the kind that would have been sustained in battle. Also, since the causes of the problems are not in doubt, the approach is entirely empirical and does not rely at all on the supernatural. The book is intended for the swnw (soonoo), clinicians a level below the w’bw.

One of the most striking aspects of Egyptian medicine was its failure to progress. Both the Ebers and Smith Papyri were “reprints” of knowledge from 1,000 years earlier. When Herodotus came to Egypt in the 5th century BCE, he found that no further progress had been made: the number of practitioners was large, but they had subspecialized to the point that each dealt exclusively with a single body part (eyes, teeth, stomach, anus), and they were almost entirely reliant on chants and potions. Empirical medicine had atrophied and would not advance much more for the next 2,000 years.

Medicine in Mesopotamia

Mesopotamia, from the Greek for “between the rivers” (the Tigris and the Euphrates), lies within modern Iraq. The four great river valleys along the Tropic of Cancer (the Nile, the Indus, and the Yellow and Yangtze are the other three) share the distinction of being cradles of civilization. Mesopotamia’s fertility, especially the valley’s capacity for food production, made concentrations of people possible, and concentrated populations had developed there as early as 5300 BCE. It was not uncommon for these cities to reach populations of 30,000-35,000 and, with the surrounding farms and fields, to cover as much as 1,800 square miles. Larger populations and the ability to produce more food than the farmer and his family required led to specialization that had been impossible for nomadic hunter-gatherers. With cities came leaders who could precipitate wars, soldiers who could fight them, and workers who could generate the wealth that made them worthwhile.

The various empires that ruled Mesopotamia lasted almost 5,000 years, beginning with the ancient Sumerians who settled the alluvial lands at the apex of the Persian Gulf between 4500 and 4000 BCE. They created the mathematics from which our divisions of time are derived and the first symbolic script. Eridu was the first real city, home to a mixture of herdsmen raising goats, sheep, cattle, and pigs; fishermen working the Gulf; and farmers whose irrigated fields produced barley, wheat, lentils, dates, and chickpeas. The farmers worked under the aegis of a state bureaucracy and produced the food surplus necessary to support that government. The city was supplanted by nearby Uruk, which thrived from 4100 to 2900 BCE and grew to house 50,000 people. Although the city had size and prosperity, there is no evidence that it developed defensive fortifications, and the assumption is that it never had a standing or active military. By 2700 BCE, that had changed, and virtually all of the larger Mesopotamian cities were walled.

In the late third millennium, Sargon I of Akkad (the exact site of that city is unknown) invaded Sumer and united the entire valley for the first time. Six hundred years later, the Amorites under Hammurabi conquered the valley and, as often happens to conquerors, were absorbed into the Akkadian-Sumerian culture they had defeated. Mesopotamia and its capital Babylon were conquered twice by the Assyrians, in 1100 BCE and again in 745 BCE. After the second conquest, the Assyrians stayed and built a new capital city at Nineveh. The Medes destroyed Nineveh in 612 BCE and were in turn displaced by the Persians under Cyrus in 539 BCE. Finally, the Persians were themselves conquered by Alexander the Great in 331 BCE. In spite of repeated, albeit infrequent, conquests, the Mesopotamian civilization remained remarkably coherent, with the conquerors being absorbed into the local culture rather than replacing it.

After the Akkadian invasion, the serial states in the valley between the rivers were aggressively martial. Almost from the beginning, the various city-states fought regularly among themselves. After the valley was forcibly brought under a single government, the Mesopotamians turned their attention outward and fought with their neighbors. Sargon’s empire may have stretched as far west as Cyprus and included parts of modern Iran, Turkey, and Syria. The oldest known fortified city, Uruk, was walled around 2900 BCE and covered over five square miles. Fortification inevitably led to battering rams and siege towers, and long sieges meant problems of sanitation and supply.

In some areas, the Sumerians and their Akkadian successors were accomplished empirical scientists. They understood the relation between poor sanitation and disease and built sophisticated water supply and sewer systems. They knew a great deal about parasites and insects and their relation to disease although they lacked effective interventions to prevent those diseases; Sumerian priests had specific prayers against mosquitoes and the fly was the symbol of Nergal, the god of death.

Unfortunately, only two incomplete clay tablets survive that deal directly with Sumerian medicine, although those tablets, which date to 2300 BCE, are the oldest known medical documents. They deal almost exclusively with prescriptions and herbal remedies, but it is possible to infer a good deal about Sumerian medicine from later tablets since the flow of knowledge in Mesopotamia seems to have been smooth and well maintained from century to century.

Sumerian and Akkadian medical practitioners were, like their Egyptian counterparts, divided into priests and sorcerers—the baru, who were seers, and the ashipu, who were exorcists—and empirical practitioners—the asu. The baru used omens to make diagnoses and prognoses. The ashipu used incantations to remove demons and placate the gods; they were also the most influential practitioners and held the highest status. The asu were technicians rather than priests or magicians and came from the educated middle class. They were the ones charged with caring for problems that clearly did not have a divine cause, and battle injuries would have been in that category. In 2400 BCE the asu were formally separated from the priests and placed under the secular government. This separation, besides improving their level of practice, made the asu answerable, and available, to the king and to his armies.

Mesopotamian medicine recognized three ways of healing: incantations and prayers by the ashipu; drugs, most of which were botanicals; and a limited repertoire of surgical procedures. Sadly, no text dealing specifically with military medicine survives, so we are forced to guess about the treatment of war wounds and diseases based on what we have learned about Mesopotamian medicine in general. The asu recognized fever and hot, swollen wounds as the general and local signs of inflammation and, unlike the Greeks and all other practitioners until modern times, had no illusions about pus being a laudable development. They used metal tubes to drain pus and incised abscesses and other wounds with tempered brass knives identical to those used by barbers for shaving. Wounds were treated in three phases: washing, poultices, and bandaging. The washes were most often a mixture of beer and hot water, although the alcohol concentration of the mixture was too low to actually kill bacteria.

Page 7: jackmccallum.comjackmccallum.com/History of Medicine.docx  · Web viewThe wonder of the four valleys was the regular floods that brought new nitrogen rich ... purging, sweating

Unlike the Egyptians, the Mesopotamians seem never to have learned to suture wounds or splint fractures, leaving us only to imagine the result when bones were broken by a battle axe or bodies were lacerated by a sickle-shaped metal sword.

Very little survives that describes medical practice in the “Sumerian Dark Ages” between the fall of the Sumerian-Akkadian city states and the rise of the Assyrian Empire. A few Kassite tablets from the late third millennium BCE dealing with medications, baths, and poultices for specific symptoms and signs have also survived. For the most part, disease continued to be viewed as the result of supernatural intervention in human affairs and most treatment was directed toward placating angry gods and malevolent demons. The status of the empiricist asu gradually deteriorated while that of the priests and magicians increased.

Fortunately for medical and other historians, Assurbanipal (668-626 BCE), the last of the great Assyrian kings, was a compulsive book collector. When his library at Nineveh was destroyed in 612 BCE, some 30,000 clay tablets, including 800 that deal specifically with medicine, were buried in a trench from which they were unearthed in 1853; most now reside in the Kuyunjik Collection of the British Museum in London. Most are prescriptions, but 40 tablets comprise the Treatise of Medical Diagnosis, or Book of Prognosis, which forms the backbone of what we know about Mesopotamian medicine. It comprises over 3,000 case reports and includes tablets related to omens to be observed while on the way to visit a patient, symptoms and signs pertinent to specific organs and body parts, diseases of women and children, and treatment of certain diseases. Unfortunately, the last group is damaged and largely undecipherable.

Although the various Mesopotamian civilizations lasted almost 5,000 years, their medical skill remained essentially static and generally behind that of their neighboring civilization on the Nile.

Old Testament Medicine

Biblical Jews tended to see illness and injury as visitations from God and mistrusted anyone who interfered with the natural course of those misfortunes. The earliest Biblical mention of a physician occurs when King Asa of Judah suffered from gangrene of the foot and “sought not the Lord . . . but the physicians.” The Jews were, however, exposed to Egyptian and Babylonian medicine and became the world’s first experts in military sanitation and hygiene. When Moses led his people out of Egypt in approximately 1200 BCE, one of his primary functions was as hygienist for the group, and much of Mosaic Law deals with problems of group sanitation.

Subsequently, Hebrew priests doubled as sanitary police with broad powers of quarantine over those with infectious illnesses, especially leprosy and venereal disease. They supervised rigid sanitation of water, food, and the utensils used in food preparation. Deuteronomy 23:9-14 describes policing a military camp, mandates that latrines be located away from the camp, and orders that each soldier carry a flat blade for covering the latrines after they were used. Although sanitary regulation of military camps returned with the Romans, neither the Greeks nor the Macedonians had any such rules. The Talmud, mostly in relation to rules of ritual slaughter, contains the only detailed gross pathological descriptions of diseased organs before Antonio Benivieni and Andreas Vesalius seventeen centuries later.

Greek Medicine

The history of ancient Greece stretches over two millennia, from the borderland of literacy around 1600 BCE to 529 AD, although that history is plagued with large gaps and major uncertainties of chronology and events, especially in the earlier centuries. The timeline has been roughly divided into seven epochs: the Mycenaean Age (~1600-1100 BCE), the Dark Ages (~1100-750 BCE), the Archaic Period (750-500 BCE), the Classical Period (500-323 BCE), the Hellenistic Period (323-146 BCE), the Roman Era (146 BCE-330 AD), and a final phase (330-529 AD).

What we know of the Mycenaean era is largely gleaned from the poetry attributed to Hesiod and Homer, both of whom (if they were in fact real historical figures) lived around 750-650 BCE, about 400 years after the events they described. Moreover, the poems were almost certainly passed down as oral tradition and were not committed to paper (or, more correctly, papyrus) for centuries, and were therefore subject to embellishment and error as they passed from generation to generation. The best known Homeric poems (The Iliad and The Odyssey) deal specifically with a war between the Mycenaeans and the city of Troy. Tradition and the 19th-century archaeology of Heinrich Schliemann provide fairly consistent evidence that the war actually occurred and that Troy was a fortified city in western Asia Minor in what is now Turkey.

The Mycenaean Age, with its city-states and culture, disappeared around 1100 BCE, and Greek history remained essentially unrecorded until 750 BCE. The cause of the Dark Age remains speculative: it may have resulted from invasion by the northern Dorians, or it may have been the result of climate change, widespread disease, or a combination of those factors. At any rate, Greek literacy, which had been based on Linear B, a script similar to that developed by the Minoans of Crete, vanished, and we are left with no written record of those centuries. Hesiod and Homer lifted the curtain, but they described only events in the distant past. For the next two centuries, Greek civilization gradually reestablished itself, with considerable influence from the Assyrians, the Egyptians, and the Phoenicians, from whom the Greeks borrowed a new alphabet.

The Classical Age began with the rise of Athens in the 5th century and its accumulation of allies in the Delian League. The city and the league amassed considerable wealth from trade in such things as textiles and pottery, and considerable power from the navy built to support that trade. The period was marked by two Persian invasions, both successfully resisted by combined Greek forces including those of Athens and Sparta. Following an ill-considered invasion of Sicily, a disastrous epidemic, the collapse of the Delian League, and a decisive military defeat, Athens was supplanted by Sparta, which remained dominant through much of the 4th century. Sparta was, in turn, displaced by Thebes and its allies (the Boeotian League), which was succeeded by the League of Corinth and, finally, by the conquest of all Greece by the Macedonians under Philip II, whose son Alexander the Great went on to conquer most of the Mediterranean world and Asia as far as the Indus River.

The Hellenistic Age extended from the death of Alexander in 323 BCE until the Roman conquest of Greece, which culminated in the Battle of Corinth (146 BCE). Rule from Rome continued until Constantine built Byzantium and moved his capital there in 330 AD. The Eastern Empire continued as a successor to ancient Greece until 529 AD; the closure of Plato's Athenian academy by the Christian emperor Justinian I in that year marks a convenient point at which to close the history of ancient Greece.

Before addressing medicine in classical Greece, it is necessary to understand something of the Greek approach to science, its successes, and its limitations. Greek science was qualitatively different from that of any preceding civilization. The first Greek advantage was their written language. Although literacy based on Linear B had been lost during the Dark Ages, adoption of the Phoenician alphabet gave the Greeks a simple script that could be broadly applied. Egyptian pictorial script and cuneiform pressed into clay were so complicated and cumbersome that they were of little use beyond recording administrative information such as laws and inventories. Written Greek was understood throughout the trading world and could effectively transfer knowledge. Empirical observations could be categorized and accumulated over time, as opposed to those of Egypt, which ossified, and those of the Assyrians, which were simply lost. The ability to transmit and categorize information made collecting it more valuable, and the more knowledge was collected and organized, the more it looked possible to explain what was happening in the world with generalized rules rather than resorting to explanations one event at a time and falling back on supernatural causation. The difference between science and faith is that the former explains nature using only forces from within nature while the latter falls back on intervention by a deity or other force external to nature itself.

The problem for Greek empiricists was the range of their observations. With limitations, the Greeks could measure time and distance, but they lacked all the tools we currently use to extend the five senses with which we observe. We have ways to enhance our vision (telescopes, microscopes, x-rays, and magnetic imaging), our hearing (amplifiers, oscilloscopes, and stethoscopes), and our touch (thermometers, strain gauges). They had none of those; as a result, experiments were virtually impossible, and the Greeks were forced to fall back on thought experiments. That worked very well in geometry and astronomy but less well in other areas, especially biology and medicine. One particular drawback to the Greek thought experiments was their susceptibility to incorrect analogies. Perhaps the most familiar such error is rooted in the desire to extend musical symmetry to physiology. They fully understood that vibrating strings could be divided into lengths measured in whole numbers with resulting notes that defined musical intervals and harmonies, and they wanted the rest of nature to behave the same way. There were four seasons and four cardinal directions, so there should be four elements (earth, air, fire, water). That led to four humors (blood, phlegm, yellow bile, black bile), four primary qualities (hot, cold, moist, dry), and four temperaments (choleric, melancholic, sanguine, and phlegmatic). Ideally, the body should be in balance, so one might be starved, purged, or bled to correct presumed imbalances. Unfortunately, those treatments persisted well into the twentieth century and accounted for an immeasurable amount of suffering and an untold number of deaths.

According to Aristotle, Greek science (the explanation of nature by nature's laws rather than supernatural intervention) began in Asia Minor and the islands off its coast (Ephesus, Cos, Samos). It is not surprising that this should be the case, given the proximity to Asia and the opportunity to draw from Assyrian mathematics. Aristotle specifically credits Thales of Miletus (ca. 624-546 BCE) with being the first to try to explain nature by its own rules. Thales used geometry to indirectly measure the height of pyramids and the distance of ships from shore. He observed movements of the sun and moon and is credited with accurately predicting a solar eclipse. He went astray with his thought experiments, explaining earthquakes by assuming the earth floated on a great sea whose waves made it shake. His Milesian successor Anaximander, reasoning that the water had to rest on something, revised the theory to have the earth floating freely in the air. Although both explanations seem foolish now, they represented a significant epistemological advance over explaining the quakes as the wrath of Poseidon "the earth shaker." Homer and Hesiod had explained each natural event as unique and caused by direct divine intervention. Thales and his successors categorized events and devised generalizable explanations; it is hard to overestimate the importance of that change.

The Milesian attitude toward science spilled over to medicine, although only with limitations. Extending non-magical thinking to medicine suffered a disadvantage similar to that in the other sciences—philosophy outstripped technical expertise. The catalogue of afflictions of the human body was not much different from our own (although the frequencies of ailments were clearly different), but the number of afflictions that could be successfully treated by the Greeks was quite small. As in other sciences, Greek physicians were limited to information they could capture by using their five senses—the appearance of skin color and facial expression; the feel of temperature and pulse; the smell of body odors, excretions, and emanations; the taste of urine and sweat; and the sounds of normal and labored respirations. Suffering from the same limitations as other ancient civilizations, the Greeks added little to the broad corpus of medical knowledge except the ability to categorize and organize groups of cases. Autopsies were banned, and Greek physiology was a litany of dangerous misconceptions, some of which would persist and do damage for two millennia.

Further inhibiting Greek medical development was the fact that they had to start from close to zero. The Chinese, the Indians, the Mesopotamians, and the Egyptians had all developed medical knowledge, albeit limited, but very little of it appears to have been transferred to Greece prior to the Classical Age. One possible exception was in the attitude toward medical practitioners. The Mesopotamians and Egyptians had both developed codes of behavior and standards of practice to which their physicians were held, with the secondary effect that those practitioners were accorded both respect and social status. Whether the Greeks imported those attitudes and strictures or developed them on their own cannot be said for certain, but they certainly shared them.

The epistemology of Greek medical thought is difficult to trace with accuracy since so little of what was an extensive literature has survived. Xenophon quotes Socrates as having claimed that the texts available to a classical-era Greek physician would fill a library. Unfortunately, we are left with a few references from the pre-classical poets, then nothing for four hundred years until the works attributed to Hippocrates, and again nothing for 400 more years. Within those limitations, we will do the best we can and divide the Greek approach to medicine into roughly three phases.

In pre-classical Greece (~1200-550 BCE), religion, superstition, and magic were surely predominant. The first references to medical care come from Homer and deal with the direct treatment of battlefield injuries. Disease in general and epidemics in particular were the result of Apollo’s anger and over 100 shrines to Asclepius the healer (similar to and likely modeled after Imhotep of Egypt) were built throughout the Greek world.

In the 5th century, the Pythagoreans, most notably Alcmaeon of Croton and Empedocles, attempted to extract medicine from superstition and make it part of the general epistemology of science. As noted earlier, the Pythagoreans were obsessed with symmetry; they viewed disease as an upset of natural harmony and surmised that the imbalance might be caused by external factors such as dietary deficiency. Unfortunately, that supposition led to generally harmful attempts to restore balance by removing things (food, water, blood, feces) thought to be in excess. Absent the evidence that might have been provided by dissection and direct observation, the Greek ideas of physiology were somewhere between quaint and ludicrous. Bone was a combination of earth, air, and water; the heart was the seat of consciousness; respiration took place through the pores of the body; the brain was there to cool the blood, and the leftovers from that process were expelled through the nose as mucus. Most unfortunate of all, the Greeks, including Aristotle, tended to ignore observations that did not agree with their theories.


A century later, in Ionia and the islands off the coast of modern day Turkey, a third phase emerged. Over 60 books based on direct clinical observation and categorization of those observations into reports of similar cases have been attributed to Hippocrates of Cos (~460-370 BCE), although it is almost certain that the corpus actually represents an accumulation of available texts and reports from a variety of authors. In On Ancient Medicine, Hippocrates vigorously supports an empirical basis for medical practice although for infections and other medical problems, the best he can do is emphasize diagnosis and prognosis since effective treatments were, for the most part, lacking.

The Greeks had a number of reasonably successful ways to manage trauma, although it must be noted that earlier eastern civilizations had already learned almost all of them. They knew to wash wounds and often used things like wine that have subsequently been shown to be mildly anti-bacterial. They knew how to suture and drilled holes in skulls to treat head injury. They made attempts to control bleeding (although they did not know how to effectively use tourniquets and could not suture arteries) and correlated pulse rate with the severity of injury. Dislocations, especially of the shoulder, were common wrestling injuries and the Greeks were surprisingly expert in treating them. They knew a shoulder could be put back in place by having someone put a foot in the axilla, grab the wrist, and pull—a technique the author learned and used frequently on a modern trauma service. Inventive, but less palatable, was the practice of putting a white hot cautery into the axilla to scar the tissue and prevent repeat dislocations.

The understanding and management of infection was perhaps the worst failing of Greek traumatology. Absent any concept of microbiology, Greek physicians fell back on Pythagorean speculation. Wounds were correctly described as swollen, hot, red, and painful, and the implication was that the tissues around the wound had become corrupt. The pus that ran from the wound was thought to be the result of blood being drawn to those tissues and decaying. Infection and wound breakdown were so common that the Greeks used the same word (helkos) for both wound and ulcer. They also believed that pus (removal of the decayed blood) was a normal and desirable part of healing, hence the doctrine of "laudable pus" that would pollute surgical thinking until well into the 19th century of our own era. The implication was that blood should be kept away from the wound, so tight dressings and bleeding were the preferred—and harmful—therapies.

In the Hellenistic era, Alexandria became the center of Greek medicine. A few autopsies were done and the understanding of human anatomy improved somewhat. Neurophysiology improved to the point that nerves, once lumped together with arteries and tendons, were differentiated into motor and sensory types. Egyptian knowledge augmented the Greek pharmacopeia. The library at Alexandria probably amassed a considerable reference collection, but we will never know its extent.

The first references to medicine we have from the Greeks are in Homer. The Iliad lists 147 wounds to the combatants, including 31 head injuries, of which 100 per cent were fatal. Overall mortality among those wounded was, according to Frölich, 77.6 per cent. Care on the battlefield was provided partly by physicians and partly by military leaders with medical training. The first Greek physicians of whom we have record were Asclepius' sons Machaon and Podalirius, who were both physicians and ship captains. Military physicians were highly valued; of Machaon and Podalirius, Homer said, "A wise physician skilled our wounds to heal is more than armies to the public weal."


Acute treatment of wounds in the Homeric battles was limited. Magic incantations over the wounds were the main way to stop bleeding although one assumes that hypotension eventually provided a degree of hemostasis. Homer refers to both dressing wounds and to leaving them open although the indications for choosing one over the other are not clear. The dead Patroclus was removed from the field by two companions “like a roof beam dragged by two mules.” Presumably living casualties were treated more gently. Once in a rear area, the wounded were taken to wooden huts near the shore where the wounds were washed with wine, covered with herbs, and bound. The wounded warrior might also be sedated with alcohol. Although Homer does not mention it, we know the Minoans used opiates extracted from poppies and it is not a far stretch to assume the Greeks did the same. Removing arrows was no small undertaking given that the heads were metal and had either two or three barbs that could tear muscle, vessels, or nerves during extraction. Beyond that, the word “toxic” is derived from the Greek “toxon” or bow, making it likely that at least some of the arrows were tipped with poison.

Hippocrates described a number of battlefield injuries, including peritonitis following a javelin injury to the abdomen (presumably with bowel perforation) and opisthotonos with inability to swallow (presumably tetanus) following a minor laceration. Besides dressings, the classical-era Greeks doused wounds with a variety of toxic metal compounds including lead, copper, and cadmia (a zinc ore). They tried the tourniquet to control bleeding but did not know how to ligate bleeding vessels. If the tourniquet were removed too soon, bleeding resumed, and if it were left on too long, the limb became gangrenous, so the Greeks abandoned the practice and returned to incantations. They did devise a tin drain used to treat empyema (abscess between the lung and its lining pleura) caused by arrow or spear wounds to the chest.

Roman Medicine

Civilizations in Egypt and Mesopotamia spanned almost three millennia and that of Greece almost a millennium; it took Rome about 200 years to conquer the Italian peninsula and another 300 to establish the empire as a commercial and administrative entity. The empire lasted another half millennium before its center moved east where it survived—albeit in a significantly different form—for another thousand years.

Legend has it that Rome was founded on April 21, 753 BCE, but there is good archaeological evidence that humans had inhabited the central part of the peninsula for something like 14,000 years; stone tools and weapons at least 10,000 years old have been found in the area. The oldest identified Roman settlements were fortified villages on the Palatine and Quirinal hills lying in a bend of the Tiber River. In addition, there is evidence of a number of settlements on the Latium plain where the Tiber empties into the Mediterranean. Copper ore in Etruria north of Rome and abundant iron ore from nearby Elba facilitated the transition from stone to metal, and iron tools were being used for farming in the area by 700 BCE.


After 650 BCE, the Etruscans from settlements north of Rome dominated the west central peninsula although, since there are no detailed records from the "time of seven kings," we are reliant on the soft sands of oral tradition. In 500 BCE, Rome ejected the Etruscan rulers and, by 394 BCE, had itself become dominant in west central Italy. By the 3rd century BCE, Rome controlled the entire peninsula as well as the nearby islands of Corsica, Sardinia, and Sicily. The Romans also took the colonies of Magna Graecia in the south and defeated the Gauls in the north.

Having conquered the peninsula, Rome set out to establish commercial and military dominance of the Mediterranean world. Those ambitions led to the Punic Wars that began in 264 BCE. The Romans suffered stunning defeats at the hands of Hannibal, but rebounded to utterly defeat the Carthaginians and destroy their home city in 146 BCE. They defeated the Greeks at Corinth the same year that Carthage was razed, and although expansion into Spain and the eastern Mediterranean was still underway, Rome no longer had a realistic rival in the western world.

In 49 BCE, Julius Caesar used his legions to take control of Rome from the republican government. After Caesar was assassinated in 44 BCE, there was a brief period of civil turmoil before Octavian established total control as Augustus Caesar in 31 BCE. The subsequent empire continued to rule the western world and grew to a population that, at its apex, may have exceeded 3 million.

Rome was weakened by the Great Plague at the end of the 2nd century AD, and a subsequent profound loss of population, as well as the exigencies of maintaining administrative and military control of far-flung subordinate states, gradually sapped the empire's strength. Emerging Christianity, beginning with the conversion of Emperor Constantine after 312 AD and culminating in its establishment as the state religion in 380 AD, strained increasingly fragile social and cultural controls. To make matters worse for the Italian capital, Constantine established a second Roman seat of government in a newly built eponymous city straddling the straits between the Mediterranean and Black Seas. Rome had, by then, lost the ability to protect itself, and the city was sacked by Germanic invaders in 410 AD, 455 AD, and 472 AD. The Western Empire finally ended in 476 AD, leaving Constantinople as the sole imperial capital.

For as long as it lasted, Roman civilization contributed remarkably little to scientific innovation. Almost all of Roman science falls into either encyclopedic collection of earlier bits of knowledge or rehash and refinement of Greek natural philosophy. In some sense, this can be explained by the fact that the Romans had no tools for scientific observation—no ways to extend the five senses—that had not been available to the Greeks, or indeed to the Egyptians and Mesopotamians before them. In addition, they seem to have had little interest in the experiments of the mind by which the Greeks tried to understand nature and reality. Romans cared about the practical effects of knowledge but hardly cared at all about the epistemology behind that knowledge. Plato and Aristotle emphasized that the philosopher who could explain nature was far superior to the mere craftsman who knew how to manipulate it. The Romans disagreed; they had substantially no interest in science unless it could be shown to have practical value. In the end, the Romans added nothing to mathematics and they polluted astronomy with astrology and medicine with superstition.


The most detailed Roman considerations of natural philosophy are found in the 22 volumes (only about two-thirds of which are extant) written by an expatriate Greek physician. Galen was born in 130 AD in the city of Pergamum on the Mediterranean coast of modern Turkey. He had two signal advantages as a young man: his father was well-to-do, and Pergamum was a major center of Greek medicine with one of the better libraries in the empire. Galen began his studies at the Asklepieion in Pergamum and completed them in Alexandria before moving to Rome, where he worked for 24 years and rose to become personal physician to Marcus Aurelius.

Galen's version of physiology was a linear derivative of that of the Pythagoreans and Aristotle with one signal addition: Galen, unlike every Greek before him, experimented. He served as surgeon to the gladiators and availed himself of the opportunity to directly observe tissues (vessels, tendons, nerves, muscles, and bones) and organs exposed by gaping wounds. He also dissected and performed experiments on a variety of animals including primates and made his best efforts to incorporate what he saw into the canon of Greek natural philosophy. The results, although naïve and even harmful in retrospect, were accepted as fact for over 1500 years.

The core of Galenic physiology was the assumption that the soul had three divisions, each with its own spirit or pneuma. The psychic pneuma originated in the brain after nourishment by blood from the rete mirabile, a complex of vessels at its base. The nourishment is transferred to the ventricles, where it is joined with air carried up through the nose and converted to the psychic pneuma that travels through the nerves that carry sensation and give rise to movement. The leftovers from the intraventricular reaction are expelled through the pituitary gland and exit the nose. Unlike Hippocrates, Galen believed that thinking resided in the brain, and he viewed the psychic pneuma as the divine part of the soul.

Vital spirit is produced in the heart with the help of air from the lungs and is carried through the body by the arteries. The vital spirit carries the heat of the body and is responsible for passions including lust and anger. The natural spirit arises from the liver, is nourished by food fermented in the gut, and is carried through the body by the veins.

Unlike Hippocrates, Galen did differentiate among nerves, vessels, and tendons. He cut arteries in living animals to prove that they contained blood and did the same with ureters to prove that the kidneys produced urine. In treating arterial wounds, he tied the vessels with silk imported from China, although, sadly, the technique of ligating vessels was subsequently lost. His efforts to stop bleeding with styptics composed of aloe, egg white, rabbit fur, and frankincense (also sadly) were not lost. Galen also tried direct pressure for hemostasis but never used a tourniquet for that purpose. For infected wounds, he argued that the cardinal signs of inflammation (swelling, heat, pain, and redness, which so invariably accompanied it that they came to define it) indicated an excess of blood decaying in the tissues around the wound, and he recommended removing blood from an adjacent vessel to restore "balance" to the area. Although he was accepted as the incontrovertible authority on treatment of wounds, Galen contributed nothing useful to wound therapy and perpetuated a great deal that was harmful. It would take 1500 years before Ambroise Paré began to correct the Galenic errors.


The psychic, vital, and natural pneumata could be altered and taken out of balance (here is the Pythagorean influence), particularly by bad air—hence "mal aria"—or by poisons, but also by pain; by psychic upset including fear, grief, anger, lack of sleep, or too much happiness; and by dietary deficiencies. This idea of humoral imbalance led to the most persistent and pernicious result of Galenic physiology: efforts to restore that balance. In theory, that restoration could be accomplished, much as inflammation was treated, by removing body fluids thought to be in excess, and the methods for that removal—bleeding, purging, sweating, and starving—were the basis of medical therapy and the cause of immeasurable suffering and death all the way into the 20th century.

What we know of Roman medical practice is actually quite limited. We have Galen's books, the encyclopedic work of the layman Celsus, and smatterings from other writers. Although not a physician, Celsus was a polymath and a master of the Latin language. We do not know the exact dates of his birth and death, but we do know that he lived during the reign of Tiberius (14-37 AD). He was, like Pliny, a knowledge accumulator who wrote about warfare, rhetoric, and agriculture, and his De Medicina is the only ancient medical text to survive intact. It was lost for 1400 years before being rediscovered in 1427, just in time for the printing press. The first edition appeared in 1478, and it became one of the most reproduced of the early printed books.

Medicine as a profession was held in low esteem in Rome, at least prior to the plague of 293 BCE. The rash of deaths in that episode convinced the Romans that medicine had a place, and they were faced with deciding whether to import physicians from Egypt, where they were known for their technical expertise, or from Greece, where natural philosophy and the attempt to understand how nature worked were predominant. Somewhat surprisingly, the Romans chose the latter. In 219 BCE, the Greek physician Archagathus opened a clinic in Rome subsidized by the city's government. He was principally a wound surgeon, and his overuse of the knife and cautery earned him the nickname carnifex, or butcher. His reputation for aggressiveness contributed to the fact that medicine remained an unpopular profession in Rome and that most practitioners in the city were Greek expatriates. Later practitioners, most notably Asclepiades (124-50 BCE), de-emphasized invasive therapies, toxic medicines, and even bleeding in favor of wine and music, which at least were not harmful. Over time, physicians' reputations improved somewhat, and Julius Caesar eventually granted practitioners the status and privileges of citizenship. Nonetheless, Pliny complained that doctors were generally overpaid and warned that they were likely to perform harmful experiments on their patients if not watched carefully. He advised that botanical medicines and garden herbs were cheaper, safer, and generally more effective than the ministrations of professional practitioners.

Although the Roman government subsidized Archagathus and helped build his clinic, there is no further record of a Roman medical facility until such facilities were mentioned in a decree of Antoninus Pius (138-161 AD) in connection with his concern over the number of tax-exempt physicians. These "archiatri" were primarily responsible for caring for the poor. They probably worked in hospitals similar to those built for the military, but their patients were drawn from the ranks of poor free men and slaves. The rich cared for themselves. Formal medical training and licensure also came late. The first licenses to practice medicine in Rome came under Septimius Severus (193-211 AD), and the first lecture halls used to teach medicine were built under Alexander Severus (222-235 AD).


Byzantine Medicine

The Byzantine Empire began when Diocletian divided the Roman Empire into Eastern and Western moieties in 285 AD and culminated with Constantine building a new "Roman" capital, between 324 and 330 AD, where the Black Sea empties into the Mediterranean and where Europe meets Asia. The city joined Egypt, Greece, Rome, China, and India as the center of one of the world's great civilizations and the capital of an empire that lasted until Constantinople fell to the Turks in 1453. Altogether, the Byzantine Empire lasted over twice as long as the Western Empire and, at various times, encompassed even more territory. The empire derived its language, literature, philosophy, science, and medicine from the Greeks and its laws and politics from Rome. The empire's religion and culture were an unruly mixture of Christian and barbarian influences, even though Theodosius I (379-395) made Christianity the empire's official religion in 380, and its government was an even more unruly combination of a rigid military dictatorship and a bureaucracy that deserved to be called "Byzantine." Even with the complexities of origin and function, the Byzantine Empire was the direct continuation of the Roman Empire into the Middle Ages and was the most powerful economic and military force in Europe for nearly a millennium.

The apex of that ascendancy came during the reign of Justinian I (527-65) who reconquered most of the old empire in the western Mediterranean and put Rome back under the empire for the next 200 years in spite of losing almost one-third of his population to the great plague (probably bubonic) of the mid sixth century. In the wake of that demographic and social catastrophe, the empire began to be pressed at its edges by the Slavs in the north who took central parts of the Balkan Peninsula and by the Arab Muslims in the east who took Syria, Palestine, Mesopotamia, and parts of North Africa, Egypt, and Armenia. Even after those conquests, the Umayyad Dynasty based in Damascus retained a strong Byzantine influence until 751 when the Abbasids conquered them and moved the capital to Baghdad and Persian influence became dominant. Arab interest in Greek learning reemerged in the ninth century with translations of Greek works into Arabic by the academic centers in Baghdad.

The empire was sapped by the seventh century wars against the Sassanid Persians followed by prolonged conflict with the Arabs the following century. The Byzantines regained some strength in the tenth century before losing most of Asia Minor to the Seljuk Turks after 1071. The situation was compounded in 1204 when soldiers of the Fourth Crusade sacked Constantinople. The empire was again divided into Latin and Greek sections. Although it was reconstituted in 1261, the empire never regained its old strength and the Byzantine-Ottoman Wars culminated in the city’s fall in 1453.

Given the empire’s longevity and its dominance of European culture and economy, it is puzzling that so little original thought, especially in science and medicine, came from Constantinople. Garrison referred to the Eastern Empire as “a cold storage plant for the remains of Greek science.” There is at least one general and one rather specific reason for the fact that the Byzantines created so little new scientific knowledge. The specific problem was a technical one. The limitation of Greek science lay in the five senses. Observation as the basis for scientific knowledge is constrained by the bounds of the senses. The eye can see only so small and so far, and the Greeks never developed instruments to extend the range of sensation—no microscope, no telescope, no amplifier, no accurate ways to measure small distances and weights. The easy observations had already been made, and, without instruments to extend the senses, little that was new remained to be observed.


The more general reason also involves observation and lies in the conflict between Greek rationality and Christian belief. Perhaps the greatest contribution of the later Greek natural philosophers was in requiring that phenomena in nature be explained by the laws of nature without reliance on “external” influences. That is a position that explicitly excludes the supernatural and excludes explanation by belief. The Byzantines reversed that order of precedence; belief came first, and if observation conflicted with faith, the observation was assumed to be erroneous.

The Alexandrian medical writers whose work has survived (Oribasius, Aetius of Amida, Alexander of Tralles, and Paul of Aegina) were encyclopedists whom Vivian Nutton has appropriately named “medical refrigerators of antiquity.” They unanimously revered Galen and regarded his work as a complete body of medical knowledge.

From a surgical point of view, the most interesting of the Byzantine writers was Paul of Aegina (607-690), whom DeBakey called the last of the classical Byzantine physicians. His Epitome comprises seven books, the sixth of which is entirely devoted to surgery. Paul’s work was translated into Arabic, and the sixth book, which described a total of 120 operations, was copied almost word for word by Albucasis and remained a standard reference for centuries. Nothing survives that indicates any advance in surgical practice during the 800 years between Paul’s Epitome and the end of the Empire.

Surgery was of somewhat greater interest to the Byzantines than to the Greeks and Romans. Operations were done as public demonstrations of a surgeon’s skill (although the gore and screams of unanesthetized patients were said to have made onlookers pass out), and Temkin says Alexandria was “full of little surgical butcher stalls.” That said, there is fair evidence that the Alexandrian surgeons washed both their instruments and their hands before surgery, a practice that certainly did not carry over to Western Europe. Byzantine surgeons also built artificial limbs, although they were little more than hooks for the upper extremities and pegs for the lower. Autopsies, which had been banned in Hellenistic medicine, were relatively common in the Eastern Empire and included dissection of condemned criminals and at least one living subject, the Christian prince of Scamari, who was anatomized for heresy.

A significant problem for Byzantine medicine mirrored that for Byzantine science in general—the competition between science and Christianity. In general, medicine was viewed as useful but distinctly inferior to theology as an area of study. Christianity presented itself as a healing religion reliant on spirituality and supernatural intervention in the face of disease. Secular healers, and particularly overtly pagan ones such as Oribasius, were viewed with disdain or outright animosity. In the early centuries of the Eastern Empire, the healing cult of Aesculapius was widespread, and temples to the Greek god of healing stretched from the British Isles to Egypt and Asia Minor. With the passage of time, Christian churches were built over many of the temples, and Byzantine medical practice tilted away from empiricism and toward the spiritual, as evidenced by Christian physicians’ (including Aetius and Alexander) fascination with Egyptian spells and incantations. Medicine became less scientific and more open to the idea of demonic possession as the cause of disease. Even the Asclepeion of Rome morphed into the church of San Bartolomeo. Eventually, every Asclepeion was either co-opted or destroyed.


A positive aspect of the involvement of Christianity in Byzantine medicine can be found in the development of hospitals in the empire. In the west, Roman hospitals had started as places where soldiers or slaves were treated and had evolved into religious institutions meant to care for people too poor to provide for themselves. Between 370 and 379, Bishop Basil of Caesarea built several inns around the outskirts of his town to house itinerants, especially ill ones, and staffed them with doctors and nurses. John Chrysostom, Bishop of Constantinople, did the same between 398 and 404 in the capital. By the time of Justinian (482-565), these nosokomeia were the main site of medical care in the empire. Perhaps the best example was the Basilias, which was built on the site of Caesarea’s Asclepeion and was fully staffed with physicians and caregivers. The hospital tradition continued to evolve into the later years of the empire. In 1136, Eirene, wife of Emperor John II Comnenos, funded a facility attached to the monastery of the Pantocrator that comprised 10 surgical beds, 12 beds for women, 20 beds for common diseases, and 8 beds for those with acute illnesses. Each section had two physicians, and the hospital included a complete outpatient clinic.

Islamic Medicine

The rise and spread of Islam, especially in its first two centuries, was unique in human history. The movement was started by Mohammed (570-632), a successful merchant in Mecca, a cultural and trading center in the Arabian Peninsula. Mohammed was a member of the Banu Hashim clan of the Quraysh tribe, the traditional guardians of the Ka’aba, a pagan shrine from which the tribe and the city derived significant revenue. The Quran (literally The Recitation) was said to have been verbally given to Mohammed by the angel Gabriel beginning in 609 and stretching over the next 23 years until his death. The new religion proclaimed by the prophet was not widely accepted in Mecca, and Mohammed and some of his followers went north to the village of Yathrib, later renamed al-Medina, or The City, in the hegira of 622. They would later return and conquer Mecca in 630, two years before Mohammed’s death.

Following Mohammed’s death, a series of four of his followers (the Rashidun, or rightly guided) served as caliphs. Between 632 and 661, these four oversaw the conquest of the Sassanid Persian Empire and of the Syrian and North African territories of the Eastern Roman Empire. This period ended in a civil war fought between 656 and 661 between the followers of Ali, the prophet’s cousin and son-in-law (the Shi’a), and the majority followers of Ayesha, Mohammed’s principal wife and the daughter of the first caliph, Abu Bakr (the Sunni). In 661, Muawiya, a Sunni and a member of the Umayya clan, was accepted as Caliph and established a capital at Damascus. The Umayyads went on to conquer the rest of the Maghreb, Iberia, the Sindh in the Indian subcontinent, Rhodes, Crete, Kabul, Bukhara, and Samarkand and to rule the Islamic world until 750.

In 750, a revolution in which the Shi’a were a majority of the rebels unseated the Umayyads and replaced them with one of their generals, Abu Muslim, who started the Abbasid dynasty. A branch of the defeated Umayyads migrated to Spain, where they started the Western Caliphate with a capital at Cordoba that would last until 1031. The Abbasids moved their capital to Baghdad. By the mid-10th century, the Abbasid caliphs had lost much of their power to their own generals, who exercised it as sultans. Through this time, much of the military was manned by Turkish mercenaries, and there was an increasing influx of Turkish immigrants to Baghdad. After the middle of the 11th century, the Seljuk Turks essentially ruled the Islamic empire, although the caliphs retained their titles until the city was sacked by the Mongols in 1258.

After the sack of Baghdad, the Islamic empire dissolved into a series of primarily military states, including the warrior Ottoman state in western Anatolia that conquered Constantinople in 1453 and Egypt and Syria in 1516-17, and the Mughal state that came to rule most of India by 1605.

The place of science in Islamic society during the golden age is also peculiar. Islam in general sees nature as an extension of God, and its study is to be respected for that reason. If, however, one defines science as knowledge that does not rely on supernatural explanations, there is a clear conflict with theology. The apex of the Islamic intellectual hierarchy was occupied by the ulama, or religious scholars, with the adabs, or cultural elite, just below. The falsafa, or Greek philosophic and scientific tradition, lay outside the religious mainstream but was, along with its practitioners, generally tolerated and was certainly better tolerated and more widely taught and practiced than the sciences in medieval Europe. The most widely respected of the Muslim scientists were those adept in multiple disciplines. The archetypal scientific polymath was probably Ibn Sina (Avicenna), who was a mathematician, a physicist, an astronomer, and a physician.

The détente between science and theology lapsed after the 14th century as the ulama began to see scientific determinism as a limitation on Allah’s omnipotence and heretical for that reason.

As with science in general, medicine in the Islamic golden age served both to preserve and transmit earlier knowledge and to provide advances of its own. The roots of Islamic medicine can be traced to 431, when Nestorius, the Archbishop of Constantinople, was expelled by Emperor Theodosius II for denying that Mary was the “Mother of God” and, by implication, the divinity of Jesus. Nestorius was exiled first to a monastery in Antioch and then to al-Khargah in Egypt, but a number of his followers relocated to Sassanid Persia, where their Church of the East took root and ultimately spread to Central Asia, India, and China. The Nestorians became prominent physicians, and many personal physicians to the Abbasid caliphs were from the sect.

When Nestorius’s followers were expelled from Constantinople, many took their books with them and relocated to Edessa, which had been a seat of learning since the 2nd century. In 489, Emperor Zeno expelled the Nestorians from Edessa, and they took their libraries and relocated to Gundeshapur in the southwestern part of modern-day Iran. Gundeshapur had housed a medical school since 271, and the city had been known for medical expertise since before the time of Alexander the Great. The Nestorians founded a new hospital and university at Gundeshapur. The city’s location afforded access to medical knowledge not just from the Greek works but also from Persia, India, and China, and the school instituted a symbiosis of clinical medicine, research, and teaching that persists in modern medical education. Gundeshapur again benefitted from imperial intolerance when, in 529, Justinian closed the Athenian Academy; the Academy’s educators and their books joined the Nestorians. When the Muslims conquered Persia in 642, they not only preserved the medical school and hospital at Gundeshapur, they copied it and spread the model throughout their empire.

The Islamic hospitals were qualitatively different from the monastic European institutions that served primarily as shelters for the sick and poor. In the Muslim world, hospitals were tellingly called bimaristans, or places of treatment and cure. This is partially attributable to Quranic mandates for care of the sick. The Caliph al-Mansour (714-775) built a hospital, university, and library in Baghdad modeled after those at Gundeshapur and set out to translate every available medical text from Syriac to Arabic. The archetype for modern hospitals was built in Baghdad by Caliph Haroun al-Rashid in 805 and was followed over the next two decades by 34 more in the city. Where there were hospitals, there were universities, and where there were universities there were libraries. The library at Cordoba was said to have 600,000 books, the library at Cairo over 1,000,000, and that at Tripoli 3,000,000.

Hospitals in the Muslim world were places where the sick were not just housed, they were treated. They had separate wards for men and women and wards dedicated to specialty care including psychiatry, ophthalmology, and infectious diseases. Surgery was hampered by religious restrictions on dissection of human bodies and surgical practitioners were of lower status than those who practiced medicine. Cauterization was the preferred method of hemostasis and therapeutic venesection was common. Medical therapy, on the other hand, had a higher status and physicians in Islamic hospitals (often Persians and Christians rather than Arabs) made liberal use of opium, cannabis, and hyoscyamus. One text included 1,400 therapeutic substances, over 300 of which came from India or China and were not known to the Greek pharmacological expert Dioscorides. In fact, the words alkali, alcohol, aldehyde, and elixir all come from the Arabic.

Both the Eastern and Western Caliphates gave rise to important medical practitioners, three of whom were dominant influences on both general and military medicine for centuries. The first of these, both in chronology and in influence, was Abu Bakr Mohammed ben Zakariah, better known as Rhazes (ca. 860-932), a name drawn from Rey, the city of his birth, near modern Tehran. Rhazes was a polymath who wrote as many as 230 books, 61 of which were medical. He spent his early adult years studying music and then poetry, philosophy, alchemy, astrology, and mathematics. Inspired by a visit to one of Baghdad’s hospitals, he took up medicine when he was just short of 30 years old. While still a junior physician, he was asked to help pick the site for a new hospital. Rhazes hung bits of raw meat around the city, reasoning that the area in which the meat rotted the slowest would be the least likely to foster corruption and therefore the most healthful. He subsequently became chief physician at that hospital but continued to practice in his home city as well and also to travel in Syria, Egypt, and even Iberia.

Rhazes, like other Islamic physicians, was principally an encyclopedist and preserver of ancient knowledge. His first language was Persian, but he wrote primarily in Arabic and concentrated on collecting and organizing Greek and Eastern medical knowledge. His Mansoury is a multi-volume compilation of surgery, toxicology, hygiene, and travel medicine. Its ninth book was the most frequently translated into Latin and was a standard reference on therapeutics in Europe well into the Renaissance. The Almansour dealt with fractures and surgery but also ventured into alchemy, astrology, herpetology, and discussion of angels. In that book, Rhazes also broached a number of subjects pertinent to warfare.


He recommended putting tents in a widely spaced arrangement at the tops of hills with north-facing openings in the summer and placing the camps in lower, sheltered areas and closer together in the winter. He associated animals with disease and recommended keeping them well away from soldiers’ living quarters. His El Hawi, or Continens Liber, probably a posthumous compilation of Rhazes’ notes, was an attempt to collect all medical knowledge known to the Greeks, the Hindus, and other Islamic physicians, supplemented by his own personal experience. The book contains the best early descriptions of and differentiation between smallpox and measles. Rhazes served as court physician to Abu Salih al-Mansur in Baghdad until his failing eyesight forced his retirement to his home city, where he died in 932.

Like Rhazes, Abu-Ali al-Husayn ibn Abdalah ibn-Sina, or Avicenna (980-1037), was Persian and actually studied for a time in Rey. He was born in a village near Bukhara on the Silk Road and was exposed to East Asian knowledge from early childhood. He claimed to have memorized the Quran by age 10, began studying medicine at age 16, and was treating the royal family while only 18. He went on to be personal physician to the caliph in Baghdad and to serve as advisor, or vizier, as well. Avicenna wrote over 450 treatises (of which 240 are extant), including the first comprehensive text on geology as well as works on philosophy, mathematics, physics, astronomy, astrology, theology, logic, and poetry. Only 40 of his surviving works deal specifically with medicine. Although his Canon was a standard European reference and was the primary text at the universities of Leuven and Montpellier, it was, like the Continens, an attempt to collect all known medical knowledge, and Avicenna attempted to align Persian and Indian knowledge with the theories of Galen and Aristotle. It does contain descriptions of infectious diseases and recommends quarantine and hygiene in their control.

Although he described reduction of spinal fractures and sterilization of wounds with wine, Avicenna generally considered medicine superior to surgery. Perhaps the fact that, like other Muslim physicians, he preferred the cautery to the knife, with its resulting poor healing, influenced that opinion. What he wrote pertaining to military medicine and surgery is almost entirely derived from Paul of Aegina, in spite of the fact that he spent over a decade with Abu Jafar Ala Addaula and accompanied him on several military campaigns. Although the armies of the Eastern Caliphate were often provided with camel-borne hospital tents and full sets of surgical instruments, techniques were rudimentary. Anesthesia and antisepsis were unknown. Amputations were typically done with a mallet and a cleaver, followed by hemostasis with hot oil.

At the other end of the Islamic world, Abu al-Qasim Khalaf ibn al-Abbas Al-Zahrawi (Albucasis) was born near Cordoba in the Western Caliphate in about 936. Unlike Rhazes and Avicenna, Albucasis was of Arab descent and probably from the Medinan tribe of Al-Ansar. Also unlike the two eastern physicians, he concentrated his efforts in surgery. He has been called the father of modern surgery, and the more than 200 instruments he designed and procedures he described remained in common use in Europe almost up to modern times. He is also credited with using absorbable catgut for sutures to be left in the body.

Albucasis spent most of his life in Cordoba and served as personal physician to the Caliph Al-Hakam II. His most famous work was the thirty-chapter Kitab al-Tasrif, written around 1000 AD and translated into Latin by Gerard of Cremona during the latter’s 12th-century study in Andalusian libraries. The book is quoted more than 200 times by the French surgeon Guy de Chauliac. Although, like other Muslim physicians, he preferred the cautery to the knife, he did describe ligating blood vessels to control hemorrhage. Like Rhazes and Avicenna, Albucasis spent time as a military surgeon, and he wrote about thoracic and abdominal wounds and extraction of arrows. Like Avicenna, he drew most of what he wrote from Paul of Aegina.

Medieval Medicine

The kindest thing one can say about medicine in the High Middle Ages is that progress was slow. Garrison, who may be overly harsh, says the only advances of note between 476 and 1453 were the development of European universities and medical schools, the evolution of laws and regulations governing medical practice and licensure, and the establishment of hospitals that provided medical and nursing care and were more than domiciles for the poor and itinerant. It is of note that the last two of those three were direct results of the Crusader experience in the Muslim east.

The origins of the university at Salerno are unclear, but it is not surprising that the prototypical European lay medical school should have arisen there. Salerno was a coastal town near Naples that had been used as a health spa since Roman times. Until the 10th century, it had been administratively part of the Eastern Empire and relatively free of Latin influence. The school was started in the 10th century and flourished after the Norman conquest of the area in 1076. Constantinus Africanus (c. 1020-1087) fled Carthage (possibly after being accused of being a magician) and brought Arabic medicine with its strong Hellenistic basis to Salerno. Knowledge carried down from the ancient Greeks was supplemented by organized case studies and by animal dissections—primarily pigs—to augment the study of anatomy and surgery. Over 100 books by as many as 40 different authors came from Salerno, including the Tractatus de Aegritudinum Curatione, which served as the standard European text in internal medicine well into the 12th century, and the Regimen Salernitanum, a poetic collection of dietary and sanitary recommendations that was among the first medical books to reach print and that went through 240 editions. The Antidotarium by Nicolaeus Salernitanus was the first printed formulary and brought a number of complicated Arabic pharmaceuticals to Europe, including the anesthetic sponge. Students at Salerno were trained in both medicine and surgery, and the Norman King Roger II introduced a formal degree in medicine. His grandson Frederick II established medical licensure by examination in his Sicilian kingdom. The curriculum at the university was based largely on the teachings of Constantinus Africanus, who drew on his own translations from Arabic medical texts.

Salerno was sacked by Henry VI in 1194, and the university went into gradual decline; it was superseded by schools at Padua and Bologna after 1224 and then by independent medical universities at Paris, Palermo, Naples, and Montpellier.

The medical school at Bologna was started by Hugh of Lucca after his return from the Crusades and emerged as a center for surgical training. Hugh denied the “Galenic” teaching that pus was necessary for proper wound healing but was unsuccessful in promoting that idea to the rest of Europe. The first real European attempts to study human anatomy by dissection were made by Mundinus of Bologna, who completed his Anothomia in 1316. The text was one of the earliest to be printed and, even though it perpetuated many of Galen’s errors, it remained popular well into the 16th century.


A combination of the emergence of medical universities and the printing press after 1455 made it possible for the first time to see professional lineages as professors from one generation trained those who followed. At Salerno, Roger of Parma wrote a surgical instruction manual (Practica) in about 1170 that was re-issued by his student Roland of Palermo in about 1230. Hugh of Lucca followed them and taught Theodoric (1205-1296), who later became Bishop of Cervia and wrote Cyrurgia in 1266. They all took the unpopular and anti-Galenist stance that wounds should be kept clean and dry and that waiting for pus to develop and secondarily closing wounds after the infection cleared was ineffective and harmful. Theodoric was unusual in that he practiced surgery while also serving as a priest and officer of the Church. He also amassed a considerable personal fortune from his practice, which he left to the Church when he died.

At the same time Theodoric was at Bologna, another prominent surgeon, William of Saliceto (Guglielmo Salicetti, 1210-1277), was there as well. William, who learned wound treatment as a military surgeon, wrote Chirurgia in 1275, probably as a text to teach his son medicine and surgery. His book included case histories and has been credited as the first true surgical anatomy text. He provided a vivid description of the crackling subcutaneous crepitus that results when a projectile punctures the lung and allows air to accumulate beneath the skin. He also had the courage to attempt suturing of perforated intestines and torn nerves, although, in the absence of antibiotics, his success rate with the former must have been either zero or very close to it. Successfully suturing nerves is still a difficult problem. Like the other members of the Bologna faculty, William was skeptical about “laudable pus” and preferred early closure of clean wounds and avoidance of the cautery for hemostasis. Although he practiced two centuries before the European syphilis epidemic, William had the foresight to recommend use of prophylactics by all soldiers.

William of Saliceto’s most famous student was Guido Lanfranchi—also known as Guido Lanfranc—(1250-1306) who began his career in Milan but was exiled and moved to Lyons in 1290 and then to Paris in 1295 as a result of the civil conflict between the Guelphs and the Ghibellines.

Lanfranc would have preferred a faculty position at the University of Paris, but that was impossible since he was married and all faculty members at that university were clerics and assumed to be celibate. As an alternative, he joined the faculty at the College de St. Côme where, in 1296, he wrote his Chirurgica Magna, an encyclopedic compilation of the Arab surgical literature and what he had learned from William of Saliceto. Lanfranc opposed the division of surgery and medicine, taking the point of view that surgeons were simply physicians who used their hands. Like his mentor, he attempted to suture nerves, and he taught his students the old Roman technique of suturing arteries to control hemorrhage. He wrote extensively on treatment of head trauma and advised against routine trephination for concussive injuries. In addition to bringing Italian surgical knowledge to France and founding the first academic surgical training program in that country, Lanfranc served as personal physician to Philip the Fair. Lanfranc trained the Flemish surgeon Jean Yperman (1295-1351), who took his techniques for arterial ligation to the Low Countries, where he wrote Chirurgie and became the predominant surgical authority through the 14th century. And there the Bolognese surgical genealogy ends.


Henri de Mondeville, who succeeded Lanfranc as physician to the French king, also trained under Theodoric in Bologna and, like others from that school, derided the doctrine of laudable pus, preferring to wash wounds clean of foreign material and to avoid contamination with salves and ointments. His Cyrurgie (c. 1312) was the first surgical text in French and contributed to making the French leaders in European surgery for a time. Unfortunately, his dry treatment of wounds lost out to the teaching of Guy de Chauliac (c. 1300-1368).

Garrison called de Chauliac the “most eminent authority on surgery in the 14th and 15th centuries.” He was certainly one of the era’s most educated practitioners, having studied at Paris, Montpellier, Toulouse, and Bologna, where he learned anatomy from Niccolo Bertuccio, who had been trained by Mundino de Luzzi. He took holy orders and served as personal physician to Popes Clement VI, Innocent VI, and Urban V. He was personally courageous and, unlike many of his colleagues, stayed in Avignon during the plague epidemics of 1348 and 1360 and survived the disease himself. Unfortunately, he did not adopt the Bolognese methods of wound management. De Chauliac recommended a plethora of ointments, salves, and plasters and was convinced that suppuration was a necessary part of wound healing. As regards wound therapy, Garrison said he “threw back the progress of surgery some six centuries.” His La Grande Chirurgie dominated European surgical thinking for two centuries until it was finally supplanted by the work of Ambroise Paré.

One other 14th century European surgeon is worthy of note. John of Arderne (?1307-1392) was said to have been admitted to the London Guild of Surgeons in 1370, although either that date or his date of birth is most likely incorrect. At any rate, he served as a surgeon in the armies of the Duke of Lancaster and of John of Gaunt in the Hundred Years War and was present at the Siege of Algeciras (1342-1344), where he likely encountered some of the earliest wounds inflicted by gunpowder weapons. He irrigated arrow wounds with hemlock, opium, and henbane and, in his later civilian practice, advocated using enough opium to put patients to sleep during surgery.

By the 14th century a clear hierarchy of medical practice had spread through Europe. Beginning with Avicenna in the 11th century, the cerebral activities of diagnosis and prognosis were accorded a higher value than anything that required physical interaction with a patient. Garrison correctly called the separation of surgery from medicine “the fundamental error of medieval medical science.” Much of medicine in the High Middle Ages was practiced by clerics, and it has been said that the separation of surgery was a result of an 1163 Catholic doctrine from the Council of Tours (Ecclesia abhorret a sanguine) forbidding ordained members of the Church from any activity requiring shedding of blood. DeBakey has pointed out that this phrase is not in the records of that council, although prohibitions against clerics performing surgery did emanate from the Lateran Council (1215), the Council of Nimes (1284), the Council of Würzburg (1298), and the Council of Bayeux (1300). Regardless, by 1300, surgery had been effectively taken out of the hands of clerics and was largely done by poorly trained and most often unlicensed practitioners who acted as barbers, phlebotomists, and wound surgeons. The separation extended to the laity as schools like that at Salerno began issuing university degrees in medicine (and not surgery) and licensure spread north from Sicily.


Unquestionably, surgery was made less attractive as a profession by the proliferation of draconian malpractice penalties tied to surgical outcome. Not only financial penalties but also physical ones such as amputation and even death could result from a bad surgical outcome, especially if the patient was from a higher social order. In the absence of reliable anesthesia, antisepsis, and accurate control of hemorrhage, bad outcomes were more common than good ones and being a surgeon to the aristocracy carried an inordinate risk.

Surgery was not even taught in the medical school at the University of Paris. The first hint of amelioration came at St. Côme, where courses were organized in the first years of the 13th century with the intention of separating surgeons from barbers. Jean Pitard, who preceded Lanfranc as surgeon to Philip the Fair, founded the school after returning from the Crusades, where he had no doubt seen the need for qualified wound surgeons. As a result, in Paris clerical surgeons “of the long robe” supervised lay barber-surgeons “of the short robe,” although the latter continued to push for the right to practice independently and, in 1372, Charles V expanded their privilege to include treatment of wounds.

In England, master surgeons got a separate guild in 1368 and combined with the physicians in 1421. Barbers, who were separately chartered as a guild by Edward IV in 1462, were permitted to let blood and treat wounds but practiced almost entirely on commoners. In Germany, the tradition of field barbers (feldshers) treating wounds persisted into the 19th century. Relegating treatment of wounds to the unlicensed and untrained was obviously detrimental to the development of military medicine in the High Middle Ages.

The late Middle Ages also witnessed some of the most disastrous pandemics in history and each was spread by soldiers moving between populations that had previously not been in close contact with one another. The first was the Black Death (bubonic plague) in the 14th century, the second the syphilis epidemic in the late 15th and early 16th century in Western Europe, and the third the sequential demographic collapse from smallpox, measles, and influenza among natives of Central and South America that accompanied the Spanish invasions.

Bubonic plague is a systemic infection caused by the gram-negative rod Yersinia pestis (formerly Pasteurella pestis). The bacteria can be found in over 1,500 species of flea, although the oriental rat flea (Xenopsylla cheopis) is the classic vector. The fleas typically parasitize ground-dwelling rodents. When their gut fills with clotted blood from feeding on the host, the fleas regurgitate a mixture of blood and bacteria during their next feeding. The rodent hosts usually have relative but not complete immunity to the bacteria and survive carrying the organisms in their blood, whence they can be ingested and spread by future flea bites. When the rodents become sick and die, the fleas seek alternative hosts, and that is when humans usually become infected, although the fleas are capable of harboring plague bacteria for up to a year. Although most human cases come from flea bites, it is also possible to contract plague either directly from a rodent host or from another human, usually from airborne infected droplets.

Once in a human, Yersinia multiplies quickly and typically spreads through lymphatic channels and accumulates in lymph nodes. The nodes become edematous, swell, hemorrhage, and necrose, resulting in painful, black subcutaneous masses—buboes. The bacteria can also multiply in the liver and spleen and can cause microscopic blood clots in the kidneys, lungs, adrenals, and skin. The combination of consumption of available clotting factors and decreased production of those factors leads to widespread bleeding, particularly under the skin. The subcutaneous hemorrhage and cyanosis from lung damage cause the dark skin color that gives the Black Death its name.

Symptoms of pneumonic plague—fever, headache, malaise, and diffuse aches—start within two days of exposure and are quickly followed by coughing up blood, cyanosis, coma, and death. Mortality from the bubonic form of the disease ranges from 30 to 90 percent, and in the pneumonic (airborne) form mortality is essentially 100 percent. A third form of the disease—septicemic plague—is even more virulent, with the onset of symptoms within hours of exposure and death in a day or so, before buboes have time to form.

Plague came to Europe—probably brought by soldiers returning from Ethiopia and Egypt—during the reign of Justinian in 542 and killed as much as one-third of the entire population of the Byzantine Empire before spontaneously dying out. Since plague vanished from Europe after 767, it is likely that a reservoir of plague was never established among European rodents.

Plague remained absent from Europe for over 500 years. It was, however, endemic in Asia, especially in the Himalayan borderland between India, Burma, and China. William McNeill hypothesizes that the disease did not cross to humans because residents in areas where plague was widespread in local rodents avoided contact with the mammals. Regardless of the explanation, epidemic plague was not a problem in Asia until the 12th century, and Asian rodents and their fleas were separated from Europe by the grass ocean of the steppes.

That epidemiological border was breached when Genghis Khan’s horsemen essentially erased the distance between Asia and Europe. Although occasional caravans had slowly traversed the steppes for millennia, Mongol horsemen could travel 100 miles a day, regularly carrying grain to feed their animals and the rats that shared that grain. When the Mongols invaded western China and Burma in 1252-53, the disease broke out into the general population and recurred regularly for the next two centuries. The population of China, which had been approximately 123 million before 1200, sank to 90 million by 1393.

Europe’s fate was sealed by a street brawl between Moslems and Italian traders outside the Black Sea trading port of Caffa. The Italians retreated behind the city walls and the Moslems enlisted the aid of Janibeg, a Kipchak khan. Janibeg raised an army to besiege the Genoese in Caffa, and, because the city could be resupplied from the sea, the siege dragged on for three years until plague broke out among the khan’s troops. In the closest contemporary account of the episode, Gabriele de Mussis of Piacenza (who was not actually present) wrote, “All medical advice and attention was useless; the Tartars died as soon as signs of disease appeared on their bodies; swellings in the armpit or groin caused by coagulating humors, followed by a putrid fever.” He went on to say that the dying soldiers lost interest in the siege, loaded their dead onto catapults, and fired the corpses over the walls into the city “in the hope that the intolerable stench would kill everyone inside.” Because infected corpses are not likely to transmit plague, the story has been called apocryphal, but given the fact that rotting bodies were regularly used as weapons of war during the Crusades, the account is probably true even though rats crossing into the city were much more likely to have been the cause of the plague that broke out among the Genoese.

Regardless of whether it came from catapulted bodies, plague did break out among the Italians, and they took ship and fled to the Sicilian city of Messina. There, de Mussis picked up the story and described whole families dying almost as soon as the sailors returned to their homes. “When the sailors reached those places (Genoa and Verona) and mixed with the people there, it was as if they had brought evil spirits with them; every city, every settlement, every place was poisoned by the contagious pestilence, and their inhabitants, both men and women, died suddenly.” By description, this was probably pneumonic spread.

Besides the Black Sea route, plague may well have come from South Asia by sea to the Persian Gulf and then across the Arabian peninsula to the eastern Mediterranean and from the same area into the Red Sea and overland to Gaza and the Nile delta.

Plague rapidly spread throughout Europe and raged from 1346 to 1350, again in the 1360s, and then in the 1370s, after which time approximately 40 percent of the population had succumbed. The population reached a nadir between 1440 and 1480 and took six generations to recover. Plague died out in northern Europe after 1665 but remained endemic in the Middle East until the end of the 18th century. Depopulation from plague in the 1300s may have set the stage for the Ottoman conquest of Constantinople and the final demise of the Eastern Empire.

Medicine in the 14th century was hopelessly inadequate in treating plague, and the lack of faith in traditional methods set the stage for Renaissance reforms and the rise of the medical universities. However, bloodletting and lancing buboes were favored methods of treatment, and both required either surgeons or barber-surgeons. Their increased importance encouraged the publication of surgical texts and conferred the increased status that led to the formation of surgical guilds and formal surgical training in Europe in the latter part of the 14th century.

Beginning with the Plague of Justinian and continuing to the Black Death, plague has typically been a disease of military movement. In fact, the last plague epidemic occurred in the 1970s during the Vietnam War. Perhaps not unexpectedly, plague has also been a prime candidate for bio-warfare. The Japanese dropped plague-infested rice over China in World War II, and Yersinia has been a component of most 20th century biological warfare programs.

The second great plague that started in the 15th century and continued well into the 16th was syphilis. Rumors that the disease had been contracted by Columbus’s crew and returned to Europe with them began at the onset of the epidemic, but more recently the suggestion has been put forward that syphilis is only an evolution of yaws, a common African disease caused by a variant of the same bacterial species. It has also been hypothesized that many of the cases described as syphilis (by one of its several names) were actually leprosy, although forensic study of skeletal remains has not supported that idea. The most recent research has swung the pendulum back toward the Columbian hypothesis.


One source of uncertainty has been the fact that the disease described by 15th and 16th century authors bears little clinical resemblance to the modern disease. Syphilis is a sexually transmitted disease caused by the spirochete Treponema pallidum. In its modern form it presents as genital sores and, if untreated, a skin rash. At this stage appropriate antibiotics readily cure the disease. If not treated, syphilis can, several years later, cause diffuse tumors (gummas), cardiovascular damage (syphilitic aortitis), nerve damage (tabes dorsalis), and dementia (general paresis), although these have become quite rare since antibiotics have been available.

Syphilis was graphically described by European authors shortly after it appeared, perhaps best by the Italian physician and poet Girolamo Fracastoro in his poem Syphilis sive morbus gallicus (Syphilis or the French Disease). Although Fracastoro coined the term syphilis in 1530, the disease went by a variety of other names during the epidemic.

The first cases were in Italy and were coincident with the French sack of Rome and then Naples under Charles VIII in 1494 and 1495, both of which were accompanied by sufficient rape to start a sexually transmitted epidemic. The French king’s army was composed of a variety of mercenaries, including Flemish, Gascons, Swiss, Italians, and Spanish, some of whom were rumored to have been in Columbus’s crew.

The French troops called it the Neapolitan Disease, the Italians called it the French Disease, and, as it marched through Europe and the Near East with various armies, it was known as the Spanish Disease, the Polish Disease, the Christian Disease, the Frank Disease, and—in Tahiti—the British Disease, but most often as the Great Pox.

The last name is the most telling. Although modern secondary syphilis can cause a skin rash and even diffuse pustules, the rash typically resolves even if not treated and leaves little if any residual. That was not the case in the 15th century. In that epidemic, pustules appeared around the genitalia and spread over the trunk and especially the face, where they caused features to melt away. Charles VIII got the disease: “A violent, hideous and abominable sickness by which he was harrowed; and several of his number who returned to France, were most painfully affected by it; and since no one had heard of this awful pestilence before their return, it was called the Neapolitan sickness.” And, whereas our version of secondary syphilis typically resolves, in this epidemic people died after a few weeks to months of suffering. Records are incomplete, but the epidemic probably caused about 5 million deaths in Europe. Until effective antibiotics were developed in the 20th century, the only treatment was mercury, administered either topically or systemically. The treatment was minimally effective if effective at all and was quite toxic.

Although syphilis had been present in Central and South America for centuries, it apparently lacked the virulence that was typical in Europe. The reason for that is unclear although the complete lack of immunity in the European population has been cited as a possible reason.

The Native Americans may have donated syphilis to the Europeans, but the conquistadors more than returned the favor. When Hernán Cortés came to Mexico with 600 soldiers in 1519, the Aztec empire comprised approximately 20 million well-organized, militaristic inhabitants. In a running battle from the coast to the capital at Tenochtitlan and back, he lost two-thirds of his force. When he went back to central Mexico in 1520, Cortés had a new ally. The Spanish governor in Hispaniola had sent Pánfilo de Narváez to rein him in, and Narváez had brought a slave infected with smallpox. The first wave of the disease killed half of the Aztecs and was followed by epidemics of measles (1530-31), typhus (1546), and influenza (1558-59). By 1618, the native population of Central America had dwindled to a mere 1.6 million, and Aztec wealth, power, and culture were at an end.

A similar and even more striking episode occurred when Francisco Pizarro came to Peru in 1531 with only 168 men. The Inca Empire also had 20-30 million inhabitants but smallpox preceded Pizarro. It had come in 1526 and killed the emperor and his chosen successor and precipitated not only a demographic collapse from disease but also a civil war. A conquest that should have been impossible was relatively easy.

North American native populations were not so well organized and were not well documented, but there are a number of indications that disease was as lethal there as in Central and South America. When Hernando de Soto explored what is now the American southeast in 1540, he found a number of abandoned Indian towns. Recent studies suggest that the mound-building civilizations were still very well populated when Columbus arrived but, by the time Europeans explored North America, they had vanished. Massachusetts Pilgrims took it as a sign from God when local natives died of smallpox and left cultivated fields for their exploitation. Altogether, it is likely that 95 percent of the Native American populations died of disease within two centuries of Columbus’s arrival, surely the most effective if inadvertent use of bio-warfare in human history.

Seventeenth Century Military Medicine

In the seventeenth century, European scientists finally gave their own studies greater weight than centuries-old information culled from ancient Greek and Roman texts. Isaac Newton’s successful use of experiments and observations to understand the relationships between celestial and terrestrial objects precipitated a headlong and often unsuccessful rush to apply those same tools to living creatures. Early attempts to apply the new tools of mathematics, physics, microscopy, and Vesalian anatomy to clinical medicine, especially in the treatment of diseases and injuries of war, proved too often futile and occasionally counterproductive.

Andreas Vesalius and his Italian colleagues had aroused great interest in dissection but, although anatomy was widely taught across the continent, it had scarcely any beneficial effect in clinical surgery. Military surgeons suffered from a number of disabilities, perhaps the greatest of which was their organizational separation from physicians who, by virtue of their university education and presumed classical knowledge, remained a caste above the surgeons and barbers who merely operated. The physicians honored only academic knowledge and disdained learning from experience. The surgeons respected only that which they had personally seen or learned from others with direct experience. The barbers often ignored all education in favor of hucksterism and quackery. Surgeons and barbers, although they usually belonged to the same guild, were professional rivals, the former most often caring for nobles and officers while the common soldier was left in the uncertain hands of the latter. Both, however, were under the jurisdiction of the physicians.

The high death rate from battlefield wounds led to a variety of creative suggestions for their treatment. Among the more curious was Sir Kenelm Digby’s “sympathetic powder,” compounded of such fanciful ingredients as moss from a dead man’s skull and powdered mummies, which remained in common use through much of the century. The idea that wounds could be “cured at a distance” led to coating blades that had inflicted injury with salve in the hope that they would heal as well as they had injured. The “transplantation cure” involved dipping a stick in the pus or blood from a wound and driving it into a tree. It is likely that none of these was as harmful as cauterizing a wound with hot oil or red-hot iron in an attempt to reverse gunpowder’s presumed toxicity. Amputation, despite its low survival rate, remained the most common battlefield operation and was used regularly for wounds that entered a joint or for open fractures.

The seventeenth century saw a number of scientific advances that should have formed the basis for significant improvement in medical care but which, in general, did not. William Harvey’s 1616 proof (not published until 1628) that blood circulated rather than simply being pulsed back and forth by the pumping heart formed the basis of clinical physiology and was the first application of Newtonian physics to living humans. René Descartes described the physiology of reflexes, Robert Boyle’s gas laws made it possible to understand respiration, and the astronomers’ discoveries in optics led to an understanding of human vision. Athanasius Kircher, Robert Hooke, Antonie van Leeuwenhoek, and Marcello Malpighi all used the microscope to see animal and human anatomy in unimagined detail.

The artificial division between physicians and surgeons and the former’s lack of respect for empirical knowledge led to the persistence of a plethora of useless and often harmful practices. The typical field medical chest might have weighed over 300 pounds and was burdened with such things as mummy dust, scorpion oil, plaster of frog spawn, and dog’s fat. Only quinine-containing Peruvian bark, mercury (which was of some use in syphilis), and opium were of consistent use. Still, the best of the English internists, William Harvey, Thomas Sydenham, and Thomas Willis, all served as military physicians. One bright spot was Tobias Cober’s 1606 recognition of the relationship between body lice and typhus in army camps.

Nineteenth Century


Anesthesiology—the physiologic and pharmacologic management of patients during surgery. The use of drugs to relieve the pain and anxiety of surgery dates to prehistory; opium poppy seeds have been found in the ruins of Swiss lakeside villages occupied in the third millennium B.C.E. The Egyptians were using opium extracted from poppy seeds by 1591 B.C.E. In the first century A.D., the Roman physician Dioscorides used mandragora bark to induce sleep. To this, his contemporary Celsus recommended adding hyoscyamus seed and extract of opium poppies to relieve pain. These drugs continued as standards recommended by Avicenna in the eleventh century and into the thirteenth century, when Ugo de Lucca combined them with mulberry, flax, hemlock, lapathum, ivy, and lettuce seed and dried them in a sponge that could be moistened when needed and either inhaled or ingested.

Valerius Cordus synthesized ether (which he called sweet oil of vitriol) in 1540, although it remained a recreational drug, particularly popular among medical students, for the next three centuries. Andreas Vesalius introduced endotracheal ventilation when he placed a hollow reed in a pig’s trachea and blew into it to keep the animal’s lungs inflated while he opened the chest and examined the beating heart. In 1667, Robert Hooke used tracheal insufflation of air to keep a dog with an open chest alive for an hour. Joseph Priestley discovered nitrous oxide in 1772 and, at his recommendation, Humphry Davy experimented with it on animals and humans, including himself, and suggested it could be used in surgery, but it remained for the Hartford dentist Horace Wells to actually use it for that purpose in 1845.

In 1659, Christopher Wren suggested to Robert Boyle that he might introduce opium directly into the blood stream through a hollow quill. When Boyle did this to a dog, the animal became instantly comatose. Johann Major of Kiel repeated the experiment on a man in 1667, but the idea then went dormant until 1874 when Pierre Oré used intravenous chloral hydrate to anesthetize a patient. Intravenous anesthesia became a standard with the invention of barbiturates in the 1920s. Baron Larrey and his contemporaries used alcohol liberally as a sedative. In 1859, Albert Niemann isolated cocaine from the Peruvian coca leaf and, in 1884, Carl Koller began using the drug as a local anesthetic.

Alexander Munro secundus recommended phlebotomy with removal of blood until the patient became flaccid and unconscious as an aid to reducing dislocated joints and in delivering babies, a practice which persisted into the early 1800s.

True surgical anesthesia began in 1842 when Georgia physician Crawford W. Long, who had participated in “ether frolics” as a medical student, tried it on occasional patients. He did not, however, publish his findings. On October 16, 1846, Boston dentist William T.G. Morton, who had successfully used the drug in several extractions, convinced John Collins Warren, the direct descendant of the American Revolutionary War military surgeons John and Joseph Warren, to anesthetize a patient during a public operation done at Massachusetts General Hospital. The demonstration was a stunning success, and ether anesthesia became a surgical standard in America and Europe within months. James Simpson of Edinburgh was dissatisfied with ether’s side effects and replaced it with chloroform, which became quite popular in England. It was especially prevalent in obstetrics after Queen Victoria delivered her eighth child with its assistance, over the objections of Calvinist church officials who contended that relieving pain during childbirth violated the will of God.

The first wartime use of anesthesia for surgery was by American surgeon Edward H. Barton in 1847 during the Mexican-American War.

Anesthesia during surgery remained a delicate balance between a patient too lightly anesthetized and one dying from respiratory arrest. In 1903, Harvey Cushing and George Crile brought the Riva-Rocci blood pressure monitoring device to Boston from Europe and recommended its routine use during surgery, but the Harvard faculty, after careful consideration, decided that routine monitoring during anesthesia was unnecessary. In 1909, Meltzer and Auer placed a tube in a surgical patient’s trachea and used positive pressure ventilation to maintain oxygen supply during an operation.

Antisepsis—the use of chemicals to retard the growth of bacteria and decrease the risk of wound infection. Antisepsis as a practice is as old as recorded medical history, with about one-third of all prescriptions in the 1550 B.C.E. Ebers Papyrus containing honey expressly for that purpose. The Good Samaritan poured oil and wine into the wounds of the man injured by thieves on the road to Jericho, and Homeric Greeks emphasized washing wounds with water or wine after removing enemy arrows. Besides those substances, ancient physicians irrigated wounds with turpentine, pitch, tar, and olive oil in an effort to forestall suppuration. The Roman medical writer Celsus recommended myrrh and frankincense dissolved in alcohol and noted that, in addition to decreasing inflammation, the combination enhanced clotting. Styrax and benzoin, both derived from similar Southeast Asian trees, persisted as favorites of the Napoleonic surgeon Dominique-Jean Larrey, and the latter remains in regular use for wound dressings. Balsam of Peru came to Europe as the second Incan wonder drug (cinchona or quinine being the first) in 1553 and continued in use through World War I.

The term antisepsis first appeared in the 1721 London pamphlet An Hypothetical Notion of the Plague, and some out-of-the-way thoughts about it by Place, in which chemicals were recommended to stop putrefaction and “generation of Insects.” In 1752, Sir John Pringle did a series of ingenious experiments in which he used various acids to retard decomposition in freshly killed animals. Unfortunately, although he was able to confirm that antiseptics slowed post mortem putrefaction, he did not apply his findings to wound treatment.

An effective means of decontaminating wounds became more important with the introduction of guns. The rate of infection from gunshot wounds, which we now understand to be related to foreign material carried into the wound with the projectile, was initially blamed on the gunpowder itself, which was consequently considered poisonous. Oakum, the threads of tarred rope unraveled and wadded up for packing into wounds, was a popular disinfectant in seventeenth and eighteenth century navies and remained in regular use through the American Civil War. Even Lord Lister recommended it as an adjunct to washing with carbolic acid. In fact, it has been shown that the growth of Staphylococcus aureus can be significantly slowed by dilute solutions of pine tar.


Various forms of alcohol were preferred antiseptics from the time of the Romans, when Celsus used wine in addition to the previously mentioned tincture of myrrh. This recommendation was carried forward through the Middle Ages by Hugh of Lucca, Henri de Mondeville, Lanfranc, and Guglielmo Saliceto. Guy de Chauliac used distilled spirits to irrigate wounds, and Ambroise Paré did the same with aqua vitae. In 1863, Auguste Nélaton of Paris did surgery with large alcohol packs in the open wound and reduced his infection rate to less than 2 percent. Unfortunately, his practice was rejected because it impeded the formation of the “laudable pus” widely accepted to be necessary to healing.

Once European physicians accepted Ignaz Semmelweis’s and Louis Pasteur’s demonstrations that wound contamination caused infection and Lord Lister’s application of that work to surgery, there was a rush to identify effective antiseptics. Mercuric chloride dissolved in alcohol (corrosive sublimate) had been known since the fifteenth century and was brought back as a hand wash and wound irrigant, remaining in common use until the 1890s. Silver nitrate had also been used since the 1400s and remained especially popular for eye injuries.

Chlorine had been discovered in 1774, and the French had used it to disinfect stables, cemeteries, and dissecting rooms. As eau de Javelle, hypochlorite was used to disinfect military hospitals. Semmelweis had used hypochlorite as a hand wash, and it remained the basis of the Carrel-Dakin irrigation technique in World War I. Napoleon hired Bernard Courtois to create artificial nitrates for explosives, but, in 1811, the chemist discovered iodine instead. British surgeon John Davis recommended it for wound irrigation in 1839, and its use continued in disinfecting Union hospitals during the American Civil War.

Creosote, from the Greek for “I preserve flesh,” is distilled from beech wood tar and was discovered in 1832 by the German chemist Karl Reichenbach, who immediately enlisted a local physician to try it as a wound disinfectant. Carbolic acid, or phenol, had been discovered in 1834 and, by mid-century, was recognized as an aid in wound healing. In 1867, Lord Lister recommended it first for the treatment of open fractures, which had, on account of their high rate of infection, been a primary cause of amputation. It was so effective that it became the antiseptic of choice for the balance of the century and was used as a hand wash, as an irrigant, as a soak for instruments, and as a spray in the operating theater. Hospital gangrene (pourriture d’hôpital) in the Crimea led to the use of ferric chloride, camphorated vinegar, lead acetate, and sulphates of zinc and aluminum as disinfectants. Nitric acid was used for the same purpose in Civil War hospitals.

After his battlefield experience in World War I, Alexander Fleming led a campaign against routine use of disinfectants, arguing that their irritant properties outweighed their benefit. The argument went on for most of the inter-war years but, ironically, was settled when Fleming discovered penicillin and antibiotics displaced antiseptics as the primary method of bacterial control.

Asepsis—the attempt to prevent bacteria from ever entering a surgical wound, as opposed to antisepsis, which concentrates on killing organisms already present. In 1878, Robert Koch demonstrated that he could cause infections in rabbits by injecting bacteria under their skin, and, by the mid 1880s, the bacterial theory of wound infection was generally accepted in Europe and the United States. The practice of surgical asepsis actually dates to Ignaz Semmelweis, who had demonstrated that “childbed fever” was caused by medical students going directly from contaminated dissecting rooms to the delivery room. Aseptic techniques in abdominal surgery were used by Alfred Hegar of Freiburg in 1876 and in cranial surgery by William Macewen (who combined asepsis and anesthesia for the first time in operating on a fractured skull) in 1879.

By 1882, Ernst von Bergmann of Berlin’s Ziegelstrasse Clinic summed up the decade’s major surgical advance with the terse statement that now surgeons washed their hands before operations. In fact, the clinic used a complicated procedure in which fat and debris were removed from the patient’s skin, which was then scrubbed with a stiff brush, soap, and water as hot as could be tolerated, and finally cleansed with alcohol followed by sublimate of mercury. The surgeon used a metal scraper to remove loose skin and dirt under the nails and then put his hands through the same process.

Surgical gloves were first suggested in the 1830s, but they were intended to protect the surgeon from syphilis rather than the patient from infection. The latter development came in the late 1890s from William Halsted’s service at Johns Hopkins. Heat sterilization of instruments was suggested by Koch and by Louis Pasteur’s associate Charles Chamberland and, by the mid-1880s, had been generally adopted in Europe and America. As early as 1563, Felix Würtz had suggested it was best for doctors not to breathe into wounds, but surgical face masks did not come into general use until Carl Flügge proved that droplets from speech carried bacteria and Johann Mikulicz-Radecki took that information to the operating room in 1897.

In practice, complete asepsis can never be achieved, and the effort stalled during World War I, when a large proportion of the wounds coming to military surgeons were heavily contaminated and the bacteria had been allowed to multiply for hours prior to treatment. Asepsis seemed a futile hope, and surgeons fell back on chemical disinfection, although Alexander Fleming repeatedly warned of the adverse effects of those substances on wound healing.

Penicillin

Molds had been recommended for wound treatment for over 1,500 years, but never in a systematic or generally effective way. In an 1897 dissertation, French medical student Ernest Duchesne, whose primary interest was the interaction between fungi and bacteria, had shown that extract from the mold Penicillium glaucum protected laboratory animals subsequently inoculated with the bacterium that causes typhoid fever. Unfortunately, Duchesne enlisted in the army, where he contracted tuberculosis and died before he could return to his research.

In late August or early September of 1928, bacteriologist and stereotypically absent-minded professor Alexander Fleming returned to his laboratory at St. Mary’s Hospital in London’s Paddington district to find that cultures of staphylococcus he had left scattered about had been spoiled by airborne molds. On the verge of dipping them in Lysol, he noticed that some had a ring of killed bacteria around the mold colonies. He incorrectly identified the mold as Penicillium rubrum (it was really Penicillium notatum) and reported his findings in the British Journal of Experimental Pathology. He also named the active agent penicillin. Penicillin, however, proved extraordinarily difficult to extract and almost impossible to stabilize, and his mold cultures completely stopped producing the substance after about eight days. Unable to produce usable quantities of penicillin—and, in fact, unable to reproduce his original accidental experiment—Fleming moved on to other things.

In 1938, Oxford’s Howard W. Florey assembled a research team, including the émigré biochemist Ernst Chain and Norman G. Heatley, to look for effective anti-bacterial agents. Florey had worked with Cecil Paine, one of Fleming’s students, and had himself been an editor of the journal that published Fleming’s original paper, so it is likely he already had some knowledge of penicillin when he adopted it as one of his areas of interest. By 1940, with the assistance of a grant from the Rockefeller Foundation, Florey and his co-workers—it is likely that much of the work was in fact done by Chain—were able to extract enough penicillin to test it on four laboratory mice, with four more animals serving as controls. All the controls died and half of the treated mice survived, and the Oxford group were convinced they had a success. The first human—a healthy volunteer—was injected with 100 mg of penicillin on January 27, 1941, but she suffered a severe febrile reaction from a contaminant that subsequently had to be removed. The first actual clinical use of penicillin came on February 12, 1941, when it was administered to London policeman Albert Alexander, who had cut himself shaving and developed staphylococcal sepsis with osteomyelitis, pneumonia, and a necrotizing infection of his eye. Alexander initially improved, but Florey did not have enough penicillin to continue treatment in spite of recovering and reusing crystallized drug from the patient’s urine. When the drug was stopped, Alexander relapsed and died. With the thought that children, being smaller, would need less drug, Florey next treated five children with sepsis that would previously have proven fatal. Four survived, and the fifth died of a brain hemorrhage without autopsy evidence of remaining infection.

Unfortunately, by this time the Battle of Britain was occupying the country’s attention and all of its industrial capacity, leaving nothing for production of penicillin, so the Rockefeller Foundation paid for Florey and Heatley to come to the United States in July of 1941, where Department of Agriculture officials put them in touch with the Northern Regional Research Laboratory at Peoria, Illinois, a national center of research on fermentation. At Peoria, three signal developments occurred. First, it was shown that addition of corn steep liquor (a byproduct of corn syrup production) to Penicillium cultures could increase penicillin output by a factor of ten. Then, a strain of Penicillium retrieved from a moldy cantaloupe found in a Peoria market was shown to increase penicillin production by a factor of two hundred. Mutations of that organism induced by x-ray and ultraviolet radiation increased that rate to a factor of over 1,000. Finally, cultures that had previously grown only on the surface of milk-bottle-sized flasks were induced to grow throughout aerated 25,000-gallon tanks. That made commercial production possible, and Alfred N. Richards of the Office of Scientific Research and Development’s Committee on Medical Research enlisted Merck and Company, Charles Pfizer and Company, and E.R. Squibb and Sons in the effort. It is likely that Richards and the United States government were already considering the possible military uses of penicillin if the United States entered the war.

In early 1943, only enough penicillin was produced to treat 100 people, even with urine recovery and re-use of the drug. By the summer of that year, more drug was available, but it still cost over $20 a dose. By 1944, American pharmaceutical companies were producing 300 billion units a month, and, by D-Day, there was enough penicillin to treat every British and American casualty from the invasion. By May 1945, the drug was released to the civilian population, and the price had dropped to $0.55 a dose.

By the end of the war, an unheard-of 95 per cent of men with battlefield injuries were surviving, the death rate from pneumonia had dropped from 18 per cent to less than 1 per cent, and syphilis could be reliably treated. Fleming, Florey, and Chain were awarded the 1945 Nobel Prize in Physiology or Medicine. Unfortunately, as Fleming had predicted, resistant strains of staphylococci began emerging almost as soon as penicillin became available. The bacteria learned to produce an enzyme—penicillinase—that broke the drug down and passed that ability from generation to generation. The first penicillin-resistant staphylococcus was identified in 1942, and, by 1952, 60 per cent of staphylococci were resistant to even massive doses of the drug. In addition, widespread use of the drug led to allergic reactions, and the first fatal case of penicillin-induced anaphylaxis was reported in 1949. Resistance and allergies triggered a search for variants of penicillin and for entirely new drugs that is ongoing.

Human Experimentation

The first well-controlled medical experiment was probably James Lind’s trial of various potential remedies for scurvy and his recording of their effectiveness. Walter Reed’s use of Spanish immigrants and American soldiers to prove that mosquitoes were the vector of yellow fever was a model of experimental design if not of medical ethics.

Both the Japanese and the German military medical corps used prisoners of war and civilian prisoners during the Second World War to test the effects of physiological stress such as extreme altitude and cold in hopes of improving survival of their own personnel in harsh conditions. Both also used prisoners to test potential chemical and biological weapons. The studies were justified on the basis that sacrifice of experimental subjects was warranted in time of national emergency. These experiments directly resulted in the Nuremberg Code and the Helsinki Declaration, both of which emphasized the necessity for subjects to be fully informed of the risks of their participation in experiments and for that participation to be voluntary. In spite of the fact that both codes were broadly accepted, violations have been common.

As early as October 1942, the Committee on Medical Research of the Office of Scientific Research and Development recommended direct testing of ionizing radiation on humans. Beginning in 1946, both United States Army and Navy personnel were intentionally exposed to fallout from nuclear weapons tests. In the July 1946 Project Able, 150 ships carrying 37,000 military personnel were stationed near a twenty-three-kiloton low-altitude atmospheric nuclear explosion and taken directly into the target area the following day. Later that month, the experiment was repeated with an underwater explosion. In both cases, radiation levels at various parts of the observer ships were carefully measured. In subsequent tests, pilots flew through clouds from nuclear explosions, and ground observers were carefully monitored for their physiologic response to those explosions. Over 100 chemical, biological, and nuclear tests on United States military personnel carried such colorful names as Copperhead, Flower Drum, Eager Belle, Fearless Johnny, Half Note, Purple Sage, Scarlet Sage, and Autumn Gold. Ethical questions were raised even at the time.

The Department of Defense, citing Reed’s yellow fever studies as justification for experimental use of military personnel, sought a waiver of the Nuremberg Code restrictions. The Wilson Memorandum (named for Secretary of Defense Charles E. Wilson) defined how each service would experiment on human subjects and effectively superseded the Nuremberg restrictions.

In the early 1950s, Seventh Day Adventist conscientious objectors were recruited into Operation Whitecoat, in which they were exposed to Q fever at Utah’s Dugway Proving Ground and then returned to Fort Detrick for observation. The Q fever studies were terminated in 1958, but other studies of tularemia and yellow fever continued until the program was ended in 1975.

The British also used military personnel in human experimentation between 1939 and 1989 under the supervision of their chemical and biological warfare unit at Porton Down. Soldiers at the facility who were told they were part of experiments on the common cold were actually exposed to chemical warfare agents. In a 1970 test of aerosolized delivery of bio-warfare agents, the HMS Andromeda was sailed through a cloud of Escherichia coli and Bacillus globigii, and the crew’s inhaled dose and clothing contamination were measured. Both bacteria were thought harmless at the time; however, Bacillus globigii (now known as Bacillus subtilis) is now known to cause sepsis, endocarditis, pulmonary infections, and meningitis. Similar studies were done with the HMS Achilles in 1973 and on basic training recruits at Portsmouth in 1976.

At a Nevada test site, American soldiers were exposed to nuclear flashes with and without protective eyewear to measure how long it took them to be able to read instruments after the explosion. In another experiment, officers were placed 2,000 yards from a forty-kiloton nuclear explosion to measure the physiological effects of exposure, in spite of medical recommendations that they be no closer than seven miles from the blast. The British used soldiers from their own army as well as from Australian and New Zealand forces both to evaluate the physiologic effects of radiation and to teach their fellow soldiers about nuclear warfare. Over 35,000 troops were used between 1953 and 1963 in what was called the Indoctrination Force.

Perhaps the most notorious use of American military personnel was in Project MKULTRA, authorized by Central Intelligence Agency Director Allen Dulles in 1953, in which both civilians and soldiers were exposed, without their knowledge or consent, to drugs such as LSD (lysergic acid diethylamide) thought to have behavior-altering potential. The program was terminated in 1964 amid concerns about its public relations ramifications.

Nuremberg Code—

Standards for human medical experimentation arising from the war crimes trials after World War II. In 1946, the United States conducted twelve trials of Germans key to various parts of the Third Reich, including finance, law, government ministries, manufacturing, and medicine. The first of these, the Doctors’ Trial, convened October 25, 1946 and lasted until August 20 of the following year, with United States Army lawyer Telford Taylor as the chief prosecutor. Of the twenty-three defendants accused of murder and torture while performing medical experiments on concentration camp inmates, twenty were physicians. Seven of the defendants were sentenced to be hanged at Landsberg Prison (ironically, the place Adolf Hitler had been imprisoned before the war and where he had written Mein Kampf), five were sentenced to life in prison, two were sentenced to twenty-five years, one to fifteen years, one to ten years, and seven were acquitted.

Prior to December 1946, written ethical standards of human experimentation were rare, although the Prussian government had mandated voluntary participation by experimental subjects as early as 1899 and, in 1900, had banned all medical research on minors. Agreement on what was appropriate was central to Taylor’s prosecution; to that end, he solicited the opinion of American neuropsychiatrist Leo Alexander who proposed three broad principles. With attention to the biomedical ethical principle of autonomy, Alexander proposed that no human experiment should be conducted unless the subject participated of his own free will. Taking the Hippocratic proscription that a physician should never willingly harm a patient (primum non nocere) as a Kantian categorical imperative, he said no experiment should be done if there was an a priori reason to expect that the subject would be harmed. Finally, he said no experiment should be done if its design did not conform to the practices of good science. Alexander’s three basic principles were the foundation upon which the subsequent Nuremberg Code was built. The code says:

1. The voluntary consent of the human subject is absolutely essential. This means that the person involved should have legal capacity to give consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, overreaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him to make an understanding and enlightened decision. This latter element requires that before the acceptance of an affirmative decision by the experimental subject there should be made known to him the nature, duration, and purpose of the experiment; the method and means by which it is to be conducted; all inconveniences and hazards reasonably to be expected; and the effects upon his health or person which may possibly come from his participation in the experiment. The duty and responsibility for ascertaining the quality of the consent rests upon each individual who initiates, directs or engages in the experiment. It is a personal duty and responsibility which may not be delegated to another with impunity.

2. The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature.

3. The experiment should be so designed and based on the results of animal experimentation and a knowledge of the natural history of the disease or other problem under study that the anticipated results will justify the performance of the experiment.

4. The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.

5. No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.

6. The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.

7. Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability, or death.

8. The experiment should be conducted only by scientifically qualified persons. The highest degree of skill and care should be required through all stages of the experiment of those who conduct or engage in the experiment.

9. During the course of the experiment the human subject should be at liberty to bring the experiment to an end if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible.

10. During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage, if he has probable cause to believe, in the exercise of good faith, superior skill, and careful judgment required of him, that a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.

The Nuremberg Code has never been adopted as law by any nation or even as an ethical standard by any major medical organization, although it has been central to the judgments of the institutional review boards that have become the sine qua non of modern medical research. The informed consent provisions are incorporated in Article 7 of the United Nations International Covenant on Civil and Political Rights of 1966 and are part of the International Ethical Guidelines for Biomedical Research Involving Human Subjects issued by the Council for International Organizations of Medical Sciences in collaboration with the World Health Organization.

There have been recent attempts to modify the Nuremberg Code, particularly by substituting peer review of proposed experiments for some elements of informed consent, as exemplified in the Declaration of Helsinki, issued by the World Medical Association in 1964. Current practice in the United States is a combination of the Helsinki Declaration and the Nuremberg Code.
