The History of Health Insurance
Practices of insurance, broadly speaking, have long histories rooted in contracts and procedures designed to protect people from loss of property. Guarantees on property loans and insurance that spread risk across carefully distributed shipping cargoes are modes of insurance dating back to the second millennium B.C., and life insurance has a long and distinctive history that reflects humankind’s awareness of its own mortality. Health care has its own evolving story, beginning as a standard service performed in exchange for a fee. In Mesopotamia, under the Code of Hammurabi, successful health care, in particular surgery with the knife, was expected to be paid for appropriately. A person of high status paid the successful surgeon more, but a failed surgery carried steeper penalties for the surgeon. Compensation and liability, in other words, were scaled to the status of the patient (Price 2001).
But contracts on health care, or health insurance, truly became an issue of vital and controversial importance in the early part of the twentieth century, when medical care became institutionalized and more advanced, and the cost of services began to rise.
A Brief History of Early Medical Knowledge
Ancient health care providers clearly had an extensive knowledge of local flora and, as the world gradually opened up, exotic flora became integrated into the ancient pharmacology. Medical knowledge in the ancient world accumulated over centuries of experiment and experience, an exercise in trial, error, and success whose results began to be recorded by the second millennium B.C. One of the oldest and most extensive examples comes from Mesopotamia: the “Treatise of Medical Diagnosis and Prognoses,” some forty tablets of prescriptions and treatises that, although many diseases were still blamed on the supernatural, outlined treatments based on rational observations of the body.
Mesopotamian medical practitioners, like many ancient and indigenous healers, were frequently shamanic (often called witch doctors or sorcerers); they used charms and spells and tried to determine whether mortal sin or vice was the cause of the patient’s ailment. But many societies also had specialists in herbal remedies who, in Mesopotamia, were referred to as “physicians” because of their rational medical knowledge (Price 2001).
In many ways, the home has long been the bastion of health care. For much of the history of human civilization, everything from birth to disease to surgery was handled by medical practitioners within one’s own house. But in other traditions, particularly those borrowing more directly from the Egyptians, such as the Islamic tradition, medical knowledge became the foundation for early examples of centralized medical practice. As the great Arabic Empire expanded around the lower Mediterranean, its accumulated medical knowledge came to be supported in great social and cultural centers, where universities were developed with great libraries and even hospitals, all while much of the European tradition floundered through the Middle Ages.
The sources of such medicine came from the Classical era, when rational knowledge reached great heights, particularly once the Greeks “rid the science from supernatural powers and spirits” by about the fifth century B.C. Galen, the famous Greek-born physician who worked in Rome, applied a logical method and meticulous studies of human anatomy en route to publishing hundreds of treatises. His legacy proved lasting when his work was translated into Arabic and adapted by physicians across the Islamic Empire. Galen’s work laid a foundation for modern medicine, enduring for more than a millennium until most of his deductions were proven incorrect by the sixteenth century.
The Classical traditions were combined with existing Islamic knowledge and centuries of herbal medicine borrowed from Chinese, Persian, and Indian traditions in the Canon of Medicine, a rigorous text written by the Iranian physician Abu Ali Sina, known in the West as Avicenna, in the eleventh century (Price 2001). Humans have long recorded their secrets for curing sickness, but sickness has not always been well understood. Bloodletting, for example, which dates to Galen’s treatises, continued to be practiced into the nineteenth century. But as modern medicine began to emerge, procedures far more advanced, and far costlier, than letting blood were developed.
Sickness Funds and the Groundwork for Health Insurance
Early forms of individual “health” insurance became available in the United States around the Civil War, as accident insurance covering injuries related to travel by railroad or steamboat; Massachusetts Health Insurance of Boston had offered group policies with a relatively comprehensive list of benefits even earlier, in 1847. Individual accident insurance proved a successful venture, and these early plans began to evolve into more expansive programs covering a broader range of illness and injury, including early versions of disability coverage, by the end of the nineteenth century. In the early years of the twentieth century, groups began developing relationships with health care providers, forming the fee-based contracts that would become the predecessors of modern health insurance plans (Neurosurgical.com).
Health insurance refers to a contract wherein the individual pays a regular premium with the expectation that, should something happen, the insurer will provide for the individual in question. The term dates to the Progressive Era in the United States, when debate over the government’s role in health care was already well underway. Though health insurance in America has its origins in a related system called “sickness insurance,” the term “health insurance” only fell into favor after the British used it in their National Insurance Act of 1911.
Sickness insurance tended to provide supplementary income on par with modern disability insurance, and it remained the more attractive product through the 1910s because wages lost from missing work far exceeded the cost of health care, which was still scattered and much less relied upon than today. The “sickness fund,” a kind of insurance that stepped in to dampen the financial shock of missing work, was often “sufficiently competent and fair in their delivery of financial and…medical assistance” (Murray 2007).
In the first quarter of the twentieth century, then, health insurance was little used and, for that matter, remained little needed. To put it bluntly, “the state of medical technology generally meant that very little could be done for many patients, and that most patients were treated in their homes” (Thomasson 2003). As medicine became more advanced with respect to scientific discoveries of the era, treatment gradually moved out of the home and into health centers, particularly with respect to increased understanding of germs and procedural antisepsis. Though surgery continued to be conducted in private homes into the 1920s, the identification of infection-causing bacteria and communicable disease combined with a better understanding of the body’s immune system to drive down surgery fatality rates (Thomasson 2003).
Medical practitioners began to institutionalize as improvements in medical technology legitimized the profession. By the early twentieth century, the American Medical Association (AMA), which has its roots in the nineteenth century, was creating licensing standards and raising the quality of medical training in higher education en route to developing medical specializations, advancements that without question have prolonged life spans and increased life expectancy, but at ever-increasing cost. Medical expense insurance, in other words, began to cover costs that patients had previously paid up front entirely out of their available income (ama-assn.org).
Evolution of Modern American Health Insurance
In the 1920s, most people still felt health insurance was unnecessary and stayed with sickness insurance plans instead. In 1929, however, a group of Dallas teachers formed a partnership with an area hospital to receive a set number of sickness and hospitalization days in exchange for a fixed, prepaid fee. Prepaid hospital service increased during the Depression, proving mutually beneficial in a difficult economy. The American Hospital Association (AHA) encouraged hospitals to develop similar plans, which provided hospitals a steady source of income and spared patients financial embarrassment in emergencies. Individual hospitals and community care organizations began competing with one another for such plans, so, to provide better community coverage, hospitals joined together with the help of the AHA under the name Blue Cross.
To maintain some autonomy and a closer physician-patient relationship, physicians organized their own prepaid plans under the name Blue Shield, developed both to compete with Blue Cross and to offer patients another choice. Physicians were concerned that contemporary social security legislation would lead to compulsory health insurance that would be heavily regulated and would devastate patient choice and the physician-patient relationship. With voluntary health insurance and the benefits of Blue Shield, doctors retained the ability to price discriminate: patients were charged the difference between what the plan reimbursed and the actual charges. As the health care market grew, the government began encouraging participation in the proliferating employer-based benefit plans, which were often improved through the power of strong labor unions. Meanwhile, employer and employee contributions to health plans became exempt from taxes under the 1954 Internal Revenue Code (Thomasson 2003).
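The balance-billing arrangement described above reduces to simple arithmetic: the patient owes whatever the plan's reimbursement leaves uncovered. A minimal sketch, using hypothetical figures rather than any historical fee schedule:

```python
def balance_bill(actual_charge: float, plan_reimbursement: float) -> float:
    """Amount the patient owes out of pocket when the physician bills the
    full charge but the plan reimburses only its allowed amount.
    Never negative: overpayment by the plan leaves the patient owing nothing."""
    return max(actual_charge - plan_reimbursement, 0.0)

# Hypothetical figures: the physician charges $150, the plan reimburses $100,
# so the patient makes up the $50 difference.
print(balance_bill(150.0, 100.0))  # 50.0
```

This is why price discrimination survived under such plans: the physician set the charge independently of the reimbursement schedule, and the gap fell to the patient.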
Further defeats of nationalized health insurance in the 1950s and 1960s culminated in 1965, when Congress enacted Medicare and Medicaid. Medicare provided compulsory hospital insurance for people over the age of 65, along with subsidized medical insurance, while Medicaid provided care for low-income people, though the joint federal-state program varied across state lines according to each state’s relative per-capita income. Both programs have grown immensely, though critics of Medicare note that physicians were still able to price discriminate: because doctors were permitted to bill patients directly, patients were reimbursed only what the program would pay and had to make up the difference. Medicaid expanded eligibility in the 1990s, but its coverage still had significant limitations (Thomasson 2003).
By the 1970s, the federal government took an interest in learning more about uninsured Americans. It found that the majority of the uninsured lived at or near the poverty line, and that many of them were children. The decision to create child-specific health programs and expand Medicaid has enjoyed some success, and yet the number of uninsured Americans continued to rise, at an even greater rate among the middle class (Swartz 2006).
In 1996, Congress passed two bills that demonstrated the federal government’s recommitment to regulating the health insurance industry. The Mental Health Parity Act boosted psychiatric benefits, while the Health Insurance Portability and Accountability Act (HIPAA) introduced important protections, including helping employees maintain insurance when they changed jobs, became self-employed, or were otherwise separated from an employer’s managed health care plan. Though HIPAA is by no means a major health reform, it has “far-reaching implications…because it creates a statutory framework for the federal government to use in collaborating with state governments to regulate insurance markets, setting the stage for future mandates” (Ladenheim 1997).
The bill came three years after Congress rejected President Bill Clinton’s plan to provide health insurance for all Americans, and it was modest compared with the compromise reform proposals Congress considered in 1994. Nevertheless, HIPAA allowed the federal government to join the states in oversight and regulation, a role some lawmakers continue to reject and others are still pursuing.
Today, the Children’s Health Insurance Program (CHIP), which dates to 1997, helps provide insurance to low-income children, and new measures promising greater quality and comprehensiveness in consumer-directed health care have emerged. But partisan politics complicate further evolution or reform of the health insurance system, in disputes ranging from what should be covered to who should be covered (Thomasson 2003).
The Modern Debate
The primary reason for the debate over American health care is that the United States spends a higher percentage of its gross national product on health care than any other developed country. Yet despite the high spending, tens of millions of uninsured Americans go without access to America’s high-quality health care, and the spending “has not produced comparably high measures of health status” (Institute of Medicine 2004). Key measures include life expectancy and infant mortality, neither of which measures up to America’s peers in the developed world, including the United Kingdom and the Commonwealth countries, Scandinavia, France, Italy, and Japan, most of which make some form of nationalized health care available to their entire populations.
While there remains “virtually no waiting time for elective procedures in the United States…and most Americans are highly satisfied with the care they receive…[t]he United States is among the few industrialized nations in the world that does not guarantee access to health care for its population” (ibid). In other words, elective procedures are readily available, even as some Americans face poor coverage for expensive procedures or limited access to specialists.
As health care becomes more expensive within the complex system of health care delivery, the rate of uninsured Americans is rising. Within America’s system of voluntary private health insurance, people are taking fewer costly preventive measures and seeing doctors less often, and the result seems to be somewhat lower life expectancy and a hesitancy among individuals to take part in the health care system. Recommendations abound from individuals and institutions, and the 2008 U.S. presidential debates were driven in part by concerns over the future of American health care. The one agreed-upon conclusion is that coverage needs to be extended, but that will require significant changes; whether toward nationalization or toward more individual choice based on interstate insurance options and tax credits remains to be seen.
The debate also extends to health in developing nations, where a combination of poor preventative care and a simple “lack of reasonable access to basic health care” is exacerbated by the unavailability of clean water and proper nutrition. Health care funding is a primary concern, as many of the troubled countries are “making inefficient use of the resources they do have for health care and risk pooling.” Thus the developing world, from Asia and Africa to Latin America, is seeing a flourishing of social health insurance initiatives that use payroll taxes to build the financial infrastructure needed to mobilize and develop resources. The move is being supported by the World Health Organization in the hope that such initiatives will lay a foundation of risk pooling, one that, supplemented by focused government spending, can grow into a system providing good, widely accessible care and reduce the health care disparity in developing economies (Hsiao and Shaw 2007).
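The payroll-tax financing described above rests on a simple risk-pooling calculation: pooled contributions must cover members' expected claims plus administrative overhead. A minimal sketch, with entirely hypothetical figures (the function name, wage level, and 10% administrative load are illustrative assumptions, not drawn from Hsiao and Shaw):

```python
def payroll_tax_rate(expected_claims_per_member: float,
                     avg_annual_wage: float,
                     admin_load: float = 0.10) -> float:
    """Flat payroll-tax rate at which pooled contributions cover expected
    claims plus an administrative loading. Inputs are per member, per year."""
    required_revenue = expected_claims_per_member * (1 + admin_load)
    return required_revenue / avg_annual_wage

# Hypothetical figures: $120/year in expected claims, $3,000 average wage,
# 10% administrative load -> roughly a 4.4% payroll tax.
print(round(payroll_tax_rate(120.0, 3000.0), 3))  # 0.044
```

The point of pooling is that the rate depends only on the pool's average claims, not on any individual's health, which is what allows healthy contributors to subsidize the sick.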
In the meantime, navigating health insurance plans, individual or employer-based, remains a challenge in the lives of Americans. In light of health care concerns in developing nations, the American debate seems focused simply on making the country’s world-renowned health care more available to its own citizens. Making the right choice is difficult, for determining one’s needs is akin to divining the future. What kind of coverage will I need? How much can I afford to spend now? It also raises the questions of whether individual choice is better than a national option, and whether the distance between patient and doctor (and all the intermediate costs) is really worth the trouble.
Modern medicine continues to improve, funding research to develop our collective scientific understanding of the body. While some diseases and injuries remain beyond the capacity of modern science, having adequate money and insurance can mean the difference between even attempting a procedure or not. For some, that can mean resorting to sorcery, shamanism, or prayer—and while hope can be a great asset in the struggle for one’s life, for many, it is also comforting to have modern medicine on their side.
-- Posted March 31, 2009
“Chronology of AMA history.” Ama-assn.org. Accessed: October 28, 2008.
“The History of Health Insurance in the United States.” Neurosurgical.com. 2007. Accessed: October 30, 2008.
Hsiao, William C. and R. Paul Shaw, eds. 2007. Social Health Insurance for Developing Nations. Washington, D.C.: The World Bank.
Institute of Medicine of the National Academies. 2004. Insuring America’s Health: Principles and Recommendations. Washington, D.C.: The National Academies Press.
Ladenheim, Kala. 1997. “Health Insurance in Transition: The Health Insurance Portability and Accountability Act of 1996.” Publius 27, no. 2, The State of American Federalism, 1996-1997 (Spring): 33-51.
Murray, John E. 2007. Origins of American Health Insurance: A History of Industrial Sickness Funds. New Haven, CT: Yale University Press.
Price, Massoume. 2001. “History of Ancient Medicine in Mesopotamia & Iran.” Iranchamber.com. October 2001. Accessed: October 11, 2008.
Swartz, Katherine. 2006. Reinsuring Health: Why More Middle-Class People Are Uninsured and What Government Can Do. New York, NY: Russell Sage Foundation.
Thomasson, Melissa. 2003. “Health Insurance in the United States.” EH.Net Encyclopedia, ed. Robert Whaples. April 18, 2003. Accessed: October 18, 2008.