Case Studies in Public Policy: Foreign Policy, Healthcare Policy, Environmental Policy

A.P. Government - Ms. Trimels

FOREIGN POLICY

War and International Law 

America's Foreign Policy: A Brief History

A central function of the U.S. government is to conduct relations with the almost 200 other nations in the world. A nation is a sovereign country and, as such, possesses the highest authority over its territories. All sovereign states are theoretically equal. Foreign policy determines how America conducts relations with other countries. It is designed to further certain goals: it seeks to assure America's security and defense, and it seeks the power to protect and project America's national interests around the world. National interest shapes foreign policy and covers a wide range of political, economic, military, ideological, and humanitarian concerns.

America's foreign policy has changed over time, reflecting the change in its national interest. As a new nation after the Revolutionary War, America's prime national interest was to maintain its independence from more powerful European countries. Protected by the Atlantic Ocean, its major foreign policy, as typified by the Monroe Doctrine, was to limit European attempts at further colonization of the Western Hemisphere.

Through the 19th century, America concentrated on creating a nation that spanned the continent, and it avoided foreign entanglements. Once industrialized and more prosperous, it began looking for foreign markets and colonies.

By the turn of the 20th century, the United States had become a minor imperial power, fighting a war with Spain for Cuba and the Philippines and annexing Hawaii and several other territories. World War I engaged the United States in European affairs, but after the war a wave of isolationist feeling swept the country. Refusing membership in the League of Nations, America turned inward once again. Absorbed by the prosperity of the 1920s and the Great Depression of the 1930s, America let its military strength erode. It was not prepared for war when the Japanese struck the U.S. fleet at Pearl Harbor in late 1941.

Emerging from World War II as the most powerful economy on Earth, the United States changed its foreign policy dramatically. It took the lead in founding the United Nations. It invested billions of dollars through the Marshall Plan to help strengthen war-devastated European democracies. It created a system of alliances, including the North Atlantic Treaty Organization (NATO).


Central to America's foreign policy in the post-war period was the containment of the Soviet Union and communism. During the Cold War, the United States and its allies competed with the Soviet Union and its allies militarily, economically, and ideologically. Both sides created massive military forces and huge stockpiles of nuclear weapons. Although the two superpowers never went to war, the policy of containment led the United States into the bloody Korean and Vietnam wars.

The Cold War ended when the Soviet Union, economically exhausted from competing with the West, disintegrated. This left the United States the only remaining superpower in a world no longer ruled by the logic of containing the Soviet Union.

Through time, various constitutional principles and values have shaped American foreign policy. American foreign policy has favored the self-determination of nations for independence. Based on our commitment to constitutional government, we often favor and support nations that practice democracy. These principles, however, sometimes have conflicted with the goals of national security, economics, or the realities of international politics. In certain cases, America has supported dictatorial governments or intervened to curtail popular political movements.

Making and Carrying Out Foreign Policy

America's foreign policy today covers a wide range of functions and issues. It includes establishing and maintaining diplomatic relations with other countries and international organizations such as the United Nations and the Organization of American States. It includes peacekeeping functions such as working with allies to assure regional and international security and arms-control efforts. It covers a range of international economic issues including trade, travel, and business. It involves foreign aid and disaster relief. As a superpower, the United States has also taken a leadership role in peacemaking around the globe by trying to negotiate treaties and agreements to end regional conflicts. Also, as a world leader, the United States has a longstanding role in trying to address international economic and environmental problems.

The making and carrying out of America's foreign policy involve all three branches of government and a complex array of governmental institutions and agencies.

The president and the executive branch have the most significant role in making foreign policy and are responsible for carrying it out. With the advice and consent of the Senate, the president makes treaties and appoints ambassadors. The president can hold summit meetings with world leaders. As commander in chief of the military, the president can, by executive order, rapidly project U.S. power around the globe.

In forming U.S. foreign policy, the president relies on advice from the National Security Council. This group is made up of the vice president, secretary of state, secretary of defense, head of the Central Intelligence Agency (CIA), and chair of the Joint Chiefs of Staff (the nation's highest military adviser).

The secretary of state heads the U.S. State Department and often represents the president abroad. The State Department carries out foreign policy decisions and helps develop foreign policy for every region of the world. Attached to the State Department is the U.S. Foreign Service, or diplomatic corps. It is made up of ambassadors (who represent America's political interests in every country), consuls (who represent America's business interests), and other officials who specialize in technical matters and issues of foreign aid.

Congress also plays a role in America's foreign policy through its power to set duties and tariffs on foreign exports and imports, regulate foreign commerce and immigration, and declare war. It sets quotas on immigration, chooses which countries will benefit from most-favored-nation status in trade agreements, votes on foreign aid, and sets the defense budget. But Congress is usually in the role of accepting, changing, or rejecting policies proposed by the president.

The Supreme Court plays a limited role in foreign policy. It has jurisdiction over cases involving treaties, admiralty and maritime law, and ambassadors and other public ministers. It also is charged with deciding disputes between states and foreign states and their citizens and subjects.

At different times, tensions have arisen between the branches in the conduct of foreign policy. Presidents sometimes favor treaties that the Senate does not want to approve. President Woodrow Wilson promoted treaties establishing the League of Nations after World War I, but the Senate opposed the League and refused to ratify the treaties. At other times, tensions have arisen between Congress's power to declare war and the president's role as commander in chief. Presidents have committed American armed forces to major conflicts such as the Korean, Vietnam, and Gulf wars without a declaration of war by Congress.

The public also plays a role in influencing foreign policy. Advocacy groups for foreign countries often try to influence Congress and the president about issues. Business associations lobby the government about international economic and trade issues. Groups and individuals with strong views on certain foreign policy issues, especially military intervention, often organize protests or other political actions to influence decisions.

http://www.crf-usa.org/war-in-iraq/foreign-policy.html

America's Foreign Policy Failures Have Put Its National Security at Risk

American foreign policy is in desperate need of reform. Far from creating economic opportunity or enhancing national security, our current method of forming and executing policies abroad harms our interests, increases insecurity, and thwarts our ability to expand trade. U.S. engagement with Iran and North Korea over the past several decades best illustrates Washington's dangerous deficiencies.

The current "blob" of mainstream thinking has been stuck in an adversarial mode since the end of the Cold War and has sought to preserve or expand American influence primarily by the use of U.S. military power. Three major events have inadvertently conspired to lay the groundwork for our currently defective international engagement.

The first event was the dissolution of the Soviet empire, which officially ceased to exist on December 31, 1991. The second event was Operation Desert Storm in 1991, in which more than half a million American troops utterly annihilated what was then the fifth-largest military in the world: Saddam Hussein's Iraqi army. The third and most critical event was when terrorists attacked the United States on Sept. 11, 2001.

Virtually the entire world recognized that the United States possessed the most powerful military on the planet following the dismantling of Saddam Hussein's army in 1991. With the dissolution of the USSR barely a year later, any pretense of a peer-challenger to U.S. domination disappeared. But with the terror attacks in September 2001, the foreign-policy elite in the United States expanded their focus across the globe, declaring that states, powers and individuals abroad were either "for us or against us."

In January 2002, however, Bush took it one step further by declaring an additional set of enemies: an "axis of evil" that included Iraq, Iran and North Korea. A year later, Bush took out the first leg of that trio with the invasion of Iraq. Yet Iran and North Korea remain squarely in the crosshairs.

North Korea and Iran are totalitarian and authoritarian regimes that deny their citizens basic human rights and use imprisonment and torture to keep their populations in check. Iran is suspected of harboring a secret desire for nuclear weapons. North Korea has demonstrated it has them already.

Yet neither represents a threat to the United States that cannot be deterred by our overwhelming conventional and nuclear dominance. Our policy towards both has been to confront them using covert actions and overt military threats that are costly and ineffective.

North Korea faces across its border South Korea, which fields one of the largest, most modern, and best-trained militaries in the Asia-Pacific region. North Korea knows an attack on its southern neighbor would result in a military defeat. Likewise, it knows that to use any of its weapons of mass destruction would result in a massive, and likely annihilating, response from Washington and Seoul. Its leaders want to live, and thus they are deterred from launching major attacks.

The consequences of war in either location could be catastrophic. Some suggest North Korea already has a crude ability to launch a nuclear missile to U.S. territory. If Washington were to order a so-called "preventive" military strike, then it is conceivable that Pyongyang could launch a retaliatory nuclear missile, possibly killing millions of American citizens.

The risk of a U.S. preventive strike is so far out of proportion to what could be gained that it should be excluded from further consideration. We should maintain a firm and credible promise of withering retaliation for any North Korean or Iranian attack against American interests or allies, because that has the greatest chance of preventing war and maintaining U.S. security.

Clinging tenaciously to the failed status quo of current U.S. foreign policy increases the chance America may one day stumble into a war. It is time to change course and adopt a more realistic, saner, and more effective method of protecting our homeland, our prosperity, and our way of life.

Daniel L. Davis is a senior fellow for Defense Priorities and a former lieutenant colonel in the U.S. Army who retired in 2015 after twenty-one years, including four combat deployments. Follow him @DanielLDavis1.


Healthcare Policy

History of healthcare policy in the United States

The United States' healthcare system is unique among Western countries. The United States has eschewed universal national insurance in favor of a private, employer-based system, with government programs covering only certain vulnerable groups. While many have criticized the United States for its lack of government action on healthcare, others have praised the supposed innovation and diversity resulting from the private healthcare industry.[1]

Some of the healthcare policy issues debated throughout the United States' history have been

● how to encourage efficiency among the different types of health insurance (fee-for-service vs. managed care)

● how to cover groups of people outside the employment system, such as the unemployed or elderly people

● how to control public healthcare spending while also ensuring quality care and access to technology such as prescription drugs

Over the past few decades, the government's involvement in healthcare has increased. By 2013, government expenditures represented 43 percent of healthcare spending; private households, 28 percent; and private businesses, 21 percent. Government health insurance accounted for 33 percent of all insurance plans, and employer-based plans constituted 48 percent.[2]

Today, the healthcare industry is an immense part of the United States economy. Overall healthcare spending amounts to about one-sixth of the nation's gross domestic product and accounts for about one-fourth to one-third of state budgets. Healthcare regulation and policy are complex, with nearly "every aspect of the field ... overseen by one regulatory body or another, and sometimes by several." Such regulations are enforced by federal, state, and local governments, and even private organizations. The 2010 passage of the Affordable Care Act (ACA), also known as "Obamacare," introduced experimentation and uncertainty into the industry, which has been, and will continue to be, watched closely over the next several years to gauge the lasting effects of its policies.[3][4][5][6][7]


HIGHLIGHTS

Healthcare policy affects not only the cost citizens must pay for care, but also their access to care and the quality of care received, which can influence their overall health. A top concern for policymakers is the rising cost of healthcare, which has placed an increasing strain on the disposable income of consumers as well as on state budgets. Some of the organizations that have influenced healthcare policy include the American Medical Association, the AFL-CIO, and the American Association of Retired Persons (AARP), as well as large health insurance companies such as Kaiser Permanente and Blue Cross Blue Shield.

Major healthcare legislation

Social Security Amendments of 1965

President Lyndon Johnson signed Titles XVIII and XIX of the Social Security Act into law on July 30, 1965. Title XVIII established Medicare, which provided public health coverage to seniors aged 65 and older. The Medicare law consisted of Part A and Part B (a brief illustrative sketch follows the list):[8]

● Part A, which was universal for anyone receiving Social Security benefits, covered hospitalization. The recipient paid a deductible about equal to the first day of hospitalization, and Medicare then paid for the next 60 days. After 60 days, Medicare then paid part of the costs for up to 150 days of the hospitalization; after 150 days, Medicare did not pay any costs. Medicare also paid the costs of 20 days in a skilled nursing facility after a hospital stay, and then part of the costs for up to 100 days. Medicare did not cover long-term care in a nursing home. Part A was funded by payroll taxes on current workers and their employers.

● Part B covered physicians' and outpatient services, such as doctor visits, X-rays, and laboratory tests, after the beneficiary met a small yearly deductible. About 25 percent of the funds for Part B came from premiums paid by beneficiaries, initially with all beneficiaries paying the same premium. The rest of Part B was funded out of the federal government's general revenues. Enrollment in Part B was voluntary, but most seniors elected coverage.
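The day-based structure of the original Part A benefit can be made concrete with a short sketch. This is a minimal illustration, not the statute: the 60- and 150-day thresholds and the roughly one-day deductible come from the description above, while the daily hospital cost and the partial share Medicare paid for days 61-150 are hypothetical placeholders, since the text does not specify them.

```python
# Illustrative only: thresholds follow the 1965 Part A description above;
# DAILY_COST and MEDICARE_SHARE_61_150 are hypothetical placeholders.

DAILY_COST = 40.0            # hypothetical average daily hospital charge
DEDUCTIBLE = DAILY_COST      # patient pays roughly the first day's cost
MEDICARE_SHARE_61_150 = 0.5  # hypothetical partial share for days 61-150

def part_a_medicare_pays(days: int) -> float:
    """Medicare's payment for a single hospital stay of the given length."""
    covered = min(days, 60) * DAILY_COST  # days 1-60 covered in full
    covered += max(0, min(days, 150) - 60) * DAILY_COST * MEDICARE_SHARE_61_150
    # Days beyond 150 were not covered at all.
    return max(0.0, covered - DEDUCTIBLE)

for stay in (5, 90, 200):
    print(f"{stay}-day stay -> Medicare pays ${part_a_medicare_pays(stay):,.2f}")
```

Note how the payment stops growing once a stay passes 150 days, which is why, as the text notes, the program offered no protection for long-term care.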


The Medicare insurance program followed the fee-for-service model, in which the government reimbursed hospitals and doctors for the "usual, customary, and reasonable" fees they charged for each service and did not manage any hospitals or provider networks. The reimbursement process generally functioned through "fiscal intermediaries," private companies that wrote checks on behalf of the government.

Title XIX established Medicaid, which provided public health coverage to poor families receiving Aid to Families with Dependent Children (AFDC). The program was administered through matching grants, in which federal and state governments both provided funds. States were left some discretion over administering and determining eligibility for the program.

When the Medicare program began in 1966, 19 million people enrolled. By 2015, 55 million people were enrolled in Part A and 51 million people in Part B. People could be dual-eligible for both programs, and by 2010, one in five Medicare beneficiaries were also receiving Medicaid. In 2012, 91 percent of doctors accepted new Medicare patients, while 71 percent accepted new Medicaid patients.[9][10]

Social Security Amendments of 1972

President Richard Nixon signed Public Law 92-603 on October 30, 1972, which amended Title XVIII of the Social Security Act. The law expanded Medicare coverage to disabled people who had been receiving Social Security benefits for at least two years, and to people with serious kidney disease.[11]

Upon signing the legislation, President Nixon stated that it "reaffirms and reinforces America's traditional efforts to assist those of our citizens who, through no fault of their own, are unable to help themselves. America has always cared for its aged poor, the blind, and the disabled--and this bill will move that concern to higher ground."[11]

The eligibility expansion in 1972 contributed to Medicare's increasing costs. In 1967, Medicare's yearly expenses were $4.2 billion; by 1973, they had risen to $9.3 billion. In 2002, the average yearly cost to cover an elderly beneficiary was $6,002, but the average yearly cost to cover a person with serious kidney disease was $41,696.[12]

Health Maintenance Organization Act of 1973

President Richard Nixon signed the Health Maintenance Organization Act on December 29, 1973. The law promoted a particular type of health insurance, the prepaid group practice service plan, or health maintenance organization (HMO), as opposed to the more traditional fee-for-service plan. The law promoted HMOs in several different ways:

● It removed existing state laws that prohibited HMOs.

● It offered federal subsidies to establish new "federally qualified" HMOs.

● It created standards for the federally qualified HMOs, such as requiring them to be nonprofit organizations, to cover a certain level of care, and to charge all members the same premium.

● It required employers who offered health insurance to offer an HMO option whenever available.

● It allowed HMOs to adopt a variety of organizational structures (for instance, the HMO could contract with a number of hospitals or practices rather than owning its own hospitals as some insurance companies did).[13]

President Nixon hoped that the act would signal the beginning of a comprehensive healthcare strategy. In his signing statement, he commented: "The signing of this act marks another milestone in this Administration's national health strategy. The major task of providing financial access to health services should be addressed in the next session of this Congress with the enactment of an appropriate and responsive national health insurance act."[14]

Consolidated Omnibus Budget Reconciliation Act of 1985

The Consolidated Omnibus Budget Reconciliation Act of 1985, or COBRA, was a law signed by President Ronald Reagan on April 7, 1986. Title X of the law amended the Internal Revenue Code to deny tax deductions to employers whose health plans did not allow employees to continue coverage. Under COBRA, employees could elect to continue healthcare coverage if they would otherwise lose it due to a "qualifying event," such as job loss, death or divorce of a family member, reduction in hours, or medical leave. The employee would generally have to pay both the employee and employer portions of the premium, and could continue to do so for 18 to 36 months, depending on the qualifying event.[15]

In 1993, the RAND Corporation reviewed existing studies and found that between 20 and 25 percent of people eligible for COBRA coverage actually purchased such coverage.[16]

Health Security Act of 1993

The Health Security Act of 1993, also known informally as Hillarycare, was a healthcare bill proposed by President Bill Clinton's administration, but which failed to pass Congress. Shortly after President Clinton was inaugurated in January 1993, he established a healthcare task force led by first lady Hillary Clinton. Paul Starr, a White House advisor who was part of the task force, later wrote that "there seemed to be a historic opportunity to complete what Democrats had long regarded as the chief unfinished business of the New Deal—national health insurance."[17]

The task force created a 1,342-page bill, which President Clinton unveiled before a joint session of Congress on September 22, 1993. He asked Sen. Robert Byrd, the presiding officer of the Senate, to introduce the bill as part of the budget reconciliation process, but Byrd refused. The bill was then introduced on November 20 by Rep. Richard Gephardt in the House, with 103 cosponsors, and Senator George Mitchell in the Senate, with 29 cosponsors. Although the Senate version eventually reached a floor debate, Congress entered recess without coming to a conclusion on the bill, and Senator Mitchell admitted in 1994 that he considered the bill dead.[18][19][20][21]

The bill proposed the following regulations:[22][23]

● Employers would be required to provide health insurance to their full-time employees. Small businesses would receive subsidies to help provide insurance.

● State-based cooperatives would sell approved health insurance plans to consumers and regulate insurance companies.

● The unemployed, self-employed, and part-time employees would receive subsidies to help them purchase insurance through the cooperatives.

● All Americans would be required to obtain health insurance. Any citizen who chose not to enroll could be enrolled automatically by the state cooperative and charged twice the normal premium.

● The federal government would set minimum standards all health insurance plans would be required to cover. Insurance companies would not be allowed to discriminate against pre-existing conditions.

● A National Health Board would be established to control healthcare spending, oversee the state cooperatives, and establish new regulations.

Health Insurance Portability and Accountability Act of 1996

President Clinton signed the Health Insurance Portability and Accountability Act (HIPAA) on August 21, 1996. He stated that "this Act will ensure the portability of health benefits when workers change or lose their jobs and will protect workers against discrimination by health plans based on their health status."[24]

HIPAA limited the extent to which insurance companies could exclude people with pre-existing conditions. For instance, pregnancy could no longer be excluded as a pre-existing condition. Employer-based insurance plans could not exclude employees or charge them higher premiums on the basis of pre-existing conditions or genetic predispositions. HIPAA also enabled workers to retain their health insurance after losing or changing jobs. The law required health insurance companies to extend coverage to workers they had been covering under COBRA and whose COBRA coverage had expired. These two provisions were intended to enable individuals to maintain health insurance coverage through various life events.[25]

HIPAA also established national standards for the privacy and security of electronic health information. Of the four sets of standards in the law, the two most well known were the Security Rule and the Privacy Rule. Under the Security Rule, health information that is stored electronically and could be used to identify a patient must be kept strictly confidential, and providers are legally responsible for protecting this information from unauthorized access. Under the Privacy Rule, patients retain full access to their health records and can restrict their disclosure and use. These were the most substantial provisions of HIPAA, and they came with complex compliance requirements. Most people's experience with the privacy requirements comes when they visit a doctor's office for the first time and are asked to review a HIPAA Notice of Privacy Practices and sign a consent form.[26][27][28]

States were given the option of either allowing the federal government to enforce HIPAA regulations in their state, or adopting and enforcing their own measures that would be at least as stringent as the ones outlined in the federal legislation. Kala Ladenheim of George Washington University Medical Center wrote, "the legislation is important because it creates a statutory framework for the federal government to use in collaborating with state governments to regulate insurance markets."[29]

Balanced Budget Act of 1997

President Clinton signed the Balanced Budget Act on August 5, 1997. One aspect of the omnibus legislation created Title XXI of the Social Security Act, also known as the State Children's Health Insurance Program or S-CHIP. The law provided block grants for states to offer health insurance to children who weren't previously eligible for Medicaid and whose families earned less than 200 percent of the federal poverty line. States could offer such coverage in three ways: expanding their existing Medicaid programs to cover more children, creating a new program to cover them, or using both Medicaid expansion and new programs.[30]

After S-CHIP had been in place for three years, states were required to return any unspent S-CHIP funds to the federal government. Almost half of all federal funds were returned, due to difficulties that states encountered in enrolling children in the program.


By 2009, 5 million children were enrolled in the program, while 7.5 million children remained uninsured.[30]

The law also instituted important changes to Medicare. The Tax Equity and Fiscal Responsibility Act of 1982 had given Medicare beneficiaries the option of enrolling in Medicare through private plans rather than through the traditional fee-for-service Medicare plan. The Balanced Budget Act of 1997 expanded and formalized this option, later known as Medicare Part C or Medicare Advantage. Under Part C, the federal government paid the private plans for each beneficiary accepted, amounting to 95 percent of the Medicare average cost per enrollee. The insurance plans were not allowed to choose which individuals could enroll, but they could choose which geographic areas to serve. They were required to offer all traditional Medicare benefits but could also offer additional benefits beyond traditional Medicare, such as vision and dental benefits.

By 2015, about 15 million Medicare beneficiaries—30 percent of all Medicare beneficiaries—were enrolled in Medicare Advantage plans. Studies have shown that Medicare Advantage HMOs (but not other types of Medicare Advantage plans) tended to perform better than traditional Medicare in providing preventive services and controlling overall costs; however, beneficiaries perceived traditional Medicare more favorably. Because of these perceptions, older and less healthy recipients tended to enroll in traditional Medicare, so the private Medicare Advantage plans may have actually received higher payments than they needed for their relatively healthy enrollees. The Balanced Budget Act also encouraged states to offer HMO options to Medicaid recipients.[9][31][32]

Medicare Prescription Drug, Improvement, and Modernization Act of 2003

While Medicare Part A covered hospitalization costs and Medicare Part B covered doctors' visits, Medicare had not originally covered any prescription drug costs. The price and importance of pharmaceutical drugs had increased sharply over the decades since Medicare's original passage, and by 2004 the average Medicare beneficiary was spending over $1,000 out of pocket each year on prescription drugs. When President George W. Bush was elected in 2000, he promised to provide prescription drug coverage to seniors. Congress designed a program, endorsed by the American Association of Retired Persons (AARP), that functioned through private insurers. President Bush signed the bill on December 8, 2003, and it took effect on January 1, 2006.


The law, sometimes known as Medicare Part D, established four tiers of coverage (the arithmetic is worked through in the sketch after the list).[33]

● Tier 1: The patient paid the first $250 per year in prescription drug costs ("deductible").

● Tier 2: Prescription drug costs between $250 and $2,250 per year were 75 percent paid for by Medicare and 25 percent paid for by the patient.

● Tier 3: After costs hit $2,250 for the year, the patient was responsible for 100 percent of costs (this was known as the "donut hole" of Medicare coverage—a period of no coverage between two coverage levels).

● Tier 4: After costs hit $5,100 for the year, Medicare would once again help cover 95 percent of costs.
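As a worked illustration of the tier arithmetic above, here is a minimal sketch that uses only the thresholds and percentages given in the list; it is not a real benefits calculator.

```python
# Cost-sharing under the original Part D tiers described above. The
# thresholds ($250 / $2,250 / $5,100) and percentages come from the list.

def part_d_patient_share(annual_drug_cost: float) -> float:
    """Patient's out-of-pocket total for a year of prescription drug costs."""
    oop = min(annual_drug_cost, 250)                     # Tier 1: deductible
    if annual_drug_cost > 250:                           # Tier 2: patient pays 25%
        oop += 0.25 * (min(annual_drug_cost, 2250) - 250)
    if annual_drug_cost > 2250:                          # Tier 3: "donut hole", 100%
        oop += min(annual_drug_cost, 5100) - 2250
    if annual_drug_cost > 5100:                          # Tier 4: patient pays 5%
        oop += 0.05 * (annual_drug_cost - 5100)
    return oop

for cost in (200, 1000, 3000, 6000):
    print(f"${cost} in yearly drug costs -> patient pays ${part_d_patient_share(cost):,.2f}")
```

The jump at $2,250, where the patient's marginal share snaps from 25 percent back to 100 percent, is exactly the coverage gap the text calls the donut hole.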

Enrollment in the coverage was voluntary. Medicare beneficiaries who chose to enroll could do so in three ways: by purchasing separate drug coverage, if they were already enrolled in the government plan for Parts A or B; by enrolling in a private Medicare plan; or through their former employer's retirement benefits program. Beneficiaries paid premiums to enroll in the program, which were higher for those with higher income.

At the time of its passage, the program was officially estimated to cost $400 billion for the first 10 years of operation. However, controversy arose when an actuary from the executive branch estimated the true cost to be $530 billion, yet kept his estimates secret from Congress until months after the bill was signed. But by 2014, the Congressional Budget Office concluded that Medicare Part D spending had proved lower than either estimate.[34][35][36]

By 2014, about 37 million Medicare beneficiaries were enrolled in the prescription drug program.[37]

In addition to establishing Medicare Part D, the law also introduced changes to Medicare Part B by charging higher premiums to higher-income beneficiaries (individuals with incomes over $85,000 and couples with incomes over $170,000). The law also granted both Medicare and non-Medicare recipients a tax exemption for health savings accounts (HSAs), which allow people to save money for out-of-pocket medical expenses.

Patient Protection and Affordable Care Act of 2010

The Patient Protection and Affordable Care Act, also known as Obamacare, was signed into law by President Barack Obama on March 23, 2010. The aim of the law was to expand health insurance coverage to more Americans through both individual health insurance exchanges and employer-provided plans. Minimum requirements of coverage were established, and both individual and employer mandates, enforced by tax penalties, were phased in over a period of years in order to achieve the goal of expanded coverage. Subsidies and tax credits were provided to individual consumers based on income level and dependents, and the law provided for an expansion of Medicaid to cover low-income childless adults. Small businesses were given tax credits based on the level of insurance offered to employees, as well.[38]

The law also specified ten essential benefits that plans created after the law's passage needed to include. Existing plans were grandfathered in, but by 2014, few of the grandfathered plans remained due to frequent changes to health insurance policies. The ten essential benefits outlined by the Affordable Care Act included the following:[39][40]

● Ambulatory patient services

● Emergency services

● Hospitalization

● Maternity and newborn care

● Mental health and substance abuse disorder services, including behavioral health treatment

● Prescription drugs

● Rehabilitative and habilitative services and devices

● Laboratory services

● Preventive and wellness services and chronic disease management

● Pediatric services, including oral and vision care

Most of the law's major provisions were implemented in 2014. Between 2013 and 2014, the number of uninsured individuals nationwide declined by 18.8 percent. Meanwhile, annual Medicaid enrollment increased by 9.5 percent. Nearly 10 million individuals purchased insurance through the new exchanges as of November 2015, about 35 percent of the total number of estimated potential enrollees. Numerous studies have been conducted to assess changes in insurance premiums since the passage of the ACA; some have found large increases while others have found little or no change.[41][42][43]

  


Environmental Policy

Written by Ellen van Bueren

Environmental policy is any measure by a government or corporation or other public or private organization regarding the effects of human activities on the environment, particularly those measures that are designed to prevent or reduce harmful effects of human activities on ecosystems.

Environmental policies are needed because environmental values are usually not considered in organizational decision making. There are two main reasons for that omission. First, environmental effects are economic externalities: polluters do not bear the consequences of their actions; the negative effects occur elsewhere or in the future. Second, natural resources are underpriced because they are often assumed to have infinite availability. Together, those factors result in what American ecologist Garrett Hardin in 1968 called "the tragedy of the commons." The pool of natural resources can be considered as a commons that everyone can use to their own benefit. For an individual, it is rational to use a common resource without considering its limitations, but that self-interested behaviour will lead to the depletion of the shared limited resource—and that is not in anyone's interest. Individuals do so nevertheless because they reap the benefits in the short term, but the community pays the costs of depletion in the long term. Since incentives for individuals to use the commons sustainably are weak, government has a role in the protection of the commons.

History of Environmental Policy Making

Public policies aimed at environmental protection date back to the Babylonians, who implemented the first known social hygiene laws, such as not sharing a bed or a cup with a diseased person. The earliest sewers were constructed in Mohenjo-daro (Indus, or Harappan, civilization) and in Rome (ancient Roman civilization), which date back some 4,500 and 2,700 years, respectively. Other civilizations implemented similar environmental laws. The city-states of ancient Greece created laws that governed forest harvesting some 2,300 years ago, and feudal European societies established hunting preserves, which limited game and timber harvesting to royalty, effectively preventing overexploitation, by 1000 CE. The city of Paris developed Europe's first large-scale sewer system during the 17th century. When the effects of industrialization and urbanization increased during the late 19th and early 20th centuries and threatened human health, governments developed additional rules and regulations for urban hygiene, sewage, sanitation, and housing, as well as the first laws devoted to protecting natural landscapes and wildlife (such as the creation of Yellowstone National Park as the world's first national park in 1872). Wealthy individuals and private foundations, such as the Sierra Club (founded 1892) and the National Audubon Society (founded 1905), also contributed to efforts to conserve natural resources and wildlife.

People became aware of the harmful effects of emissions and of the use of chemicals in industry and pesticides in agriculture during the 1950s and '60s. The emergence of Minamata disease in 1956 in Japan, which resulted from mercury discharges from nearby chemical companies, and the publication of Silent Spring (1962) by American biologist Rachel Carson, which highlighted the dangers of pollution, led to a greater public awareness of environmental issues and to detailed systems of regulations in many industrialized countries. In those regulations, governments forbade the use of hazardous substances or prescribed maximum emission levels of specific substances to ensure a minimum environmental quality. Such regulative systems, like the Clean Water and Clean Air acts in the United States, succeeded in effectively addressing point sources (i.e., any discernible discrete location or piece of equipment that discharges pollution), such as industrial plants and utilities, where the cause-and-effect relationship between the actors causing the negative environmental effect could be clearly established. Nevertheless, some environmental problems persisted, often because of the many nonpoint (diffuse) sources of pollution, such as exhaust from private automobiles and pesticide and fertilizer runoff from small farms, that contributed to air and water pollution. Individually, those small sources were not harmful, but the accumulation of their pollution exceeded the regulative minimum norms for environmental quality. Also, the increasing complexity of chains of cause and effect contributed to persistent problems. In the 1980s the effects of acid rain showed that the causes of environmental pollution could be separated geographically from its effects. Pollution problems of all types underscored the message that Earth's natural resources were being depleted.

From the late 1980s, sustainable development (i.e., the fostering of economic growth while preserving the quality of the environment for future generations) became a leading concept in environmental policy making. With nature and natural resources considered as economic drivers, environmental policy making was no longer the exclusive domain of government. Instead, private industry and nongovernmental organizations assumed greater responsibility for the environment. Also, the concept emphasized that individual people and their communities play a key role in the effective implementation of policies.

Over the years, a variety of principles have been developed to help policy makers. Examples of such guiding principles, some of which have acquired a legal basis in some countries, are the "polluter pays" principle, which makes polluters liable for the costs of environmental damage, and the precautionary principle, which states that an activity is not allowed when there is a chance that the consequences are irreversible.

Such straightforward guiding principles do not work in all situations. For example, some environmental challenges, such as global warming, illuminate the need to view Earth as an ecosystem consisting of various subsystems, which, once disrupted, can lead to rapid changes that are beyond human control. Getting polluters to pay or the sudden adoption of the precautionary principle by all countries would not necessarily roll back the damage already imparted to the biosphere, though it would stop future damage from occurring.

Since the early 1970s, environmental policies have shifted from end-of-pipe solutions to prevention and control. Such solutions rely on the mitigation of negative effects. In addition, if a negative effect was unavoidable, it could be compensated for, for example by investing in nature in places other than where the damage was caused.

A third solution, which develops policies that focus on adapting the living environment to the change, is also possible. More specifically, measures that strengthen an ecosystem's ecological resilience (i.e., an ecosystem's ability to maintain its normal patterns of nutrient cycling and biomass production), combined with measures that emphasize prevention and mitigation, have been used. One such example is in Curitiba, Brazil, a city where some districts flood each year. The residents of flood-prone districts were relocated to higher and drier places, and their former living areas were transformed into parks that could be flooded without disrupting city life.

Numerous instruments have been developed to influence the behaviour of actors who contribute to environmental problems. Traditionally, public policy theories have focused on regulation, financial incentives, and information as the tools of government. However, new policy instruments such as performance requirements and tradable permits have been used.

Regulation


Regulation is used to impose minimum requirements for environmental quality. Such interventions aim to encourage or discourage specific activities and their effects, involving particular emissions, particular inputs into the environment (such as specific hazardous substances), ambient concentrations of chemicals, risks and damages, and exposure. Often, permits have to be acquired for those activities, and the permits have to be renewed periodically. In many cases, local and regional governments are the issuing and controlling authorities. However, more-specialized or potentially hazardous activities, such as industrial plants treating dangerous chemical substances or nuclear power stations using radioactive fuel rods, are more likely to be controlled by a federal or national authority.

Regulation is an effective means to prescribe and control behaviour. Detailed environmental regulations have resulted in a considerable improvement in the quality of air, water, and land since the early 1970s. The strengths of regulation are that it is generally binding—it includes all actors who want to undertake an activity described in the regulation—and it treats them in the same framework.

Regulations are also rigid: they are difficult to change. That can be considered as a strength, since rigidity ensures that regulations will not change too suddenly. However, rigidity can also be considered a weakness, because it slows down innovation, as actors seek to stay within the letter of the law rather than creating new technologies, such as more-efficient emission scrubbers on smokestacks that would remove more pollution than what the regulation mandates. When regulations demand standards that are difficult or impossible to meet—because of a lack of knowledge, skills, or finances on the part of the actors or mismanagement by policymakers—regulations will not be effective.

One common improvement in environmental regulation made since the 1970s has been the development of performance requirements, which allow actors to determine their own course of action to meet the standard. For example, they are not required to purchase a particular piece of equipment to meet an emissions standard. They can do it another way, such as developing a technology or process that reduces emissions. The advantage of performance requirements is that actors addressed by the regulation are encouraged to innovate in order to meet the requirements. Despite that advantage, performance requirements cannot push actors who lack incentives to achieve more than the minimum requirements.

Financial incentives

Governments can decide to stimulate behavioural change by giving positive or negative financial incentives, for example through subsidies, tax discounts, or fines and levies. Such incentives can play an important role in boosting innovation and in the diffusion and adoption of innovations. For example, in Germany the widespread subsidizing of solar energy systems for private homeowners increased the large-scale adoption of photovoltaic (PV) panels. Financial incentives or disincentives can also stimulate professional actors to change. A potential drawback of financial incentives is that they distort the market. When not used for a limited period, they can make recipients dependent upon the subsidy. A final drawback is that subsidies are expensive instruments, especially when they are open-ended.

Environmental reporting and ecolabeling

There are several instruments that aim to inform decision makers about the environmental effects of their actions. Decisions are usually based on a cost-benefit analysis in which environmental costs and benefits are not included. The environmental impact assessment (EIA) is an instrument that helps public decision makers decide on initiatives with a certain environmental impact, such as the construction of roads and industrial plants. The EIA, which has become a legal requirement in many countries, requires that the environmental effects of a project, such as the building of a dam or shopping mall, be studied and that the actors be informed of how to mitigate environmental damage and what compensation they could receive for doing so. EIAs allow decision makers to include environmental information in a cost-benefit analysis. Although EIAs cannot by themselves stop initiatives from taking place, they can reduce their negative environmental impacts.

Environmental management systems are comprehensive approaches that help organizations reduce their use of natural resources while reducing costs and, when certified, contributing to a positive image. The most commonly known standard for such systems is ISO 14000, a family of standards first issued by the International Organization for Standardization (ISO) in 1996. Such standards help an organization control its environmental impact, formulate and monitor environmental objectives, and demonstrate that they have been achieved.

Ecolabels and certificates applied to specific products and services inform consumers about their environmental performance. Sometimes governments require such labels and certificates, such as the "EU Ecolabel" marking in Europe, which certifies that a product has met minimum requirements for consumer safety, health, and environmental friendliness. To push organizations to develop products and services that perform beyond those minimum requirements, there are also labels that specifically express the environmental friendliness of the product or service. For example, the Energy Star rating in the United States indicates the energy performance level of household appliances. In addition, ecolabels are often applied in the food industry and for energy performance in buildings (LEED standards). The underlying assumption of ecolabeling is that informed consumers buying environmentally responsible products will stimulate industry to innovate and produce cleaner products.

Global policy agreements

From the early 1970s, the United Nations (UN) has provided the main forum for international negotiations and agreements on environmental policies and objectives. The 1972 Stockholm conference was the first international conference on environmental issues and was followed by the United Nations Conference on Environment and Development (UNCED) summits in Rio de Janeiro in 1992 and in Johannesburg in 2002. The UN also hosted special conferences on climate change, such as those of 1997 in Kyoto and 2009 in Copenhagen.

Those conferences and summits responded to the global character of some of the most challenging environmental problems, which would require international cooperation to solve. Those conferences were effective in setting an international agenda for regional and national environmental policy making that resulted in treaties and protocols, also known as "hard law," and in nonbinding resolutions, statements, and declarations, or "soft law." Whereas the 1992 Rio conference agreement was soft law, the Kyoto Protocol was hard law, with clear-cut reduction targets for greenhouse gas emissions for regions and countries. Nation-states, in their efforts to meet the targets, could make use of three so-called flexibility mechanisms designed to lower the costs of compliance.

Joint implementation, the first mechanism, allowed countries to invest in lowering emissions in other countries that had ratified the Kyoto Protocol and, thus, had a reduction target to meet. For industrialized, developed countries that had already invested in emission reductions in their own economies, it was cheaper to invest in emission reductions in other countries with economies in transition, where the same investment would lead to greater reductions. In other words, the investing country could get credit for helping a country with an economy in transition to lower its emissions.

Clean development, the second mechanism, allowed industrialized countries that had ratified the protocol to meet their targets in any country where it was cheapest to invest, that is, in developing countries, even if that country did not ratify the protocol. That mechanism is not undisputed, since it involves questions of intervention in the economies of developing countries, which may have an impact on the economic development of those countries. To prevent industrialized countries from avoiding reductions of their own emissions, the mechanism could only be used as a supplement to domestic reductions; however, no definition of such supplemental action was given, which led some countries to achieve 50 percent of their reduction target through that mechanism.

The third mechanism, carbon-emission trading (which is also known as "cap and trade"), is a market-based instrument and can be applied in the form of voluntary markets or in a mandatory framework. Most trading schemes are based on a cap-and-trade model. A central authority puts a cap on the overall carbon emissions allowed in a country or region. Within that cap, emission rights are allocated to the polluters, and emissions produced beyond those rights are penalized. The idea is that polluters choose between investing in emission reductions or buying emission permits, whichever is cheaper (the sketch below illustrates this choice). By lowering the cap over time, total emission reductions can be achieved. The trading of permits will ensure that those reductions are achieved at the lowest cost.
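To make the polluter's choice concrete, here is a minimal sketch with entirely hypothetical firms, tonnages, abatement costs, and permit price; real schemes discover the permit price through trading, and firms can mix both options rather than choosing all or nothing.

```python
# Toy model of the cap-and-trade choice described above: each (hypothetical)
# firm compares its in-house abatement cost per ton with the market permit
# price and takes the cheaper route. All numbers are made up for illustration.

PERMIT_PRICE = 25.0  # assumed market price per ton of CO2

# firm name -> (tons emitted beyond its allocated rights, abatement cost per ton)
firms = {
    "steel_mill":  (1_000, 40.0),   # abating costs more than a permit
    "power_plant": (5_000, 10.0),   # abating is cheaper than a permit
}

for name, (excess_tons, abatement_cost) in firms.items():
    if abatement_cost < PERMIT_PRICE:
        choice, cost = "abate in-house", excess_tons * abatement_cost
    else:
        choice, cost = "buy permits", excess_tons * PERMIT_PRICE
    print(f"{name}: {choice} for {excess_tons} tons, total cost ${cost:,.0f}")
```

Under this logic, reductions concentrate wherever they are cheapest (the hypothetical power plant here), which is why the text says permit trading achieves a given total reduction at the lowest cost.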

The instrument of tradable permits has been applied to other emissions. The first emission-trading schemes date back to 1974, when the United States experimented with emission trading as part of the Clean Air Act.
