

Autonomous Military Robotics:

Risk, Ethics, and Design

Prepared for: US Department of the Navy, Office of Naval Research

Prepared by: Patrick Lin, Ph.D.

George Bekey, Ph.D.

Keith Abney, M.A.

Ethics & Emerging Technologies Group at

California State Polytechnic University, San Luis Obispo

Prepared on: December 20, 2008

This work is sponsored by the Department of the Navy, Office of Naval Research, under award # N000140711152.


Table of Contents

Preface

1. Introduction
1.1. Opening Remarks
1.2. Definitions
1.3. Market Forces & Considerations
1.4. Report Overview

2. Military Robotics
2.1. Ground Robots
2.2. Aerial Robots
2.3. Marine Robots
2.4. Space Robots
2.5. Immobile/Fixed Robots
2.6. Robot Software Issues
2.7. Ethical Implications: A Preview
2.8. Future Scenarios

3. Programming Morality
3.1. From Operational to Functional Morality
3.2. Overview: Top-Down and Bottom-Up Approaches
3.3. Top-Down Approaches
3.4. Bottom-Up Approaches
3.5. Supra-Rational Faculties
3.6. Hybrid Systems
3.7. First Conclusions: How Best to Program Ethical Robots

4. The Laws of War and Rules of Engagement
4.1. Coercion and the LOW
4.2. Just War Theory and the LOW
4.3. Just War Theory: Jus ad Bellum
4.4. Just War Theory: Jus in Bello
4.5. Rules of Engagement and the Laws of War
4.6. Just War Theory: Jus post Bellum
4.7. First Conclusions: Relevance to Robots

5. Law and Responsibility
5.1. Robots as Legal Quasi-Agents
5.2. Agents, Quasi-Agents, and Diminished Responsibility
5.3. Crime, Punishment, and Personhood

6. Technology Risk Assessment Framework
6.1. Acceptable Risk Factor: Consent
6.2. Acceptable Risk Factor: Informed Consent
6.3. Acceptable Risk Factor: The Affected Population
6.4. Acceptable Risk Factor: Seriousness and Probability
6.5. Acceptable Risk Factor: Who Determines Acceptable Risk?
6.6. Other Risks

7. Robot Ethics: The Issues
7.1. Legal Challenges
7.2. Just War Challenges
7.3. Technical Challenges
7.4. Human-Robot Challenges
7.5. Societal Challenges
7.6. Other and Future Challenges
7.7. Further and Related Investigations Needed

8. Conclusions
9. References
A. Appendix: Definitions
A.1 Robot
A.2 Autonomy
A.3 Ethics
B. Appendix: Contacts

Copyright 2008 Lin, Abney, and Bekey. All trademarks, logos, and images are the property of their respective owners.


    Preface

This report is designed as a preliminary investigation into the risk and ethics issues related to autonomous military systems, with a particular focus on battlefield robotics as perhaps the most controversial area. It is intended to help inform policymakers, military personnel, and scientists, as well as the broader public who collectively influence such developments. Our goal is to raise the issues that need to be considered in responsibly introducing advanced technologies into the battlefield and, eventually, into society. With history as a guide, we know that foresight is critical both to mitigate undesirable effects and to best promote or leverage the benefits of technology.

In this report, we will present: the presumptive case for the use of autonomous military robotics; the need to address risk and ethics in the field; the current and predicted state of military robotics; programming approaches as well as relevant ethical theories and considerations (including the Laws of War and Rules of Engagement); a framework for technology risk assessment; ethical and social issues, both near- and far-term; and recommendations for future work.

This work is sponsored by the US Department of the Navy, Office of Naval Research, under Award # N000140711152, which we thank for its support of and interest in this important investigation. We also thank California State Polytechnic University (Cal Poly, San Luis Obispo) for its support, particularly the College of Liberal Arts and the College of Engineering.

We are indebted to Colin Allen (Indiana Univ.), Peter Asaro (Rutgers Univ.), and Wendell Wallach (Yale) for their counsel and contributions, as well as to a number of colleagues for their helpful discussions: Ron Arkin (Georgia Tech), John Canning (Naval Surface Warfare Center), Ken Goldberg (IEEE Robotics and Automation Society; UC Berkeley), Patrick Hew (Defence Science and Technology Organisation, Australia), George R. Lucas, Jr. (US Naval Academy), Frank Chongwoo Park (IEEE Robotics and Automation Society; Seoul National Univ.), Lt. Col. Gary Sargent (US Army Special Forces; Cal Poly), Noel Sharkey (Univ. of Sheffield, UK), Rob Sparrow (Monash Univ., Australia), and others. We also thank the organizations mentioned herein for use of their respective images. Finally, we thank our families and our nation's military for their service and sacrifice.

Patrick Lin
Keith Abney
George Bekey

December 2008


    1. Introduction

No catalogue of horrors ever kept men from war. Before the war you always think that it's not you that dies. But you will die, brother, if you go to it long enough.

Ernest Hemingway [1935, p. 156]

Imagine the face of warfare with autonomous robotics: Instead of our soldiers returning home in flag-draped caskets to heartbroken families, autonomous robots (mobile machines that can make decisions, such as to fire upon a target, without human intervention) can replace the human soldier in an increasing range of dangerous missions: from tunneling through dark caves in search of terrorists, to securing urban streets rife with sniper fire, to patrolling the skies and waterways where there is little cover from attacks, to clearing roads and seas of improvised explosive devices (IEDs), to surveying damage from biochemical weapons, to guarding borders and buildings, to controlling potentially hostile crowds, and even to serving on the infantry front lines.

These robots would be "smart" enough to make decisions that only humans now can; and as conflicts increase in tempo and require much quicker information processing and responses, robots have a distinct advantage over the limited and fallible cognitive capabilities that we Homo sapiens have. Not only would robots expand the battlespace over difficult, larger areas of terrain, but they also represent a significant force multiplier, each effectively doing the work of many human soldiers, while immune to sleep deprivation, fatigue, low morale, perceptual and communication challenges in the "fog of war," and other performance-hindering conditions.

But the presumptive case for deploying robots on the battlefield is about more than saving human lives or superior efficiency and effectiveness, though saving lives and clear-headed action during frenetic conflicts are significant issues. Robots, further, would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms, unlawful actions that carry a significant political cost. Indeed, robots may act as objective, unblinking observers on the battlefield, reporting any unethical behavior back to command; their mere presence as such would discourage all-too-human atrocities in the first place.


Technology, however, is a double-edged sword with both benefits and risks, critics and advocates; and autonomous military robotics is no exception, no matter how compelling the case may be to pursue such research. The worries include: where responsibility would fall in cases of unintended or unlawful harm, which could range from the manufacturer to the field commander to even the machine itself; the possibility of serious malfunction and robots gone wild; capturing and hacking of military robots that are then unleashed against us; lowering the threshold for entering conflicts and wars, since fewer US military lives would then be at stake; the effect of such robots on squad cohesion, e.g., if robots recorded and reported back the soldiers' every action; refusing an otherwise legitimate order; and other possible harms.

We will evaluate these and other concerns within our report; and the remainder of this section will discuss the driving forces in autonomous military robotics and the need for robot ethics, as well as provide an overview of the report. Before that discussion, we should make a few introductory notes and definitions, as follows.

1.1 Opening Remarks

First, in this investigation, we are not concerned with the question of whether it is even technically possible to make a perfectly ethical robot, i.e., one that makes the "right" decision in every case, or even most cases. Following Arkin, we agree that an ethically infallible machine ought not to be the goal now (if it is even possible); rather, our goal should be more practical and immediate: to design a machine that performs better than humans do on the battlefield, particularly with respect to reducing unlawful behavior or war crimes [Arkin, 2007]. Considering the number of incidents of unlawful behavior (and by "unlawful" we mean a violation of the various Laws of War (LOW) or Rules of Engagement (ROE), which we will discuss later in more detail), this appears to be a low standard to satisfy, though a profoundly important hurdle to clear. To that end, scientists and engineers need not first solve the daunting task of creating a truly "ethical" robot, at least in the foreseeable future; rather, it seems that they need only program a robot to act in compliance with the LOW and ROE (though this may not be as straightforward and simple as it first appears) or to act ethically in the specific situations in which the robot is to be deployed.

Second, we should note that the purpose of this report is not to encumber research on autonomous military robotics, but rather to help responsibly guide it. That there should be two faces to technology, benefits and risks, is not surprising, as history shows, and is not by itself an argument against that technology.1 But ignoring those risks, or at least only reactively addressing them and

1 Biotechnology, for instance, promises to reduce world hunger by promoting greater and more nutritious agricultural and livestock yields; yet continuing concerns about the possible dissemination of bioengineered seeds (or "Frankenfoods") into the wild, displacing native plants and crops, have prompted the industry to move more cautiously [e.g., Thompson, 2007]. Even Internet technologies, as valuable as they have been in connecting us to information, social networks, etc., and in making new ways of life possible, reveal a darker world of online scams, privacy violations, piracy, viruses, and other ills; yet no one suggests that we should do away with cyberspace [e.g., Weckert, 2007].


waiting for public reaction, seems to be unwise, given that it can lead (and, in the case of biotech foods, has led) to a backlash that stalls forward progress.

That said, it is surprising to note that one of the most comprehensive and recent reports on military robotics, Unmanned Systems Roadmap 2007-2032, does not mention the word "ethics" once, nor risks raised by robotics, with the exception of one sentence that merely acknowledges that "privacy issues [have been] raised in some quarters" without even discussing said issues [US Department of Defense, 2007, p. 48]. While this omission may be understandable from a public-relations standpoint, again it seems shortsighted given lessons in technology ethics, especially from our recent past. Our report, then, is designed to address that gap, proactively and objectively engaging policymakers and the public to head off a potential backlash that serves no one's interests.

Third, while this report focuses on issues related to autonomous military robotics, the discussion may apply equally well to, and overlap with, issues related to autonomous military systems, i.e., computer networks. Further, we are focusing on battlefield or lethal applications, as opposed to robotics in manufacturing or medicine, even if they are supported by military programs (such as the Battlefield Extraction-Assist Robot, or BEAR, that carries injured soldiers from combat zones), for several reasons, as follows. The most contentious military robots will be the weaponized ones: "Weaponized unmanned systems is a highly controversial issue that will require a patient crawl-walk-run approach as each application's reliability and performance is proved" [US Department of Defense, 2007, p. 54]. Their deployment is inherently about human life and death, both intended and unintended, so they immediately raise serious concerns related to ethics (e.g., does just war theory or the LOW/ROE allow for deployment of autonomous fighting systems in the first place?) as well as risk (e.g., malfunctions and emergent, unexpected behavior) that demand greater attention than other robotics applications.

Also, though a relatively small number of military personnel is ever exposed on the battlefield, loss of life and property during armed conflict has nontrivial political costs, never mind environmental and economic costs, especially if collateral or unintended damage is inflicted, and even more so if it results from abusive, unlawful behavior by our own soldiers. How we prosecute a war or conflict receives particular scrutiny from the media and public, whose opinions influence military and foreign policy, even if those opinions are disproportionately drawn from events on the battlefield rather than from the many more developments outside the military theater. Therefore, though autonomous battlefield or weaponized robots may be years away and account for only one segment of the entire military-robotics population, there is much practical value in sorting through their associated issues sooner rather than later.


Fourth and finally, while our investigation here is supported by the US Department of the Navy, Office of Naval Research, it may apply equally well to other branches of military service, all of which are also developing robotics for their respective needs. The range of robotics deployed or under consideration by the Navy, however, is exceptionally broad, with airborne, sea-surface, underwater, and ground applications.2 Thus, it is particularly fitting for the Department of the Navy to support one of the first dedicated investigations on the risk and ethical issues arising from the use of autonomous military robotics.

    1.2 Definitions

To the extent that there are no standard, universally accepted definitions of some of the key terms we employ in this report, we will need to stipulate working definitions here, since it is important that we ensure we have the same basic understanding of those terms at the outset. And so that we do not become mired in debating precise definitions here, we provide a detailed discussion or justification for our definitions in Appendix A: Definitions.

Robot (particularly in a military context). A powered machine that (1) senses, (2) thinks (in a deliberative, non-mechanical sense), and (3) acts.

Most robots are and will be mobile, such as vehicles, but this is not an essential feature; however, some degree of mobility is required, e.g., a fixed sentry robot with swiveling turrets or a stationary industrial robot with movable arms. Most do not and will not carry human operators, but this too is not an essential feature; the distinction becomes even more blurred as robotic features are integrated with the body. Robots can be operated semi- or fully autonomously but cannot depend entirely on human control: for instance, teleoperated drones such as the Air Force's Predator unmanned aerial vehicle would qualify as robots to the extent that they make some decisions on their own, such as navigation, but a child's toy car tethered to a remote control is not a robot, since its control depends entirely on the operator. Robots can be expendable or recoverable, and can carry a lethal or nonlethal payload. And robots can be considered as agents, i.e., they have the capacity to act in a world, and some even may be moral agents, as discussed in the next definition.
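The sense-think-act criteria in the definition above can be pictured as a minimal control loop. The following Python sketch is purely illustrative: all function names, the toy percept, and the toy actions are our own invention, not drawn from any robotics library or from the report itself.

```python
# Illustrative sense-think-act loop for the working definition of a robot.
# All names here are hypothetical; a real system would use actual sensor
# and actuator interfaces.

def sense(world):
    """(1) Sense: gather observations from the environment."""
    return {"obstacle_ahead": world.get("obstacle_ahead", False)}

def think(percept):
    """(2) Think: deliberate over percepts to choose an action,
    rather than mechanically mapping input to output."""
    return "turn_left" if percept["obstacle_ahead"] else "move_forward"

def act(action, world):
    """(3) Act: change the environment (here, just record the action)."""
    world["last_action"] = action
    return world

def control_step(world):
    """One pass through the sense-think-act cycle."""
    return act(think(sense(world)), world)

world = control_step({"obstacle_ahead": True})
print(world["last_action"])  # turn_left
```

On this picture, a tethered toy car fails the definition because step (2) happens entirely in the human operator, while a teleoperated Predator partly satisfies it by running some of its own "think" steps, such as navigation.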

Autonomy (in machines). The capacity to operate in the real-world environment without any form of external control, once the machine is activated and at least in some areas of operation, for extended periods of time.

2 The only applications not covered by the Department of the Navy appear to be underground- and space-based, including suborbital missions, which may understandably fall outside their purview.


This is to say that we are herein not interested in issues traditionally linked to autonomy that require a more robust and precise definition, such as the assignment of political rights and moral responsibility (as different from legal responsibility), or even more technical issues related to free will, determinism, personhood, and whether machines can even "think", as important as those issues are in philosophy, law, and ethics. But in the interest of simplicity, we will stipulate this definition, which seems acceptable in a discussion limited to human-created machines. This term also helps elucidate the second criterion of "thinking" in our working definition of a robot. Autonomy is also related to the concept of moral agency, i.e., the ability to make moral judgments and choose one's actions accordingly.

Ethics (construed broadly for this report). More than normative issues, i.e., questions about what we should or ought to do, but also general concerns related to social, political, and cultural impact, as well as risk, arising from the use of robotics.

As a result, we will cover all these areas in our report, not just philosophical questions or ethical theory, with the goal of providing some relevant, if not actionable, insights at this preliminary stage. We will also discuss relevant ethical theories in more detail in section 3 (though this is not meant to be a comprehensive treatment of the subject).

1.3 Market Forces and Considerations

Several industry trends and recent developments, including high-profile failures of semi-autonomous systems that are perhaps a harbinger of challenges with more advanced systems, highlight the need for a technology risk assessment, as well as a broader study of other ethical and social issues related to the field. In the following, we will briefly discuss seven primary market forces that are driving the development of military robotics as well as the need for a guiding ethics; these roughly map to what have been called "push" (technology) and "pull" (social and cultural) factors [US Department of Defense, 2007, p. 44].

1. Compelling military utility. US defense organizations are attracted to the use of robots for a range of benefits, some of which we have mentioned above. A primary reason is to replace us less-durable humans in "dull, dirty, and dangerous" jobs [US Department of Defense, 2007, p. 19]. This includes: extended reconnaissance missions, which stretch the limits of human endurance to its breaking point; environmental sampling after a nuclear or biochemical attack, which had previously led to deaths and long-term effects on the surveying teams; and neutralizing IEDs, which have caused over 40% of US casualties in Iraq since 2003 [Iraq Coalition Casualty Count, 2008]. While official statistics are difficult to locate, news organizations report


that the US has deployed over 5,000 robots in Iraq and Afghanistan, which had neutralized 10,000 IEDs by 2007 [CBS, 2007].

As also mentioned above, military robots may be more discriminating, efficient, and effective. Their dispassionate and detached approach to their work could significantly reduce the instances of unethical behavior in wartime, abuses that negatively color the US prosecution of a conflict, no matter how just the initial reasons to enter the conflict are, and carry a high political cost.

2. US Congressional deadlines. Clearly, there is a tremendous advantage to employing robots on the battlefield, and the US government recognizes this. Two key Congressional mandates are driving the use of military robotics: by 2010, one-third of all operational deep-strike aircraft must be unmanned, and by 2015, one-third of all ground combat vehicles must be unmanned [National Defense Authorization Act, 2000]. Most, if not all, of the robotics in use and under development are semi-autonomous at best; and though the technology to (responsibly) create fully autonomous robots is near but not quite in hand, we would expect the US Department of Defense to adopt the same sensible crawl-walk-run approach as with weaponized systems, given the serious inherent risks.

Nonetheless, these deadlines apply increasing pressure to develop and deploy robotics, including autonomous vehicles; yet a rush to market increases the risk of inadequate design or programming. Worse, without a sustained and significant effort to build ethical controls into autonomous systems, or even to discuss the relevant areas of ethics and risk, there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives. (This is related to the "first-generation problem" we discuss in sections 6 and 7: that we won't know exactly what kind of errors and mistaken harms autonomous robots will commit until they have already done so.)

3. Continuing unethical battlefield conduct. Beyond popular news reports and images of purportedly unethical behavior by human soldiers, the US Army Surgeon General's Office has surveyed US troops in Iraq on issues in battlefield ethics and discovered worrisome results. From its summary of findings, among other statistics: "Less than half of Soldiers and Marines believed that non-combatants should be treated with respect and dignity and well over a third believed that torture should be allowed to save the life of a fellow team member. About 10% of Soldiers and Marines reported mistreating an Iraqi non-combatant when it wasn't necessary... Less than half of Soldiers and Marines would report a team member for unethical behavior... Although reporting ethical training, nearly a third of Soldiers and Marines reported encountering ethical situations in Iraq in which they didn't know how to respond" [US Army Surgeon General's Office, 2006]. The most recent survey by the same organization reported similar results [US Army Surgeon General's Office, 2008].


Wartime atrocities have occurred since the beginning of human history, so we are not operating under the illusion that they can be eliminated altogether (nor that armed conflicts can be eliminated either, at least in the foreseeable future). However, to the extent that military robots can considerably reduce unethical conduct on the battlefield, greatly reducing human and political costs, there is a compelling reason to pursue their development, as well as to study their capacity to act ethically.

4. Military-robotics failures. More than theoretical problems, military robotics have already failed on the battlefield, creating concerns with their deployment (and perhaps even more concern for more advanced, complicated systems) that ought to be addressed before speculation, incomplete information, and hype fill the gap in public dialogue.

In April 2008, several TALON SWORDS units, mobile robots armed with machine guns, were reported to be grounded in Iraq for reasons not fully disclosed: early reports claimed the robots, without being commanded to, trained their guns on "friendly" soldiers [e.g., Page, 2008], while later reports denied this account but admitted there had been malfunctions during the development and testing phase prior to deployment [e.g., Sofge, 2008]. The full story does not appear to have yet emerged, but either way, the incident underscores the public's anxiety and the military's sensitivity regarding the use of robotics on the battlefield (also see "Public perceptions" below).

Further, it is not implausible to suggest that these robots may fail, because it has already happened elsewhere: in October 2007, a semi-autonomous robotic cannon deployed by the South African army malfunctioned, killing nine "friendly" soldiers and wounding 14 others [e.g., Shachtman, 2007]. Communication failures and errors have been blamed for several unmanned aerial vehicle (UAV) crashes, from those owned by the Sri Lanka Air Force to the US Border Patrol [e.g., BBC, 2005; National Transportation Safety Board, 2007]. Computer-related technology in general is especially susceptible to malfunctions and bugs, given its complexity, even after many generations of a product cycle; thus, it is reasonable to expect similar challenges with robotics.

5. Related civilian-systems failures. On a similar technology path as autonomous robots, civilian computer systems have failed and raised worries that can carry over to military applications. For instance, such civilian systems have been blamed for massive power outages: in early 2008, Florida suffered through massive blackouts across the entire state, as utility computer systems automatically shut off and rerouted power after just a small fire caused by a failed switch at one electrical substation [e.g., Padgett, 2008]; and in the summer of 2003, a single fallen tree triggered a tsunami of cascading computer-initiated blackouts that affected tens of millions of


customers for days and weeks across the eastern US and Canada, leaving practically no time for human intervention to fix what should have been a simple problem of stopping the disastrous chain reaction [e.g., US Department of Energy, 2004]. Thus, it is a concern that we also may not be able to halt some (potentially fatal) chain of events caused by autonomous military systems that process information, and can act, at speeds incomprehensible to us, e.g., with high-speed unmanned aerial vehicles.

Further, civilian robotics are becoming more pervasive. Never mind seemingly harmless entertainment robots; some major cities (e.g., Atlanta, London, Paris, Copenhagen) already boast driverless transportation systems, again creating potential worries and ethical dilemmas (e.g., bringing to life the famous "trolley problem" thought experiment in philosophy: should a fast-moving train divert itself to another track in order to kill only one innocent person, or continue forward to kill the five on its current path?). So there can be lessons for military robotics that can be transferred from civilian robotics and automated decision-making, and vice versa. Also, as robots become more pervasive in the public marketplace (they are already abundant in manufacturing and other industries), the broader public will become more aware of the risk and ethical issues associated with such innovations, concerns that inevitably will carry over to the military's use.
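The trolley dilemma mentioned above can be made concrete in a few lines of code; the framing below is our own illustration, not a proposal or anything drawn from a fielded system. The point is that any automated controller must encode some answer to the dilemma, and a purely casualty-minimizing rule commits the designer to one contested ethical position by construction.

```python
# Toy encoding of the trolley dilemma (hypothetical, for illustration only):
# a casualty-minimizing controller decides whether to divert.

def divert_decision(casualties_on_current_track, casualties_on_siding):
    """Return True if the train should divert to the siding.
    Encodes a simple utilitarian rule: minimize expected casualties."""
    return casualties_on_siding < casualties_on_current_track

# Five people on the current track, one on the siding: the rule diverts,
# actively choosing to kill the one.
print(divert_decision(5, 1))  # True
```

A deontological designer might reject this rule outright (never actively divert onto a person), which is precisely the kind of disagreement that automated decision-making forces into the open.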

6. Complexity and unpredictability. Perhaps robot ethics has not received the attention it needs, at least in the US, given a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when computers were simpler and their programs could be written and understood by a single person. Now, programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways. (And even straightforward, simple rules such as Asimov's Laws of Robotics can create unexpected dilemmas [e.g., Asimov, 1950].) Furthermore, increasing complexity may lead to emergent behaviors, i.e., behaviors not programmed but arising out of sheer complexity [e.g., Kurzweil, 1999, 2005].

Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer's part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather

    A u t o n o m o u s M i l i t a r y R o b o t i c s : R i s k , E t h i c s , a n d D e s i g n

    Copyright2008Lin,Abney,andBekey. Alltrademarks,logosandimagesarethepropertyoftheirrespectiveowners.

  • 8/14/2019 AutonomousMilitaryRobotics:

    13/112

than the carefully structured domain of a factory. (We will discuss machine learning further in sections 2 and 3.)
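The claim that even simple, individually sensible rules can interact in unexpected ways can be sketched in a few lines. The scenario, rule set, and resolver below are entirely our own invention for illustration; no fielded system works this way. Two rules that each seem reasonable in isolation prescribe contradictory actions in the same situation, and a naive resolver's output then depends on an incidental detail (rule ordering) rather than on any deliberate ethical judgment.

```python
# Toy illustration (hypothetical rules, not from any fielded system) of how
# individually sensible rules can interact unpredictably once combined.

# Each rule: (name, condition on the state, prescribed action)
RULES = [
    ("protect_humans", lambda s: s["human_in_crossfire"], "hold_fire"),
    ("obey_orders",    lambda s: s["ordered_to_fire"],    "fire"),
]

def decide(state, rules):
    """Naive resolver: the first matching rule wins, so the outcome
    depends on rule ordering, not on any deeper ethical judgment."""
    for name, condition, action in rules:
        if condition(state):
            return name, action
    return None, "do_nothing"

# Both rules match this state, and they disagree:
state = {"human_in_crossfire": True, "ordered_to_fire": True}
print(decide(state, RULES))                   # ('protect_humans', 'hold_fire')
print(decide(state, list(reversed(RULES))))   # ('obey_orders', 'fire')
```

Multiply this by millions of lines of code written by teams of programmers, and the source of the "no individual can predict the effect of a given command" worry becomes apparent.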

7. Public perceptions. From Asimov's science-fiction novels to Hollywood movies such as Wall-E, Iron Man, Transformers, Blade Runner, Star Wars, Terminator, Robocop, 2001: A Space Odyssey, and I, Robot (to name only a few, from the iconic to the recently released), robots have captured the global public's imagination for decades now. But in nearly every one of those works, the use of robots in society is in tension with ethics and even the survival of humankind. The public, then, is already sensitive to the risks posed by robots, whether or not those concerns are actually justified or plausible, to a degree unprecedented in science and technology. Now, technical advances in robotics are catching up to literary and theatrical accounts, so the seeds of worry that have long been planted in the public consciousness will grow into close scrutiny of the robotics industry with respect to those ethical issues; consider, e.g., the book Love and Sex with Robots, published late last year, which reasonably anticipates human-robot relationships [Levy, 2007].

Given such investments, questions, events, and predictions, it is no wonder that more attention is being paid to robot ethics, particularly in Europe [e.g., Veruggio, 2007]. An entire conference dedicated to the issue of ethics in autonomous military systems, one of the first we have seen, if not the first of its kind, was held in late February 2008 in the UK [Royal United Services Institute (RUSI) for Defence and Security Studies, 2008], in which experts reiterated the possibility that robots might commit war crimes or be turned on us by terrorists and criminals [RUSI, 2008: Noel Sharkey's and Rear Admiral Chris Parry's presentations, respectively; also, Sharkey, 2007a, and Asaro, 2008]. Robotics is a particularly thriving and advanced industry in Asia: South Korea is the first (and still only?) nation to be working on a "Robot Ethics Charter", a code of ethics to govern responsible robotics development and use, though the document has yet to materialize [BBC, 2007]. This summer, Taiwan played host to a conference about advanced robotics and its societal impacts [Institute of Electrical and Electronics Engineers (IEEE), 2008].

But the US is starting to catch up: some notable US experts are working on similar issues, which we will discuss throughout this report [Arkin, 2007; Wallach and Allen, 2008]. A January 2008 conference at Stanford University focused on technology in wartime, of which robot ethics was one notable session [Computer Professionals for Social Responsibility (CPSR), 2008]. In July 2008, the North American Computing and Philosophy (NACAP) conference at Indiana University focused a significant part of its program on robot ethics [NACAP, 2008]. Again, we intend this report as an early, complementary step in filling the gap in robot ethics research, both technical and theoretical.


1.4 Report Overview

Following this introduction, in section 2, we will provide a short background discussion on robotics in general and in defense applications specifically. We will survey briefly the current state of robotics in the military as well as developments in progress and anticipated. This includes several future scenarios in which the military may employ autonomous robots, which will help anchor and add depth to our discussions later on ethics and risk.

In section 3, we will discuss the possibility of programming in rules or a framework in robots to govern their actions (such as Asimov's Laws of Robotics). There are different programming approaches: top-down, bottom-up, and a hybrid approach [Wallach and Allen, 2008]. We also discuss the major (competing) ethical theories (deontology, consequentialism, and virtue ethics) that these approaches correspond with, as well as their limitations.

In section 4, we consider an alternative, as well as complementary, approach to programming a robot with an ethical behavior framework: to simply program it to obey the relevant Laws of War and Rules of Engagement. To that end, we also discuss the relevant LOW and ROE, including a discussion of just-war theory and related issues that may arise in the context of autonomous robots.

In section 5, continuing the discussion about law, we will also look at the issue of legal responsibility based on precedents related to product liability, negligence, and other areas [Asaro, 2007]. This at least informs questions of risk in the near and mid term, in which robots are essentially human-made tools and not moral agents of their own; but we also look at the case for treating robots as quasi-legal agents.

In section 6, we will broaden our discussion in providing a framework for technology risk assessment. This framework includes a discussion of the major factors in determining acceptable risk: consent, informed consent, affected population, seriousness, and probability [DesJardins, 2003].

In section 7, we will bring the various ethics and social issues discussed, and new ones, together in one location. We will survey a full range of possible risks and issues related to ethics, just-war theory, technical challenges, societal impact, and more. These contingencies and issues are important to have in mind in any complete assessment of technology risks.

Finally, in section 8, we will draw some preliminary conclusions, including recommendations for future, more detailed investigations. A bibliography is provided as section 9 of the report; and appendix A offers more detailed discussions on key definitions, as initiated in this section.


2. Military Robotics

The field of robotics has changed dramatically during the past 30 years. While the first programmable articulated arms for industrial automation were developed by George Devol and made into commercial products by Joseph Engleberger in the 1960s and 1970s, mobile robots with various degrees of autonomy did not receive much attention until the 1970s and 1980s. The first true mobile robots arguably were Elmer and Elsie, the electromechanical "tortoises" made by W. Grey Walter, a physiologist, in 1950 [Walter, 1950]. These remarkable little wheeled machines had many of the features of contemporary robots: sensors (photocells for seeking light and bumpers for obstacle detection), a motor drive, and built-in behaviors that enabled them to seek (or avoid) light, wander, avoid obstacles, and recharge their batteries. Their architecture was basically reactive, in that a stimulus directly produced a response without any "thinking." Deliberation of that kind first appeared in Shakey, a robot constructed at Stanford Research Laboratories in 1969 [Fikes and Nilsson, 1971]. In this machine, the sensors were not directly coupled to the drive motors but provided inputs to a "thinking" layer known as the Stanford Research Institute Problem Solver (STRIPS), one of the earliest applications of artificial intelligence. The architecture was known as sense-plan-act or sense-think-act [Arkin, 1998].
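
The contrast just described, between purely reactive control and a planning layer between sensing and action, can be sketched in a few lines of code. This is only a minimal illustration in Python; the world model, sensor readings, and action names are invented for the example, not taken from Shakey or any fielded system.

```python
# Minimal sketch of a sense-plan-act control cycle (illustrative names,
# not from the report). Sensing feeds a planning step, whose output
# drives the actuators, in contrast to Walter's purely reactive
# tortoises, where a stimulus mapped directly to a response.

def sense(world):
    # Read sensors; here, just report the robot's distance to a goal.
    return {"distance_to_goal": world["goal"] - world["position"]}

def plan(percept):
    # The "thinking" layer: choose an action from the percept.
    if percept["distance_to_goal"] > 0:
        return "move_forward"
    elif percept["distance_to_goal"] < 0:
        return "move_backward"
    return "stop"

def act(world, action):
    # Drive the (simulated) motors.
    if action == "move_forward":
        world["position"] += 1
    elif action == "move_backward":
        world["position"] -= 1
    return world

world = {"position": 0, "goal": 3}
action = None
while action != "stop":
    action = plan(sense(world))
    world = act(world, action)

print(world["position"])  # the robot has reached its goal cell: 3
```

A purely reactive architecture would collapse `plan` into a fixed stimulus-response table; the point of the STRIPS-style layer is that the mapping from percept to action can involve deliberation.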

Since those early developments, there have been major strides in mobile robots, made possible by new materials; faster, smaller, and cheaper computers (Moore's law); and major advances in software. At present, robots move on land, in the water, in the air, and in space. Terrestrial mobility uses legs, treads, and wheels, as well as snake-like locomotion and hopping. Flying robots make use of propellers, jet engines, and wings. Underwater robots may resemble submarines, fish, eels, or even lobsters. Some vehicles capable of moving in more than one medium or terrain have been built. Service robots, designed for such applications as vacuum cleaning, floor washing, and lawn mowing, have been sold in large quantities in recent years. Humanoid robots, long considered only in science fiction novels, are now manufactured in various sizes and with various degrees of sophistication [Bekey, 2005]. Small toy humanoids, such as the WowWee Corporation's RoboSapien, have been sold in quantities of millions. More complex humanoids, such as the Honda ASIMO, are able to perform numerous tasks. However, "killer applications" for humanoid robots have not yet emerged.

There has also been great progress in the development of software for robots, including such applications as learning, interaction with humans, multiple-robot cooperation, localization and navigation in noisy environments, and simulated emotions. We discuss some of these developments briefly in section 2.6 below.


During the past 20 years, military robotic vehicles have been built using all the modes of locomotion described above and making use of the new software paradigms [US Dept. of Defense, 2007]. Military robots find major applications in surveillance, reconnaissance, location and destruction of mines and IEDs, as well as for offense or attack. The latter class of vehicles is equipped with weapons, which at the present time are fired by remote human controllers. In the following, we first summarize the state of the art in military robots, including both hardware and software, and then introduce some of the ethical issues which arise from their use. We concentrate on robots capable of lethal action, in that much of the concern with military robotics is tied to this lethality, and omit discussion of more innocuous machines such as the Army's BigDog, a four-legged robot capable of carrying several hundred pounds of cargo over irregular terrain. If at some future time such "carry" robots are equipped with weapons, they may need to be considered from an ethical point of view.

2.1 Ground Robots

The US Army makes use of two major types of autonomous and semi-autonomous ground vehicles: large vehicles, such as tanks, trucks, and HUMVEEs, and small vehicles, which may be carried by a soldier in a backpack (such as the PackBot shown in Fig. 2.0a) and move on treads like small tanks [US Dept. of Defense, 2007]. The PackBot is equipped with cameras and communication equipment and may include manipulators (arms); it is designed to find and detonate IEDs, thus saving lives (both civilian and military), as well as to perform reconnaissance. Its small size enables it to enter buildings, report on possible occupants, and trigger booby traps. Typical armed robot vehicles are (1) the Talon SWORDS (Special Weapons Observation Reconnaissance Detection System) made by Foster-Miller, which can be equipped with machine guns, grenade launchers, or anti-tank rocket launchers, as well as cameras and other sensors (see Fig. 2.0b), and (2) the newer MAARS (Modular Advanced Armed Robotic System). While vehicles such as SWORDS and the newer MAARS are able to autonomously navigate toward specific targets through the global positioning system (GPS), at present the firing of any onboard weapons is done by a soldier located a safe distance away. Foster-Miller provides a universal control module for use by the warfighter with any of their robots. MAARS uses a more powerful machine gun than the original SWORDS. While the original SWORDS weighed about 150 lbs., MAARS weighs about 350 lbs. It is equipped with a new manipulator capable of lifting 100 lbs., thus enabling it to replace its weapon platform with an IED identification and neutralization unit.

Among the larger vehicles, the Army's Tank-Automotive Research, Development and Engineering Center (jointly with Foster-Miller) has developed the TAGS-CX, a 5,000-6,000 lb. amphibious vehicle. More recently, and jointly with Carnegie Mellon University, the Army has developed a 5.5-ton, six-wheel unmanned vehicle known as the Crusher, capable of carrying 2,000 lbs. at about 30 mph and capable of withstanding a mine explosion; it is equipped with one or more guns (see figure 2.1).

Fig. 2.0 Military ground vehicles: (a) PackBot (Courtesy of iRobot Corp.); (b) SWORDS (Courtesy of Foster-Miller Corp.)

Fig. 2.1 Military ground vehicle: The Crusher (Courtesy of US Army)

Both PackBot and Talon robots are being used extensively and successfully in Iraq and Afghanistan. Hence, we expect further announcements of UGV deployments in the near future. We are not aware of the use of armed sentry robots by the US military; however, they are used in South Korea (developed by Samsung) and in Israel. The South Korean system is capable of interrogating suspects, identifying potential enemy intruders, and autonomous firing of its weapon.

DARPA supported two major national competitions leading to the development of autonomous ground vehicles. The 2005 Grand Challenge required autonomous vehicles to traverse portions of the Mojave desert in California. The vehicles were provided with GPS coordinates of waypoints along the route, but otherwise the terrain to be traversed was completely unknown to the designers, and the vehicles moved autonomously at speeds averaging 20 to 30 mph. In 2007, the Urban Challenge required autonomous vehicles to move in a simulated urban environment, in the presence of other vehicles and signal lights, while obeying traffic laws. While the winning automobiles from Stanford University and Carnegie Mellon University were not military in nature, the lessons learned will undoubtedly find their way into future generations of autonomous robotic vehicles developed by the Army and other services.

2.2 Aerial Robots

The US Army, Air Force, and Navy have developed a variety of robotic aircraft known as unmanned aerial vehicles (UAVs).3 Like the ground vehicles, these robots have dual applications: they can be used for reconnaissance without endangering human pilots, and they can carry missiles and other weapons. The services use hundreds of unarmed drones, some as small as a model airplane, to locate and identify enemy targets. An important function for unarmed UAVs is to serve as aerial targets for piloted aircraft, such as those manufactured by AeroMech Engineering in San Luis Obispo, CA, a company started by Cal Poly students. AeroMech has sold some 750 UAVs, ranging from 4 lb. battery-operated ones to 150 lb. vehicles with jet engines. Some reconnaissance UAVs, such as the Shadow, are launched by a catapult and can stay aloft all day. The best-known armed UAVs are the semi-autonomous Predator Unmanned Combat Air Vehicles (UCAV) built by General Atomics (see Fig. 2.2a), which can be equipped with Hellfire missiles. Both the Predator and the larger Reaper "hunter-killer" aircraft are used extensively in Afghanistan. They can navigate autonomously toward targets specified by GPS coordinates, but a remote operator located in Nevada (or in Germany) makes the final decision to release the missiles. The Navy, jointly with Northrop Grumman, is developing an unmanned bomber with folding wings which can be launched from an aircraft carrier.

The military services are also developing very small aircraft, sometimes called Micro Air Vehicles (MAVs), capable of carrying a camera and sending images back to their base. An example is the Micro Autonomous Air Vehicle (MAAV; also called MUAV, for Micro Unmanned Air Vehicle) developed by Intelligent Automation, Inc., which is not much larger than a human hand (see Fig. 2.2b).

3 Earlier versions of such vehicles were termed "drones," which implied that they were completely under the control of a pilot in a "chaser" aircraft. Current models are highly autonomous, receiving destination coordinates from ground or satellite transmitters. Thus, because this report is focused on robots, machines that have some degree of autonomy, we do not use the term "drone" here.

Fig. 2.2 Autonomous aircraft: (a) Predator (Courtesy of General Atomics Aeronautical Systems); (b) Micro unmanned flying vehicle (Courtesy of Intelligent Automation, Inc.)

Similarly, the University of Florida has developed an MAV with a 16-inch wingspan with foldable wings, which can be stored in an 8-inch x 4-inch container. Other MAVs include a ducted-fan vehicle (see Fig. 2.3a) being used in Iraq, and vehicles with flapping wings, made by AeroVironment and others (Fig. 2.3b). While MAVs are used primarily for reconnaissance and are not equipped with lethal weapons, it is conceivable that the vehicle itself could be used in suicide missions.

Fig. 2.3 Micro air vehicles: (a) ducted-fan vehicle from Honeywell; (b) Ornithopter MAV with flapping wings made by students at Brigham Young University (Photo by Jaren Wilkey/BYU, used by permission)



Other flying robots, either deployed or in development, include helicopters, tiny robots the size of a bumblebee, and solar-powered craft capable of remaining aloft for days or weeks at a time. Again, our objective here is not to provide a complete survey, but to indicate the wide range of mobile robots in use by the military services.

2.3 Marine Robots

Along with the other services, the US Navy has a major robotic program involving interaction between land, airborne, and seaborne vehicles [US Dept. of the Navy, 2004; US Dept. of Defense, 2007]. The latter include surface ships as well as Unmanned Underwater Vehicles (UUVs). Their applications include surveillance, reconnaissance, anti-submarine warfare, mine detection and clearing, oceanography, communications, and others. It should be noted that contemporary torpedoes may be classified as UUVs, since they possess some degree of autonomy.

As with robots in the other services, UUVs come in various sizes, from man-portable to very large. Fig. 2.4a shows Boeing's Long-term Mine Reconnaissance System (LMRS), which is dropped into the ocean from a telescoping torpedo launcher aboard the SV Ranger to begin its underwater surveillance test mission. LMRS uses two sonar systems, an advanced computer, and its own inertial navigation system to survey the ocean floor for up to 60 hours. The LMRS shown in the figure is about 21 inches in diameter; it can be launched from a torpedo tube, operate autonomously, return to the submarine, and be guided into a torpedo-tube-mounted robotic recovery arm. A large UUV, the Seahorse, is shown in Fig. 2.4b; this vehicle is advertised as being capable of independent operations, which may include the use of lethal weapons. The Seahorse is about 3 feet in diameter, 28 feet long, and weighs 10,500 lbs. The Navy plans to move toward deployment of large UUVs by 2010. These vehicles may be up to 3 to 5 feet in diameter, weighing perhaps 20,000 lbs.

Figure 2.4: (a) Long-term Mine Reconnaissance UUV (Courtesy of The Boeing Company); (b) Seahorse 3-foot diameter UUV (Courtesy of Penn State University)


2.5 Immobile/Fixed Robots

To this point we have described a range of mobile robots used by the military: on earth, on and under the water, in the air, and in space. It should be noted that not all robots capable of lethal action are mobile; in fact, some are stationary, with only limited mobility (such as aiming of a gun). We consider a few examples of such robots in this section.

First, let us consider again why land mines and underwater mines, whether aimed at destruction of vehicles or attacks on humans (anti-personnel mines), are not properly robots. Whether buried in the ground or planted in the surf zone along the ocean shore, these systems are equipped with some sensing ability (since they can detect the presence of weight), and they "act" by exploding. Their information-processing ability is extremely limited, generally consisting only of a switch triggered by pressure from above. Given our definition of autonomous robots, as considered in section 1 (as well as detailed in Appendix A), while such mines may be considered as autonomous, we do not classify them as robots, since a simple trigger is not equivalent to the cognitive functions of a robot. If a land mine is considered a robot, one seems to be absurdly required to designate a tripwire as a robot too.

On the other hand, there are immobile or stationary weapons, both on land and on ships, which do merit the designation of "robot," despite their lack of mobility (though they have some moving features, which satisfies our definition for what counts as a robot). An example of such a system is the Navy's Phalanx Close-In Weapon System (CIWS). CIWS is a rapid-fire 20mm gun system designed to protect ships at close range from missiles which have penetrated other defenses. The system is mounted on the deck of a ship; it is equipped with both search and tracking radars and the ability to rotate a turret in order to aim the guns. The information-processing ability of the computer system associated with the radars is remarkable, since it automatically performs search, detecting, tracking, threat evaluation, firing, and kill-assessments of targets. Thus, the CIWS uses radar sensing of approaching missiles, identifies targets, tracks targets, makes the decision to fire, and then fires its guns, using solid tungsten bullets to penetrate the approaching target. The gun and radar turret can rotate in at least two degrees of freedom for target tracking, but the entire structure is immobile and fixed on the deck.

The US Army has also adopted a version of the Phalanx system to provide close-in protection for troops and facilities in Iraq, under the name Counter Rocket, Artillery, and Mortar (C-RAM, or Counter-RAM). The system is mounted on the ground or, in some cases, on a train platform. The basic system operation is similar to that of the Navy system: it is designed to destroy incoming missiles at a relatively short range. However, since the system is located adjacent to or near civilian facilities, there is major concern for collateral damage, e.g., debris or fragments of a disabled missile could land on civilians.

As a final example here, we cite the SGR-A1 sentry robot developed by Samsung Techwin Co. for use by the South Korean army in the Demilitarized Zone (DMZ) which separates North and South Korea. The system is stationary, designed to replace a manned sentry location. It is equipped with sophisticated color vision sensors that can identify a person entering the DMZ, even at night under only starlight illumination. Since any person entering the DMZ is automatically presumed to be an enemy, it is not necessary to separate friend from foe. The system is equipped with a machine gun, and the sensor-gun assembly is capable of rotating in two degrees of freedom as it tracks a target. The firing of the gun can be done manually by a soldier or by the robot in fully-automatic (autonomous) mode.

2.6 Robot Software Issues

In the preceding, we have presented the current state of some of the robotic hardware and systems being used and/or being developed by the military services. It is important to note that in parallel with the design and fabrication of new autonomous or semi-autonomous robotic systems, there is a great deal of work on fundamental theoretical and software implementation issues which also must be solved if fully autonomous systems are to become a reality [Bekey, 2005]. The current state of some of these issues is as follows:

2.6.1 Software Architecture

Most current systems use the so-called three-level architecture, illustrated in Fig. 2.6. The lowest level is basically reflexive, and allows the robot to react almost instantly to a particular sensory input.

Figure 2.6. Typical three-level architecture for robot control

The highest level, sometimes called the Deliberative layer, includes Artificial Intelligence such as planning and learning, as well as interaction with humans, localization, and navigation. The intermediate or supervisory layer provides oversight of the reactive layer, and translates upper-level commands as required for execution. Many recent developments have concentrated on increasing the sophistication of the deliberative layer.
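
As a rough illustration of how the three layers relate, the Python sketch below shows a reflexive layer that can preempt a deliberative plan, with the supervisory layer mediating between them. The layer behaviors and names are invented for the example; Fig. 2.6 itself is a block diagram, not code.

```python
# Hypothetical sketch of the three-level architecture of Fig. 2.6:
# a reflexive (reactive) layer with near-instant responses, a
# deliberative layer that plans toward a goal, and a supervisory
# layer that executes the plan while letting reflexes override it.

def reactive_layer(sensors):
    # Lowest level: near-instant reflexes, e.g., obstacle avoidance.
    if sensors.get("obstacle_close"):
        return "swerve"
    return None  # no reflex triggered

def deliberative_layer(goal):
    # Highest level: AI-style planning toward a goal.
    return ["advance", "advance", "reach_" + goal]

def supervisory_layer(plan, sensors):
    # Middle level: execute the plan step by step, but let a reflex
    # from the reactive layer preempt the planned command.
    executed = []
    for step in plan:
        reflex = reactive_layer(sensors)
        executed.append(reflex if reflex else step)
    return executed

# With no obstacle, the plan runs unmodified; with an obstacle
# nearby, the reflex preempts every planned step.
clear_run = supervisory_layer(deliberative_layer("waypoint"), {})
blocked_run = supervisory_layer(deliberative_layer("waypoint"),
                                {"obstacle_close": True})
```

The design choice this illustrates is that safety-critical reactions never wait on the (slower) deliberative layer.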

2.6.2 Simultaneous Localization and Mapping (SLAM)

An important problem for autonomous robots is to ascertain their location in the world and then to generate new maps as they move. A number of probabilistic approaches to this problem have been developed recently.
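
To give a flavor of these probabilistic approaches, the sketch below implements only the localization half of the problem: a histogram (Bayes) filter over a one-dimensional corridor whose map is given in advance. The map, sensor model, and motion model are invented for illustration; full SLAM must additionally estimate the map itself.

```python
# Illustrative histogram (Bayes) filter for 1-D localization.
# The robot holds a belief (probability per cell), alternates
# motion updates with sensor updates, and sharpens its estimate.

corridor = ["door", "wall", "wall", "wall", "wall"]  # assumed, known map

def normalize(belief):
    total = sum(belief)
    return [b / total for b in belief]

def motion_update(belief):
    # Robot commands one cell to the right (cyclic) but may slip in place.
    n = len(belief)
    return [0.8 * belief[(i - 1) % n] + 0.2 * belief[i] for i in range(n)]

def sensor_update(belief, measurement, p_hit=0.9, p_miss=0.1):
    # Weight each cell by how well the map there explains the reading.
    weighted = [b * (p_hit if corridor[i] == measurement else p_miss)
                for i, b in enumerate(belief)]
    return normalize(weighted)

belief = [1 / 5] * 5                       # start fully uncertain
belief = sensor_update(belief, "door")     # robot sees a door
belief = motion_update(belief)             # robot moves one cell right
belief = sensor_update(belief, "wall")     # robot now sees a wall
best_guess = belief.index(max(belief))     # most probable cell: 1
```

Particle filters and Kalman-filter variants used in practice follow the same predict-then-correct cycle over richer state spaces.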

    2.6.3 Learning

Particularly in complex situations, it has become clear that robots cannot be programmed for all eventualities. This is particularly true in military scenarios. Hence, the robot must learn the proper responses to given stimuli, and its performance should improve with practice.
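
One standard way such stimulus-response learning is formalized is tabular Q-learning, sketched below with invented toy states, actions, and rewards (the report does not prescribe any particular algorithm): the robot's value estimates, and hence its greedy policy, improve with repeated practice.

```python
# Illustrative tabular Q-learning toy: the robot learns which
# response fits which stimulus from trial-and-error rewards.
# States, actions, and rewards are invented for this example.

import random

random.seed(0)
states = ["clear", "obstacle"]
actions = ["advance", "swerve"]
q = {(s, a): 0.0 for s in states for a in actions}

def reward(state, action):
    # Advancing is good on clear ground, bad into an obstacle.
    if state == "obstacle":
        return 1.0 if action == "swerve" else -1.0
    return 1.0 if action == "advance" else 0.0

alpha = 0.5  # learning rate
for _ in range(200):  # repeated practice trials
    s = random.choice(states)
    a = random.choice(actions)  # explore at random
    # One-step Q-update (no lookahead term in this bandit-style toy).
    q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])

# After training, the greedy policy picks the sensible response.
policy = {s: max(actions, key=lambda a: q[(s, a)]) for s in states}
```

Real military scenarios would involve vastly larger state spaces and delayed rewards, which is precisely why this remains an open research issue.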

2.6.4 Multiple-Robot System Architectures

Increasingly, it will become necessary to deploy multiple robots to accomplish dangerous and complex tasks. The proper architecture for control of such robot groups is still not known. For example, should they be organized hierarchically, along military lines, or should they operate in semi-autonomous subgroups, or should the groups be totally decentralized?

2.6.5 Human-Robot Interaction

In the early days of robotics (and even today in certain industrial applications), robots were enclosed or segregated to ensure that they do not harm humans. However, in an increasing number of applications, humans and robots cooperate and perform tasks jointly. This is currently a major focus of research in the community, and there are several international conferences devoted to Human-Robot Interaction (HRI).

2.6.6 Reconfigurable Systems

There is increasing interest (both for military and civilian applications) in developing robots capable of some form of "shape-shifting." Thus, in certain scenarios, a robot may be required to move like a snake, while in others it may need legs to step over obstacles. Several labs are developing such systems.

2.7 Ethical Implications: A Preview

It is evident from the above survey that the Armed Forces of the United States are implementing the Congressional mandate described in section 1 of this report. However, as of this writing, none of the fielded systems has full autonomy in a wide context. Many are capable of autonomous navigation, localization, station-keeping, reconnaissance, and other activities, but rely on human supervision to fire weapons, launch missiles, or exert deadly force by other means; and even the Navy's CIWS operates in full auto mode only as a reactive last line of defense against incoming missiles and does not proactively engage an enemy or target. Clearly, there are fundamental ethical implications in allowing full autonomy for these robots. Among the questions to be asked are:

- Will autonomous robots be able to follow established guidelines of the Laws of War and Rules of Engagement, as specified in the Geneva Conventions?

- Will robots know the difference between military and civilian personnel? Will they recognize a wounded soldier and refrain from shooting?

Technical answers to such questions are being addressed in a study for the US Army by Professor Ronald Arkin from Georgia Institute of Technology (his preliminary report is entitled "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture" [Arkin, 2007]) and other experts [e.g., Sharkey, 2008a]. In the following sections of our report, we seek to complement that work by exploring other (mostly non-technical) dimensions of such questions, specifically as they relate to ethics and risk.

2.8 Future Scenarios

From the brief descriptions of the state of the art of robotics above, it is clear that the field is highly dynamic. Robotics is inherently interdisciplinary, drawing from advances in computer science, aerospace, electrical and mechanical engineering, as well as biology (to obtain models of sensing, processing, and physical action in the animal kingdom), sociology, ergonomics (to provide a basis for the design and deployment of robot colonies), and psychology (to obtain a basis for human-robot interaction). Hence, discoveries in any of these fields will have an effect on the design of future robots and may raise new questions of risk and ethics. It would be useful, then, to anticipate possible future scenarios involving military robotics in order to more completely consider issues in risk and ethics, as follows:


2.8.1 Sentry/Immobile Robots

A future scenario may include robot sentries that guard not only military installations but also factories, government buildings, and the like. As these guards acquire increasing autonomy, they may not only challenge visitors ("Who goes there?") and ask them to provide identification but will be equipped with a variety of sensors for this purpose: vision systems, bar code readers, microphones, sound analyzers, and so on. Vision systems (and, if needed, fingerprint readers) along with large graphic memories may be used to perform the identification. More importantly, the guards will be equipped with weapons enabling them to arrest and, if necessary, to disable or kill a potential intruder who refuses to stop and be identified. Under what conditions will such lethal force be authorized? What if the robot confuses the identities of two people? These are only two of the many difficult ethical questions which will arise even in such a basically simple task as guarding a gate and challenging visitors.

2.8.2 Ground Vehicles

We expect that future generations of Army ground vehicles, beyond the existing PackBots or SWORDS discussed in section 2.1 above, will feature significantly more and better sensors, better ordnance, more sophisticated computers, and associated software. Advanced software will be needed to accomplish several tasks, such as:

(a) Sensor fusion: More accurate situational awareness will require the technical ability to assign degrees of credibility to each sensor and then combine information obtained from them. For example, in the vicinity of a "safe house," the robot will have to combine acoustic data (obtained from a variety of microphones and other sensors) with visual information, sensing of ground movement, temperature measurements to estimate the number of humans within the house, and so on. These estimates will then have to be combined with reconnaissance data (say, from autonomous flying vehicles) to obtain a probabilistic estimate of the number of combatants within the house.
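
A common textbook fusion rule, illustrating the "degrees of credibility" idea, is to weight each sensor's estimate by the inverse of its variance; the fused estimate is then both a compromise between the sensors and more certain than any single one of them. The sensor names and numbers below are invented for illustration.

```python
# Sketch of inverse-variance weighted sensor fusion: each sensor
# reports an estimate and a variance (lower variance = more credible),
# and the fused estimate weights each report accordingly.

def fuse(estimates):
    # estimates: list of (value, variance) pairs from different sensors.
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    variance = 1.0 / total  # fused result is more certain than any one input
    return value, variance

# Invented estimates of the number of occupants from three sources:
acoustic = (4.0, 4.0)   # noisy microphone-based estimate
thermal = (6.0, 1.0)    # more credible temperature-based estimate
aerial = (5.0, 2.0)     # reconnaissance estimate from overhead

count, var = fuse([acoustic, thermal, aerial])
```

Here the fused count leans toward the low-variance thermal estimate, and the fused variance is smaller than the best individual sensor's, which is precisely the probabilistic behavior the paragraph above calls for.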

(b) Attack decisions: Sensor data will have to be processed by software that considers the applicable Rules of Engagement and Laws of War in order for a robot to make decisions related to lethal force. It is important to note that the decision to use lethal force will be based on probabilistic calculations, and absolute certainty will not be possible. If multiple robot vehicles are involved, the system will also be required to allocate functions to individual members of the group, or they will be required to negotiate with each other to determine their individual functions. Such negotiation is a current topic of much challenging research in robotics.
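
One negotiation scheme studied in this research area is a simple auction: each robot bids its cost for each task, and tasks are assigned greedily to the cheapest remaining bidder. The sketch below is an illustrative toy with invented robots, tasks, and costs; fielded systems would need iterated, fault-tolerant variants.

```python
# Toy single-round auction for multi-robot task allocation:
# sort all (cost, robot, task) bids and award each task to the
# cheapest bidder not yet occupied. Names and costs are invented.

def allocate(costs):
    # costs: {robot: {task: cost}}; greedy assignment, cheapest bid first.
    bids = sorted((c, robot, task)
                  for robot, tasks in costs.items()
                  for task, c in tasks.items())
    assignment, taken_robots, taken_tasks = {}, set(), set()
    for cost, robot, task in bids:
        if robot not in taken_robots and task not in taken_tasks:
            assignment[task] = robot
            taken_robots.add(robot)
            taken_tasks.add(task)
    return assignment

costs = {
    "ugv1": {"scout_house": 3.0, "guard_road": 1.0},
    "ugv2": {"scout_house": 2.0, "guard_road": 4.0},
}
assignment = allocate(costs)
```

Even this toy shows why allocation is a genuine research problem: the greedy winner is not guaranteed to minimize total cost, and real negotiation must also survive lost messages and disabled robots.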


(c) Human supervision: We anticipate that autonomy will be granted to robot vehicles gradually, as confidence in their ability to perform their assigned tasks grows. Further, we expect to see learning algorithms that enable the robot to improve its performance during training missions. Even so, there will be fundamental ethical issues. For example, will a supervising warfighter be able to override a robot's decision to fire? If so, how much time will have to be allocated to allow such decisions? Will the robot have the ability to disobey a human supervisor's command, say in a situation where the robot makes the decision not to release a missile on the basis that its analysis leads to the conclusion that the number of civilians (say, women and children) greatly exceeds the number of insurgents in the house?

2.8.3 Aerial Vehicles

Clearly, many of the same considerations that apply to ground vehicles will also apply to UFVs, with the additional complexity that arises from moving in three degrees of freedom, rather than two as on the surface of the earth. Hence, the UFV must sense the environment in the x, y, and z directions. The UFV may be required to bomb particular installations, in which case it will be governed by similar considerations to those described above. However, there may be others: for instance, an aircraft is generally a much more expensive system than a small ground vehicle such as the SWORDS. What evasive action should the vehicle undertake to protect itself? It should have the ability to return to base and land autonomously, but what should it do if challenged by friendly aircraft? Are there situations in which it may be justified in destroying friendly aircraft (and possibly killing human pilots) to ensure its own safe return to base? The UFV will be required to communicate with UGVs and to coordinate strategy when necessary. How should decisions be made if there is disagreement between airborne and ground vehicles? If there are hybrid missions that include both piloted and autonomous aircraft, who is in charge?

    These are not trivial questions, since contemporary aircraft move at very high speeds, making the length of time required for decisions inadequate for human cognitive processes. In addition, vehicles may be of vastly different size, speed, and capability. Further, under what conditions should a UFV be permitted to cross national boundaries in the pursuit of an enemy aircraft? Since national boundaries are not painted on the ground, the robot aircraft will have to rely on stored maps and GPS measurements, which may be faulty.
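Since boundaries must be checked against stored maps, one plausible mechanism is a geofence test: a GPS fix compared against a stored boundary polygon using the standard ray-casting point-in-polygon algorithm. The sketch below is illustrative; the "territory" coordinates are invented, and the faulty-GPS problem noted above is exactly what such a check cannot solve on its own.

```python
# Illustrative sketch: checking a GPS fix against a stored boundary polygon
# with a standard ray-casting point-in-polygon test.

def inside_boundary(lon, lat, polygon):
    """polygon: list of (lon, lat) vertices; returns True if the point lies inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of the edge by a ray extending in the +lon direction.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# A square "territory" from (0, 0) to (10, 10), purely for illustration.
territory = [(0, 0), (10, 0), (10, 10), (0, 10)]
```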

    2.8.4 Marine Vehicles

    Many of the same challenges that apply to airborne vehicles also apply to those traveling under water. Again, they must operate in multiple degrees of freedom. In addition, the sensory abilities of robot submarines will be quite different from those of ground or air vehicles, given the properties of water. Thus, sonar echoes can be used to identify the presence of underwater objects, but these



    signals require interpretation. Assume that the robot submarine detects the presence of a surface vessel, which is presumed to be carrying enemy weapons as well as civilian passengers: under what conditions should the robot submarine launch torpedoes to destroy the surface vessel? It may be much more difficult to estimate the number of civilians aboard an iron ship than those present in a wooden house. How can the robot make intelligent decisions in the absence of critical information?
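One standard way to frame decisions under missing information is expected cost with asymmetric penalties: harming civilians is weighted far more heavily than failing to stop a threat. The sketch below is illustrative only, with invented cost numbers and function names; it is a framing device, not a proposal for actual targeting logic.

```python
# Illustrative sketch: expected-cost comparison under uncertainty, with an
# asymmetric penalty on harming civilians. All numbers are invented.

def expected_cost(action, p_civilians_aboard,
                  cost_harm_civilians=1000.0, cost_miss_threat=10.0):
    """Score 'engage' vs 'hold' by expected cost rather than certainty."""
    if action == "engage":
        return p_civilians_aboard * cost_harm_civilians
    # Holding fire risks letting a real threat proceed.
    return (1 - p_civilians_aboard) * cost_miss_threat

def choose(p_civilians_aboard):
    return min(("engage", "hold"),
               key=lambda a: expected_cost(a, p_civilians_aboard))
```

On this framing, the answer to "how can the robot decide without critical information?" is that the missing information shows up as a probability, and the asymmetric costs push the decision strongly toward holding fire whenever civilian presence is plausible.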

    It is evident that the use of autonomous robots in warfare will pose a large number of ethical challenges. In the next sections, we discuss some programming approaches and their relationship to ethical theories, issues related to responsibility and law (including LOW/ROE), and expand on the various ethical and risk issues we have raised in the course of this report.



    3. Programming Morality

    What role might ethical theory play in defining the control architecture for semiautonomous and autonomous robots used by the military? What moral standards or ethical subroutines should be implemented in a robot? This section explores the ways in which ethical theory may be helpful for implementing moral decision-making faculties in robots.4

    Engineers are very good at building systems to satisfy clear task specifications, but there is no clear task specification for general moral behavior, nor is there a single answer to the question of whose morality or what morality should be implemented in AI. However, military operations are conducted within a legal framework of international treaties as well as the nation's own military code. This suggests that the rules governing acceptable conduct of personnel might perhaps be adapted for robots; one might attempt to design a robot which has an explicit internal representation of the rules and strictly follows them.

    A robotic code would, however, probably need to differ in some respects from that for a human soldier. For example, self-preservation may be less of a concern for the robotic system, both in the way it is valued by the military and in its programming. Furthermore, what counts as a strictly correct interpretation of the laws in a specific situation is itself likely to be a matter for dispute, and conflicts among duties or obligations will require assessment in light of more general moral principles. Regardless of what code of ethics, norms, values, laws, or principles are adopted for the design of an artificial moral agent (AMA), whether the system functions successfully will need to be evaluated through externally determined criteria and testing.

    3.1 From Operational to Functional Morality

    Safety and reliability have always been a concern for engineers in their design of intelligent systems and for the military in its choice of equipment. Remotely operated vehicles and semiautonomous weapons systems used during military operations need to be reliable, and they should be destructive only when directed at designated targets. Not all robots utilized by the military will be deployed in combat situations, however, establishing as a priority that all intelligent systems are safe and do no harm to (friendly) military personnel, civilians, and other agents worthy of moral consideration.

    4 We thank and credit Wendell Wallach and Colin Allen for their contribution to many of the discussions here, drawn from their new book Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2008).



    When robots with even limited autonomy must choose from among different courses of action, the concern for safety is transmuted into the need for the systems to have a capacity for making moral judgments. For robots that operate within a very limited context, the designers and engineers who build the systems may well be able to discern all the different options the robot will encounter and program the appropriate responses. The actions of such a robot are completely in the hands of the designers of the systems and those who choose to deploy them; these robots are operationally moral. They do not have, and presumably will not need, a capacity to explicitly evaluate the consequences of their actions. They will not need to evaluate which rules apply in a particular situation, nor need to prioritize conflicting rules.

    However, three factors suggest that operational morality is not sufficient for many robotic applications: (1) the increasing autonomy of robotic systems; (2) the prospect that systems will encounter influences that their designers could not anticipate, because of the complexity of the environments in which they are deployed, or because the systems are used in contexts for which they were not specifically designed; and (3) the complexity of technology and the inability of systems engineers to predict how the robots will behave under a new set of inputs.

    The choices available to systems that possess a degree of autonomy in their activity and in the contexts within which they operate, and greater sensitivity to the moral factors impinging upon the courses of action available to them, will eventually outstrip the capacities of any simple control architecture. Sophisticated robots will require a kind of functional morality, such that the machines themselves have the capacity for assessing and responding to moral considerations. However, the engineers who design functionally moral robots confront many constraints due to the limits of present-day technology. Furthermore, any approach to building machines capable of making moral decisions will have to be assessed in light of the feasibility of implementing the theory as a computer program.

    In the following, we will briefly examine several major theories, namely deontological (rule-based) ethics, consequentialism, natural law, social contract ethics, and virtue ethics, as possible ethical frameworks for robots. (A complete discussion of these theories and their relative plausibility is beyond the scope of this report and can be readily found in the philosophical literature [e.g., University of San Diego, 2008].)

    First, let us dismiss one important possibility: ethical relativism, or the position that there is no such thing as objectivity in ethical matters, i.e., what is right or wrong is not a matter of fact but a result of individual or cultural preferences. Even if it were true that ethics is relative to cultural preferences, this would have no bearing on a project to develop autonomous military robots, since the US military and its code of ethics would be the standard for our robots anyway, as opposed to programming



    some other nation's morality into our machines. Further, we can expect that such robots will be employed only in specific environments, at least for the foreseeable future, which suggests a more limited, practical programming approach; so a broad or all-encompassing theory of ethics is not immediately urgent, and thus we need not settle the question of whether ethics is objective here.

    That is, the idea of an autonomous "general" or even multipurpose robot (which might require a broad framework to govern a full range of possible actions) is much more distant than the possibility of an autonomous robot created for specific military-related tasks, such as patrolling borders or urban areas, or exercising lethal force in a carefully circumscribed battlefield. Given the limited operations of such robots, the initial ethical task will simply be to program in the suitable basic, relevant rules. In the next section, we will delineate the Laws of War and Rules of Engagement that would govern the robots' behavior; these laws are already established and codified, making programming easier (in theory). We will also offer challenges and further difficulties related to the approach of using the LOW and ROE as an ethical framework, and discuss longer-term issues that may arise as robots have greater autonomy and responsibility.

    3.2 Overview: Top-Down and Bottom-Up Approaches

    The challenge of building artificial moral agents (AMAs) might be understood as finding ways to implement abstract values within the control architecture of intelligent systems. Philosophers confronted with this problem are likely to suggest a top-down approach of encoding a particular ethical theory in software. This theoretical knowledge could then be used to rank options for moral acceptability. Psychologists confronted with the problem of constraining moral decision making are likely to focus on the way a sense of morality develops in human children as they mature into adults. Their approach to the development of moral acumen is bottom-up in the sense that it is acquired over time through experience. The challenge for roboticists is to decide whether a top-down ethical theory or a bottom-up process of learning is the more effective approach for building artificial moral agents.

    The study of ethics commonly focuses on top-down norms, standards, and theoretical approaches to moral judgment. From Socrates' dismantling of theories of justice to Kant's project of rooting morality within reason alone, ethical discourse has typically looked at the application of broad standards of morality to specific cases. According to these approaches, standards, norms, or principles are the basis for evaluating the morality of an action.

    The term "top-down" is used in a different sense by engineers, who approach challenges with a top-down analysis through which they decompose a task into simpler subtasks. Components are assembled into modules that individually implement these simpler subtasks, and then the modules



    are hierarchically arranged to fulfill the goals specified by the original project.

    In our discussion of machine morality, we use "top-down" in a way that combines these two somewhat different senses from engineering and ethics. In our broader sense, a top-down approach to the design of AMAs is any approach that takes a specified ethical theory and analyzes its computational requirements to guide the design of algorithms and subsystems capable of implementing that theory.

    In bottom-up approaches to machine morality, the emphasis is placed on creating an environment where an agent explores courses of action and is rewarded for behavior that is morally praiseworthy. In this manner, the artificial agent develops or learns through its experience. Unlike top-down ethical theories, which define what is and is not moral, ethical principles must be discovered or constructed in bottom-up approaches. Bottom-up approaches, if they use a prior theory at all, do so only as a way of specifying the task for the system, and not as a way of specifying an implementation method or control structure.
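The bottom-up idea can be illustrated with a toy learner that starts with no encoded rules and shapes its action preferences purely from a tutor's praise and blame signals. All names, actions, and reward values here are invented for illustration; the point is only that the "theory" ends up implicit in the learned values, never written down as rules.

```python
# Illustrative sketch of bottom-up learning: action preferences emerge
# from external reward signals alone, with no built-in moral rules.
import random

def train(reward_fn, actions, episodes=500, lr=0.1, seed=0):
    """Learn action values purely from scalar feedback."""
    rng = random.Random(seed)
    values = {a: 0.0 for a in actions}
    for _ in range(episodes):
        a = rng.choice(actions)            # explore an action
        r = reward_fn(a)                   # tutor's praise (+1) or blame (-1)
        values[a] += lr * (r - values[a])  # incremental value update
    return values

# The tutor rewards 'assist' and punishes 'ignore'; the agent discovers this.
tutor = lambda a: 1.0 if a == "assist" else -1.0
learned = train(tutor, ["assist", "ignore"])
```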

    Engineers would find this top-down/bottom-up dichotomy to be rather simplistic, given the complexity of many engineering tasks. However, the concepts of top-down and bottom-up task analysis are helpful in that they highlight two different roles for ethical theory in facilitating the design of AMAs.

    3.3 Top-Down Approaches

    Are ethical principles, theories, and frameworks useful in guiding the design of computational systems capable of acting with some degree of autonomy? Can top-down theories, such as utilitarianism, Kant's categorical imperative, or even Asimov's laws for robots, be adapted practically by roboticists for building AMAs?

    Top-down approaches to artificial morality are generally understood as having a set of rules that can be turned into an algorithm. These rules specify the duties of a moral agent or the need for the agent to calculate the consequences of the various courses of action it might select. The history of moral philosophy can be viewed as a long inquiry into the adequacy of any one ethical theory; thus, selecting any particular theoretical framework may not be adequate for ensuring an artificial agent will behave acceptably in all situations. However, one theory or another is often prominent in a particular domain, and for the foreseeable future most robots will function within limited domains of activity.



    3.3.1 Top-Down Rules: Deontology

    A basic grounding in ethical theory naturally begins with the idea that morality simply consists in following some finite set of rules: deontological ethics, or the view that morality is about simply doing one's duty. Deontological (duty-based) ethics presents ethics as a system of inflexible rules; obeying them makes one moral, breaking them makes one immoral. Ethical constraints are seen as a list of either forbidden or permissible forms of behavior. Kant's Categorical Imperative (CI) is typical of a deontological approach, as follows in its two main components:

    CI(1): This is often called the formula of universal law (FUL), which commands: "Act only in accordance with that maxim through which you can at the same time will that it become a universal law" [Kant, 1785, 4:421]. Alternatively, the CI has also been understood as demanding that the relevant legislature should pass such a law mandating my action, i.e., a Universal Law of Nature.

    A maxim is a statement of one's intent or rationale: it is the answer to the query about why one did what was done. So Kant asserts that the only intentions that are moral are those that could be universally held; partiality has no place in moral thought. Kant also asserts that when we treat other people as a mere means to our ends, such action must be immoral; after all, we ourselves don't wish to be treated that way. Hence, when applying the CI in any social interaction, Kant provides a second formulation as a purported corollary:

    CI(2): Variously called the Humanity formulation of the CI, the Means-Ends Principle, or the formula of the end in itself (FEI), it commands: "So act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means" [Kant, 1785, 4:429]. One could never universalize the treatment of another as a mere means to some other ends, claims Kant, in his explanation that CI(2) directly follows from CI(1). This formulation is credited with introducing the idea of "respect for persons"; that is, respect for whatever it is that is essential to our Humanity, for whatever collective attributes are required for human dignity [Johnson, 2008].

    A Kantian deontologist thus believes that acts such as stealing and lying are always immoral, because universalizing them creates a paradox. For instance, one cannot universalize lying without running into the Liar's paradox (it cannot be true that all statements are a lie); similarly, one cannot universalize stealing property without undermining the very concept of property. Kant's approach is widely influential but has problems of applicability and disregard for consequences.

    3.3.2 Asimov's Laws of Robotics

    Another deontological approach often comes to mind in investigating robot ethics: Asimov's Three



    Laws of Robotics (he later added a fourth or "Zeroth" Law) are intuitively appealing in their simple demand to not harm or allow humans to be harmed, to obey humans, and to engage in self-preservation. Furthermore, the laws are prioritized to minimize conflicts. Thus, doing no harm to humans takes precedence over obeying a human, and obeying trumps self-preservation. However, in story after story, Asimov demonstrated that three simple hierarchically arranged rules could lead to deadlocks when, for example, the robot received conflicting instructions from two people or when protecting one person might cause harm to others.

    The original version of Asimov's Three Laws of Robotics is as follows: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey orders given to it by human beings, except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law [Asimov, 1950].

    Asimov's fiction explored the implications and difficulties of the Three Laws of Robotics. It established that the first law was incomplete as stated, due to the problem of ignorance: a robot was fully capable of harming a human being as long as it did not know that its actions would result in (a risk of) harm, i.e., the harm was unintended. For example, a robot, in response to a request for water, could serve a human a glass of water teeming with bacterial contagion, or throw a human down a well, or drown a human in a lake, ad infinitum, as long as the robot was unaware of the risk of harm. One solution is to rewrite the first and subsequent laws with an explicit knowledge qualifier: "A robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm" [Asimov, 1957]. But a clever criminal could divide a task among multiple robots, so that no one robot could even recognize that its actions would lead to harming a human; e.g., one robot places the dynamite, another attaches a length of cord to the dynamite, a third lights the cord, and so on. Of course, this simply illustrates the problem with deontological, top-down approaches: one may follow the rules perfectly but still produce terrible consequences.

    An additional difficulty is that the degree of risk makes a difference too; e.g., should robots keep humans from working near X-ray machines because of a small risk of cancer, and how would a robot decide? (Section 6 on risk assessment will explore this topic further.) The "through inaction" clause of Asimov's first law raises another issue: wouldn't a robot have to constantly intervene to minimize all sorts of risks to humans, and never be able to perform its primary tasks? Asimov considers a modified First Law to solve this issue: (1) A robot may not harm a human being. Removing the First Law's inaction clause solves this problem, but it does so at the expense of creating an even greater one: a robot could initiate an action which would harm a human (for example, initiating an automatic firing sequence, then watching a noncombatant wander into the firing line), knowing that it was



    Law One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law.

    Law Two: A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law; a robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law.

    Law Three: A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law; a robot must protect its own existence as long as such protection does not conflict with a higher-order Law.

    Law Four: A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order law.

    The Procreation Law: A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics.

    Clarke admits that his revised laws still face serious problems, including the identification of and consultation with stakeholders and how they are affected, as well as issues of quality assurance, liability for harm resulting from either malfunction or proper use, and complaint handling, dispute resolution, and enforcement procedures. Our discussion of product liability in section 5 will address many of these concerns.

    There are additional problems that occur when moral laws for robots are given in the military context. To begin with, military officers are aware that if codes of conduct or Rules of Engagement are not comprehensive, then proper behavior cannot be assured. One difficulty lies in the fact that as the context gets more complex, it becomes impossible to anticipate all the situations that soldiers will encounter, thus leaving the choice of behavior in many situations up to the best judgment of the soldier. The desirability of placing machines in this situation is a policy decision that is likely to evolve as the technological sophistication of AMAs improves.

    Unfortunately, there are yet further problems: most pertinently, even if their glitches could be ironed out, Asimov's laws will remain simply inapplicable to the military context, as it is likely that autonomous military robots will be asked to exercise lethal force upon humans in order to attain mission objectives, thereby violating Asimov's First Law. A further problem, called "rampancy", involves the possibility that an autonomous robot could overwrite its own basic programming and substitute its own new goals for the original mission objectives (e.g., the movie Stealth). That leads us to a final and apparently conclusive reason why deontological ethics cannot be used for autonomous military robots: it is incompatible with a "slave morality", as addressed in the following discussion (and further in section 6).
