



Development (2019) 62:66–80. https://doi.org/10.1057/s41301-019-00208-1

DIALOGUE SECTION

Governance and Assessment of Future Spaces: A Discussion of Some Issues Raised by the Possibilities of Human–Machine Mergers

Andelka M. Phillips1,2 · I. S. Mian3

Published online: 9 October 2019 © Society for International Development 2019

Abstract

This article explores potential privacy, security, and ethical issues raised by technologies that allow for human–machine mergers. The focus is on research, development and products at the intersection of robotics, artificial intelligence, Big Data, and smart computing. We suggest that there is a need for a more holistic approach to the assessment of such technology and its governance. We argue that in order to determine how the law will need to respond to this particular future space, it is necessary to understand the full impacts of human–machine mergers on societies and our planet—to go beyond these three aforementioned issues. We aim to encourage further discussion and research on this as well as the broader organism-machine merger question, including on our FLE5SH (F = financial, L = legal, E5 = economic, ethical, equity, environmental, and ecosystem, S = socio-political, H = historical) framework for the governance and assessment of these and other future spaces.

Keywords Human–machine merger · Technology assessment · FLE5SH · G.O.A.T.S · Precautionary principle · Artificial intelligence

Introduction

Today, it seems we stand at the beginning of an age of ubiquitous computing and attempts to merge the physical, digital, and biological realms.1 This is also increasingly a time of technological convergence (Kearns 1998: 975; O’Brolcháin et al. 2016; Perakslis et al. 2016), with an ever-larger array of objects having Internet connectivity. All of this poses significant risks for individual and group privacy and security (Weber 2010; European Commission 2013; Global Privacy Enforcement Network 2016; UK Information Commissioner’s Office 2016), but it also raises further issues for environmental, human, and animal health, as well as the prospect of unemployment for many as jobs are increasingly automated (Frey et al. 2016; Solon 2016; Williams 2017). Developments in computing technology have drastically altered our world, and while there will be some gains from many of these advances, most technologies pose both risks and benefits and are not in themselves neutral. Since there will be both winners and losers, there is a need for a broader assessment of the impact of new technologies on society as a whole, the environment, and the planet.

* Andelka M. Phillips [email protected]

1 Te Piringa - Faculty of Law, University of Waikato, Hamilton, New Zealand
2 HeLEX Centre, University of Oxford, Oxford, UK
3 Department of Computer Science, University College London, London, UK

1 An earlier version of this article was presented as a discussion paper at the Data For Policy 2017: Government by Algorithm? Conference held in London, 6–7 September 2017, Phillips and Mian (2017); this article also builds upon our related work Phillips et al. (2015) and Phillips (2016).

Developments in Artificial Intelligence (AI) and computing are often viewed as transformative technologies. However, given the potential that future developments in these related fields have to alter our natural and built environment, impacting not only humans, plants, and animals but also entire ecosystems, there should be a wider debate about not only regulation and assessment of technology, but also the type of world we want to live in. Furthermore, although these technologies are often presented as transformative, developments in these related fields often have limitations (including misinterpreting data and failing to distinguish between things that humans would be able to correctly identify (Jordan 2018; Vincent 2018; Broussard 2018: chapter 1)), and current advances in AI are best viewed as giving rise to narrow AI (Jordan 2018; Bostrom 2014: 14–16). A September 2019 Royal Society report argues ‘Linking human brains to computers using the power of artificial intelligence could enable people to merge the decision-making capacity and emotional intelligence of humans with the big data processing power of computers, creating a new and collaborative form of intelligence.’ (Royal Society Steering Group on Neural Interface Technologies 2019: 15). This proposed merger of neural interfaces (NI) and AI ‘could open the way to game-changing applications ... However, the prospect also raises a number of ethical issues concerning our autonomy, privacy and perception of ‘normality’.’ (Royal Society Steering Group on Neural Interface Technologies 2019: 49).

There is significant interest and investment in technologies that increase connections between humans and computers. Key here have been developments in: AI (Simonite 2017; Peet and Wilde 2017; Patterson 2017; ACM 2018); machine learning (ML); wearable technology (such as FitBit and Garmin); Virtual Reality (such as Oculus Rift, HTC Vive, Samsung Gear VR, and Neurable)2; implants (such as Northwestern University’s tiny antennas (Dormehl 2017; Nan et al. 2017) and Elon Musk’s Neuralink venture (Constine 2017a; Lopatto 2019; Statt 2017)); brain to computer interfaces (such as Neurable and Neurovigil’s iBrain3); and exoskeletons and bionic limbs (such as Berkeley’s Lower Extremity Exoskeleton (BLEEX) and Human Universal Load Carrier (HULC)4). Many of these projects could allow for humans to be augmented, enhanced, and altered. For example, implants could enable extra senses. Meanwhile, bionic limbs and brain-to-computer interfaces could alter capacities and capabilities, which would in turn permit some form of motor and/or thought control (further examples can be found in Royal Society Steering Group on Neural Interface Technologies (2019)).

If successful, these aptitudes and facilities could change the very nature of what it means to be human. Some conjecture that humans in their current form will be replaced by a posthuman or transhumanist future (Barfield 2015: 1–20) and that human–machine merger is inevitable (Barfield 2015: 1–2). However, we suggest that this scenario is not a fait accompli and, further, that if humans are to be re-engineered in this way then the matter must be subject to extensive public debate, scrutiny, and regulatory oversight (for example, recent discussions of such issues in the biosciences can be found in Newmann and Stevens 2019a, b; HEGAAs http://web.evolbio.mpg.de/HEGAAs/). Broadening our purview, the same arguments apply if we generalize to organism-machine mergers, where the term ‘organism’ encompasses individual or groups of microbes and/or macrobes—entities such as bacteria, fungi, viruses, plants and animals (including humans)—and ‘merger’ refers to the two components ‘working together’ so that boundaries between them become blurred. Since coordination and control of such hybrid entities will require tighter coupling of their activities, we anticipate closer interactions between, and alignment of, the molecular communication (Suda and Nakano 2018; Akyildiz et al. 2019) and organism-machine future spaces. That is, we expect the rise of research and development into technologies inspired by chemical communication within and between the (a)biotic worlds, and of applications attempting to manipulate human behaviour (Kupferschmidt 2019; Chemical communication in humans 2019; Schmidt et al. 2019), much in the same way as, for example, chemical ecology-inspired bioactive molecules are used for pest management (Beck and Vannette 2016).

As new and emerging technologies combine to facilitate the potential merger of a wide range of corporeal bodies, there will also be differences in the degree to which the intercommunicating components are impacted. For instance, organisms may do all or part of the computing for machines (organism computation), may work together with assistance from computers (computer-supported cooperative work), may live lives intermingled with computer systems (social computing and computer-mediated communication), and may form organism-robot hybrids (cybernetic organisms). Organism-machine mergers are examples of perhaps one of the most important future spaces: systems with physical-biological-digital interfaces at the microscopic, mesoscopic, and/or macroscopic scale. Genetic material–digital sequence information pairings (such as the genomes of agricultural crops and livestock (Hammond 2017)) are molecular-level exemplars.

2 Neurable http://www.neurable.com. Accessed 25 July 2019.
3 Neurable, http://www.neurable.com accessed 25 August 2017; Neurable. Neurable Funded to Power Brain-Controlled Virtual Reality. Press Release, December 2016, http://www.neurable.com/news/neurable-funded-power-brain-controlled-virtual-reality accessed 25 August 2017; Metz (2017); Neurovigil, http://neurovigil.com/index.php/technology/ibrain-device. Accessed 25 July 2019; Suzuki et al. 2007.
4 Berkeley Robotics & Human Engineering Laboratory BLEEX, http://bleex.me.berkeley.edu/research/exoskeleton/bleex/. Accessed 25 July 2019; Berkeley Robotics & Human Engineering Laboratory Human Universal Load Carrier (HULC), http://bleex.me.berkeley.edu/research/exoskeleton/hulc/. Accessed 25 July 2019.
5 See note 1. This article builds upon and extends work presented first in 2015 and subsequently in 2016 (Phillips et al. 2015; Phillips 2016).
6 See for example Newmann and Stevens 2019a, b; HEGAAs http://web.evolbio.mpg.de/HEGAAs/.

Written from a broadly law perspective, the overall goal herein is to initiate a discourse between policy makers, lawmakers, the general public(s), and industry on how to (1) think5 about governance and assessment in future spaces,6 (2) develop appropriate legal and regulatory frameworks for existing technologies, and (3) facilitate free access to meaningful information on advances in fields and impacts of technologies to local, regional, national and international stakeholders. This is because whilst our primary focus is developments and ideas that could enable human–machine mergers, we believe it is also necessary to draw attention to advances in fields such as molecular communication, nanotechnology, genome sequencing, CRISPR, and gene drives. For instance, these technologies could allow a wider range of sensors to be featured on clothing or a person’s skin and could enable various entities to be implanted into humans, animals, and plants. This could in turn permit (genetic) modification of all or parts of many different life forms, and the use of microorganisms to manipulate the complex behaviours of animals (Rohrscheib and Brownlie 2013). Examples include: spinach leaves that have been embedded with carbon nanotubes to detect explosives (Trafton 2016); the implantation of self-destructing nanobots in mice (Gao et al. 2015; Seppala 2015); commercial genetic tests; and gene editing of plants, insects such as mosquitos, and now human embryos (Young 2017; Ma et al. 2017; Sanders 2017). Here, issues such as (bio)security, (bio)safety, and (bio)privacy impacts on ecosystems and consent need to be considered (Reeve et al. 2018; HEGAAs http://web.evolbio.mpg.de/HEGAAs/; African Centre for Biodiversity 2019; Borger 2019). How to ensure that genetically altered organisms—whether or not merged with machines—are not released into the environment accidentally is a vital issue that needs further attention.

These fields are one prong of a trend towards merging the physical (built and/or natural) world with the cyber world. This can also be seen in developments in the Internet of Things (IoT) and in smart computing systems more broadly—for instance, the rise of smart buildings and the connection of critical infrastructure such as the electrical grid to the Internet. Reducing energy consumption and improving efficiency are desirable goals. However, making an entire country’s energy supply reliant on the Internet introduces risks and vulnerabilities, such as a large-scale attack disabling the power supply of an entire nation. If power plants, dams, and other infrastructure have not been maintained properly, connecting them to the Internet may not necessarily improve their reliability or security. Hence, programmes aimed at maintaining the physical security of facilities and ensuring the cyber security of industrial control systems are critical and necessary investments (Mo et al. 2012; Hahn et al. 2013; Tuptuk and Hailes 2016).

To a large extent, attempts to merge humans with machines depend on a mechanistic perception of both humans and the human brain and on a view of intelligence as computation (O’Connell 2017: 55–6). Developments that link humans with machines by direct means such as implants or brain to computer interfaces raise a number of issues for privacy, security and ethics. These include: how can we ensure the protection of an individual’s privacy? Will there be privacy settings for an individual’s brain? How can we ensure that an individual has control over their body and mind and is free from manipulation of their thoughts and bodies by third parties?7 What happens if malware could affect the human brain? How do we ensure security of the human brain and body? How would it affect communication between the brain and the gut (Martin et al. 2018; Hu et al. 2019)? Normally, before we allow drugs and medical devices to be marketed, they are subject to oversight and pre-market review. How do we ensure that any implant is safe for human, plant, and animal use before it is made widely available?

Further ethical and legal issues include: if there are various forms of humans, some augmented and some not, who will be entitled to the protection of human rights? Could distinctions be made between an augmented human and a robot that did not have a genetic link to the human species? How do we implement consent in the context of brain to computer interfaces or other technologies that enable the human body to be connected to the Internet? How should society address the loss of gainful employment and increased economic inequality produced by robot- and/or computer-guided automation? How do we ensure the protection of individual autonomy in this context? For instance, medical law often affords strong protection to the rights of patients to refuse treatment.8 How will this play out if a government wanted to require its citizens to have microchips, as is already required for animals such as dogs and cats? There are some existing examples of this: SJ, a Swedish train company, has introduced implanted microchips for its passengers as a form of biometric train ticket (Coffey 2017), whilst two companies, the Swedish startup Epicenter and the American Three Square Market, have introduced microchips for their employees (Brooks 2017; Grimm 2017; Michael et al. 2017; News.com.au 2017; Sheppard 2017; Solon 2017; Associated Press 2017).

7 For example, Pfeiffer et al. (2015) and Frischmann and Selinger (2018: 30–32) discuss actuated navigation.
8 Phillips (2017: 285) citing Montgomery (Appellant) v Lanarkshire Health Board (Respondent) (Scotland) [2015] 2 All ER 1031, [2015] UKSC 11 and Campbell (2015).
9 Internet of Everything (IoE) Group, https://ioe.eng.cam.ac.uk/. Accessed 25 July 2019—here IoE is defined as ‘a seamless interconnection and autonomous coordination of massive number of computing elements and sensors, inanimate and living entities, people, processes and data through the Internet infrastructure’.

As more of the natural and built environment is connected to the Machine, such issues are amplified. If humans become part of a telecommunication network such as an Internet of Everything,9 where our thoughts can be read, monitored, and potentially manipulated, then it will be very difficult to turn back the clock. An illustrative example of this point is Facebook’s announcement that it wants to develop a brain to computer interface (Constine 2017b; Strickland 2017). It has already emerged that Facebook does monitor what its users type and delete without posting (Sørensen 2016). Imagine if this was not just a matter of typing words on a screen, but a direct link to someone’s thoughts. This would potentially reduce privacy in quite a revolutionary way, compounding the challenges we already face with targeted marketing and online behavioural advertising (Duhigg 2012; Lubin 2012; Papadopoulos et al. 2017; Narayanan and Reisman 2017). A well-known example of the ways that businesses can obtain information about customers is that of Target, a company that was able to make predictions about whether a customer was likely to be pregnant based on the purchase of 25 products and then engaged in targeted marketing with coupons for baby products (Ellenberg 2014). Ensuring the security of these types of technologies is a significant challenge that should not be underestimated (Bonaci et al. 2014: 47; Li et al. 2015: 663–666). Software-based systems are prone to many vulnerabilities, and recent research has demonstrated that it was possible to implant malware into synthetic DNA (Ney et al. 2017; Tracy 2017; Greenberg 2017; Timmer 2017).

As technological convergence increases, there is the potential for Big Social Engineering, which raises further questions. These include: How can we ensure transparency about the full functionality of particular technologies? How can we ensure that people have access to information about technologies that may be used to influence them without their consent or knowledge, so that they can make informed choices about whether to use particular technologies and reject adoption if they want to? What kind of pre-market review should social engineering technologies be subject to? What rights will people have to their private thoughts? Could there be a privacy setting for a person’s brain, and what will happen if this is overridden? How can we ensure that existing rights and freedoms are protected? What about security and control? How will ‘brain hacking’ allow people to be influenced or conditioned to act in particular ways without conscious knowledge of this influence? What are the consequences when applications encourage addiction?

Depictions of AI, cyborgs, and androids from science fiction also exert a significant influence on how many view innovations in these fields (Calo et al. 2016: 1–22), as well as influencing lawmakers (Walter 2016; Warwick 2016). A good example is the EU call for civil laws on robotics. The text of the European Parliament Committee on Legal Affairs’ Draft Report begins in paragraph A:

whereas from Mary Shelley’s Frankenstein’s Monster to the classical myth of Pygmalion, through the story of Prague’s Golem to the robot of Karel Čapek, who coined the word, people have fantasised about the possibility of building intelligent machines, more often than not androids with human features (Committee on Legal Affairs 2016).

These depictions may also be influencing inventors and shaping what they expect to develop (perhaps both consciously and unconsciously). They may also be employed in marketing to foster acceptance of (bio)technology.

In this article we seek to draw attention to some of the issues raised by developments in this field and to encourage discussion of not only appropriate regulation, but also technology assessment—a task for which, in previous work, we have proposed the FLE5SH (F = financial, L = legal, E5 = economic, ethical, equity, environmental, and ecosystem, S = socio-political, H = historical) framework (Phillips et al. 2015; Phillips and Mian 2017).

However, we also wish to highlight the more recent proposal by the ETC Group of Global Overview Assessments of Technological Systems (G.O.A.T.S). ETC presented the ‘G.O.A.T.S approach to Science, Technology, and Innovation (STI) Governance’ at the UN STI Forum in May 2017 (ETC Group 2017a, b: 1). The G.O.A.T.S approach provides for a ‘bottom up ‘technology landscaping’ project involving multi-actor assessment organised thematically around the 17 SDGs’ (Sustainable Development Goals) (ETC Group 2017a). As the ETC Group notes, ‘Technology is established as a key cross-cutting theme of the 2030 Agenda for Sustainable Development which charts a path to the future for governments, and 13 of the 17 SDGs specify that technological solutions will be necessary to achieve them.’ (ETC Group 2017a). This approach can offer a means for ‘policymakers, civil society and others to better perceive and navigate the innovation landscape’, considering both ‘the potential promises and pitfalls’ of technologies (ETC Group 2017b: 1–2). We support this approach, and our aim with FLE5SH is to facilitate a similarly broad multi-dimensional assessment of technologies.

AI and Augmented Humans

Developments in science, technology, engineering, mathematics and medicine (STEMM) promise a tomorrow where ‘errors’ or ‘deficiencies’ in an organism’s genetic and/or phenotypic makeup can be modulated, enhanced, corrected, redefined or eradicated. A post-human world could be populated by people who have additional senses, such as artificial eyes equipped with video cameras and the ability to feel electromagnetic pulses, enhanced intelligence, and direct connections with computers through a variety of mechanisms including Virtual Reality, Augmented Reality, prosthetics, implants, and other forms of brain to computer interfaces. Such beings may be human on some level and machine on another, but they will not be able to retain privacy (or security) of their own thoughts. In the 2018 American science-fiction dark comedy film ‘Sorry to Bother You’, workers are made stronger and more obedient by transforming them into ‘equisapiens’—a half-human, half-horse hybrid created when a human snorts a gene-modifying powder (Riley 2018).

Already, there are a number of products and services on the market that are part of the Quantified Self movement. These range from direct-to-consumer genetic tests, through wearable fitness monitors such as FitBit and Garmin, to FashTech, which incorporates sensors into clothing; examples include heart rate monitoring bras such as the Mi Pulse Smart Bra and the Vitali Everyday Smart Bra.10 Some of these devices have already begun to be used in the courtroom (Chauriye 2016; Jackson et al. 2017). The results of the Global Privacy Enforcement Network’s 2016 Privacy Sweep of IoT highlighted problems with companies’ communication with consumers regarding privacy and security practices, as well as the sending of unencrypted information by medical devices (Irish Data Protection Commissioner 2016; Privacy Commissioner (New Zealand) 2016; UK Information Commissioner’s Office 2016). Meanwhile, research by Citizen Lab and Open Effect as well as HPE Fortify (HPE 2015; HP 2015; Hilts et al. 2016; Norwegian Consumer Council 2017) has demonstrated that a number of such devices (including fitness bands and smart watches) are prone to security vulnerabilities and that it is possible to create a false fitness record on some devices. This is a significant issue if such devices are to be relied upon as evidence in the courtroom. Furthermore, as more forms of personal information are collected and linked, there is an increasing risk to informational privacy for individuals and their families (Drabiak 2017).

There is also growing concern and debate around technology design, and specifically the issue of technologies being designed to be addictive, as well as the impact of the use of screens on children and young people.11 Germany has banned the sale of smartwatches to children, and it is possible that other countries will also begin to restrict the sale of certain products to young people (Johnsen 2016; Wakefield 2017; O’Brien 2018).

In recent years there has been increasing interest in the idea of approaching technological singularity or, as Nick Bostrom terms it, an intelligence explosion (Bostrom 2014: 4; 62–77). The basic premise here is centred around creating human level machine intelligence. Once this is achieved, the suggestion is that AI will improve itself and quickly surpass human intelligence. Bostrom defines superintelligence as ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.’ (Bostrom 2014: 22–23). His work is timely and of great value to this discussion. The book concludes with an analogy between the development of superintelligence and a child playing with a bomb (Bostrom 2014: 260–61), a very useful starting point for highlighting the importance of paying sufficient attention to getting this right.

It is also important to understand that the development of a super intelligent AI is not at present a foregone conclusion, although a number of experts do view it as likely. However, if this does come to pass, it does not necessitate that all humans must be augmented and merged with machines. These are separate issues that are both in need of further attention. There is growing attention to, and concern over, the safe development of AI technology. The letter calling for a ban on lethal autonomous weapons released at the International Joint Conference on Artificial Intelligence (IJCAI 2017) is an example (Vincent 2017; Future of Life Institute 2017a, b). Others include the ‘Partnership on AI’12 and ‘A Unified Framework of Five Principles for AI in Society’ (Floridi and Cowls 2019).

10 Mi Pulse, https://www.mi-pulse.com. Accessed 1 March 2018; Vitali, https://vitaliwear.com. Accessed 1 March 2018.
11 For example, see Atler (2017); Harris (2016); Tristan Harris’ website http://www.tristanharris.com/essays/. Accessed 25 July 2019.
12 Partnership on AI, https://www.partnershiponai.org/. Accessed 25 July 2019.

We cannot predict what the interests of a super intelligent AI will be, and we support the calls for more discussion and oversight of this area. Recent research from Google’s DeepMind has shown that AI can behave both collaboratively and in more aggressive ways (Burgess 2017; Leibo et al. 2017; International Foundation for Autonomous Agents and Multiagent Systems 2017). Since AI may behave unpredictably, and before we get to the advent of a super intelligent AI, it is vital that we understand more about how less advanced AI operate and what their interests could be. There is a growing literature, particularly in the context of autonomous vehicles (Bradshaw-Martin and Easton 2014; Bonnefon et al. 2015; Etzioni and Etzioni 2016), about the need to code human values into AI systems. Although this seems advisable, since humans do not always share all values (Mignolo 2010, 2013; Grosfoguel 2012), perhaps one option is a requirement for some form of balancing and explanation, which could assist AI to make decisions contextually, allowing for consideration of a number of factors. An example from science fiction can demonstrate this point. In Arthur C. Clarke’s 2001: A Space Odyssey, Hal is taught to lie, cheat, and deceive humans. Hal’s abilities are linked closely with the achievement of particular goals, in this case the completion of Hal’s mission (Clarke 2016). However, much of what he is designed to do is not balanced out by explanation. The point here is that in order for AI and humans to work together successfully, AI will need to understand human motivations and the reasons we do or do not behave in certain ways. Such understanding could help to avoid AI deciding to do something that could result directly or indirectly in human extinction.

However, our concern here is also to highlight the significance of developments that allow humans to be revamped, so that they become cyber-physical and, for instance, elements of the Internet of Bio-Nano Things (Akyildiz et al. 2019). While there should be discussion and oversight of AI, implants, and brain to computer interfaces, other products also need attention. Since AI systems that have understanding of human motivations and emotions might be useful in developments that merge humans and machines, the Precautionary Principle could assist or be invoked here. It also seems advisable to look at existing governance mechanisms that have regulated medical devices and pharmaceutical drugs. While such systems are imperfect, they could be helpful in thinking further about governance of implants and brain to computer interfaces. Generally, it would seem wise to ensure the safety of such products before implanting them into people.

Ideally, the creation of a super intelligent AI should not be the work of a lone individual—be it a lay person or researcher—in their basement or (computer) laboratory. Likewise, while there is already a DIY biohacking movement (Barfield 2015: 135–176; Bradley-Munn and Michael 2016; Mallonee 2017), and it is true that some individuals want to alter their bodies in new ways, this is also something that needs more oversight. Furthermore, the addition of new senses, different forms of implants, and brain to computer interfaces is not something that should be forced on people without their consent.

Technology Assessment: Time for a Holistic Approach?

A variety of technologies, such as smartphones, laptops, tablets, and wearables, as well as a burgeoning range of other devices that form the IoT, are now accessible to and used by a significant portion of the world's population. For example, Facebook now exceeds 2 billion monthly active users (Constine 2017c), Apple has sold more than 1.2 billion iPhones (Morris 2017), and the number of mobile phone users will likely exceed 5 billion in 2019 (Statistica 2017). However, while technological solutions are often promoted as a means to solve many of the planet's problems, much of this high-technology consumer culture involves products that are made not to last but to be replaced on a regular basis, which depletes resources and places growing burdens on resource consumption, particularly of energy (Vince 2012; Mian et al. 2019). An interesting initiative to combat this throwaway culture is the Swedish Government's introduction of tax breaks for the repair of common consumer products, including clothing, bicycles, and washing machines (Starritt 2016).

Many of these technologies involve the collection, storage, transmission, and sharing of a variety of forms of information, which can include personal information and sensitive information, such as health and genetic information. There is growing use of cross-device and cross-platform tracking, which attempts to harvest more information from individuals based on their purchasing behaviour, as businesses seek to identify whether viewing a particular advertisement results in the purchase of their products or services (Chen et al. 2016; Federal Trade Commission 2017; Brookman et al. 2017).

There is now a growing variety of impact assessments that are either encouraged or required by law. These include privacy impact assessments, sustainability impact assessments, environmental impact assessments, and ethical trade impact assessments. One example is that of data protection impact assessments, which are required under Article 35 of the European Union's General Data Protection Regulation. These are to be carried out:

Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.13

If we think about this in the context of technologies, such as implants or brain to computer interfaces, it is likely that such technologies would be caught by this requirement.

Our recently proposed FLE5SH framework provides a new approach to help organize, interpret and assess past, extant, emerging and new research and development in STEMM (Phillips and Mian 2017; Phillips et al. 2015). The nine lenses in this framework provide a more holistic approach to technology assessment and regulation. We are including such a broad range of lenses because we believe that many if not all technologies need to be assessed from as wide a perspective as possible.
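As an illustration only (the framework itself is qualitative, not an algorithm), the nine FLE5SH lenses could be represented as a simple checklist that a review panel works through for a given technology; the function and the sample findings below are our own hypothetical sketch:

```python
# Illustrative sketch only: FLE5SH is a qualitative set of assessment
# lenses, not an algorithm. Lens names follow the expansion F = financial,
# L = legal, E5 = economic, ethical, equity, environmental, ecosystem,
# S = socio-political, H = historical.
FLE5SH_LENSES = [
    "financial", "legal",
    "economic", "ethical", "equity", "environmental", "ecosystem",
    "socio-political", "historical",
]

def missing_lenses(findings):
    """Return the lenses for which no assessment notes have been recorded.

    `findings` maps a lens name to free-text notes from stakeholders; a
    FLE5SH-style review is incomplete until every lens has been considered.
    """
    return [lens for lens in FLE5SH_LENSES if not findings.get(lens)]

# Hypothetical review of an implantable device that has so far only
# considered financial and legal questions.
remaining = missing_lenses({"financial": "costing done", "legal": "device law reviewed"})
```

The point of the sketch is simply that an assessment confined to one or two lenses leaves most of the framework unexamined.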

To some extent the FLE5SH framework can be seen as allowing the formation of a social contract, whereby all stakeholders are required to engage in a review of this wider spectrum of the possible impacts of technologies. Where risks are seen as likely, imminent or serious, this may (or probably should) trigger application of the Precautionary Principle.

13 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), http://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32016R0679. Accessed 1 March 2018.
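The trigger-and-review logic can be sketched as a toy decision rule; the predicate and field names below are our own illustrative shorthand, not part of the framework:

```python
# Toy sketch of the trigger-and-review logic; the field names are our own
# illustrative shorthand, not part of the FLE5SH framework itself.
def precaution_triggered(risk):
    """A risk judged likely, imminent or serious triggers the Principle."""
    return risk["likely"] or risk["imminent"] or risk["serious"]

def review_due(risk, new_scientific_evidence):
    """Once triggered, the situation should be reviewed whenever new
    scientific information that would enable assessment becomes available."""
    return precaution_triggered(risk) and new_scientific_evidence

risk = {"likely": True, "imminent": False, "serious": False}
```

In practice each judgment ("likely", "serious") would itself be the output of the multi-lens stakeholder review, not a pre-set boolean.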

There is growing interest in central banks maintaining financial stability,14 together with interest in ethical investment in sectors such as pension funds. Consequently, looking at digital technology, such as distributed ledger technology (for example, cryptocurrencies and smart contracts), in the round can help give a more balanced picture of the respective benefits, risks, and challenges raised by a specific technology (Wolbring 2009; ETC Group 2011, 2014, 2015; Daño et al. 2013) such as Bitcoin or Ethereum (Reijers et al. 2016; Zimmer 2017). In order to have a more holistic assessment of technology, we advocate for a broad dialogue amongst all stakeholders, including the public, and especially groups that have historically been marginalized, such as Indigenous Peoples.

Taking a more holistic approach also allows for consideration of the relationship between technology and Nature and its impact on Nature. Here we are thinking not only about humans, but also about other organisms and the rights of Nature. The granting of forms of legal personhood and human rights for the protection of rivers in New Zealand and Ecuador are illuminating examples.15 Our proposed approach aims to assess the interactions amongst and between components of all Earth systems: the lithosphere, atmosphere, hydrosphere and biosphere. The FLE5SH framework provides a common toolbox that diverse stakeholders—researchers, policymakers, regional and national social movements, civil society organisations, and others—can use to evaluate technologies and, if warranted, to choose a different future.

At present, many products and services are coming to market without pre-market review and without comprehensive impact assessments. Regulators have generally held back, and there is a general tendency to let the market decide and to promote industry self-regulation. The law may have a history of struggling to keep up with technological progress, but we should not accept this as a permanent state of affairs that stops discussion of appropriate regulation and accountability. Unforeseen harms can occur if there is no incentive for a company to behave responsibly other than loss of reputation. Fines for violating laws may be regarded as a cost of doing business.

In relation to technology assessment, we suggest utilising the Precautionary Principle. This principle has been invoked in the context of environmental policy, as well as in the context of public health. It is an important principle in International Environmental Law and is set out in the Rio Declaration on Environment and Development (1992). Principle 15 of the Rio Declaration provides:

In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation. (Rio Declaration on Environment and Development 1992)

It is also set out in article 191 of the Treaty on the Functioning of the European Union (Treaty on the Functioning of the European Union 2007).

A useful depiction of when the Precautionary Principle ought to be relied upon is the Consensus Statement on the Precautionary Principle, developed at the Wingspread Conference held in 1998, which provides that:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof. (Science and Environmental Health Network 1998; Kriebel et al. 2001: 871)

The Consensus Statement further suggests that:

The process of applying the Precautionary Principle must be open, informed and democratic and must include potentially affected parties. It must also involve an examination of the full range of alternatives, including no action. (Science and Environmental Health Network 1998)

Once the Principle is triggered in relation to a particular technology, the situation should then be reviewed when more scientific information that would enable assessment becomes available.16

14 See for example Chang et al. (2019).

15 See for example the recent New Zealand Te Awa Tupua (Whanganui River Claims Settlement) Act 2017, http://www.legislation.govt.nz/act/public/2017/0007/latest/DLM6830851.html. Accessed 1 March 2018; Community Environmental Legal Defense Fund (2015), Tanasescu (2017), O'Donnell and Talbot-James (2017), Biggs (2017), Global Alliance for the Rights of Nature, http://therightsofnature.org. Accessed 1 March 2018.

16 Som et al. (2009: 493); Communication from the Commission on the precautionary principle, COM/2000/0001 final. EUR-Lex, http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52000DC0001. Accessed 1 March 2018.


While the Precautionary Principle has often been invoked in the context of environmental protection, as Som et al. (2009) suggest, it can also be applied to social subjects and in thinking about potential frameworks for an information society that is sustainable (Som et al. 2009; Danaher 2016). We suggest the need for invoking this Principle in considering whether to adopt these new technologies. Although smart infrastructure has been promoted as facilitating the development of more sustainable, cost effective, and efficient cities, connecting things such as energy, water and monetary supply chains to the Internet renders them vulnerable to physical and cyber attacks (Taylor 2015).

Why is a Historical Lens Necessary?

Recently, IEEE released a report on ethical implementation of autonomous and intelligent systems (A/IS), A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (IEEE 2019). Thus, whilst STEMM students entering a digital society require a firm foundation in discipline-related matters, they ought also to be conversant in non-technical issues. Given this need for students to become more multilingual, but in light of an already highly constrained curriculum, how can the thorny transition from fluency in STEMM to expressivity in SHTEAMM be made (S: Science, H: Humanities, T: Technology, E: Engineering, A: Arts, M: Mathematics, M: Medicine)? Consider the following exemplar. Direct-to-consumer genetic testing (DTC, aka personal genomics) is a way a person can access information about their genome from their home (National Human Genome Research Institute, https://www.genome.gov/dna-day/15-for-15/direct-to-consumer-genomic-testing). Uses of AI/ML in this space range from calling of genetic variants from high-throughput DNA sequencing data (for example, DeepVariant (Google Cloud 2019)), through secure storage and sharing of genomic data (Mittos et al. 2019) (for example, differential privacy (Page et al. 2018)), to developing apps for consumers to 'interact and experience DNA-powered insights' about, for example, health, ancestry, genetic relatedness, athletic ability, child talent, and infidelity (Phillips 2017, 2019). The DTC market is predicted to exceed $2.5 billion by 2024 (MarketWatch 2019).
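To make the differential-privacy reference concrete, here is a minimal sketch of the classic Laplace mechanism (our illustrative choice; the cited works may use different mechanisms) applied to an aggregate genomic statistic, a count of carriers of some variant, so that the released figure reveals little about any single participant:

```python
import math
import random

def laplace_noise(scale):
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(carriers, epsilon):
    """Release a carrier count with epsilon-differential privacy.

    Adding or removing one person changes the count by at most 1 (the
    sensitivity), so Laplace noise of scale 1/epsilon suffices.
    """
    return sum(carriers) + laplace_noise(1.0 / epsilon)

# Hypothetical cohort: 1 = carries the variant, 0 = does not.
cohort = [1, 0, 0, 1, 1, 0, 1, 0]
noisy_count = private_count(cohort, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy, and repeated queries consume privacy budget additively, which is one reason governance of such data sharing cannot be a one-off check.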

In January 2019, the UK's Health Secretary Matt Hancock announced plans to offer healthy people the option to have their whole genome sequenced by the NHS for a fee, with these 'genomics volunteers'—if they share data—receiving a personalized health report (Semsarian 2019). In March 2019, the House of Commons Science and Technology Committee launched an inquiry into commercial genomic testing to establish what safeguards need to be put in place to protect those who get tested (Science and Technology 2019). The Committee should be releasing recommendations later in 2019. One of the authors of the present article, Andelka Phillips, is using the example of the DTC industry's use of wrap contracts as its dominant means of governance to illustrate the challenges disruptive technologies pose for societies and for regulation. She has raised significant questions (Bates 2018) about whether the services are fit for their claimed purposes, whether the genetic data and other personal information collected are being stored securely, whether sufficient protection for privacy is provided, whether companies are sufficiently transparent in their claims about the benefits and limitations of their services, and whether consumers actually understand the contracts they enter into when purchasing these tests. Beyond technical and legal solutions to such problems lie other concerns, notably the issue of biological and genetic determinism.

Dubbed '21st Century genetically informed social science', sociogenomics aims to understand the roots of complex behavior (Braudt 2018). For instance, genoeconomics posits that economic outcomes and preferences are about as heritable as many medical conditions and personality traits (Comfort 2018a). It suggests that financial behaviour can be traced to a person's DNA, so 'genetics could someday be used to build not just personalized medicine, but personalized policy that takes into account the genotypes that influence whether you and I are receptive to certain methods of instruction, or punishment, or therapy' (Ward 2018). Sociogenomics has also been characterized as opening a new door to eugenics, new ways 'genetic data could bolster scientific racism and encourage discrimination' (Comfort 2018b). The Victorian scientist Francis Galton coined the term eugenics (eu = good or true + genus = birth, race or stock) to describe the betterment of the overall quality of the gene pool (Das 2015, 2017). In his Anthropometric Laboratory—established in 1883 (Boulter 2017) at the International Health Exhibition in South Kensington, London—he measured, recorded and evaluated the mental abilities and physical characteristics of ∼ 10,000 people over a year (Das 2015). Galton's historical connections to University College London (UCL) and the eugenics movement he initiated have forced the university to confront its past (Bartlett 2018). December 2018 saw the launch of a Commission of Inquiry into the History of Eugenics at UCL (UCL 2018). One key issue is the university's role in teaching and researching eugenics in the past, present and future (Osei-Mensah 2019).

United Kingdom Research and Innovation (UKRI)17 expects all researchers and their research organizations to commit to an approach that seeks continuously to 'Anticipate, Reflect, Engage and Act' (AREA). This approach will:

17 UK Research and Innovation, https://www.ukri.org. Accessed 25 July 2019.


Anticipate—describing and analyzing the impacts, intended or otherwise (for example economic, social, environmental), that might arise. This does not seek to predict but rather to support an exploration of possible impacts and implications that may otherwise remain uncovered and little discussed.

Reflect—reflecting on the purposes of, motivations for and potential implications of the research, and the associated uncertainties, areas of ignorance, assumptions, framings, questions, dilemmas and social transformations these may bring.

Engage—opening up such visions, impacts and questioning to broader deliberation, dialogue, engagement and debate in an inclusive way.

Act—using these processes to influence the direction and trajectory of the research and innovation process itself (EPSRC 2019).

The UCL Inquiry and the AREA framework are relevant to AI/ML and their applications in human health and agriculture (Marr 2018). A shift from genetics to genomics—the study of organisms in terms of their full DNA sequences—at the turn of the 21st century is said to have given rise to a new form of eugenics, eugenomics (Aultman 2006). A move from 'personalized medicine' to 'precision health' and 'wellness genomics' during this period has raised the question of whether, a century from now, the latter two ideas will be viewed as eugenics is today (Jeungst et al. 2018). Might a similar fate await 'public health genomics' and 'precision public health', programmes such as whole genome sequencing of every newborn within a population (Molster 2018)? The HUMAN Project18 (Human Understanding Through Measurement and Analytics), introduced in 2015, aims to measure, aggregate and analyse the biology, behaviour, environmental conditions and events of ∼ 10,000 New Yorkers over 20 years (Azmak et al. 2015). This Project uses medical records, biological samples, surveys, questionnaires, digital device data, third party data and other modalities in order to create 'synoptic and granular views of how human health and behavior coevolve over the life cycle and why they evolve differently for different people' (Azmak et al. 2015). Thus, it is important to understand Big Science and Big Data studies of humanity in Victorian and modern times—anthropometry (UCL Culture, https://www.ucl.ac.uk/culture/galton-collection/galton-and-anthropometrics) yesterday, today and tomorrow—and more generally, the past, present and future of datafication (the quantification of objects, actions, processes and other aspects of life and the world that previously were experienced or existed only in qualitative, non-numeric form).

Conclusion

It is hoped that this article will stimulate reflection about the following matters: the need to engage in a more public, democratic, and open discussion of technologies and their potential impact on society, the environment, and the planet; the need for greater oversight of technologies that pose significant risks to human and/or environmental health; the need to ensure that technologies that allow for the alteration of the genetic makeup of biological organisms are subject to oversight, especially regarding their safety; and the need for the development of appropriate laws and governance mechanisms that will protect the public, the environment, and the planet as a whole.

It should be noted that we have developed bodies of law such as consumer protection and product liability law for sound reasons. Permitting commercialisation of technologies without any regulation other than industry self-regulation is unlikely to lead to a safer, fairer world.

Developments in AI that could lead to a super intelligent AI, and developments in other fields that could lead to the merging of humans with machines, raise issues that need to be considered from a range of perspectives. If the future is Humanity 2.0 (or higher), then this should be a choice that humans make, just as, if super intelligent AI is to develop, we do need to ensure that its values are in line with those of humanity and the planet. However, there is a pluriversal and not just a universal notion of what constitutes value (Mignolo 2013). Perhaps a more holistic approach to assessing technology could also serve to guide policy contextually, as a substitute for humanity's conscience, and thereby shape technology in a consistent and more balanced way.

Acknowledgements Thanks to our colleagues for their support and fruitful discussion on these topics.

References

ACM. 2018. Fathers of the Deep Learning Revolution Receive ACM A.M. Turing Award. Bengio, Hinton and LeCun Ushered in Major Breakthroughs in Artificial Intelligence. Association for Computing Machinery, 27 March 2019, https://www.acm.org/media-center/2019/march/turing-award-2018.

African Centre for Biodiversity. 2019. Civil Society Denounces the Release of GM Mosquitoes in Burkina Faso. Press Release, 2 July, https://acbio.org.za/en/civil-society-denounces-release-gm-mosquitoes-burkina-faso. Accessed 25 July 2019.

Akyildiz, I.F., M. Pierobon and S. Balasubramaniam. 2019. Moving Forward with Molecular Communication: From Theory to Human Health Applications. Proceedings of the IEEE 107: 858–865. https://ieeexplore.ieee.org/document/8710366.

18 The Human Project, https://www.thehumanproject.org. Accessed 23 July 2019.


Associated Press. 2017. Companies Start Implanting Microchips into Workers' Bodies. LA Times, 3 April, http://www.latimes.com/business/technology/la-fi-tn-microchip-employees-20170403-story.html. Accessed 2 March 2018.

Atler, Adam. 2017. Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked. Penguin Press. http://adamalterauthor.com/irresistible. Accessed 12 September 2019.

Aultman, Julie M. 2006. Eugenomics: Eugenics and Ethics in the 21st Century. Genomics, Society and Policy 2: 2–28. https://doi.org/10.1186/1746-5354-2-2-28.

Azmak, Okan, Hannah Bayer, Andrew Caplin, Miyoung Chun, Paul Glimcher, Steven Koonin and Aristides Patrinos. 2015. Using Big Data to Understand the Human Condition: The Kavli HUMAN Project. Big Data 3(3). https://doi.org/10.1089/big.2015.0012.

Barfield, Woodrow. 2015. Cyber-Humans: Our Future with Machines. Springer. https://www.springer.com/gp/book/9783319250489. Accessed 12 September 2019.

Bartlett, Georgina. 2018. Investigation: Moving Forward From Galton's UCL. PI Magazine, 22 January 2018, https://pimediaonline.co.uk/news/moving-forward-from-galtons-ucl/. Accessed 23 July 2019.

Bates, Mary. 2018. Direct-To-Consumer Genetic Testing. IEEE Pulse, 14 December 2018, https://pulse.embs.org/november-2018/direct-to-consumer-genetic-testing/. Accessed 23 July 2019.

Beck, John J. and Rachel L. Vannette. 2016. Harnessing Insect–Microbe Chemical Communications To Control Insect Pests of Agricultural Systems. Journal of Agricultural and Food Chemistry 65: 23–28. https://doi.org/10.1021/acs.jafc.6b04298.

Biggs, Shannon. 2017. When Rivers Hold Legal Rights. Earth Island Journal, 17 April, http://www.earthisland.org/journal/index.php/elist/eListRead/when_rivers_hold_legal_rights/. Accessed 1 March 2018.

Bonaci, Tamara, Ryan Calo and Howard Jay Chizeck. 2014. App Stores for the Brain: Privacy & Security in Brain–Computer Interfaces. In Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology, IEEE Press. https://doi.org/10.1109/ETHICS.2014.6893415. Accessed 12 September 2019.

Bonnefon, Jean-François, Azim Shariff and Iyad Rahwan. 2015. The Social Dilemma of Autonomous Vehicles. https://arxiv.org/abs/1510.03346. Accessed 12 September 2019.

Borger, Julian. 2019. House Orders Pentagon to Review if It Exposed Americans to Weaponised Ticks. The Guardian, 16 July, https://www.theguardian.com/us-news/2019/jul/16/pentagon-review-weaponised-ticks-lyme-disease. Accessed 25 July 2019.

Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. https://global.oup.com/academic/product/superintelligence-9780198739838. Accessed 13 September 2019.

Boulter, Michael. 2017. Bloomsbury Scientists: Science and Art in the Wake of Darwin. London: UCL Press, xi. https://www.ucl.ac.uk/ucl-press/browse-books/bloomsbury-scientists. Accessed 13 September 2019.

Bradley-Munn, Sharon R. and Katina Michael. 2016. Whose Body Is It?: The Body as Physical Capital in a Techno-Society. IEEE Consumer Electronics Magazine 5(3): 107–114. http://www.katinamichael.com/research/2016/8/10/the-body-as-physical-capital-in-a-techno-society. Accessed 12 September 2019.

Bradshaw-Martin, Heather and Catherine Easton. 2014. Autonomous or 'Driverless' Cars and Disability: A Legal and Ethical Analysis. European Journal of Current Legal Issues 20(3). http://webjcli.org/article/view/344/471. Accessed 1 March 2018.

Braudt, David B. 2018. Sociogenomics in the 21st Century: An Introduction to the History and Potential of Genetically Informed Social Science. Sociology Compass 12(10): e12626. https://doi.org/10.1111/soc4.12626.

Brookman, Justin, Phoebe Rouge, Aaron Alva, and Christina Yeung. 2017. Cross-Device Tracking: Measurement and Disclosures. Proceedings on Privacy Enhancing Technologies 2017(2): 133–148. https://petsymposium.org/2017/papers/issue2/paper29-2017-2-source.pdf.

Brooks, James. 2017. A Swedish Start-Up has Started Implanting Microchips Into its Employees. CNBC, 3 April, https://www.cnbc.com/2017/04/03/start-up-epicenter-implants-employees-with-microchips.html. Accessed 2 March 2018.

Broussard, Meredith. 2018. Artificial Unintelligence: How Computers Misunderstand the World. Cambridge, MA: MIT Press. https://mitpress.mit.edu/books/artificial-unintelligence. Accessed 12 September 2019.

Burgess, Matt. 2017. DeepMind's AI has Learnt to Become 'Highly Aggressive' When it Feels like it's Going to Lose. Wired, 9 February, http://www.wired.co.uk/article/artificial-intelligence-social-impact-deepmind. Accessed 13 September 2019.

Calo, Ryan, A. Michael Froomkin and Ian Kerr. 2016. Robot Law. MA: Edward Elgar Publishing. https://www.e-elgar.com/shop/robot-law. Accessed 12 September 2019.

Campbell, Mark. 2015. Montgomery v Lanarkshire Health Board. Common Law World Review 44(3): 222–228. https://doi.org/10.1177/1473779515592118.

Chang, Marina, C.-H. Huang and I.S. Mian. 2019. A Data Science and Historical Global Political Ecology Perspective on the Financial System, Agriculture and Climate: From the Trans-Atlantic Slave Trade to Agroecology. Current version of the discussion paper presented at Data for Policy 2017: Government by Algorithm? 6-7 September 2017. https://zenodo.org/record/884503. Accessed 16 September.

Chauriye, Nicole. 2016. Wearable Devices as Admissible Evidence: Technology is Killing Our Opportunity to Lie. Catholic University Journal of Law and Technology 24(2) art. 9: 494–528. https://scholarship.law.edu/jlt/vol24/iss2/9/. Accessed 12 September 2019.

Chemical Communication in Humans. 2019. Royal Society Meeting, 1-2 April 2019. Kavli Royal Society Centre, Chicheley Hall, Newport Pagnell, Buckinghamshire. https://royalsociety.org/science-events-and-lectures/2019/04/chemical-communication/.

Chen, Kai, Xueqiang Wang, Yi Chen, Peng Wang, Yeonjoon Lee, XiaoFeng Wang, Bin Ma, Aohui Wang, Yingjun Zhang and Wei Zou. 2016. Following Devil's Footprints: Cross-Platform Analysis of Potentially Harmful Libraries on Android and iOS. In Security and Privacy (SP), 2016 IEEE Symposium: 357–76. https://doi.org/10.1109/SP.2016.29.

Clarke, Arthur C. 2016. 2001: A Space Odyssey. 1st edn. 1968, reprint, Penguin Books. https://www.penguinrandomhouse.com/books/538851/2001-a-space-odyssey-by-arthur-cclarke/9780143111573/.

Coffey, Helen. 2017. The Future is Here—A Swedish Rail Company is Trialing Letting Passengers Use Biometric Chips as Tickets. The Independent, 16 June, http://www.independent.co.uk/travel/news-and-advice/sj-rail-train-tickets-hand-implant-microchip-biometric-sweden-a7793641.html. Accessed 2 March 2018.

Comfort, Nathaniel. 2018a. Nature Still Battles Nurture in the Haunting World of Social Genomics. Nature 553: 278–280. https://www.nature.com/articles/d41586-018-00578-5. Accessed 12 September 2019.

Comfort, Nathaniel. 2018b. Sociogenomics is Opening a New Door to Eugenics. MIT Technology Review, 23 October 2018, https://www.technologyreview.com/s/612275/sociogenomics-is-opening-a-new-door-to-eugenics/. Accessed 23 July 2018.

Committee on Legal Affairs. 2016. Draft Report, 31 May, http://www.europarl.europa.eu/sides/getDoc.do?type=COMPARL&mode=XML&language=EN&reference=PE582.443. Accessed 2 March 2018.

Community Environmental Legal Defense Fund. 2015. Rights of Nature: Overview. 4 August 2015, updated 19 May 2017, https://celdf.org/rights/rights-of-nature/. Accessed 1 March 2018.

Constine, Josh. 2017a. Elon Musk's Brain Interface Startup Neuralink Files $27M Fundraise. TechCrunch, 25 August, https://techcrunch.com/2017/08/25/elon-musks-brain-interface-startup-neuralink-files-27m-fundraise/. Accessed 25 August 2017.

Constine, Josh. 2017b. Facebook is Building Brain-Computer Interfaces for Typing and Skin-Hearing. TechCrunch, 19 April, https://techcrunch.com/2017/04/19/facebook-brain-interface/. Accessed 2 March 2018.

Constine, Josh. 2017c. Facebook Now has 2 Billion Monthly Users … and Responsibility. TechCrunch, 27 June, https://techcrunch.com/2017/06/27/facebook-2-billion-users/. Accessed 1 March 2018.

Danaher, John. 2016. New Technologies as Social Experiments: An Ethical Framework. Philosophical Disquisitions, 15 March, http://philosophicaldisquisitions.blogspot.co.uk/2016/03/new-technologies-as-social-experiments.html. Accessed 1 March 2018.

Daño, Neth, Kathy Jo Wetter and Silvia Ribeiro. 2013. Addressing the 'Technology Divides': Critical Issues in Technology and SDGs. Briefing Paper: Science, Technology and Innovation (STI), 6th Session of the Open Working Group on Sustainable Development Goals, New York, 9–23 December, https://sustainabledevelopment.un.org/content/documents/4673dano.pdf. Accessed 1 March 2018.

Das, Subhadra. 2015. The Unbelievable Truth About Sir Francis Galton. UCL Museums and Collections Blog, 5 November 2015, https://blogs.ucl.ac.uk/museums/2015/11/05/the-unbelievable-truth-about-sir-francis-galton/. Accessed 23 July 2019.

Das, Subhadra. 2017. Racism, Eugenics and the Domestication of Humans. UCL Museums and Collections Blog, 25 October 2017, https://blogs.ucl.ac.uk/museums/2017/10/25/racism-eugenics-and-the-domestication-of-humans/. Accessed 23 July 2019.

Dormehl, Luke. 2017. Engineers Just Created A Tiny Antenna, Which Could Be Used For Brain Implants. Digital Trends, 25 August, https://www.digitaltrends.com/cool-tech/tiny-antenna-brain-implant/. Accessed 25 August 2017.

Drabiak, Katherine. 2017. Caveat Emptor: How the Intersection of Big Data and Consumer Genomics Exponentially Increases Informational Privacy Risks. Health Matrix 27(1): 143–525. https://scholarlycommons.law.case.edu/healthmatrix/vol27/iss1/8/. Accessed 12 September 2019.

Duhigg, Charles. 2012. How Companies Learn Your Secrets. New York Times Magazine, 16 February, https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html. Accessed 12 September 2019.

Ellenberg, Jordan. 2014. What's Even Creepier Than Target Guessing That You're Pregnant? Slate, 19 June, https://slate.com/human-interest/2014/06/big-data-whats-even-creepier-than-target-guessing-that-youre-pregnant.html. Accessed 25 July 2019.

EPSRC. 2019. Anticipate, Reflect, Engage and Act (AREA), https://epsrc.ukri.org/research/framework/area/. Accessed 25 July 2019.

ETC Group. 2011. Why Technology Assessment? ETC Group Briefing Paper, New York, March, http://www.etcgroup.org/sites/www.etcgroup.org/files/Why%20technology%20assessment2011.pdf. Accessed 1 March 2018.

ETC Group. 2014. A Note for Discussion: UN Technology Assessment. 29 May, http://www.un-ngls.org/IMG/pdf/Technology_Assessment_Overview21May2014.pdf. Accessed 1 March 2018.

ETC Group. 2015. UN Moves Towards a Technology Early Listening System. ETC Group, News Release, 16 July, http://www.etcgroup.org/content/un-moves-towards-technology-early-listening-system. Accessed 1 March 2018.

ETC Group. 2017a. The Wisdom of G.O.A.T.S. (Global Overview Assessment of Technological Systems). Draft Proposal for STI Forum 2, New York, May, http://www.etcgroup.org/sites/www.etcgroup.org/files/files/etc_goats_us_may2017.pdf. Accessed 23 July 2019.

ETC Group. 2017b. UN Sustainable Development Knowledge Platform. https://sustainabledevelopment.un.org/tfm. Accessed 2 March 2018.

Etzioni, Amitai and Oren Etzioni. 2016. Designing AI Systems that Obey our Laws and Values. Communications of the ACM 59(9): 29–31. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2830146. Accessed 12 September 2019.

European Commission. 2013. Report on the Public Consultation on IoT Governance, https://ec.europa.eu/digital-single-market/en/news/conclusions-internet-things-public-consultation. Accessed 2 March 2018.

Federal Trade Commission. 2017. Cross-Device Tracking. Staff Report, January, https://www.ftc.gov/system/files/documents/reports/cross-device-tracking-federal-trade-commission-staff-report-january-2017/ftc_cross-device_tracking_report_1-23-17.pdf. Accessed 1 March 2018.

Floridi, Luciano and Josh Cowls. 2019. A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 23 June, Issue 1: 1–13. https://doi.org/10.1162/99608f92.8cd550d1.

Frey, Carl Benedikt and Michael Osborne et al. 2016. Technology at Work v2.0: The Future Is Not What It Used to Be. Oxford Martin School and CITI GPS Reports, January, http://www.oxfordmartin.ox.ac.uk/publications/view/2092. Accessed 2 March 2018.

Frischmann, Brett, and Evan Selinger. 2018. Re-Engineering Humanity, 30–32. Cambridge: Cambridge University Press. https://www.cambridge.org/academic/subjects/law/law-and-economics/re-engineering-humanity?format=HB&isbn=9781107147096.

Future of Life Institute. 2017a. Autonomous Weapons: An Open Letter From AI & Robotics Researchers, https://futureoflife.org/open-letter-autonomous-weapons/. Accessed 1 March 2018.

Future of Life Institute. 2017b. Killer Robots: World's Top AI and Robotics Companies Urge United Nations to Ban Lethal Autonomous Weapons. Future of Life Institute, 20 August, https://futureoflife.org/2017/08/20/killer-robots-worlds-top-ai-robotics-companies-urge-united-nations-ban-lethal-autonomous-weapons/. Accessed 1 March 2018.

Gao, Wei, Renfeng Dong, Soracha Thamphiwatana, Jinxing Li, Weiwei Gao, Liangfang Zhang and Joseph Wang. 2015. Artificial Micromotors in the Mouse's Stomach: A Step Toward in Vivo Use of Synthetic Motors. ACS Nano 9(1): 117–123. https://doi.org/10.1021/nn507097k.

Global Privacy Enforcement Network. 2016. GPEN Privacy Sweep, Internet of Things: Participating Authorities' Press Releases. https://www.privacyenforcement.net/node/717. Accessed 2 March 2018.

Google Cloud. Running DeepVariant. https://cloud.google.com/genomics/docs/tutorials/deepvariant. Accessed 23 July 2019.

Greenberg, Andy. 2017. Biohackers Encoded Malware In A Strand Of DNA. Wired, 10 August, https://www.wired.com/story/malware-dna-hack. Accessed 2 March 2018.

Grimm, Nick. 2017. Swedish Employees Agree to Free Microchip Implants Designed for Office Work. ABC News, 3 April, http://www.abc.net.au/news/2017-04-03/swedish-employees-agree-to-microchip-implants/8410018. Accessed 2 March 2018.

Grosfoguel, Ramón. 2012. Decolonizing Western Universalisms: Decolonial Pluri-Versalism from Aimé Césaire to the Zapatistas. Transmodernity: Journal of Peripheral Cultural Production of the Luso-Hispanic World 1(3): 88–104, http://escholarship.org/uc/item/01w7163v. Accessed 1 March 2018.


Hahn, Adam, Aditya Ashok, Siddharth Sridhar, and Manimaran Govindarasu. 2013. Cyber-Physical Security Testbeds: Architecture, Application, and Evaluation for Smart Grid. IEEE Transactions on Smart Grid 4(2): 847–855. https://doi.org/10.1109/TSG.2012.2226919.

Hammond, Edward. 2017. Sequence Data and Benefit Sharing: DivSeek's Pitfalls Show Need for Appropriate Policy. Penang, Malaysia: Third World Network, https://www.twn.my/title2/series/bkr/bkr005.htm. Accessed 1 March 2018.

Harris, Tristan. 2016. How Technology Hijacks People's Minds — from a Magician and Google's Design Ethicist. Thrive Global, 19 May, https://journal.thriveglobal.com/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3. Accessed 1 March 2018.

HEGAAs. Horizontal Environmental Genetic Alteration Agents. http://web.evolbio.mpg.de/HEGAAs/. Accessed 25 July 2019.

Hilts, Andrew, Christopher Parsons and Jeffrey Knockel. 2016. Every Step you Fake: A Comparative Analysis of Fitness Tracker Privacy and Security. Open Effect Report 76(24): 31–33, https://openeffect.ca/reports/Every_Step_You_Fake.pdf. Accessed 1 March 2018.

HP. 2015. HP Study Reveals Smartwatches Vulnerable to Attack. HP News Advisory, 22 July, http://www8.hp.com/us/en/hp-news/press-release.html?id=2037386. Accessed 1 March 2018.

HPE. 2015. Fortify and the Internet of Things. Internet of Things Security Study: Smartwatches, http://go.saas.hpe.com/fod/internet-of-things. Accessed 1 March 2018.

Hu, Shaohua, Yiru Fang, Chee H. Ng and J. John Mann. 2019. Editorial: Involvement of Neuro-Immune Mechanism and Brain–Gut Axis in Pathophysiology of Mood Disorders. Frontiers in Psychiatry. https://doi.org/10.3389/fpsyt.2019.00403.

Hull, Dana. 2017. Elon Musk's Neuralink Gets $27 Million to Build Brain Computers. Bloomberg Technology, 25 August, https://www.bloomberg.com/news/articles/2017-08-25/elon-musk-s-neuralink-gets-27-million-to-build-brain-computers. Accessed 1 March 2018.

IEEE. 2019. Ethically Aligned Design. First Edition, 2019, https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf. Accessed 23 July 2019.

International Foundation for Autonomous Agents and Multiagent Systems. 2017. https://storage.googleapis.com/deepmind-media/papers/multi-agent-rl-in-ssd.pdf. Accessed 1 March 2018.

Irish Data Protection Commissioner. 2016. Findings of International Privacy Sweep 2016 Published. 22 September, https://www.dataprotection.ie/docs/23-9-2016-International-Privacy-Sweep-2016/i/1597.htm. Accessed 1 March 2018.

Jackson, Brian A., Duren Banks, Dulani Woods and Justin C. Dawson. 2017. Future-Proofing Justice: Building a Research Agenda to Address the Effects of Technological Change on the Protection of Constitutional Rights. Santa Monica, CA: RAND Corporation, https://www.rand.org/pubs/research_reports/RR1748.html. Accessed 1 March 2018.

Johnsen, Andreas. 2016. Investigation of Privacy and Security Issues with Smart Toys. Norwegian Consumer Council Report, Version 1, https://fil.forbrukerradet.no/wp-content/uploads/2016/12/2016-11-technical-analysis-of-the-dolls-bouvet.pdf. Accessed 1 March 2018.

Jordan, Michael I. 2018. Artificial Intelligence—The Revolution Hasn't Happened Yet. Medium, 18 April 2018, https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7. Accessed 23 July 2019.

Juengst, Eric T., and Michelle L. McGowan. 2018. Why Does the Shift from "Personalized Medicine" to "Precision Health" and "Wellness Genomics" Matter? AMA Journal of Ethics, September 2018, https://journalofethics.ama-assn.org/article/why-does-shift-personalized-medicine-precision-health-and-wellness-genomics-matter/2018-09. Accessed 23 July 2019.

Kearns, Thomas B. 1998. Technology and the Right to Privacy: The Convergence of Surveillance and Information Privacy Concerns. William & Mary Bill of Rights Journal 7(3), art. 10. http://scholarship.law.wm.edu/wmborj/vol7/iss3/10. Accessed 12 September 2019.

Kriebel, David, Joel Tickner, Paul Epstein, John Lemons, Richard Levins, Edward L. Loechler and Michael Stoto. 2001. The Precautionary Principle in Environmental Science. Environmental Health Perspectives 109(9): 871–876, at 871, citing C. Raffensperger and J.A. Tickner (eds), Protecting Public Health and the Environment: Implementing the Precautionary Principle, Island Press, 1999: 8. https://doi.org/10.1289/ehp.01109871.

Kupferschmidt, Kai. 2019. Can Atmospheric Chemists Rescue the Stalled Quest for a Human Pheromone? Science, 11 April 2019. https://doi.org/10.1126/science.aax6630.

Leibo, Joel Z., Vinicius Zambaldi, Marc Lanctot, Janusz Marecki and Thore Graepel. 2017. Multi-agent Reinforcement Learning in Sequential Social Dilemmas. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pp. 464–473. https://dl.acm.org/citation.cfm?id=3091194. Accessed 12 September 2019.

Li, QianQian, Ding Ding and Mauro Conti. 2015. Brain–Computer Interface Applications: Security and Privacy Challenges. In IEEE Conference on Communications and Network Security (CNS). https://doi.org/10.1109/CNS.2015.7346884. Accessed 25 July 2019.

Lopatto, Elizabeth. 2019. Elon Musk Unveils Neuralink's Plans for Brain-Reading 'Threads' and a Robot to Insert Them. The Verge, 16 July 2019, https://www.theverge.com/2019/7/16/20697123/elon-musk-neuralink-brain-reading-thread-robot. Accessed 23 July 2019.

Lubin, Gus. 2012. The Incredible Story Of How Target Exposed A Teen Girl's Pregnancy. Business Insider, 16 February, http://www.businessinsider.com/the-incredible-story-of-how-target-exposed-a-teen-girls-pregnancy-2012-2. Accessed 13 September 2019.

Ma, Hong, Nuria Marti-Gutierrez, Sang-Wook Park, Wu Jun, Yeonmi Lee, Keiichiro Suzuki, Amy Koski, et al. 2017. Correction of a Pathogenic Gene Mutation in Human Embryos. Nature 548(7668): 413–419. https://doi.org/10.1038/nature23305.

Mallonee, Laura. 2017. The DIY Cyborgs Hacking Their Bodies For Fun. Wired, 8 June, https://www.wired.com/story/hannes-wiedemann-grinders/. Accessed 1 March 2018.

MarketWatch. 2019. DTC Genetic Testing Market will surpass $2.5 Billion by 2024. Press Release, 22 February 2019, https://www.marketwatch.com/press-release/dtc-genetic-testing-market-will-surpass-25-billion-by-2024-2019-02-22. Accessed 23 July 2019.

Marr, Bernard. 2018. The Wonderful Ways Artificial Intelligence Is Transforming Genomics and Gene Editing. Forbes, 16 November 2018, https://www.forbes.com/sites/bernardmarr/2018/11/16/the-amazing-ways-artificial-intelligence-is-transforming-genomics-and-gene-editing/#cafc0d042c11. Accessed 23 July 2019.

Martin, Clair R., Vadim Osadchiy, Amir Kalani, and Emeran A. Mayer. 2018. The Brain-Gut-Microbiome Axis. Cellular and Molecular Gastroenterology and Hepatology 6(2): 133–148. https://doi.org/10.1016/j.cmgh.2018.04.003.

Metz, Rachel. 2017. Controlling VR with Your Mind. MIT Technology Review, March, http://www.neurable.com/news/controlling-vr-your-mind. Accessed 12 September 2019.

Mian, I.S., D. Twiselton and D. Timm. 2019. What is the Resource Footprint of a Computer Science Department? Place, People, and Pedagogy. Current version of the discussion paper presented at Data for Policy 2017: Government by Algorithm? 6–7 September 2017. https://doi.org/10.5281/zenodo.884492. Accessed 16 September 2019.

Michael, Katina, M.G. Michael, Anas Aloudat, and Christine Perakslis. 2017. You Want to Do What with RFID?: Perceptions of Radio-Frequency Identification Implants for Employee Identification in the Workplace. IEEE Consumer Electronics Magazine 6(3): 111–117. https://ieeexplore.ieee.org/document/7948858. Accessed 12 September 2019.

Mignolo, Walter. 2010. The Communal and the Decolonial. Turbulence, http://www.turbulence.org.uk/index.html@p=391.html. Accessed 1 March 2018.

Mignolo, Walter. 2013. On Pluriversality. 20 October, http://waltermignolo.com/on-pluriversality/. Accessed 1 March 2018.

Mittos, Alexandros, Bradley Malin, and Emiliano De Cristofaro. 2019. Systematizing Genome Privacy Research: A Privacy-Enhancing Technologies Perspective. In Proceedings on Privacy Enhancing Technologies. https://doi.org/10.2478/popets-2019-0006.

Mo, Yilan, Tiffany Hyun-Jin Kim, Kenneth Brancik, Dona Dickinson, Heejo Lee, Adrian Perrig, and Bruno Sinopoli. 2012. Cyber–Physical Security of a Smart Grid Infrastructure. Proceedings of the IEEE 100(1): 195–209. https://doi.org/10.1109/JPROC.2011.2161428.

Molster, Caron M., Faye L. Bowman, Gemma A. Bilkey, Angela S. Cho, Belinda L. Burns, Kristen J. Nowak and Hugh J. S. Dawkins. 2018. The Evolution of Public Health Genomics: Exploring Its Past, Present, and Future. Frontiers in Public Health, 4 September 2018, https://doi.org/10.3389/fpubh.2018.00247. Accessed 23 July 2019.

Morris, Ian. 2017. Apple Has Sold 1.2 Billion iPhones Worth $738 Billion In 10 Years. Forbes, 29 June, https://www.forbes.com/sites/ianmorris/2017/06/29/apple-has-sold-1-2-billion-iphones-worth-738-billion-in-10-years/. Accessed 1 March 2018.

Nan, Tianxiang, Hwaider Lin, Yuan Gao, Alexei Matyushov, Guoliang Yu, Huaihao Chen, Neville Sun, Shengjun Wei, Zhiguang Wang, Menghui Li, Xinjun Wang, Amine Belkessam, Rongdi Guo, Brian Chen, James Zhou, Zhenyun Qian, Yu Hui, Matteo Rinaldi, Michael E. McConney, Brandon M. Howe, Zhongqiang Hu, John G. Jones, Gail J. Brown, and Nian Xiang Sun. 2017. Acoustically Actuated Ultra-Compact NEMS Magnetoelectric Antennas. Nature Communications 8(1): 296. https://doi.org/10.1038/s41467-017-00343-8.

Narayanan, Arvind, and Dillon Reisman. 2017. The Princeton Web Transparency and Accountability Project. In Transparent Data Mining for Big and Small Data, 45–67. Springer International Publishing. https://doi.org/10.1007/978-3-319-54024-5_3.

National Human Genome Research Institute. 15 for 15: Direct-to-Consumer Genomic Testing, https://www.genome.gov/dna-day/15-for-15/direct-to-consumer-genomic-testing. Accessed 23 July 2019.

Ney, Peter, Karl Koscher, Lee Organick, Luis Ceze and Tadayoshi Kohno. 2017. Computer Security, Privacy, and DNA Sequencing: Compromising Computers with Synthesized DNA, Privacy Leaks, and More. In 26th USENIX Security Symposium. 25 October, https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-ney.pdf. Accessed 2 March 2018.

News.com.au. 2017. Swedish Company Epicenter Implants Microchips into Employees. News.com.au, 4 April, http://www.news.com.au/technology/science/human-body/swedish-company-epicenter-implants-microchips-into-employees/news-story/5c48700ebb54262ae389db085593ab12. Accessed 2 March 2018.

Newman, Stuart and Tina Stevens. 2019a. A Ban, Not a Moratorium, on Human Embryo Modification. The Berkeley Daily Planet, 28 June 2019, https://www.berkeleydailyplanet.com/issue/2019-06-28/article/47714. Accessed 25 July 2019.

Newman, Stuart and Tina Stevens. 2019b. Biotech Juggernaut: Hope, Hype, and Hidden Agendas of Entrepreneurial BioScience. https://www.routledge.com/Biotech-Juggernaut-Hope-Hype-and-Hidden-Agendas-of-Entrepreneurial-BioScience/Stevens-Newman/p/book/9781138043237. Accessed 16 September 2019.

Norwegian Consumer Council. 2017. #WatchOut: Analysis of Smartwatches for Children. Norwegian Consumer Council Report, October, https://fil.forbrukerradet.no/wp-content/uploads/2017/10/watchout-rapport-october-2017.pdf. Accessed 1 March 2018.

O'Brien, Carl. 2018. Call for Smartphone Use to be Banned for Primary Students. The Irish Times, 30 January, https://www.irishtimes.com/news/education/call-for-smartphone-use-to-be-banned-for-primary-students-1.3372848. Accessed 1 March 2018.

O'Brolcháin, Fiachra, Tim Jacquemard, David Monaghan, Noel O'Connor, Peter Novitzky, and Bert Gordijn. 2016. The Convergence of Virtual Reality and Social Networks: Threats to Privacy and Autonomy. Science and Engineering Ethics 22(1): 1–29, https://doi.org/10.1007/s11948-014-9621-1. Accessed 12 September 2019.

O'Connell, Mark. 2017. To Be A Machine. London: Granta Publications. http://grantabooks.com/to-be-a-machine-2. Accessed 12 September 2019.

O'Donnell, Erin and Julia Talbot-James. 2017. Three Rivers are Now Legally People—But That's Just the Start of Looking After Them. The Conversation, 23 March, https://theconversation.com/three-rivers-are-now-legally-people-but-thats-just-the-start-of-looking-after-them-74983. Accessed 1 March 2018.

Osei-Mensah, Jennifer. 2019. "A History That Has Never Gone Away": UCL Eugenics Inquiry Holds Divisive First Meeting. PI Magazine, 7 March 2019, https://pimediaonline.co.uk/news/a-history-that-has-never-gone-away-ucl-eugenics-inquiry-holds-divisive-first-meeting/. Accessed 23 July 2019.

Page, Hector, Charlie Cabot, and Kobbi Nissim. 2018. Differential Privacy: An Introduction for Statistical Agencies. Government Statistical Service, December 2018, https://gss.civilservice.gov.uk/wp-content/uploads/2018/12/12-12-18_FINAL_Privitar_Kobbi_Nissim_article.pdf. Accessed 23 July 2019.

Papadopoulos, Elias P., Michalis Diamantaris, Panagiotis Papadopoulos, Thanasis Petsas, Sotiris Ioannidis and Evangelos P. Markatos. 2017. The Long-Standing Privacy Debate: Mobile Websites Vs Mobile Apps. In Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, https://doi.org/10.1145/3038912.3052691.

Patterson, Anna. 2017. Introducing Gradient Ventures. Google Blog, 11 July, https://www.blog.google/topics/machine-learning/introducing-gradient-ventures/. Accessed 2 March 2018.

Peet, Alastair and Tom Wilde. 2017. Artificial Intelligence: The Investment Of 2017 And Beyond. Financier Worldwide, February, https://www.financierworldwide.com/artificial-intelligence-the-investment-of-2017-and-beyond/. Accessed 2 March 2018.

Perakslis, Christine, Katina Michael and M. G. Michael. 2016. Smart Environments & The Convergence of the Veillances: Privacy Violations to Consider. MBA Faculty Conference Papers & Journal Articles, Paper 92, http://scholarsarchive.jwu.edu/mba_fac/92. Accessed 2 March 2018.

Pfeiffer, Max, Tim Dunte, Stefan Schneegass, Florian Alt and Michael Rohs. 2015. Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2702123.2702190.

Phillips, Andelka M. 2016. Wake Up and Smell the Coffee! A FLE5SH Approach to New and Emerging Technologies … Beyond 'Responsible Research and Innovation'. University of Edinburgh's IP/IT/Media Law Discussion Group seminar, Edinburgh Law Faculty. https://www.iash.ed.ac.uk/event/andelka-phillips-wake-and-smell-coffee-fle5sh-approach-new-and-emerging-technologies-%E2%80%A6-beyond. Accessed 1 March 2018.

Phillips, Andelka M. 2017. Reading the Fine Print When Buying Your Genetic Self Online: Direct-to-Consumer Genetic Testing Terms and Conditions. New Genetics and Society 36(3): 273–295. https://doi.org/10.1080/14636778.2017.1352468.

Phillips, Andelka M. 2019. Buying Your Self on the Internet: Wrap Contracts and Personal Genomics. Edinburgh: Edinburgh University Press. https://edinburghuniversitypress.com/book-buying-your-self-onthe-internet.html.

Phillips, Andelka M. and I.S. Mian. 2017. Governance and Assessment of Future Spaces: A Discussion of Some Issues Raised by the Possibilities of Human-Machine Mergers. In Data For Policy 2017: Government by Algorithm? London, 6–7 September, https://doi.org/10.5281/zenodo.896109.

Phillips, Andelka M., I.S. Mian and J. Charbonneau. 2015. Molecule say 'hello' to Molecule: Technological Innovation under the Microscope. GikII Conference, Berlin, http://www.gikii.org/?p=280. Accessed 25 July 2019.

Phillips, Andelka M., I.S. Mian and J. Charbonneau. 2016. Living in a Panopticon City: the Biological-Geographic-Economic-Social-Behavioural-Physical Complex—People and Places Under Dynamic Surveillance. In Surveillance: Power, Performance & Trust (7th Biannual Surveillance and Society Conference), Barcelona, April. http://www.ssn2016.net/blog/wp-content/uploads/Booklet_Abstracts_2016.pdf. Accessed 16 September 2019.

Privacy Commissioner (New Zealand). 2016. International Study Finds Privacy Shortfalls in Internet of Things Devices. 28 September, https://www.privacy.org.nz/news-and-publications/statements-media-releases/international-study-finds-privacy-shortfalls-in-internet-of-things-devices/. Accessed 1 March 2018.

Reeve, R.G., S. Voeneky, D. Caetano-Anollés, F. Beck, and C. Boëte. 2018. Agricultural Research, or a New Bioweapon System? Science 362(6410): 35–37. https://doi.org/10.1126/science.aat7664.

Reijers, Wessel, Fiachra O'Brolcháin and Paul Haynes. 2016. Governance in Blockchain Technologies & Social Contract Theories. Ledger 1: 134–151, https://www.ledgerjournal.org/ojs/index.php/ledger/article/view/62. Accessed 25 July 2019.

Riley, Boots. 2018. Sorry To Bother You, http://www.sorrytobotheryou.movie. Accessed 25 July 2019.

Rio Declaration on Environment and Development. 1992. Principle 15, http://www.unesco.org/education/pdf/RIO_E.PDF. Accessed 1 March 2018.

Rohrscheib, Chelsie E. and Jeremy C. Brownlie. 2013. Microorganisms that Manipulate Complex Animal Behaviours by Affecting the Host's Nervous System. Springer Science Reviews 1(1–2): 133–140, https://doi.org/10.1007/s40362-013-0013-8. Accessed 25 July 2019.

Royal Society Steering Group on Neural Interface Technologies. 2019. iHuman Perspective: Neural Interfaces. https://royalsociety.org/topics-policy/projects/ihuman-perspective/. Accessed 16 September 2019.

Semsarian, Christopher. 2019. Genome Sequencing for Sale on the NHS. British Medical Journal 364: l789, https://www.bmj.com/content/364/bmj.l789. Accessed 23 July 2019.

Sanders, Robert. 2017. Defense Department Pours $65 Million into Making CRISPR Safer. Berkeley News, 19 July, http://news.berkeley.edu/2017/07/19/defense-department-pours-65-million-into-making-crispr-safer/. Accessed 2 March 2018.

Schmidt, R., D. Ulanova, L.Y. Wick, H.B. Bode and P. Garbeva. 2019. Microbe-Driven Chemical Ecology: Past, Present and Future. The ISME Journal, 9 July 2019. https://doi.org/10.1038/s41396-019-0469-x.

Science and Environmental Health Network. 1998. The Wingspread Consensus Statement on the Precautionary Principle. http://sehn.org/wingspread-conference-on-the-precautionary-principle/. Accessed 1 March 2018.

Seppala, Timothy J. 2015. Scientists Successfully Implant Self-Destructing Nanobots into Live Mice. Engadget, 23 January, https://www.engadget.com/2015/01/23/nanobots-in-mice-do-the-twist/. Accessed 2 March 2018.

Sheppard, David. 2017. Microchipping Workers: What are the Moral, Practical and Legal Implications? Personnel Today, 22 August, http://www.personneltoday.com/hr/microchipping-workers-moral-practical-legal-implications/. Accessed 2 March 2018.

Science and Technology Committee (Commons). 2019. Commercial Genomics Inquiry Launched. 4 March 2019, https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/news-parliament-2017/commercial-genomics-inquiry-launch-17-19/. Accessed 23 July 2019.

Simonite, Tom. 2017. Google Unveils An AI Investment Fund. It's Betting On An App Store For Algorithms. Wired, 23 June, https://www.wired.com/story/google-ai-venture-fund/. Accessed 2 March 2018.

Solon, Olivia. 2016. Self-Driving Trucks: What's the Future for America's 3.5 Million Truckers? The Guardian, 17 June, https://www.theguardian.com/technology/2016/jun/17/self-driving-trucks-impact-on-drivers-jobs-us. Accessed 1 March 2018.

Solon, Olivia. 2017. World's Lamest Cyborg? My Microchip Isn't Cool Now—But it Could be the Future. The Guardian, 2 August, https://www.theguardian.com/technology/2017/aug/02/microchip-contactless-payment-three-square-market-biohax. Accessed 2 March 2018.

Som, Claudia, Lorenz M. Hilty, and Andreas R. Kohler. 2009. The Precautionary Principle as a Framework for a Sustainable Information Society. Journal of Business Ethics 85(3): 493–505. https://doi.org/10.1007/s10551-009-0214-x.

Starritt, Alexander. 2016. Sweden is Paying People to Fix Their Belongings Instead of Throwing Them Away. World Economic Forum, 27 October, https://www.weforum.org/agenda/2016/10/sweden-is-tackling-its-throwaway-culture-with-tax-breaks-on-repairs-will-it-work/. Accessed 1 March 2018.

Statista. 2017. Number of Smartphone Users Worldwide from 2014 to 2020 (in Billions). https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/. Accessed 1 March 2018.

Statt, Nick. 2017. Elon Musk Launches Neuralink, a Venture to Merge the Human Brain with AI. The Verge, 27 March, https://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs.

Strickland, Eliza. 2017. Facebook Announces 'Typing-by-Brain' Project. IEEE Spectrum, 20 April, http://spectrum.ieee.org/the-human-os/biomedical/bionics/facebook-announces-typing-by-brain-project. Accessed 2 March 2018.

Suzuki, Tohru, Norio Fujimaki and Kazuhisa Ichikawa. 2007. iBrain: A Simulation and Visualization Tool for Activation of Brain Areas on a Realistic 3D Brain Image. BMC Neuroscience 8(S2), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4436429/. Accessed 1 March 2018.

Sørensen, Evelyne J.B. 2016. The Post that Wasn't: Facebook Monitors Everything Users Type and Not Publish. Computer Law & Security Review 32(1): 146–151. https://doi.org/10.1016/j.clsr.2015.12.007.

Suda, T. and T. Nakano. 2018. Molecular Communication: A Personal Perspective. IEEE Transactions on NanoBioscience 17: 424–432. https://doi.org/10.1109/TNB.2018.2851951.

Tanasescu, Mihnea. 2017. Rivers Get Human Rights: They Can Sue to Protect Themselves. Scientific American, 19 June, https://www.scientificamerican.com/article/rivers-get-human-rights-they-can-sue-to-protect-themselves/. Accessed 1 March 2018.


Taylor, Harriet. 2015. Biggest Cybersecurity Threats in 2016. CNBC, 28 December, http://www.cnbc.com/2015/12/28/biggest-cybersecurity-threats-in-2016.html. Accessed 1 March 2018.

Templeton, Brad. 2019. Tesla Autopilot Repeats Fatal Crash; Do They Learn From Past Mistakes? Forbes, 21 May 2019, https://www.forbes.com/sites/bradtempleton/2019/05/21/tesla-autopilot-repeats-fatal-crash-do-they-learn-from-past-mistakes. Accessed 23 July 2019.

Timmer, John. 2017. Researchers Encode Malware in DNA, Compromise DNA Sequencing Software. Ars Technica, 12 August, https://arstechnica.com/science/2017/08/researchers-encode-malware-in-dna-compromise-dna-sequencing-software/. Accessed 2 March 2018.

Tracy, Phillip. 2017. Infected DNA Successfully Hacks Computer in Terrifying Experiment. The Daily Dot, 10 August, https://www.dailydot.com/debug/dna-hack-computer/. Accessed 2 March 2018.

Trafton, Anne. 2016. Nanobionic Spinach Plants Can Detect Explosives. MIT News, 30 October, http://news.mit.edu/2016/nanobionic-spinach-plants-detect-explosives-1031. Accessed 1 March 2018.

Treaty on the Functioning of the European Union. 2007. Official Journal C 326, 26/10/2012, P. 0001–0390, http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A12012E%2FTXT. Accessed 1 March 2018.

Tuptuk, Nilufer and Stephen Hailes. 2016. The Cyberattack on Ukraine's Power Grid is a Warning of What's to Come. The Conversation, 13 January, http://theconversation.com/the-cyberattack-on-ukraines-power-grid-is-a-warning-of-whats-to-come-52832. Accessed 2 March 2018.

UCL. 2018. Inquiry Launches into History of Eugenics at UCL. 5 December 2018. https ://www.ucl.ac.uk/news/2018/dec/inqui ry-launc hes-histo ry-eugen ics-ucl. Accessed 23 July 2019.

UCL Culture. Galton and Anthropometrics. UCL Culture, Galton Col-lection. https ://www.ucl.ac.uk/cultu re/galto n-colle ction /galto n-and-anthr opome trics . Accessed 23 July 2019.

UK Information Commissioner's Office. 2016. Privacy Regulators Study Finds Internet of Things Shortfalls. 22 September, https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2016/09/privacy-regulators-study-finds-internet-of-things-shortfalls/. Accessed 2 March 2018.

Vince, Gaia. 2012. The High Cost of Our Throwaway Culture. BBC, 29 November, http://www.bbc.com/future/story/20121129-the-cost-of-our-throwaway-culture. Accessed 1 March 2018.

Vincent, James. 2017. Elon Musk and AI Leaders Call for a Ban on Killer Robots. The Verge, 21 August, https://www.theverge.com/2017/8/21/16177828/killer-robots-ban-elon-musk-un-petition. Accessed 1 March 2018.

Vincent, James. 2018. Google 'Fixed' Its Racist Algorithm by Removing Gorillas from its Image-Labeling Tech. The Verge, 12 January 2018, https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai. Accessed 23 July 2019.

Wakefield, Jane. 2017. Germany Bans Children's Smartwatches. BBC, 17 November, http://www.bbc.com/news/technology-42030109. Accessed 1 March 2018.

Walter, Damien. 2016. When AI Rules the World: What SF Novels Tell us About our Future Overlords. The Guardian, 18 March, https://www.theguardian.com/books/booksblog/2016/mar/18/ai-sf-novels-artificial-intelligence-science-fiction-gibson-neuromancer. Accessed 2 March 2018.

Ward, Jacob. 2018. The 'Geno-Economists' Say DNA Can Predict Our Chances of Success. The New York Times Magazine, 16 November 2018. https://www.nytimes.com/interactive/2018/11/16/magazine/tech-design-economics-genes.html. Accessed 23 July 2019.

Warwick, Kevin. 2016. The Future of Artificial Intelligence and Cybernetics. MIT Technology Review, 10 November, https://www.technologyreview.com/s/602830/the-future-of-artificial-intelligence-and-cybernetics/. Accessed 2 March 2018.

Weber, Rolf H. 2010. Internet of Things—New Security and Privacy Challenges. Computer Law & Security Review 26: 23–30. https://doi.org/10.1016/j.clsr.2009.11.008.

Williams, Lucy. 2017. Driverless Lorries Could Mean 600,000 Lost Jobs. It's Time We Took a Universal Basic Income Seriously. Evolve Politics, 28 August 2017, http://evolvepolitics.com/driverless-lorries-could-mean-600000-lost-jobs-its-time-we-took-a-universal-basic-income-seriously/. Accessed 2 March 2018.

Wolbring, Gregor. 2009. Innovation for Whom? Innovation for What? The Impact of Ableism. 2020 Science Guest Blog, 14 December, http://2020science.org/2009/12/14/wolbring/. Accessed 1 March 2018.

Young, Stephen L. 2017. Unintended Consequences of 21st Century Technology for Agricultural Pest Management. EMBO Reports 18(9): 1478. https://doi.org/10.15252/embr.201744660.

Zimmer, Zac. 2017. Bitcoin and Potosí Silver: Historical Perspectives on Cryptocurrency. Technology and Culture 58: 307–334. https://doi.org/10.1353/tech.2017.0038.

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.