
https://www.bloomberg.com/news/articles/2017-03-02/ai-scientists-gather-to-plot-doomsday-scenarios-and-solutions

AI Scientists Gather to Plot Doomsday Scenarios (and Solutions)

Researchers, cyber-security experts and policy wonks ask themselves: What could possibly go wrong?

By Dina Bass (@dinabass)

March 02, 2017 6:00 AM

Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it.

Their workshop took place last weekend at Arizona State University with funding from Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed "Envisioning and Addressing Adverse AI Outcomes," it was a kind of AI doomsday game that organized some 40 scientists, cyber-security experts and policy wonks into groups of attackers -- the red team -- and defenders -- the blue team -- playing out AI-gone-very-wrong scenarios, ranging from stock-market manipulation to global warfare.

Horvitz is optimistic -- a good thing because machine intelligence is his life's work -- but some other, more dystopian-minded backers of the project seemed to find his outlook too positive when plans for this event started about two years ago, said Krauss, a theoretical physicist who directs ASU's Origins Project, the program running the workshop. Yet Horvitz said that for these technologies to move forward successfully and to earn broad public confidence, all concerns must be fully aired and addressed. 

"There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology," said Horvitz, managing director of Microsoft's Research Lab in Redmond, Washington. ``To maximally gain from the upside we also have to think through possible outcomes in more detail than we have before and think about how we’d deal with them."

Participants were given "homework" to submit entries for worst-case scenarios. They had to be realistic -- based on current technologies or those that appear possible -- and five to 25 years in the future. The entrants with the "winning" nightmares were chosen to lead the panels, which featured about four experts on each of the two teams to discuss the attack and how to prevent it.

Blue team, including Launchbury, Fisher and Krauss, in the War and Peace scenario (Photo: Tessa Eztioni, Origins Project at ASU)

Turns out many of these researchers can match science-fiction writers Arthur C. Clarke and Philip K. Dick for dystopian visions. In many cases, little imagination was required -- scenarios like technology being used to sway elections or new cyber attacks using AI are being seen in the real world, or are at least technically possible. Horvitz cited research that shows how to alter the way a self-driving car sees traffic signs so that the vehicle misreads a "stop" sign as "yield."

The possibility of intelligent, automated cyber attacks is the one that most worries John Launchbury, who directs one of the offices at the U.S. Defense Advanced Research Projects Agency, and Kathleen Fisher, chairwoman of the computer science department at Tufts University, who led that session. What happens if someone constructs a cyber weapon designed to hide itself and evade all attempts to dismantle it? Now imagine it spreads beyond its intended target to the broader internet. Think Stuxnet, the computer virus created to attack the Iranian nuclear program that got out in the wild, but stealthier and more autonomous.

"We're talking about malware on steroids that is AI-enabled," said Fisher, who is an expert in programming languages. Fisher presented her scenario under a slide bearing the words "What could possibly go wrong?" which could have also served as a tagline for the whole event.

How did the defending blue team fare on that one? Not well, said Launchbury. They argued that advanced AI needed for an attack would require a lot of computing power and communication, so it would be easier to detect. But the red team felt that it would be easy to hide behind innocuous activities, Fisher said. For example, attackers could get innocent users to play an addictive video game to cover up their work.

To prevent a stock-market manipulation scenario dreamed up by University of Michigan computer science professor Michael Wellman, blue team members suggested treating attackers like malware by trying to recognize them via a database of known types of hacks. Wellman, who has been in AI for more than 30 years and calls himself an old-timer on the subject, said that approach could be useful in finance.

http://www.infoworld.com/article/3165267/advanced-persistent-threats/ai-isnt-for-the-good-guys-alone-anymore.html

AI isn't for the good guys alone anymore

Criminals are beginning to use artificial intelligence and machine learning to get around cyberdefenses

By Maria Korolov, Contributing Writer, CSO | Feb 4, 2017

Last summer at the Black Hat cybersecurity conference, the DARPA Cyber Grand Challenge pitted automated systems against one another, trying to find weaknesses in the others’ code and exploit them.

“This is a great example of how easily machines can find and exploit new vulnerabilities, something we’ll likely see increase and become more sophisticated over time,” said David Gibson, vice president of strategy and market development at Varonis Systems.

His company hasn’t seen any examples of hackers leveraging artificial intelligence technology or machine learning, but nobody adopts new technologies faster than the sin and hacking industries, he said.

“So it’s safe to assume that hackers are already using AI for their evil purposes,” he said.

The genie is out of the bottle

“It has never been easier for white hats and black hats to obtain and learn the tools of the machine learning trade,” said Don Maclean, chief cybersecurity technologist at DLT Solutions. “Software is readily available at little or no cost, and machine learning tutorials are just as easy to obtain.”

Take, for example, image recognition.

It was once considered a key focus of artificial intelligence research. Today, tools such as optical character recognition are so widely available and commonly used that they’re not even considered to be artificial intelligence anymore, said Shuman Ghosemajumder, CTO at Shape Security.

“People don’t see them as having the same type of magic as it had before,” he said. “Artificial intelligence is always what’s coming in the future, as opposed to what we have right now.”

Today, for example, computer vision is good enough to allow self-driving cars to navigate busy streets.

And image recognition is also good enough to solve the puzzles routinely presented to website users to prove that they are human, he added.

For example, last spring, Vinay Shet, the product manager for Google’s Captcha team, told Google I/O conference attendees that in 2014, they had a distorted text Captcha that only 33 percent of humans could solve. By comparison, the state-of-the-art OCR systems at the time could already solve it with 99.8 percent accuracy.

The criminals are already using image recognition technology, in combination with “Captcha farms,” to bypass this security measure, said Ghosemajumder. The popular Sentry MBA credential stuffing tool has it built right in, he added.

So far, he said, he hasn’t seen any publicly available tool kits based on machine learning that are designed to bypass other security mechanisms.

But there are indirect indicators that criminals are starting to use this technology, he added.

For example, companies already know that if there’s an unnaturally large amount of traffic from one IP address, there’s a high chance it’s malicious, so criminals use botnets to bypass those filters, and the defenders look for more subtle indications that the traffic is automated and not human, he said.

They can’t just add in more randomness, since human behavior is not actually random, he said. Spotting subtle patterns in large amounts of data is exactly what machine learning is good at—and what the criminals need to do in order to effectively mimic human behavior.
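As a toy illustration of the crude per-IP volume filter described above (the threshold, log format, and function name are illustrative assumptions, not any vendor's implementation), a first line of defense might look like the sketch below; the passage's point is that attackers who mimic human behavior slip past exactly this kind of rule, pushing both sides toward machine learning.

```python
# Toy sketch of the crude per-IP volume filter described above.
# The threshold and log format are illustrative assumptions; real defenses
# look for far subtler statistical signatures of automation.
from collections import Counter

REQUESTS_PER_HOUR_LIMIT = 1000  # assumed cutoff for "unnaturally large"

def flag_suspicious_ips(request_log):
    """request_log: iterable of (ip_address, timestamp) pairs for one hour."""
    counts = Counter(ip for ip, _ in request_log)
    return {ip for ip, n in counts.items() if n > REQUESTS_PER_HOUR_LIMIT}
```

A botnet defeats this rule by spreading traffic across thousands of addresses, each staying below the limit, which is why defenders move on to subtler behavioral features.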

Smarter email scams

According to the McAfee Labs 2017 Threats Predictions report, cyber-criminals are already using machine learning to target victims for Business Email Compromise scams, which have been escalating since early 2015.

http://metro.co.uk/2017/03/02/computer-beat-11-pro-poker-players-using-intuition-6482908/

Computer beat 11 pro poker players using ‘intuition’

Ashitha Nagesh for Metro.co.uk | Thursday 2 Mar 2017 7:00 pm

It used intuition (Picture: Shutterstock)

Don’t be alarmed, but a computer just beat 11 professional poker players in a 3,000-hand match of the incredibly complex Texas Hold’em, using ‘intuition’.

DeepStack is the first computer programme to outplay humans at the game, which it played over a period of four weeks.

Card players tend to rely on microexpressions – tiny subconscious flashes of emotion across the face that last no longer than a quarter of a second.

The computer, which was developed by scientists at the University of Alberta, Charles University in Prague and Czech Technical University, honed its ‘intuition’ through deep learning – allowing it to reassess its strategy with each move.

DeepStack has developed intuition (Picture: YouTube)

Professor Michael Bowling, from Alberta, said: ‘Poker has been a longstanding challenge problem in artificial intelligence.

‘It is the quintessential game of imperfect information in the sense that the players don’t have the same information or share the same perspective while they’re playing.’

He added that DeepStack’s winning streak would have major implications for the world outside poker.

‘Think of any real world problem,’ Prof Bowling said. ‘We all have a slightly different perspective of what’s going on – much like each player only knowing their own cards in a game of poker.’

Heads-up no-limit hold’em apparently has more unique situations than there are atoms in the universe – largely because players are able to wager different amounts.

DeepStack acts at a human speed – with an average of just three seconds of ‘thinking’ time – and runs on a simple gaming laptop with an Nvidia graphics processing unit.

The researchers’ full results have been published in the journal Science.

https://www.washingtonpost.com/news/animalia/wp/2016/04/13/octopus-slips-out-of-aquarium-tank-crawls-across-floor-escapes-down-pipe-to-ocean/?utm_term=.e75efec070ea

Octopus slips out of aquarium tank, crawls across floor, escapes down pipe to ocean

By Karin Brulliard

Animalia

April 13, 2016

Inky the octopus has won international fame after his stealthy escape from the National Aquarium of New Zealand. Here's how Inky spent his time at the aquarium before breaking free. (Facebook/National Aquarium of New Zealand)

Inky the octopus didn’t even try to cover his tracks.

By the time the staff at New Zealand’s National Aquarium noticed that he was missing, telltale suction cup prints were the main clue to an easily solved mystery.

Inky had said see ya to his tank-mate, slipped through a gap left by maintenance workers at the top of his enclosure and, as evidenced by the tracks, made his way across the floor to a six-inch-wide drain. He squeezed his football-sized body in — octopuses are very malleable, aquarium manager Rob Yarrall told the New Zealand website Stuff — and made a break for the Pacific.

“He managed to make his way to one of the drain holes that go back to the ocean. And off he went,” Yarrall told Radio New Zealand. “And he didn’t even leave us a message.”

The cephalopod version of “Shawshank Redemption” took place three months ago, but it only became public Tuesday. Inky, who already had some local renown in the coastal city of Napier, quickly became a global celebrity cheered on by strangers.

Inky had resided at the aquarium since 2014, when he was taken in after being caught in a crayfish pot, his body scarred and his arms injured. The octopus’s name was chosen from nominations submitted to a contest run by the Napier City Council.

Kerry Hewitt, the aquarium’s curator of exhibits, said at the time that Inky was “getting used to being at the aquarium” but added that staff would “have to keep Inky amused or he will get bored.”

Guess that happened.

This isn’t the first time a captive octopus decided to take matters into its own hands — er, tentacles. In 2009, after a two-spotted octopus at the Santa Monica Pier Aquarium in California took apart a water recycling valve, directed a tube to shoot water out of the tank for 10 hours and caused a massive flood, Scientific American asked octopus expert Jennifer Mather about the animals’ intelligence and previous such hijinks at aquariums.

http://www.mirror.co.uk/tech/googles-artificial-intelligence-can-diagnose-9975987

Google's artificial intelligence can diagnose cancer faster than human doctors

The system is able to scan samples to determine whether or not tissues are cancerous

By Jeff Parsons

16:00, 6 Mar 2017 | Updated 10:57, 7 Mar 2017


Top view of a female breast with a tumor (Photo: Getty Images)


Making the decision on whether or not a patient has cancer usually involves trained professionals meticulously scanning tissue samples over weeks and months.

But an artificial intelligence (AI) program owned by Alphabet, Google's parent company, may be able to do it much, much faster.

Google is working hard to tell the difference between healthy and cancerous tissue as well as discover if metastasis has occurred.

"Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labour intensive and error-prone," explained Google in a white paper outlining the study.

Prostate cancer (Photo: Getty)

"We present a framework to automatically detect and localise tumours as small as 100 ×100 pixels in gigapixel microscopy images sized 100,000×100,000 pixels.

"Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumour detection task."

Such high-level image recognition was first developed for Google's driverless car program, in order to help the vehicles scan for road obstructions.

Now the company has adapted it for the medical field and says it's more accurate than regular human doctors:

(Photo: Getty Images)

"At 8 false positives per image, we detect 92.4% of the tumours, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity." ** but how many FPs by the human? **

http://languagelog.ldc.upenn.edu/nll/?p=17970

It's not easy seeing green

March 2, 2015 @ 3:00 pm · Filed by Mark Liberman

The whole dress that melted the internet thing has brought back a curious example of semi-demi-science about a Namibian tribe that can't distinguish green and blue, but does differentiate kinds of green that look just the same to us Westerners. This story has been floating around the internets for several years, in places like the BBC and the New York Times and BoingBoing and RadioLab, and it presents an impressive-seeming demonstration of the power of language to shape our perception of the world.  But on closer inspection, the evidence seems to melt away, and the impressive experience seems to be wildly over-interpreted or even completely invented.

I caught the resurrection of this idea in Kevin Loria's article "No one could see the color blue until modern times", Business Insider 2/27/2015, which references a RadioLab episode on Colors that featured those remarkable Namibians. Loria uses them to focus on that always-popular question "do you really see something if you don't have a word for it?" *** stupid statement – philosophers – clearly I can change the hue / saturation / brightness so there is a JND but not a different label – is the fact that I can discern the difference a word for it? **

Here's the relevant segment of Loria's piece:

A researcher named Jules Davidoff traveled to Namibia to investigate this, where he conducted an experiment with the Himba tribe, which speaks a language that has no word for blue or distinction between blue and green.

When shown a circle with 11 green squares and one blue, they could not pick out which one was different from the others — or those who could see a difference took much longer and made more mistakes than would make sense to us, who can clearly spot the blue square.

But the Himba have more words for types of green than we do in English.  When looking at a circle of green squares with only one slightly different shade, they could immediately spot the different one.

Can you?

For most of us, that's harder.

https://www.technologyreview.com/s/603745/how-a-human-machine-mind-meld-could-make-robots-smarter/?set=603773

How a Human-Machine Mind-Meld Could Make Robots Smarter

Kindred AI is teaching robots new tasks using human virtual-reality “pilots.” The ultimate goal is to create a new kind of artificial intelligence.

by Will Knight | March 2, 2017

A secretive Canadian startup called Kindred AI is teaching robots how to perform difficult dexterous tasks at superhuman speeds by pairing them with human “pilots” wearing virtual-reality headsets and holding motion-tracking controllers.

The technology offers a fascinating glimpse of how humans might work in synchronization with machines in the future, and it shows how tapping into human capabilities might amplify the capabilities of automated systems. For all the worry over robots and artificial intelligence eliminating jobs, there are plenty of things that machines still cannot do. The company demonstrated the hardware to MIT Technology Review last week, and says it plans to launch a product aimed at retailers in the coming months. The long-term ambitions are far grander. Kindred hopes that this human-assisted learning will foster a fundamentally new and more powerful kind of artificial intelligence.

Kindred was created by several people from D-Wave, a quantum computing company based in Burnaby, Canada. Kindred is currently testing conventional industrial robot arms capable of grasping and placing objects that can be awkward to handle, like small items of clothing, more quickly and reliably than would normally be possible. The arms do this by occasionally asking for help from a team of humans, who use virtual-reality hardware to view the challenge and temporarily take control of an arm.

“A pilot can see, hear, and feel what the robot is seeing, hearing, and feeling. When the pilot acts, those actions move the robot,” says Geordie Rose, who is a cofounder and the CEO of Kindred, and who previously cofounded D-Wave. “This allows us to show robots how to act like people. Humans aren't the fastest or best at all aspects of robot control, like putting things in specific locations, but humans are still best at making sense of tricky or unforeseen situations.” ** hole is the unexpected Query **

Kindred’s system uses several machine-learning algorithms, and tries to predict whether one of these would provide the desired outcome, such as grasping an item. *** note the cognitive flexibility ** If none seems to offer a high probability of success, it calls for human assistance. Most importantly, the algorithms learn from the actions of a human controller. To achieve this, the company uses a form of reinforcement learning, an approach that involves experimentation and strengthening behavior that leads to a particular goal (see “10 Breakthrough Technologies 2017: Reinforcement Learning”).
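Kindred has not published its control code; as a minimal sketch of the confidence-gating loop described above (the policy interface, threshold, pilot, and replay buffer are assumptions, not Kindred's implementation), the decision might look like:

```python
# Minimal sketch of the human-in-the-loop gating described above.
# The policy interface, threshold, pilot, and replay buffer are illustrative
# assumptions, not Kindred's implementation.

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for acting autonomously

def act(observation, policies, pilot, replay_buffer):
    # Ask each learned policy how likely it is to achieve the goal
    # (e.g., grasping an item) from the current observation.
    scored = [(p, p.success_probability(observation)) for p in policies]
    best_policy, best_score = max(scored, key=lambda pair: pair[1])

    if best_score >= CONFIDENCE_THRESHOLD:
        return best_policy.select_action(observation)

    # No policy is confident: hand control to a human VR pilot and record
    # the demonstration so reinforcement learning can learn from it.
    action = pilot.take_control(observation)
    replay_buffer.store(observation, action)
    return action
```

The key design choice is that human time is spent only on cases the algorithms flag as uncertain, which is also what lets one pilot oversee several robots at once.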

Rose says the system can grasp small items of clothing about twice as fast as a person working on his own can, while a robot working independently would be too unreliable to deploy. One person can also operate several robots at once.

Rose adds that Kindred is exploring all sorts of human-in-the loop systems, from ones where a person simply clicks on an image to show a robot where to grasp something, to full-body exoskeletons that provide control over a humanoid robot. He says that pilots usually learn how to control a remote robotic system effectively. “When you’re using the control apparatus, at first it’s very frustrating, but people’s minds are very plastic, and you adjust,” says Rose.

The technical inspiration for the technology comes from Suzanne Gildert, who was previously a senior researcher at D-Wave, and who is Kindred’s chief scientific officer. The company has been operating in stealth mode for several years, but drew attention when details of a patent filed by Gildert surfaced online. The patent describes a scheme for combining different tele-operation systems with machine learning. Indeed, Kindred’s vision for its technology seems to extend well beyond building robots more skilled at sorting. 

“The idea was if you could do that for long enough, and if you had some sort of AI system in the background learning, that maybe you could try out many different AI models and see which ones trained better,” *** how to get to cognitive flexibility ** Gildert says. “Eventually, my thought was, if you can have a human demonstrating anything via a robot, then there’s no reason that robot couldn’t learn to be very humanlike.”

https://www.nytimes.com/2017/03/04/world/asia/north-korea-missile-program-sabotage.html?_r=0

Trump Inherits a Secret Cyberwar Against North Korean Missiles

By DAVID E. SANGER and WILLIAM J. BROAD | MARCH 4, 2017

An image distributed by the North Korean government showing the country’s leader, Kim Jong-un, visiting a missile test center in North Pyongan Province. Analysts say the pair of engines he is standing in front of could power an intercontinental ballistic missile. Credit Korean Central News Agency, via Reuters

WASHINGTON — Three years ago, President Barack Obama ordered Pentagon officials to step up their cyber and electronic strikes against North Korea’s missile program in hopes of sabotaging test launches in their opening seconds.

Soon a large number of the North’s military rockets began to explode, veer off course, disintegrate in midair and plunge into the sea. Advocates of such efforts say they believe that targeted attacks have given American antimissile defenses a new edge and delayed by several years the day when North Korea will be able to threaten American cities with nuclear weapons launched atop intercontinental ballistic missiles.

But other experts have grown increasingly skeptical of the new approach, arguing that manufacturing errors, disgruntled insiders and sheer incompetence can also send missiles awry. Over the past eight months, they note, the North has managed to successfully launch three medium-range rockets. And Kim Jong-un, the North Korean leader, now claims his country is in “the final stage in preparations” for the inaugural test of his intercontinental missiles — perhaps a bluff, perhaps not.

An examination of the Pentagon’s disruption effort, based on interviews with officials of the Obama and Trump administrations as well as a review of extensive but obscure public records, found that the United States still does not have the ability to effectively counter the North Korean nuclear and missile programs. Those threats are far more resilient than many experts thought, The New York Times’s reporting found, and pose such a danger that Mr. Obama, as he left office, warned President Trump they were likely to be the most urgent problem he would confront.


http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside?utm_campaign=news_daily_2017-03-07&et_rid=54802259&et_cid=1203472

A representation of a neural network (Image: Akritasa/Wikimedia Commons)

Brainlike computers are a black box. Scientists are finally peering inside

By Jackie Snow | Mar. 7, 2017, 3:15 PM

Last month, Facebook announced software that could simply look at a photo and tell, for example, whether it was a picture of a cat or a dog. A related program identifies cancerous skin lesions as well as trained dermatologists can. Both technologies are based on neural networks, sophisticated computer algorithms at the cutting edge of artificial intelligence (AI)—but even their developers aren’t sure exactly how they work. Now, researchers have found a way to "look" at neural networks in action and see how they draw conclusions.

Neural networks, also called neural nets, are loosely based on the brain’s use of layers of neurons working together. Like the human brain, they aren't hard-wired to produce a specific result—they “learn” on training sets of data, making and reinforcing connections between multiple inputs. A neural net might have a layer of neurons that look at pixels and a layer that looks at edges, like the outline of a person against a background. After being trained on thousands or millions of data points, a neural network algorithm will come up with its own rules on how to process new data. But it's unclear what the algorithm is using from those data to come to its conclusions.

“Neural nets are fascinating mathematical models,” says Wojciech Samek, a researcher at Fraunhofer Institute for Telecommunications at the Heinrich Hertz Institute in Berlin. “They outperform classical methods in many fields, but are often used in a black box manner.”

In an attempt to unlock this black box, Samek and his colleagues created software that can go through such networks backward in order to see where a certain decision was made, and how strongly this decision influenced the results. Their method, which they will describe this month at the Centre of Office Automation and Information Technology and Telecommunication conference in Hanover, Germany, enables researchers to measure how much individual inputs, like pixels of an image, contribute to the overall conclusion. Pixels and areas are then given a numerical score for their importance. With that information, researchers can create visualizations that impose a mask over the image. The mask is most bright where the pixels are important and darkest in regions that have little or no effect on the neural net’s output.
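The article does not spell out the algorithm, but Samek's group is known for layer-wise relevance propagation; as a hedged illustration of the backward redistribution idea on a toy two-layer ReLU network (the network, the epsilon stabiliser, and all sizes are assumptions, not their software), the per-pixel scoring might look like:

```python
# Minimal epsilon-style relevance propagation for a toy two-layer ReLU
# network. Illustrative assumption only; the software described above
# handles real architectures.
import numpy as np

EPS = 1e-6  # stabiliser so we never divide by zero

def lrp_epsilon(x, W1, b1, W2, b2, target):
    # Forward pass, keeping activations for the backward redistribution.
    z1 = W1 @ x + b1
    a1 = np.maximum(z1, 0.0)      # ReLU hidden layer
    z2 = W2 @ a1 + b2             # output scores

    # All relevance starts on the target output neuron.
    R2 = np.zeros_like(z2)
    R2[target] = z2[target]

    # Output layer -> hidden layer.
    s = R2 / (z2 + EPS * np.where(z2 >= 0, 1.0, -1.0))
    R1 = a1 * (W2.T @ s)

    # Hidden layer -> input pixels.
    s = R1 / (z1 + EPS * np.where(z1 >= 0, 1.0, -1.0))
    R0 = x * (W1.T @ s)
    return R0  # one relevance score per input feature (e.g., pixel)

# Usage on random stand-in weights and a flattened "image":
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 64)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)
relevance = lrp_epsilon(rng.random(64), W1, b1, W2, b2, target=0)
```

Rescaling the per-pixel scores into an image gives exactly the bright-to-dark mask the article describes.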

For example, the software was used on two neural nets trained to recognize horses. One neural net was using the body shape to determine whether it was a horse. The other, however, was looking at copyright symbols on the images that were associated with horse association websites.

This work could improve neural networks, Samek suggests. That includes helping reduce the amount of data needed, one of the biggest problems in AI development, by focusing in on what the neural nets need. It could also help investigate errors when they occur in results, like misclassifying objects in an image.

http://www.recode.net/2017/3/1/14771330/boston-dynamics-new-nightmare-robot

Roboticists say Boston Dynamics’ new nightmare robot is unlike anything ever seen before

Handle can move with confidence — and carry 100 pounds.

BY APRIL GLASER (@APRILASER) | MAR 1, 2017, 4:19PM EST


Boston Dynamics, the Alphabet-owned robotics company, unveiled a new robot this week that robotics experts say is unlike anything they’ve ever seen before.

The massive legged, wheeled machine is called Handle. Marc Raibert, the CEO of Boston Dynamics, called it “nightmare inducing.” (Video of Handle was first seen in January, when venture capitalist Steve Jurvetson posted a YouTube video of Raibert introducing the new creation at a conference.)

“It’s very impressive,” said Vikash Kumar, a robotics researcher at the University of Washington. “Nothing like this has been shown before.”

What sets Handle apart is its ability to move with confidence and an understanding of what its body is capable of — it’s remarkably lifelike. Robots are usually stiff, slow and careful. They’re programmed to be cautious in their movements.

“This robot is using the momentum of its body. These dynamic movements are not what we know robots to be able to do,” said Kumar. “There are other robots that have the hardware capabilities of doing something similar, but on the algorithmic side, we’re not at the point where we can really leverage those capabilities.”

But Handle appears to be getting close.

“What's most impressive is how dynamic and powerful it is, while still being stable and, mostly, in control,” said Siddhartha Srinivasa, a professor of computer engineering at Carnegie Mellon, who specializes in robotic manipulation. Srinivasa is moving to teach at the University of Washington next year.

The robot may one day be used in warehouses, as illustrated by the video released by Boston Dynamics. It can jump over hurdles and land on its wheeled feet, lift a single leg while moving, stroll in the snow and go down stairs.

Handle can also carry a payload of 100 pounds, which roboticists say is impressive for its size and shape.

“The field at the moment is trying to build robots to take care of really small, low-weight objects,” said Kumar. “To deal with an object that is comparable with its own weight changes the whole ballgame altogether.”

Robots in warehouses now often act like mechanical shelving, carrying items from one part of the facility to another, saving humans time and future back problems. Kiva’s robots — from the warehouse robotics company that Amazon bought in 2014, for example — followed barcoded stickers on the floor.

To be sure, Handle probably has a long way to go before taking a step on a job site.

“It is important to remember that there are many perception and control challenges in getting such a machine to operate reliably in a real warehouse,” said Ken Goldberg, a robotics professor at Berkeley.

It’s not that Handle is necessarily any more dangerous than other industrial robots, which have killed people who got in their way — like what happened when an engineer at a Volkswagen plant in Germany was crushed to death in 2015 by a stationary factory robot.

Still, industrial robots have stringent safety standards and often have kill zones that humans are supposed to avoid when the machine is operating. And Handle, with all its confidence and dynamic movements, will have to be explicit about its intent as it operates around humans.

“It’s like a car with a blinker sign,” said Kumar. “We need to create ways to communicate so that humans can anticipate what a robot is trying to do and not be surprised by their action.”

It’s also not clear how Boston Dynamics plans to prepare or market its robot for the real world.

Google bought the robotics lab in 2013 under the direction of Android founder Andy Rubin, who had hired around 300 robotics engineers with Google.

But Rubin left Google in 2014 to start his own hardware incubator. The search giant put Boston Dynamics up for sale last year after public relations at Alphabet expressed concerns that the nightmarish robots they made — like the two-legged humanoid Atlas and its massive robotic dog named Spot — were “terrifying” and “ready to take human jobs.”

Though Google has — puzzlingly — yet to find a buyer for Boston Dynamics, the team clearly hasn’t stopped moving.

“This is one of the most remarkable robots I have seen in a long time,” said Srinivasa. “Boston Dynamics is truly the Apple of robotics, when it comes to tightly integrating hardware design with software and artificial intelligence. The future is exciting.”

http://www.huffingtonpost.com/mark-changizi-phd/perceiving-colors-differently_b_988244.html

Understanding Color Perception: Is Your ‘Red’ the Same as My ‘Red?’

10/02/2011 05:21 am ET | Updated Dec 02, 2011

Mark Changizi, Ph.D. Neuroscientist, Author of ‘Harnessed’ & ‘Vision Revolution’

How do we know that your “red” looks ** this is a quale statement – what you perceive in your consciousness is what I perceive ** the same as my “red”? For all we know, your “red” looks like my “blue.” In fact, for all we know your “red” looks nothing like any of my colors at all! If colors are just internal labels ** labels here is not meant to imply the word – it is meant to describe the representation internal in a vocabulary of conscious thought ** , then as long as everything gets labeled, why should your brain and my brain use the same labels? ** as long as we can align – why would they have to be the same – keep in mind the Kaku view – philosophers waste our time with these thought problems – he speaks to the ‘what is life concern that has disappeared **

Richard Dawkins wrote a nice little piece on color, and along the way he asked these questions.

He also noted that not only can color labels differ in your and my brain, but perhaps the same color labels could be used in non-visual modalities of other animals. Bats, he notes, use audition for their spatial sense, and perhaps furry moths are heard as red, and leathery locusts as blue. Similarly, rhinoceroses may use olfaction for their spatial sense, and could perceive water as orange and rival male markings as gray. ** we’ve discussed these issues many times – but I think our recent discussions on representations can lock down what we can say about the characteristics of our respective representations **

However, I would suggest that most discussions of rearrangements of color qualia (a quality or property as perceived or experienced by a person) severely underestimate how much structure comes along with our color perceptions. *** this is exactly what we are suggesting in QuEST – to comprehend qualia – to capture how they are related and how they can interact is equivalent to what he calls structure ** Once one more fully appreciates the degree to which color qualia are linked to one another and to non-color qualia, it becomes much less plausible to single color qualia out as especially permutable. ** we of course in QuEST make all qualia and even ‘self’ just different aspects of the working memory – color is just one quale – as is time and self and … every word was developed to communicate to another agent some aspect of your qualia representation **

Few of us, for example, would find it plausible to imagine that others might perceive music differently, e.g., with pitch and loudness swapped, so that melody to them sounds like loudness modulations to me, and vice versa. Few of us would find it plausible to imagine that some other brain might perceive “up” (in one’s visual field) and “down” as reversed. *** but we have inverting goggles that allow us to do this experiment ** And it is not quite so compelling to imagine that one might perceive the depth of something as the timbre of an instrument, and vice versa. And so on. ** he makes a big deal about inverted color – I’m not sure why – maybe because it is what is easy to describe to folks – I’ve never taken this argument to be unique to color **

Unlike color qualia, most alternative possible qualia rearrangements do not seem plausible. Why is that? Why is color the butt of nearly all the “inverted-spectra” arguments?

The difference is that these other qualia seem to be more than just mere labels that can be permuted willy nilly. Instead, these other qualia are deeply interconnected with hosts of other aspects of our perceptions. They are part of a complex structured network of qualia, and permuting just one small part of the network destroys the original shape and structure of the network — and when the network’s shape and structure is radically changed, the original meanings of the perceptions (and the qualia) within it are obliterated. ** everything in this paragraph is exactly where we are focusing our representation / objective function argument –

https://www.technologyreview.com/s/603797/toyota-tests-backseat-driver-software-that-could-take-control-in-dangerous-moments/?set=603799

Toyota Tests Backseat Driver Software That Could Take Control in Dangerous Moments

Cameras that watch where a driver is looking allow a car to judge when they are likely to miss a dangerous situation.

by Tom Simonite | March 7, 2017

This modified Lexus is used by Toyota to test autonomous driving software.

Making a turn across oncoming traffic is one of the most dangerous maneuvers drivers undertake every day. Researchers at Toyota think it’s one of the situations in which a software guardian angel built into your car could save lives.

In trials at private testing grounds in the U.S., left turns are one of the first scenarios Toyota has used to test the concept of a system it has dubbed “Guardian,” which judges whether a human is about to make a dangerous mistake.

Radar and other sensors on the outside of the car monitor what’s happening around the vehicle, while cameras inside track the driver’s head movements and gaze. Software uses the sensor data to estimate when a person needs help spotting or avoiding a hazardous situation, or can be left alone.

So far Toyota is just testing the ability of software to understand the hazards around a car and whether a person has spotted them, but the company plans to eventually make Guardian capable of taking action if a person doesn’t look ready to do so themselves.
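Toyota has not published Guardian's logic; as a minimal sketch of the gaze-aware gating idea in the two paragraphs above (the hazard representation, gaze test, and timing constants are assumptions, not Toyota's system):

```python
# Minimal sketch of gaze-aware intervention gating, as described above.
# The hazard fields, gaze cone, and reaction window are illustrative
# assumptions, not Toyota's Guardian implementation.
from dataclasses import dataclass

REACTION_WINDOW_S = 1.5    # assumed time a driver needs to respond
GAZE_TOLERANCE_DEG = 20.0  # assumed cone within which a hazard counts as seen

@dataclass
class Hazard:
    bearing_deg: float          # direction of the hazard relative to the car
    time_to_collision_s: float  # estimated from radar and other sensors
    seen_by_driver: bool = False

def hazard_to_act_on(hazards, gaze_bearing_deg):
    """Return the most imminent hazard the driver appears to have missed."""
    missed = []
    for h in hazards:
        if abs(h.bearing_deg - gaze_bearing_deg) <= GAZE_TOLERANCE_DEG:
            h.seen_by_driver = True  # driver's gaze has covered it
        if not h.seen_by_driver and h.time_to_collision_s < REACTION_WINDOW_S:
            missed.append(h)
    return min(missed, key=lambda h: h.time_to_collision_s, default=None)
```

Whether the right response is braking or, as in Eustice's T-bone example, accelerating out of the intersection would then be a separate planning decision.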

“Imagine going through an intersection and you’re going to get T-boned—the right thing for the car to do is accelerate you out of it,” says Ryan Eustice, VP of autonomous driving at Toyota Research Institute, established in 2015 to work on robotics and automated driving (see “Toyota’s Billion Dollar Bet”). The group first said it would start developing Guardian last year (see “Toyota Joins the Race for Self-Driving Cars with an Invisible Co-Pilot”).

Eustice argues that the Guardian effort could have a widespread impact on public safety earlier than cars that fully remove driving duties from humans. Toyota is working on such technology, along with competitors like Alphabet, Ford, and Uber. But despite high-profile testing programs on public roads, Eustice and his counterparts at other companies say that truly driverless vehicles are still some years from serving the public, and will initially be limited to certain routes or locales.

“We see an opportunity to deploy it sooner and more widely,” says Eustice of the backseat driver approach. That’s because unlike full autonomy it won’t be reliant on hyper-detailed maps and could be easily packaged into a conventional vehicle sold to consumers, he says. However, he declines to predict how soon the Guardian might be ready for commercialization.

Steven Shladover, a researcher at the University of California, Berkeley, says the claim that Guardian could save lives sooner than fully autonomous vehicles makes sense. “If the driver has a 99 percent chance of detecting hazards and the automation system also has a 99 percent chance of detecting hazards, that gives the combination of the driver and system a 99.99 percent chance,” he says. “But this is much simpler and easier than designing a fully automated system that could reach that 99.99 percent level by itself.”
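Shladover's 99.99 percent figure follows from treating the two detectors' failures as independent, so the chance that both miss the same hazard is the product of the individual miss rates:

```latex
P(\text{both miss}) = (1 - 0.99)(1 - 0.99) = 10^{-4}
\quad\Rightarrow\quad
P(\text{at least one detects}) = 1 - 10^{-4} = 99.99\%
```

If the driver and the software tend to fail on the same hard cases, the independence assumption weakens and the combined rate falls short of that figure.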

Getting the relationship between Guardian and humans right will be critical, though. Any mistakes it makes such as intervening or sending a warning when a person has correctly interpreted a situation would undermine a person’s trust in the system, and could even lead to new kinds of accidents, says Shladover.

https://www.nytimes.com/2017/03/07/world/europe/wikileaks-cia-hacking.html

WikiLeaks Releases Trove of Alleged C.I.A. Hacking Documents


By SCOTT SHANE, MARK MAZZETTI and MATTHEW ROSENBERG | MARCH 7, 2017

The C.I.A. headquarters in Langley, Va. If the WikiLeaks documents are authentic, the release would be a serious blow to the C.I.A. Credit Jason Reed/Reuters

WASHINGTON — WikiLeaks on Tuesday released thousands of documents that it said described sophisticated software tools used by the Central Intelligence Agency to break into smartphones, computers and even Internet-connected televisions.

If the documents are authentic, as appeared likely at first review, the release would be the latest coup for the anti-secrecy organization and a serious blow to the C.I.A., which maintains its own hacking capabilities to be used for espionage.

The initial release, which WikiLeaks said was only the first part of the document collection, included 7,818 web pages with 943 attachments, the group said. The entire archive of C.I.A. material consists of several hundred million lines of computer code, it said.

Among other disclosures that, if confirmed, would rock the technology world, the WikiLeaks release said that the C.I.A. and allied intelligence services had managed to bypass encryption on popular phone and messaging services such as Signal, WhatsApp and Telegram. According to the statement from WikiLeaks, government hackers can penetrate Android phones and collect “audio and message traffic before encryption is applied.”

The source of the documents was not named. WikiLeaks said the documents, which it called Vault 7, had been “circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.”

WikiLeaks said the source, in a statement, set out policy questions that “urgently need to be debated in public, including whether the C.I.A.’s hacking capabilities exceed its mandated powers and the problem of public oversight of the agency.” The source, the group said, “wishes to initiate a public debate about the security, creation, use, proliferation and democratic control of cyberweapons.”