
Human-robot Collaboration for Bin-picking Tasks to Support Low-volume Assemblies

Krishnanand N. Kaipa, Carlos W. Morato, Jiashun Liu, and Satyandra K. Gupta
Maryland Robotics Center

University of Maryland, College Park, MD 20742
Email: [email protected]

Abstract—In this paper, we present a framework to create hybrid cells that enable safe and efficient human-robot collaboration (HRC) during industrial assembly tasks. We present our approach in the context of bin-picking, which is the first task performed before products are assembled in certain low-volume production scenarios. We consider a representative one-robot one-human model in which a human and a robot asynchronously work toward assembling a product. The model exploits the complementary strengths of the two agents: whereas the robot performs bin-picking and subsequently assembles each picked-up part to build the product, the human assists the robot in critical situations by (1) resolving any perception and/or grasping problems encountered during bin-picking and (2) performing dexterous fine manipulation tasks required during assembly. We explicate the design details of our overall framework, which comprises three modules: plan generation, system state monitoring, and contingency handling. We use illustrative examples to show different regimes where human-robot collaboration can occur while carrying out the bin-picking task.

I. INTRODUCTION

Assembly tasks are integral to the overall industrial manufacturing process. After parts are manufactured, they must be assembled together to impart the desired functionality to products. Pick-and-place, fastening, riveting, welding, soldering, brazing, adhesive bonding, and snap fitting constitute representative examples of industrial assembly tasks [1].

Humans and robots share complementary strengths in performing assembly tasks. Humans offer versatility, dexterity, in-process inspection, contingency handling, and error recovery. However, they have limitations with respect to consistency, labor cost, payload size/weight, and operational speed. In contrast, robots can perform tasks at high speeds while maintaining precision and repeatability, can operate for long periods of time, and can handle high payloads. However, robots currently suffer from high capital cost, long programming times, and limited dexterity. For these reasons, small-batch and custom production operations predominantly use manual assembly, whereas mass production lines often use robots to overcome the limitations of human workers.

Small and medium manufacturers (SMMs) represent a pivotal segment of the manufacturing sector in the US (the National Association of Manufacturers estimates that there are roughly 300,000 SMMs in the US). High labor costs make it difficult for SMMs to remain cost competitive in high-wage markets. Purely robotic cells are not a solution, as they do not provide the

Fig. 1. Hybrid assembly cell: Human assisting the robot in resolving a recognition problem during a bin-picking task.

needed flexibility. These reasons, along with short production cycles and demand for customized products, make SMMs prime candidates to benefit from hybrid cells that support human-robot collaboration. Currently, however, shop floors install robots in cages. During robot operation, the cage door is locked and an elaborate safety protocol is followed to ensure that no human is present in the cage. This makes it very difficult to design assembly cells where humans and robots can collaborate effectively.

In this paper, we present a framework for hybrid cells that enable safe and efficient human-robot collaboration (HRC) during industrial assembly tasks. The advent of safer industrial robots [2, 3, 4] and exteroceptive safety systems [5] in recent years is creating the potential for hybrid cells where humans and robots can work side by side, without being separated from each other by physical cages. The main idea behind hybrid cells is to decompose assembly operations into tasks such that humans and robots can collaborate by performing the tasks that suit them best. In fact, task decomposition between the human and robot (who does what?) has been identified as one of the four major problems in the field of human-robot collaboration [6].

We consider a representative one-robot one-human model in which a human and a robot asynchronously work toward assembling a product. The model exploits the complementary strengths of the two agents: whereas the robot performs a bin-picking task and subsequently assembles each picked-up part to form the product, the human assists the robot in critical


situations by (1) resolving any perception and/or grasping problems encountered during bin-picking and (2) performing dexterous fine manipulation tasks required during assembly. We explicate the design details of our overall framework, which comprises three modules: plan generation, system state monitoring, and contingency handling.

In this paper, we focus on the bin-picking task to illustrate our human-robot collaboration (HRC) model. Bin-picking is one of the crucial tasks performed before products are assembled in certain low-volume production scenarios. The task involves identifying, locating, and picking a desired part from a container of randomly scattered parts. Many research groups have addressed the problem of enabling robots, guided by machine vision and other sensor modalities, to carry out bin-picking tasks [7, 8]. The problem is very challenging and still not fully solved due to severe conditions commonly found in factory environments [9, 10]. In particular, unstructured bins present diverse scenarios affording varying degrees of part recognition accuracy: 1) parts may assume widely different postures, 2) parts may overlap with other parts, and 3) parts may be either partially or completely occluded. The problem is compounded by factors like background clutter, shadows, complex reflectance properties of parts made of various materials, and poorly lit conditions.

Owing to the complicated nature of these problems, most manufacturing solutions resort to kitting, in which parts are manually sorted into a container by human workers. This simplifies the perception task and allows the robot to pick from the kitted set of parts before proceeding to final assembly. However, kitting takes significant time and manual labor.

Whereas robots can repetitiously perform routine pick-and-place operations without fatigue, humans excel at perception and prediction in unstructured environments. Their sensory and mental-rehearsal capabilities enable humans to respond to unexpected situations, often with very little information. We exploit these complementary strengths of the two agents to design a deficit-compensation model that overcomes the primary perception and decision-making problems associated with the bin-picking task. An overview of the hybrid cell is shown in Fig. 1.

II. RELATED WORK

Different modes of collaboration between humans and robots have been identified in the recent past [12, 13]. A concept feasibility study conducted by the United States Consortium for Automotive Research (USCAR) in 2010-2011, which investigated requirements of fenceless robotic work cells in the automotive industry, defined three levels (low, medium, and high) of human and robot collaboration.

Shi et al. [13] used this standard to categorize current robotic systems in automotive body shops, powertrain manufacturing and assembly, and general assembly. In low-level HRC, the human does not interact directly with the robot. Instead, the human loads the part into a transfer device (e.g., conveyor belt, rotary table, etc.) while standing outside the working range of the robot. Medium-level HRC differs from the low level in

that the operator directly loads the part into the end-of-arm tooling (EOAT). However, the robot is de-energized until the human exits the working range and initiates the next action. In high-level HRC, both robot and human act simultaneously within the working range of the robot. The human may or may not come into physical contact with the robot.

Shi and Menassa [14] proposed transitional and partnership-based tasks as examples of medium-level and high-level HRC, respectively. The transitional manufacturing task consists of a welding subtask by the robot and a random quality-inspection subtask by the human. This task is characterized by serial transitions of the task between the human and the robot (the human places the part on the EOAT before welding and picks up the part after welding) within a short duration. In the partnership-based assembly task, an intelligent assist device (IAD) plays the role of the robot partner. The task consists of a moon-roof assembly: whereas the IAD transports and places a heavy and bulky moon-roof, the human performs dexterous fine manipulation such as fastening and wiring-harness installation.

The transitional and complementary nature of the first manufacturing task is similar to that of the HRC task chosen in our paper. In their approach, the human loads the part into the EOAT; the human feeds the part to the robot in our approach too. However, we exploit the human's perception ability, in recognizing parts in a wide variety of complex part-bin scenarios, to fill the deficit-compensatory role in the collaborative assembly task chosen in our work.

Sisbot and Alami [15] developed a human-aware manipulation planner that generated safe and "socially acceptable" robot movements based on the human's kinematics, vision field, posture, and preferences. They demonstrated their approach in a "robot hand over object" task: Jido, a physical 6-DOF robot, used the planner to determine where the interaction must take place, thereby producing legible and intentional motion while handing the object to the human. They conducted physiological user studies to show that such human-aware collaboration by the robot decreases the cognitive burden on the human partner. The aspect of achieving safe robot motions based on sensed human awareness is similar to what we achieve in our work. However, their robot is used in an assistive role, whereas the robot and the human are used in complementary roles in our approach.

Nikolaidis et al. [16] present methods for predictable, joint actions by the robot for HRC in manufacturing. The authors used computationally derived teaming models to quantify a robot's uncertainty in the successive actions of its human partner. This enables the robot to perform risk-aware collaboration with the human. In contrast, our approach uses real-time sensing and replanning to address contingencies that arise during collaboration between the partners.

Duan et al. [17] developed a cellular manufacturing system that allows safe HRC for cable-harness tasks. Two mobile manipulators play an assistive role through two functions: (1) delivering parts from a tray shelf to the assembly table and (2) grasping the assembly parts and preventing any wobbling during the assembly process. Safety is ensured using light


curtains, a safety fence, and a camera-based monitoring system. Assembly directions are provided via a visual display, audio, and laser pointers. Some other approaches to hybrid assembly cells can be found in [18, 19, 20, 21, 22]. While sharing most of the primary features of these previous hybrid-cell approaches, our framework also provides methods that explicitly handle contingencies in human-robot collaboration.

III. HYBRID CELL FRAMEWORK

Our approach to hybrid cells considers a representative one-human one-robot model, in which a human and a robot will collaborate to assemble a product. In particular, the cell will operate in the following manner:

1) The cell planner will generate a plan that provides instructions for the human and the robot in the cell.

2) Instructions for the human operator will be displayed on a screen in the assembly cell.

3) The robot will pick up parts from an unstructured bin and assemble them into the product.

4) The human will be responsible for assisting the robot by resolving any perception and/or grasping problems encountered during bin-picking.

5) If needed, the human will perform the dexterous fine manipulation needed to secure the part in place in the product.

6) The human and robot operations will be asynchronous.

7) The cell will be able to track the human, the locations of parts, and the robot at all times.

8) If the human makes a mistake in following an assembly instruction, replanning will be performed to recover from the mistake, and appropriate warnings and error messages will be displayed in the cell.

9) If the human comes close enough to the robot to cause a collision, the robot will perform a collision-avoidance strategy.
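To make the intended cell behavior concrete, the following is a minimal sketch of a top-level control loop consistent with steps 1-9 above. It is an illustration only: every object and method name (planner, display, monitor, robot) is a hypothetical assumption, not part of our implementation.

```python
# Hedged sketch of a hybrid-cell control loop consistent with steps 1-9.
# All object and method names here are hypothetical, for illustration only.
def run_hybrid_cell(planner, display, monitor, robot):
    plan = planner.generate_plan()                    # step 1
    while not plan.done():
        display.show(plan.next_human_instruction())   # step 2
        state = monitor.update()                      # step 7: human, parts, robot
        if state.human_mistake():                     # step 8: replan and warn
            plan = planner.replan(state)
            display.warn(state.error_message())
        if state.collision_imminent():                # step 9: pause the robot
            robot.pause()
            continue
        robot.execute(plan.next_robot_action())       # steps 3-5; asynchronous (step 6)
```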

The overall framework of the hybrid cell consists of the following three modules: (1) plan generation, (2) system state monitoring, and (3) contingency handling.

A. Plan generation.

We should be able to automatically generate plans in order to ensure efficient cell operation. This requires generating feasible assembly sequences and instructions for robots and human operators, respectively. Automated planning poses the following two challenges. First, generating precedence constraints for complex assemblies is challenging; the complexity can come from the combinatorial explosion caused by the size of the assembly or from the complex paths needed to perform the assembly. Second, generating feasible plans requires accounting for robot and human motion constraints.

Assembly sequence generation: Careful planning is required to assemble complex products [23]. We utilize a method developed in [24, 25] that automatically detects part interaction clusters that reveal the hierarchical structure in a product. This allows the assembly sequencing problem to be applied to part sets at multiple levels of hierarchy. A 3D CAD model of the product is used as input to the algorithm.

Fig. 2. (a) A simple chassis assembly. (b) Feasible assembly sequence generated by the algorithm.

The approach described in [25] combines motion planning and part interaction clusters to generate assembly precedence constraints. The result of applying the method to a simple assembly model (Fig. 2(a)) is a feasible assembly sequence (Fig. 2(b)).

Instruction generation for humans: The human worker inside the hybrid cell follows an instruction list to perform assembly operations. However, poor instructions lead to the human committing assembly-related mistakes. We address this issue by utilizing an instruction generation system developed in [26] that creates effective and easy-to-follow assembly instructions for humans. A linearly ordered assembly sequence is given as input to the system. The output is a set of multimodal instructions (text, graphical annotations, and 3D animations). Instructions are displayed on a large monitor located at a suitable distance from the human. Text instructions are composed using simple verbs such as Pick, Place, Position, Attach, etc. Data from each step of the input assembly plan is used to simulate the corresponding assembly operation in a virtual environment (developed based on Tundra software). Animations are generated by visualizing the computed motion of the parts in the virtual environment.

Humans can easily distinguish among most parts. However, they may get confused about which part to pick when two parts look similar. To address this problem, we utilize a part identification tool developed in [26] that automatically detects such similarities and presents the parts in a manner that enables the human to select the correct part.
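As a simplified, concrete reading of the sequencing step, the precedence constraints produced by the planner can be treated as a directed acyclic graph over parts and linearized by topological sort. The sketch below assumes constraints arrive as (before, after) pairs; the part names in the example are hypothetical, and the actual method of [24, 25] operates on part interaction clusters rather than raw pairs.

```python
# Minimal sketch: linearize assembly precedence constraints into a feasible
# sequence via topological sort (Kahn's algorithm). Input format is assumed.
from collections import defaultdict, deque

def feasible_sequence(parts, precedence):
    """parts: iterable of part names; precedence: (before, after) pairs,
    meaning `before` must be assembled prior to `after`."""
    succ, indeg = defaultdict(list), {p: 0 for p in parts}
    for before, after in precedence:
        succ[before].append(after)
        indeg[after] += 1
    queue = deque(p for p in parts if indeg[p] == 0)
    order = []
    while queue:
        p = queue.popleft()
        order.append(p)
        for q in succ[p]:
            indeg[q] -= 1
            if indeg[q] == 0:
                queue.append(q)
    if len(order) != len(indeg):
        raise ValueError("cyclic precedence constraints: no feasible sequence")
    return order

# Hypothetical part names, loosely inspired by the chassis assembly of Fig. 2:
print(feasible_sequence(
    ["base", "rear brace", "front brace", "roll bar"],
    [("base", "rear brace"), ("base", "front brace"), ("rear brace", "roll bar")]))
```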

B. System state monitoring.

To ensure smooth and error-free operation of the cell, we need to monitor the state of the assembly operations.


Accordingly, we present methods for real-time tracking of the human operator, the parts, and the robot.

Human tracking: The human tracking system is based on an N-Kinect sensing framework developed in [5, 27]. The system is capable of building an explicit model of the human in near real-time. It is designed for a hybrid assembly cell where one human interacts with one robot to perform assembly operations. The human has complete freedom of motion. Human activity is captured by the Kinect sensors, which reproduce the human's location and movements virtually in the form of a simplified animated skeleton. The sensing system consists of four Kinects mounted on the periphery of the work cell; occlusion problems are resolved by using multiple Kinects. The output of each Kinect is a 20-joint human model, and data from all the Kinects are combined in a filtering scheme to obtain the human motion estimates.

Part tracking: The assembly cell state monitoring uses a discrete state-to-state part monitoring system designed to be robust and to reduce possible robot motion errors. A failure to correctly recognize a part and estimate its pose can lead to significant errors in the system. To ensure that such errors do not occur, the monitoring system is designed around 3D mesh matching with two control points: the first control point detects the part picked up by either the robot or the human, and the second control point detects the part's spatial transformation when it is placed in the robot's workspace. Detection of the selected part at the first control point lets the system track the changes introduced by the robot/human in real time and trigger assembly replanning and robot motion replanning based on the new sequence. Detection of the posture of the assembly part relative to the robot at the second control point sends feedback to the robot with a "pick and place" or "wait" flag.
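Before turning to the mesh-matching details, a brief sketch of the human-tracking fusion step described above: the per-Kinect 20-joint skeletons can be combined by confidence-weighted averaging. The actual filtering scheme is described in [5, 27]; this stand-in only conveys the data flow, and the NaN-for-occlusion convention is an assumption.

```python
# Illustrative stand-in for the multi-Kinect fusion step: combine the 20-joint
# skeletons from the four sensors by confidence-weighted averaging.
# (The actual filtering scheme is described in [5, 27]; this is not it.)
import numpy as np

N_JOINTS = 20

def fuse_skeletons(skeletons, confidences):
    """skeletons: list of (20, 3) arrays, one per Kinect (NaN where occluded);
    confidences: list of (20,) per-joint tracking confidences in [0, 1]."""
    joints = np.stack(skeletons)                  # (n_kinects, 20, 3)
    w = np.stack(confidences)[..., None]          # (n_kinects, 20, 1)
    w = np.where(np.isnan(joints).any(-1, keepdims=True), 0.0, w)
    joints = np.nan_to_num(joints)                # zero out occluded joints
    return (w * joints).sum(0) / np.clip(w.sum(0), 1e-9, None)   # (20, 3)
```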

The 3D mesh matching algorithm uses real-time 3D part registration and 3D CAD-mesh interactive refinement [31]. Multiple acquisitions of the surface are necessary to register the assembly part in 3D. These views are obtained by the Kinect sensors and represented as dense point clouds. To perform the 3D CAD-mesh matching, an interactive refinement revises the transformations, composed of scale, rotation, and translation, that are needed to minimize the distance between the reconstructed mesh and the 3D CAD model. Point correspondences are extracted from the reconstructed scene and the CAD model using the iterative closest point algorithm [32].
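The following is a minimal, simplified stand-in for the part-detection logic at the first control point: register each part's CAD point cloud against the current scene with a basic ICP loop and report the part whose match error is largest, since the part missing from the scene matches worst (as in the 'Rear brace' example below). It assumes NumPy/SciPy and uniformly sampled point clouds, and omits the scale refinement of [31].

```python
# Simplified ICP-based part detection sketch (not the authors' implementation).
import numpy as np
from scipy.spatial import cKDTree

def icp_error(scene, model, iters=20):
    """Rigidly align `model` (Nx3) to `scene` (Mx3) with a basic ICP loop and
    return the final RMS nearest-neighbor distance (the match error)."""
    tree = cKDTree(scene)
    src = model.copy()
    for _ in range(iters):
        _, idx = tree.query(src)              # closest scene point per model point
        tgt = scene[idx]
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - mu_s).T @ (tgt - mu_t)     # Kabsch: best rotation src -> tgt
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        src = (src - mu_s) @ R.T + mu_t       # apply rotation and translation
    dists, _ = tree.query(src)
    return float(np.sqrt((dists ** 2).mean()))

def identify_picked_part(scene_cloud, cad_clouds):
    """cad_clouds: dict of part name -> Nx3 CAD point cloud. The part whose
    cloud matches the current scene *worst* is the one removed from it."""
    errors = {name: icp_error(scene_cloud, pts) for name, pts in cad_clouds.items()}
    return max(errors, key=errors.get), errors
```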

The part detection (first control point) and part posture determination (second control point) results are shown in Figs. 3 and 4, respectively. The initial scene in the first experiment is shown in Fig. 3(a), and its 3D mesh representation in Fig. 3(c). The human is instructed to pick the 'Cenroll bar' part. However, he picks the 'Rear brace' (Fig. 3(d)). We now show how this is detected by the system. Figure 3(f) shows a 3D mesh representation of the scene after the part is picked. Next, the point cloud of the current scene is matched against the CAD model of every part. Since the 'Rear brace' is not present in the live scene mesh, the error associated with

Fig. 3. Part detection (first control point) results: (a) Camera image of the initial set of parts. (b) Representation in a 3D virtual environment. (c) 3D mesh representation of the initial scene. (d) Human picks a part. (e) Representation in the 3D virtual environment after the part is picked. (f) 3D mesh representation of the scene after the part is picked. (g) Point cloud matching between the CAD model of 'Cenroll bar' and the 3D mesh of the current scene. (h) Point cloud matching between the CAD model of 'Rear brace' and the 3D mesh of the current scene. (i) Human replaces the part and picks a different part. (j-m) Part matching routine is repeated.

its mesh matching with the live scene is greater than that of every other part. This determines that the picked part is the 'Rear brace', thereby indicating to the human that a wrong part has been picked (Figs. 3(g) and 3(h)). The human replaces the part and picks a different part in Fig. 3(i). Application of the part matching routine to the new scene is shown in Figs. 3(j)-3(m).

In the second experiment, the human places the part in the robot's workspace (Figs. 4(a-c)). The desired part posture is shown in a virtual environment in blue (for illustration purposes only) in Fig. 4(d). The 3D reconstruction of the real scene is shown in Fig. 4(e), and a 3D reconstruction of the reference scene with the part in the desired posture in Fig. 4(f). A large MSE (shown in Fig. 4(g)) and low


Fig. 4. Pose detection (second control point) results: (a-c) Human places the part in front of the robot. (d) Desired posture of the part, shown in blue. (e) 3D reconstruction of the real scene. (f) 3D template with the part in the desired posture. (g, h) A large MSE and low convergence in scale alignment indicate an incorrect posture. (i) Part is picked and placed in a different posture. (j-n) Posture detection routine is repeated, finding that the part is placed in the correct posture.

convergence in scale alignment (shown in Fig. 4(h)) indicate that the part is placed in an incorrect posture. In Fig. 4(i), the human picks the part and places it in a different posture. In Figs. 4(j-n), the posture detection routine is repeated and finds that the part is placed in the correct posture.
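A correspondingly simplified sketch of the second control point's accept/reject decision is given below: the reconstructed part cloud is compared against the reference template in the cell frame, deliberately without alignment, so a misplaced part scores a high MSE. The tolerance value is an assumed placeholder, and the scale-alignment convergence test of Fig. 4(h) is omitted.

```python
# Sketch of the second control point's decision; the tolerance is assumed.
import numpy as np
from scipy.spatial import cKDTree

POSE_MSE_TOL = 1e-4  # m^2; hypothetical value, not from the paper

def posture_flag(observed_cloud, reference_cloud):
    """Return 'pick and place' if the part sits in the desired posture,
    'wait' otherwise (cf. Fig. 4). Clouds are compared in the cell frame."""
    dists, _ = cKDTree(reference_cloud).query(observed_cloud)
    mse = float((dists ** 2).mean())
    return "pick and place" if mse < POSE_MSE_TOL else "wait"
```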

C. Robot tracking.

We assume that the robot can execute the motion commands given to it, so that the assembly cell will know the robot state.

D. Contingency handling.

We consider three types of contingency handling: collision avoidance between robot and human, replanning, and warning generation.

Collision avoidance between robot and human: Our approach to ensuring safety in the hybrid cell is based on the pre-collision strategy developed in [5, 27]: the robot pauses whenever an imminent collision between the human and the robot is detected. This stop-go safety approach conforms to the recommendations of ISO standard 10218 [29, 30]. To monitor the human-robot separation, the human model generated by the tracking system (described in the previous section) is augmented by fitting all pairs of neighboring joints with spheres that move as a function of the human's movements in real time. A roll-out strategy is used, in which the robot's trajectory into the near future is pre-computed to create a temporal set of robot postures for the next few seconds. The system then verifies whether any posture in this set collides with one of the spheres of the augmented human model. The method was implemented in a virtual simulation engine developed based on Tundra software. (A minimal sketch of this proximity check appears at the end of this subsection.)

Replanning and warning generation: An initial assembly plan is generated before operations begin in the hybrid assembly cell. We define deviations in the assembly cell as modifications to the predefined plan. These modifications can be classified into three main categories:

i) Deviations that lead to process errors. These are modifications, introduced by either the human or the robot, from which no feasible assembly plan can be generated. Such errors can put the assembly cell in a state that requires costly recovery. To prevent this type of error, the system must detect the modification through registration of the assembly parts. Once the system has information about the selected assembly part, it evaluates the error in real time by propagating the modification through the assembly plan and providing multimodal feedback.

ii) Deviations that lead to improvements in assembly speed or output quality. Every modification to the master assembly plan is detected and evaluated in real time. The initial assembly plan is only one of many feasible plans. A modification that yields another valid, feasible plan is classified as an improvement. Such modifications are accepted, giving human operators the ability and authority to use their experience to produce better plans. This process helps the system evolve and adapt quickly using the contributions made by the human agent.

iii) Deviations that lead to adjustments in the assembly sequence. Adjustments in the assembly process may occur when the assembly cell can easily recover from an error introduced by the human/robot by requesting additional interaction to fix it. Another common mistake in assembly-part placement is a wrong pose (a rotational and translational transformation that diverges


Fig. 5. Representative parts affording different recognition and grasping complexities: (a) Part 1. (b) Part 2. (c) Part 3. (d) Part 4.

Fig. 6. Part bin scenarios of varying complexities: (a) Ordered pile of parts. (b) Random pile with low clutter. (c) Random pile with medium clutter. (d) Random pile with high clutter.

from the required pose). Two strategies can solve this issue: a) the robot recognizes the new pose and recomputes its motion plan to complete the assembly of the part, or b) the human is informed of the mistake by the system and prompted to correct it.

More details about the replanning system can be found in [28].
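The sketch below illustrates the stop-go proximity check described above: the human skeleton is padded with spheres, the robot's roll-out postures are sampled as 3D points, and the robot pauses if any future posture penetrates any sphere. The radii, horizon, data layout, and robot interface are assumptions for illustration, not values from [5, 27].

```python
# Hedged sketch of the stop-go proximity check; all constants are assumed.
import numpy as np

HUMAN_SPHERE_RADIUS = 0.15  # m; assumed padding around the skeleton spheres
ROBOT_POINT_RADIUS = 0.10   # m; assumed padding around sampled robot-link points

def collision_imminent(rollout_postures, human_sphere_centers):
    """rollout_postures: list of (P, 3) arrays, one per future time step, of
    points sampled along the robot links; human_sphere_centers: (S, 3) array
    of sphere centers fitted to neighboring-joint pairs of the skeleton."""
    limit = HUMAN_SPHERE_RADIUS + ROBOT_POINT_RADIUS
    for posture in rollout_postures:
        d = np.linalg.norm(posture[:, None, :] - human_sphere_centers[None, :, :], axis=-1)
        if d.min() < limit:       # some future posture penetrates a sphere
            return True
    return False

def stop_go(robot, rollout_postures, human_sphere_centers):
    """Stop-go policy: pause on imminent collision, otherwise keep running."""
    if collision_imminent(rollout_postures, human_sphere_centers):
        robot.pause()             # hypothetical robot interface
    else:
        robot.resume()
```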

IV. HRC MODEL FOR BIN-PICKING

Our HRC model for performing the bin-picking task consists of the following steps. Under normal conditions, the robot uses a part recognition system to automatically detect the part, locate its posture, and plan its motion in order to grasp and transfer the part from the bin to the assembly area. However, if the robot determines that part recognition is unreliable in the current scene, it initiates a collaboration with the human. The particular bin scenario determines the specific nature of the collaboration between the robot and the human. We present four different regimes, listed below, in which human-robot collaboration can occur while performing the bin-picking task; a sketch of the resulting decision flow follows the list.

1) Confirmative feedback: If the robot recognizes the part but not accurately enough to act upon it (this occurs when the system estimates a small recognition error), it requests confirmative feedback from the human. This is accomplished by displaying the target part and the bin image (with the recognized part highlighted) side by side on a monitor. The human then responds to a suitable query displayed on the monitor (e.g., Does the highlighted part match the target part?) with a Yes/No answer. If the human's answer is "Yes", the robot proceeds with picking up the part. If the answer is "No",

Fig. 7. Illustration 1: (a) Human shines a laser on the part to be picked by the robot. (b)-(d) Robot proceeds to pick up the indicated part.

Fig. 8. Illustration 2: Robot de-clutters the random pile of parts to make the hidden parts visible to the human and the part recognition system.

the human provides a simple cue by casting a laser pointer on the correct part. This resolves the ambiguity in identifying and locating the part.

2) Positional cues: Part overlaps and occlusions may lead to recognition failures by the sensing system. In these situations, the robot requests additional help from the human. The human responds by identifying the part that matches the target part and conveying this information to the robot by directly shining the laser on that part.

3) Orientation cues: Apart from the part's location, the robot also requires orientation information in order to perform the assembly operation properly. The human can provide the postural attributes of the part by casting the laser on each visible face of the part and labeling it using a suitable interface. This information can be used by the system to reconstruct the part's orientation.

4) De-clutter mode: Suppose the human predicts that, in the current part arrangement, there exists no posture in which the robot can successfully grasp and pick up


Fig. 9. Illustration 3: (a) Part to be picked is in an unstable position. (b) Robot attempts to grasp the part. (c) The part tips off, resulting in a grasp failure. (d) Resulting bin scenario after robot de-cluttering in response to the human's command. (e)-(f) Robot successfully picks up the part from the changed pile of parts.

the part; for example, when the part is positioned in an unstable posture and any attempt to hold it causes it to tip off and slip from the robot's grasp. The human predicts this in advance and issues a de-clutter command. The robot responds by using raster-scan-like movements of its gripper to randomize the part cluster. If the human predicts that the part can be grasped in the changed state, he/she continues to provide location and postural cues to the robot. Otherwise, the de-clutter command is issued again. De-cluttering can also be used when the target part is completely hidden under the random pile of parts, making it invisible to both the robot and the human.
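Pulling the four regimes together, the following sketch expresses the bin-picking decision flow as a dispatch loop. The perception and interaction interfaces (vision.recognize, human.laser_cue, and so on) are hypothetical names introduced only for this illustration.

```python
# Hedged sketch of the four collaboration regimes as a dispatch loop;
# every interface name below is an assumption, not our implementation.
from enum import Enum, auto

class Regime(Enum):
    AUTONOMOUS = auto()              # part recognized with high confidence
    CONFIRMATIVE_FEEDBACK = auto()   # small recognition error: yes/no query
    POSITIONAL_CUES = auto()         # recognition failure: human shines laser
    DECLUTTER = auto()               # no graspable posture: randomize the pile

def pick_part(robot, human, vision, target):
    """One bin-pick under the four-regime collaboration model (illustrative)."""
    while True:
        result = vision.recognize(target)        # hypothetical perception call
        if human.requests_declutter():
            regime = Regime.DECLUTTER
        elif result.confident:
            regime = Regime.AUTONOMOUS
        elif result.small_error:
            regime = Regime.CONFIRMATIVE_FEEDBACK
        else:
            regime = Regime.POSITIONAL_CUES

        if regime is Regime.DECLUTTER:
            robot.raster_scan_declutter()        # randomize the part cluster
            continue                             # then re-run recognition
        if regime is Regime.CONFIRMATIVE_FEEDBACK and not human.confirm(result):
            result.location = human.laser_cue()  # laser on the correct part
        elif regime is Regime.POSITIONAL_CUES:
            result.location = human.laser_cue()
        if result.pose is None:                  # orientation cues, if needed
            result.pose = human.label_faces()
        return robot.grasp_and_transfer(result)
```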

V. ILLUSTRATIVE EXPERIMENTS

We create four representative parts (Fig. 5) that afford different recognition and grasping complexities to illustrate various challenges encountered during the bin-picking task. Four copies of each part are made, for a total of sixteen parts. A few bin scenarios of varying complexities generated using these parts are shown in Fig. 6.

Illustration 1: The bin scenario shown in Fig. 7(a) represents a case where the robot is tasked with picking Part 1, but the part is difficult to detect due to overlap with other parts. Therefore, the robot requests human assistance, and the human responds by directly shining a laser on the part (shown in the figure). The robot uses this information to locate and pick up the part from the bin (Figs. 7(b)-(d)).

Illustration 2: The bin scenario shown in Fig. 8 represents a case where the parts are highly cluttered, preventing detection of parts hidden under the pile. In this situation, the human issues a de-clutter command to the robot. Figures 8(a)-(c) show

how the robot uses back-and-forth motions to shake up the pile, thereby enabling detection of the hidden parts.

Illustration 3: The bin scenario shown in Fig. 9(a) represents a case where the part to be picked (Part 1) is in an unstable position; any attempt by the robot to pick up the part results in the part tipping over and slipping from the robot's grasp (Figs. 9(b) and (c)). However, as the human can predict this event beforehand, he/she can force the robot to de-clutter the pile before attempting to pick up the part. This is illustrated in Figs. 9(d)-(f).

VI. CONCLUSIONS

We presented the design details of a hybrid cell framework that enables safe and efficient human-robot collaboration during assembly operations. We used illustrative experiments to present different cases in which human-robot collaboration can take place to resolve perception/grasping problems encountered during bin-picking. In this paper, we considered bin-picking as used for assembly tasks; however, our approach can be extended to the general problem of bin-picking as applied to other industrial tasks such as packaging. More experiment-based empirical evaluations, using the Baxter robot, are needed to systematically test the ideas presented in the paper.

ACKNOWLEDGEMENT

This work is supported by the Center for Energetic Concepts Development. Dr. S. K. Gupta's participation in this research was supported by the National Science Foundation's Independent Research and Development program. Opinions expressed are those of the authors and do not necessarily reflect the opinions of the sponsors.

REFERENCES

[1] Lee, S., Choi, B. W., and Suarez, R., 2009. Frontiers of Assembly and Manufacturing: Selected Papers from ISAM-09. Springer.

[2] Baxter, "Baxter - Rethink Robotics". [Online: 2012]. http://www.rethinkrobotics.com/products/baxter/.

[3] Kuka, "Kuka LBR IV". [Online: 2013]. http://www.kuka-labs.com/en/medicalrobotics/lightweightrobotics/.

[4] ABB, "ABB Friendly Robot for Industrial Dual-Arm FRIDA". [Online: 2013]. http://www.abb.us/cawp/abbzh254/8657f5e05ede6ac5c1257861002c8ed2.aspx.

[5] Morato, C., Kaipa, K. N., and Gupta, S. K., 2014. "Toward Safe Human Robot Collaboration by using Multiple Kinects based Real-time Human Tracking". Journal of Computing and Information Science in Engineering, 14(1), p. 011006.

[6] Hayes, B., and Scassellati, B., 2013. "Challenges in shared-environment human-robot collaboration". In Proceedings of the Collaborative Manipulation Workshop at the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013).

[7] Buchholz, D., Winkelbach, S., and Wahl, F. M., 2010. "RANSAM for industrial bin-picking". In Proceedings of the 41st International Symposium on Robotics and 6th German Conference on Robotics (ROBOTIK 2010).

[8] Schyja, A., Hypki, A., and Kuhlenkotter, B., 2012. "A modular and extensible framework for real and virtual bin-picking environments". In Proceedings of the IEEE International Conference on Robotics and Automation, pp. 5246–5251.

[9] Marvel, J. A., Saidi, K., Eastman, R., Hong, T., Cheok, G., and Messina, E., 2012. "Technology Readiness Levels for Randomized Bin Picking". In Proceedings of the Workshop on Performance Metrics for Intelligent Systems, pp. 109–113.

[10] Liu, M.-Y., Tuzel, O., Veeraraghavan, A., Taguchi, Y., Marks, T. K., and Chellappa, R., 2012. "Fast object localization and pose estimation in heavy clutter for robotic bin picking". The International Journal of Robotics Research, 31(8), pp. 951–973.

[11] Balakirsky, S., Kootbally, Z., Schlenoff, C., Kramer, T., and Gupta, S. K., 2012. "An industrial robotic knowledge representation for kit building applications". In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012), pp. 1365–1370.

[12] Kruger, J., Lien, T., and Verl, A., 2009. "Cooperation of human and machines in assembly lines". CIRP Annals - Manufacturing Technology, 58(2), pp. 628–646.

[13] Shi, J., Jimmerson, G., Pearson, T., and Menassa, R., 2012. "Levels of human and robot collaboration for automotive manufacturing". In Proceedings of the Workshop on Performance Metrics for Intelligent Systems, pp. 95–100.

[14] Shi, J., and Menassa, R., 2012. "Transitional or partnership human and robot collaboration for automotive assembly". In Proceedings of the Workshop on Performance Metrics for Intelligent Systems, pp. 187–194.

[15] Sisbot, E., and Alami, R., 2012. "A human-aware manipulation planner". IEEE Transactions on Robotics, 28(5), pp. 1045–1057.

[16] Nikolaidis, S., Lasota, P., Rossano, G., Martinez, C., Fuhlbrigge, T., and Shah, J., 2013. "Human-robot collaboration in manufacturing: Quantitative evaluation of predictable, convergent joint action". In Proceedings of the 44th International Symposium on Robotics (ISR), pp. 1–6.

[17] Duan, F., Tan, J., Gang Tong, J., Kato, R., and Arai, T., 2012. "Application of the assembly skill transfer system in an actual cellular manufacturing system". IEEE Transactions on Automation Science and Engineering, 9(1), pp. 31–41.

[18] Morioka, M., and Sakakibara, S., 2010. "A new cell production assembly system with human-robot cooperation". CIRP Annals - Manufacturing Technology, 59(1), pp. 9–12.

[19] Wallhoff, F., Blume, J., Bannat, A., Roesel, W., Lenz, C., and Knoll, A., 2010. "A skill-based approach towards hybrid assembly". Advanced Engineering Informatics, 24(3), pp. 329–339.

[20] Bannat, A., Bautze, T., Beetz, M., Blume, J., Diepold, K., Ertelt, C., Geiger, F., Gmeiner, T., Gyger, T., Knoll, A., Lau, C., Lenz, C., Ostgathe, M., Reinhart, G., Roesel, W., Ruehr, T., Schuboe, A., Shea, K., Stork genannt Wersborg, I., Stork, S., Tekouo, W., Wallhoff, F., Wiesbeck, M., and Zaeh, M., 2011. "Artificial cognition in production systems". IEEE Transactions on Automation Science and Engineering, 8(1), pp. 148–174.

[21] Chen, F., Sekiyama, K., Sasaki, H., Huang, J., Sun, B., and Fukuda, T., 2011. "Assembly strategy modeling and selection for human and robot coordinated cell assembly". In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4670–4675.

[22] Takata, S., and Hirano, T., 2011. "Human and robot allocation method for hybrid assembly systems". CIRP Annals - Manufacturing Technology, 60(1), pp. 9–12.

[23] Jimenez, P., 2011. "Survey on Assembly Sequencing: A Combinatorial and Geometrical Perspective". Journal of Intelligent Manufacturing, 23, published online.

[24] Morato, C., Kaipa, K. N., and Gupta, S. K., 2012. "Assembly sequence planning by using dynamic multi-random trees based motion planning". In Proceedings of the ASME International Design Engineering Technical Conferences, IDETC/CIE, Chicago, Illinois.

[25] Morato, C., Kaipa, K. N., and Gupta, S. K., 2013. "Improving Assembly Precedence Constraint Generation by Utilizing Motion Planning and Part Interaction Clusters". Journal of Computer-Aided Design, 45(11), pp. 1349–1364.

[26] Kaipa, K. N., Morato, C., Zhao, B., and Gupta, S. K., 2012. "Instruction generation for assembly operation performed by humans". In Proceedings of the ASME Computers and Information in Engineering Conference, Chicago, IL.

[27] Morato, C., Kaipa, K. N., and Gupta, S. K., 2013. "Safe Human-Robot Interaction by using Exteroceptive Sensing Based Human Modeling". In Proceedings of the ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Portland, Oregon.

[28] Morato, C., Kaipa, K. N., Liu, J., and Gupta, S. K., 2014. "A framework for hybrid cells that support safe and efficient human-robot collaboration in assembly operations". In Proceedings of the ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Buffalo, New York.

[29] ISO, 2011. Robots and robotic devices: Safety requirements. Part 1: Robots. ISO 10218-1:2011, International Organization for Standardization, Geneva, Switzerland.

[30] ISO, 2011. Robots and robotic devices: Safety requirements. Part 2: Industrial robot systems and integration. ISO 10218-2:2011, International Organization for Standardization, Geneva, Switzerland.

[31] Petitjean, S., 2002. "A survey of methods for recovering quadrics in triangle meshes". ACM Computing Surveys, 34(2), pp. 211–262.

[32] Rusinkiewicz, S., and Levoy, M., 2001. "Efficient variants of the ICP algorithm". In Proceedings of the Third International Conference on 3D Digital Imaging and Modeling, pp. 145–152.