
Automatic Control & Systems Engineering

Design and Construction of Hardware Extensions (Add-ons) for a Modular Self-Reconfigurable Robot System

Mohamed Marei (120126864)

May 2016

Supervisor: Dr. Roderich Gross

A dissertation submitted in partial fulfilment of the requirements for the degree of MEng Mechatronic and Robotic Engineering


EXECUTIVE SUMMARY

INTRODUCTION/BACKGROUND

This project aims to develop a vision add-on for a modular robot platform under development at the University of Sheffield. After researching the field of modular self-reconfigurable robots, including existing robot implementations and applications of these systems, the need for improved sensing was identified. Design requirements were then established, and the add-on was developed to meet them. The various stages of development, including hardware, software, and control, are detailed in subsequent chapters.

AIMS AND OBJECTIVES

This project aimed to study, design, and build hardware add-ons for the HiGen modular self-reconfigurable robot (MSR) system. The objectives of this project were to research current MSR systems; design and build a hardware add-on to be used in conjunction with the HiGen module; and program and test the add-on with the module.

ACHIEVEMENTS

The field of MSR systems was researched extensively. Application domains were then investigated to identify similar implementations. The hardware design of the add-on, including its subcomponents, was fully realised. In addition, software to integrate the add-on subcomponent functionality was developed. Finally, the add-on was evaluated in terms of its ability to acquire and process data.


CONCLUSIONS / RECOMMENDATIONS

This report has successfully laid the framework for developing the MSR add-on, including its hardware, software, and control aspects. Further work in the areas of software and control development could be realised to unlock the full potential of this device, and to ensure its successful integration with the HiGen MSR.


ABSTRACT

Modular Self-Reconfigurable Robot (MSR) platforms are useful for modelling complex problems of self-reconfiguration and cooperative self-assembly at both the macro and micro scales. They also derive immense potential from their ability to reconfigure into any shape or form to suit any task, making them, in theory, infinitely versatile. In practice, due to design constraints imposed upon their morphology, they often lack the ability to interact with their environment and thus execute useful tasks. This project explores methods to develop a sensory and computational add-on for an MSR system (the HiGen MSR). By identifying initial requirements, developing the hardware and software capabilities of the add-on, and integrating the functional elements together, the design of the add-on was realised. Experiments to verify the utility of the add-on were also attempted. The report concludes with potential future work to build on this project and develop similar add-ons for the MSR.


ACKNOWLEDGEMENTS

Firstly, I would like to express my gratitude to my project supervisor, Dr Roderich Gross, whose constant encouragement and support have been of great value to me. I would also like to thank Mr. Christopher Parrott, whose patience and technical know-how helped me fulfil much of what I intended to achieve.

I would also like to thank all of the academic and technical staff at the Department of Automatic Control and Systems Engineering at the University of Sheffield. You have all helped me a great deal in achieving my project objectives and enduring the stresses of the project, and for that I am truly grateful.

I would also like to give a huge thank you to my friends at the University of Sheffield, my second family: Yohahn, Umang, Yaseeen, Simran, Christiaan, Matei, Sa'ad, Thaqib, Sangam, Maha, Anupama, and Ishita. It's been an absolute pleasure going through this university experience with all its ups and downs, and you've all made sure it was mostly ups! For that, I am eternally grateful. A special word of thanks to my friend Younes as well, who, though I have not seen him in many years, has always been at my side.

And last, and most certainly not least, to my family: my brother, Mazen; my sister, Reem; and my parents, Mr. Hesham Marei and Mrs. Maha Seleem. Nothing I could say could express my gratitude for your never-ending moral, emotional, and financial support. Your belief in my abilities and your tireless dedication to my wellbeing have been my sustenance throughout my life, and through this academic year, when I've needed it most. I dedicate this work to you.


TABLE OF CONTENTS

Chapter 1 - Introduction .......................................................................................... 1

1.1. Background and Motivation .......................................................................... 1

1.2. Problem Definition......................................................................................... 2

1.3. Aim of the Project .......................................................................................... 2

1.4. Objectives of the Project ................................................................................ 2

1.4.1. Primary Objectives................................................................................. 2

1.4.2. Stretch Goals (Advanced Objectives) .................................................... 2

1.5. Project Management ...................................................................................... 3

1.5.1. Initial Project Plan.................................................................................. 3

1.5.2. Project Resources ................................................................................... 4

1.5.3. Costs and Risk Assessment .................................................................... 4

1.5.4. Project Plan ............................................................................................ 7

1.5.5. Design Implementation .......................................................................... 7

1.6. Structure of the Report ................................................................................... 8

Chapter 2 - Literature Review .................................................................................... 9

2.1. Modular Self-Reconfigurable Robots: A Brief Overview ............................. 9

2.2. Modular Self-Reconfigurable Robot Systems: Taxonomy and Attributes .. 10

2.2.1. Morphology: Lattice, Chain, and Hybrid ............................................. 11

2.2.2. Locomotion Modes .............................................................................. 12

2.2.3. Control ................................................................................................. 13

2.2.4. Connector Design................................................................................. 14

2.2.5. Sensing ................................................................................................. 15

2.2.6. Communication .................................................................................... 16

2.3. Heterogeneous MSR Systems ...................................................................... 17

2.3.1. MSR Systems with Hardware Attachments (Add-ons) ....................... 17

2.3.2. Reconfigurable Manufacturing Systems (RMS) .................................. 19

2.4. Design Challenges of MSR Platforms ......................................................... 19

2.4.1. Computational Limitations .................................................................. 20

2.4.2. Power Sharing ...................................................................................... 20


2.5. The HiGen Modular Robot .......................................................................... 21

2.5.1. The HiGen Module and Connector ...................................................... 21

2.5.2. HiGen Connector Architecture ............................................................ 22

2.5.3. Controller Area Network (CAN) ......................................................... 22

2.5.4. System Architecture ............................................................................. 23

Chapter 3 - Hardware Design ................................................................................... 24

3.1. Design Requirements ................................................................................... 24

3.1.1. System Definition ................................................................................ 24

3.1.2. Physical Characteristics ....................................................................... 25

3.1.3. Performance Characteristics ................................................................ 25

3.1.4. Compatibility ....................................................................................... 25

3.1.5. Usability ............................................................................................... 25

3.1.6. Early Concept Prototyping ................................................................... 26

3.2. Chassis Design (Mechanical Design) .......................................................... 27

3.2.1. Enclosure Design ................................................................................. 27

3.2.2. General Design Considerations............................................................ 28

3.2.3. Pan-Tilt Unit Design ............................................................................ 30

3.2.4. Pan-Tilt Alignment .............................................................................. 30

3.3. The Single-Board Computer ........................................................................ 31

3.4. On-board Electronics ................................................................................... 32

3.4.1. The Raspberry Pi.................................................................................. 32

3.4.2. The Raspberry Pi Camera .................................................................... 33

3.4.3. The Teensy Microcontroller ................................................................ 33

3.4.4. Low-Level Communication Management ........................................... 34

3.4.5. Power Requirements ............................................................................ 36

3.5. Electronics Integration: The Tool Expander ................................................ 37

3.5.1. Schematic Design................................................................................. 37

3.5.2. PCB layout ........................................................................................... 38

3.5.3. Surface-Mounted Components ............................................................ 41

3.6. Hardware Integration and Assembly ........................................................... 41

Chapter 4 - Software Design ..................................................................................... 43


4.1. Operating System ......................................................................................... 43

4.2. SSH (Secure Shell) ...................................................................................... 44

4.2.1. Configuring the Network Interface ...................................................... 44

4.3. Concept Testing and High-Level MATLAB Connectivity ......................... 46

4.3.1. Pi MATLAB Initialisation ................................................................... 46

4.3.2. Camera Board Initialisation ................................................................. 47

4.4. Tool Expander Programming (The Teensy) ................................................ 49

4.4.1. Serial Communication Using MiniCom .............................................. 50

4.4.2. Pan-Tilt from Keyboard ....................................................................... 50

4.4.3. Actuating the Motors using MATLAB ................................................ 51

4.5. Functionality Integration .............................................................................. 52

Chapter 5 - Experimentation and Results ............................................................... 54

5.1. The Target Tracking Problem: Approach .................................................... 54

5.2. The Target Tracking Problem: Framework ................................................. 54

5.3. Target Tracking: Experiments ..................................................................... 56

5.3.1. Experimental Outline ........................................................................... 56

5.3.2. Results .................................................................................................. 57

5.4. Target Tracking: Controller Design ............................................................. 59

5.5. Camera Calibration ...................................................................................... 60

Chapter 6 - Conclusion .............................................................................................. 63

6.1. Further Work ................................................................................................ 63

Appendix A: Project Task Sheet ............................................................................ i

Appendix B: Project Gantt Chart ......................................................................... ii

Appendix C: Project Resources Collected ...........................................................iii

Appendix D: .................................................................................................................. v

Appendix E: Source Code for Trackball ............................................................. vi

Appendix G: MATLAB Serial with Teensy ........................................................ ix

Appendix H: Pan-Tilt Model in MATLAB ......................................................... xi


Appendix I: Source Code for Simple Tracking Controller ............................. xii

Appendix J: Design Sketches and SolidWorks Prototypes ............................. xiv

References ................................................................................................................... xv


LIST OF FIGURES

Figure 1: the ATRON self-reconfigurable robot combined into a snake configuration

(left), a vehicle-like configuration (right), and an intermediate configuration (back).

Printed from [3]............................................................................................................ 11

Figure 2 showing the components involved in robot design and their interaction.

Adapted from [8].......................................................................................................... 12

Figure 3(a) and (b): standalone HiGen connector module (left) [20] and HiGen

modules on the self-reconfigurable modular robot (right) [28]. Printed with

permission from C. Parrott, 2014. ................................................................................ 21

Figure 4: (left) the HiGen connector broken down into its components, showing the

(a) housing, (b) docking hooks, (c) motor and switch mount, (d) drive shaft, (e)

shroud, (f) connection board, and (g) DC geared motor; (right): the controller and its

functional pins. Both images printed with permission from C. Parrott, 2016. ............ 22

Figure 5: the overall system architecture, showing the communication pathways

between different system elements. The HiGen robot modules interface with each

other via the connector controllers (CC), which connect together to form a CAN bus.

...................................................................................................................................... 23

Figure 6: the Arduino connected to the TTL JPEG Serial Camera via breadboard ... 26

Figure 7: the attachment template based on which the enclosure has been designed.

Courtesy of Parrott ....................................................................................................... 27

Figure 8: the full add-on assembly. Not shown: camera ribbon cable or servo wires . 28

Figure 9: the pan/tilt mechanism for two use cases: front attachment (left), and side

attachment (right) ......................................................................................................... 30

Figure 10: pan/tilt motion arcs, showing a range of approximately ±180° ................. 31


Figure 11: four different single-board computers; Raspberry Pi 2 (left, back); ODroid

C1 (right, back); HummingBoard (left, front); MIPS Creator Ci20 (right, front).

Reproduced from [30] .................................................................................................. 31

Figure 12: (left) Raspberry Pi NoIR Camera. Retrieved from [31]; (centre) Teensy 3.2

LC (Low Cost). Retrieved from [32]; (right): Raspberry Pi Model A+. Retrieved from

[33] ............................................................................................................................... 33

Figure 13: an n+4-bit 'word' transmitted over UART serial, showing the start bit

(StrB), data bits (DB01-DBn), parity bit (PB), and stop bits (StpB). .......................... 35

Figure 14: schematic diagram for the Tool Expander PCB ......................................... 39

Figure 15: the Tool Expander PCB design.................................................................. 40

Figure 16: the Tool Expander PCB prototype board with surface-mounted

components, showing the Raspberry Pi interface header in the top right corner, and

the right-angle sensor header at the bottom. ................................................................ 41

Figure 17: the fully-assembled vision add-on, with a dummy connector base template

...................................................................................................................................... 42

Figure 18 showing the raspi-config interface. .............................................. 44

Figure 19: the interfaces file........................................................................................ 45

Figure 20: the supplicant configuration file ................................................................ 46

Figure 21 showing the basic initialisation function for a Raspberry Pi board ........... 46

Figure 22 showing the cameraboard initialisation command using the board name

and resolution arguments (top) and the command line output (bottom) ..................... 47

Figure 23: true-colour JPEG frame showing the centre of the green object (top);

intensity thresholding of the image to isolate green colour from background (bottom)

...................................................................................................................................... 48


Figure 24: sample code that uses the trackball algorithm to track the position of the

green object .................................................................................................................. 49

Figure 25: serial device initialisation command; input arguments: host device name

(Raspberry Pi), serial port address, and baud rate ....................................................... 49

Figure 26: MiniCom used to input values to the serial port via SSH (left) and the

Arduino serial monitor echoing the data read (right). .................................................. 50

Figure 27: sample code snippet showing vertical (up and down) tilt commands within

the Pan Tilt Serial script ............................................................................................... 51

Figure 28: the Instrument Control Application Interface in MATLAB, showing the

data read/write operations sent to the Teensy via serial. ............................................. 52

Figure 29: (left) object centre and bounding box surrounding object; (right)

thresholded version of the image ................................................................................. 53

Figure 30: the point T (left diagram) corresponds to an equivalent point on an image plane EFGH (the top of a frustum). The equivalent real-world plane in which T lies maps out the base of a frustum, E'F'G'H' (right). The frustum EFGH-E'F'G'H' defines the projection volume. Reprinted from [38]. ............... 55

Figure 31: (top left) true-colour and thresholded image with tracked centre; (top right)

variation of x- and y- coordinates of tracked centre; (bottom left) variation of tilt and

pan angles; (bottom right) rate-of-change of pan-tilt angles ....................................... 57

Figure 32: Simulink scheme used to simulate the behaviour of the pan-tilt model ..... 58

Figure 33: the camera calibration parameters .............................................................. 61

Figure 34: The camera calibration session in MATLAB. The checkerboard is used to

train the calibrator (top), which generates a pattern-centric view that shows the

position of the camera in the different frames (bottom). ............................................. 62


LIST OF TABLES

Table 1: The project resources identified .................................................................................. 5

Table 2: Cost and risk assessment for the project ..................................................................... 6

Table 3: ATRON module [3], SMORES [9], and HiGen [16] connectors and their properties. ATRON image printed from [1]; SMORES image printed from [17]; HiGen image printed with permission from C. Parrott. ................. 14

Table 4: Comparing three serial data protocols; SPI, I2C, and UART ................................... 35

Table 5: Tested Current Draws of Multiple Configurations using a 5 V 2 A regulated power

supply ....................................................................................................................................... 37

Table 6: List of components used within the vision add-on .................................................... 42

LIST OF EQUATIONS

Equation 1: the baud rate of the serial protocol for a word size of 12 and a 5 ms sample rate

................................................................................................................................................. 36

Equation 2: the pan and tilt angles obtained from the inverse pipeline method described in

[34] .......................................................................................................................................... 56

Equation 3: the visual servo (VS) problem as an error minimisation of the target feature w.r.t.

the current camera target. Reprinted from [32] ...................................................................... 59

Equation 4: the interaction matrix of the point x. Reprinted from [32] .................................. 59

Equation 5: image coordinates x and y defined in terms of the pixel coordinates (u and v), the

focal length (f), the camera centre (cu and cv), and the pixel ratio α. Reprinted from [32] ..... 60

Equation 6: the velocity of x in terms of the linear and angular components of the target’s

motion ...................................................................................................................................... 60


LIST OF ABBREVIATIONS

ABS Acrylonitrile Butadiene Styrene Plastic

bps Bits per second

CAD Computer-Aided Design

CAN Controller Area Network

CC Connector Controller

CNC Computer Numerical Control

DOF Degree of Freedom

FOV Field of View

fps Frames per second

GPIO General Purpose Input Output Ports

HDMI High-Definition Multimedia Interface

HiGen High-Speed Genderless Actuation Mechanism

I2C Inter-Integrated Circuit

IoT Internet of Things

IP Internet Protocol

IR Infrared

JPEG Joint Photographic Experts Group (Graphics Format)

LiDAR Light Detection and Ranging

MATLAB MATrix LABoratory

MIPS Microprocessor without Interlocked Pipeline Stages


MSR Modular Self-Reconfigurable Robot

NASA National Aeronautics and Space Administration

NoIR No Infrared Filtering

OpenCV Open-Source Computer Vision Library

PCB Printed Circuit Board

PSK Pre-Shared Key

PTZ Pan-tilt-zoom

RAM Random Access Memory

RGB Red-Green-Blue

RISC Reduced Instruction Set Computer Architecture

RMS Reconfigurable Manufacturing System

SBC Single-Board Computer

SOC System-On-Chip

SPI Serial Peripheral Interface

SSH Secure Shell

SSID Service Set Identifier

TTL Transistor-Transistor Logic

UART Universal Asynchronous Receiver-Transmitter

UI User Interface

USB Universal Serial Bus

VGA Video Graphics Array

VS(PB)/(IB) Visual Servo(Position-Based)/(Image-Based)

WASD “W”, “A”, “S”, “D” keys

Wi-Fi Wireless Fidelity


Chapter 1 - Introduction

1.1. Background and Motivation

The concept of Modular Self-Reconfigurable Robots (MSR) has been of interest to many different research groups and institutions. Research in this field has yielded many unique implementations of multi-robot systems whose units can connect and disconnect on demand through actuated connector hubs, either autonomously or by being joined and separated externally by an operator.

MSR platforms are commonly thought beneficial for modelling and simulating the behaviours of self-assembling systems at different scales [1], including enzyme-substrate and hormone-drug interactions, programmable matter [2], insect swarm systems [3], and several others. In addition, a reconfigurable robot platform could drive forward research in areas such as modular mobile robots [4] and reconfigurable manufacturing systems (RMS) [5]. MSR systems have immense potential in applications where cooperative robot behaviour is crucial, such as mobile search-and-rescue, remote reconnaissance, and space exploration. Authors on this subject recognise the potential of MSR and self-assembling robot systems, claiming that once sufficient progress is made in their design as a whole, they will "cease to be merely biologically inspired artefacts [sic] and become super biological robots" [6].

An MSR platform, however, is usually insufficient on its own, without some way of showcasing, or indeed improving, its versatility. MSR systems are often under-equipped, with limited proprioceptive (internal) sensing and manipulation abilities; the lack of exteroceptive sensors and manipulation tools limits their environmental interaction and, ultimately, their effectiveness as robotic systems. Hardware extensions (add-ons) address this gap: they enhance the system's capabilities by acting as substitute end-effectors. For hardware extensions to work effectively with their intended platform, they must comply with specific design criteria dictated by the platform in question.


1.2. Problem Definition

The limited ability of MSR systems to perceive and affect their environment stands in the way of practical realisations of such systems. Application-oriented hardware [1] is therefore used to tackle this problem by augmenting the robots' sensory and/or functional capabilities. To this end, a recent development at the University of Sheffield, the HiGen module [2], attempts to produce an MSR system that is readily expandable through hardware add-ons. This project aims to further expand the applicability of hardware add-ons by designing a complete multifunctional add-on that can be readily integrated with the HiGen platform.

1.3. Aim of the Project

The aim of this project is to study, design, and build hardware extensions, or add-ons, for an MSR platform currently under development at the University of Sheffield.

1.4. Objectives of the Project

The objectives of this project are:

1.4.1. Primary Objectives

- Research the area of modular self-reconfiguring robots, and review the most relevant implementations of MSR systems to date, including systems capable of independent locomotion, heterogeneous systems, and systems with hardware extensions (add-ons).
- Research applications of MSR systems, including general and industrial applications.
- Design and program a hardware extension (add-on) to increase the sensing ability of the MSR system under development.
- Build, program, and test the hardware add-on.

1.4.2. Stretch Goals (Advanced Objectives)

Some advanced objectives were outlined at the start of the project. However, due to time constraints, they were partly set aside in favour of the primary objectives.

- Modify an existing hardware add-on for use in conjunction with the system.
- Conduct experiments to verify the efficacy of the hardware add-ons.


1.5. Project Management

This section details the initial project plan and its subsequent modifications, and compares the initial task outlines with the work actually done. It also presents several iterations of the project Gantt chart in Appendix A, the resources used throughout the project, and the intended project phases against what was actually implemented.

1.5.1. Initial Project Plan

A summary of the initial project tasks is presented below.

Stage 1: Project Specification and Research:

Determine the overall scope, aims, and objectives of the project

Stage 2: Research, to include:

Research the history and implementations of MSR systems, including those with

extensions (add-ons).

Research potential applications of MSR hardware extensions.

Research required components for implementation of the hardware extension.

Stage 3: Design, encompassing the following areas:

Learning SolidWorks and using it to design the first add-on.

Ordering components for the first add-on.

Designing the second add-on.

Ordering components for the second add-on.

Stage 4: Build and Test, including the following tasks:

Building the controller and camera for add-on 1.

Assembling the controller and camera.

Building and assembling module 2.

Testing the modules with the HiGen connector hub.

Stages 1-4 were executed largely in the order listed. However, due to the lead time associated with acquiring, designing, and testing certain components, the second add-on design was not completed. The design methodology for the secondary add-on was nonetheless investigated, and some prototyping components were acquired.

1.5.2. Project Resources

Table 1 shows the project resources involved and their availability. Appendix 2

contains a spreadsheet of the resources acquired for this project to date. The list

comprises the main components and the dates they were ordered and collected. The

spreadsheet also contains the project budget so far, the amount of money spent and

the total funds remaining. To pre-empt long processing and delivery lead times for certain parts, components were acquired well before they were needed.

Despite careful consideration of the factors associated with the project, some resources, such as the module connector, were not provided in time. Tasks based on the integration of these resources could therefore not be completed.

However, the associated design factors have been considered throughout the

development of the project. This will be further illustrated in Chapter 6.

1.5.3. Costs and Risk Assessment

The hardware nature of the project meant that access to the laboratory facilities was

essential for project completion. In addition, certain tasks, such as early prototyping

and testing of components, required the use of electric power supplies, which pose a safety concern if mishandled.

Table 2 shows the project costs in terms of time and financial resources, as well as the

risk associated with them. Furthermore, the table outlines the risk associated with

some of the project deliverables, such as the hardware add-on development and

subcomponent assembly and integration.


Table 1: The project resources identified

Resource | Description | Availability

People:
Student | Responsible for completing the project | Y
Supervisor | Provides guidance on how to approach the project | Y
Second Reader | Provides feedback when required | Y
Technical Support | Laboratory staff responsible for providing access to tools and equipment | Y
PhD Project Consultant | Provides design consultation and guidance to establish project specifications | Y

Technical Resources:
Computer | Developing the source code for various parts of the add-on | Y
Additional Components | Peripherals for initial setup/configuration of the add-on | Y
Software | Designing the hardware, electronics, and programming/control for the add-on | Y
3D Printer | Printing the hardware components | Y
Robot and Add-on Connector | The MSR robot and the connector used on-board the robot add-on, designed and produced by C. Parrott | N


Table 2: Cost and risk assessment for the project

Resource | Risk | Risk Outcome | Risk Level | Mitigation Strategies
Student time | Time mismanagement due to internal/external circumstances | Not completing the tasks required | High | Plan work beforehand; set achievable targets; schedule tasks efficiently to minimise lead times.
Supervisor time | Absence due to external circumstances | Not providing necessary guidance/feedback in a timely manner | High | Attend regular meetings to give progress updates; promptly send queries over email when in-person meetings are not feasible.
Primary project resources | Component loss during delivery/transit | Delays due to part replacements or reproduction | High | Ensure prompt communication with suppliers regarding missing or damaged components.
Primary/Secondary project resources | Mishandling/misplacing components | Damage to components, other resources, injury, or death | Very High | Ensure parts are always used according to instruction manuals; always store components in safe places away from damage; store components directly after use.
Laboratory facilities (e.g. tools, power supply, 3D printer) | Mishandling/misuse of equipment | Damage to equipment or other resources, injury | Very High | Ensure equipment is always used according to instruction manuals; ensure appropriate technical support is always at hand during operation of new equipment.
Software | Damage to computer | Loss of current project work and/or software packages | Medium | Ensure regular backups of any critical work; store project files in a safe place; store backups in multiple places; document required software resources and setup procedures.


1.5.4. Project Plan

Appendix 1 contains an updated version of the project Gantt chart illustrating how the

project will be continued and what has been achieved. As with every project, the

initial plan is seldom fully followed through due to factors outside of the control of

the person(s) responsible for it. In this project, this was due to an incomplete awareness of what the project would involve in terms of design work. Hardware design, including SolidWorks modelling, PCB design, hardware assembly, and part revisions, took up to three weeks longer than initially estimated. Nevertheless, some of the outlined project objectives were still met by rescheduling the affected tasks to later dates.

While most of the components were ordered within the intended timeframes, the process of ordering the PCB required an additional three days due to miscommunication during

component requisition. This, however, did not impede progress in terms of developing

the software for the microcontroller.

1.5.5. Design Implementation

A breakdown of the design stage with its independent phases is shown below.

Phase 1: Hardware Implementation: This involved the 3D design, prototyping, and

construction of the mechanical assembly of the module, including the module pan/tilt

unit and the enclosure for the system electronics. This phase involved developing

early design prototypes in SolidWorks, and subsequently verifying them with C.

Parrott. This stage also included the development of control and actuation

mechanisms for the add-ons, as this determined subsequent features required on the

electronics. This phase required a working knowledge of CAD design, to which a

considerable amount of time was dedicated.

Phase 2: Electronics Implementation: At this stage, the electronics of the first add-

on were prototyped, designed, connected, assembled, and tested with the on-board

computer. This phase entailed developing the schematic layout for the tool expander,

as well as the CAN bus connection between the main controller and the tool

controller. In addition, the electronics of the second add-on were to be prototyped and programmed.
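The CAN link between the main controller and the tool controller can be illustrated at the frame level. The sketch below packs and unpacks raw SocketCAN-style frames in pure Python; the node ID 0x101 and the 'enable tool' payload are hypothetical placeholders, not the project's actual protocol.

```python
import struct

# Raw SocketCAN frame layout: 32-bit CAN ID, 8-bit data length code,
# 3 padding bytes, then 8 data bytes (16 bytes in total).
CAN_FRAME_FMT = "<IB3x8s"

def pack_frame(can_id, data):
    """Pack a classic CAN frame (payload of up to 8 bytes)."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_frame(frame):
    """Return (can_id, payload) from a packed 16-byte frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

# Hypothetical 'enable tool' command addressed to node 0x101.
raw = pack_frame(0x101, b"\x01\x00")
assert len(raw) == 16
assert unpack_frame(raw) == (0x101, b"\x01\x00")
```

On a Linux-based on-board computer, such a frame could be written directly to a raw CAN socket; higher-level CAN libraries abstract this packing away.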


Phase 3: Software Implementation: For the first add-on, the on-board controller and

its peripherals are selected, and the image acquisition and processing software is

designed. This phase has been partially completed; a connection between the main

controller and the workstation computer has been achieved, and a video stream has

been accessed remotely (from the workstation). The next part of this phase is to

develop the image processing software for the add-on. Further research will cover image processing, including the most useful applications for mobile robotics and MSR systems, and methods of implementing it on the add-on.

The actual design process involved reordering these phases, as it was discovered that

the chassis (hardware) design of the first add-on would dictate where the other

components would fit within the module. Therefore, the hardware 3D design was carried out first, followed by simultaneous work on the electronics implementation and low-level software development.

1.6. Structure of the Report

This report is organised as follows. Chapter 2 gives an overview of the research

conducted in the subject of MSR systems. This includes an overview of their general

properties, applications, and design challenges. The chapter also introduces the HiGen

connector and the MSR system under development, for which the add-on is designed.

Chapter 3 details the hardware design phase, including the process of requirements

capture, early concept prototyping, general design considerations, and the add-on

design. This encompasses the design of the hardware subcomponents, including the

add-on body, the internal mechanisms, and the on-board electronics. Chapter 4

outlines the software development process for the add-on, including initial setup and

configuration, networking, low-level hardware testing and interfacing, and

subcomponent integration. Chapter 5 demonstrates the applied hardware and

software integration, through data acquisition, developing an image tracking model

based on existing techniques, and analysing this model. Finally, Chapter 6 concludes

the report and gives an outline of further work to be done in the future.


Chapter 2 - Literature Review

The concept of Modular Self-Reconfiguring Robots (MSR) has attracted the interest of numerous research groups and institutions. Research in this field has yielded

many unique implementations of multi-robot systems whose units can connect and

disconnect on demand, through actuated connector hubs, either autonomously or by

being joined and separated externally by an operator. In parallel, a comparable body of research has emerged attempting to assess the potential of MSR platforms in terms of real-world applications.

This chapter highlights research endeavours in modular self-reconfiguring robot

systems, from their initial conception and a brief general history, to early

developments including a host of systems that have been designed for different

problems and use cases. It also introduces MSR systems with hardware add-ons and

Reconfigurable Manufacturing Systems (RMS). Next, the chapter illustrates some of

the implications of MSR system design as well as the challenges facing such systems.

Finally, the chapter introduces the HiGen connector and MSR system, for which the

hardware add-ons have been designed.

2.1. Modular Self-Reconfigurable Robots: A Brief Overview

In the 1940s, Alan Turing introduced the concept of a universal computation device

[1], one that could perform any task required through reprogramming. By the 1970’s

personal computers had gained significant traction within the consumer goods market,

ushering in a new age of innovation through technology, consequently fulfilling

Turing’s vision. Successive advancements in computer technology have driven

numerous innovations in mobile robotics, and in turn, small-scale mobile robots.

The year 1988 marked the first theoretical formulation of a dynamically reconfigurable modular robot: one that could be constructed from basic ‘cells’ or ‘modules’ and reconfigured autonomously for the desired use case [3]. Following on from these

theories, many research groups have successfully built and simulated robotic systems

comprising basic modules capable of different types of reconfiguration.


As of 2014, at least 40 different implementations of MSR systems had been produced (and counting), each aiming to address a specific set of design challenges. These challenges encompass novel connector designs, unique control methodologies, and improved communication, to name a few. It is worth noting that some of these are iterations of previous systems.

2.2. Modular Self-Reconfigurable Robot Systems: Taxonomy and Attributes

MSR platforms comprise basic building blocks that can reconfigure into different

shapes to perform the desired task. A single module contains all the working internals

of a robot: sensing, actuation, battery, and processing power. Though individual modules are of little use on their own, MSR platforms draw their immense potential from scalability; tens, even hundreds, of modules can combine in any configuration, forming rigid, complex structures and functional limbs. An example of an MSR platform, the ATRON self-reconfigurable robot [4], is shown in Figure 1, with individual modules combined into different configurations.

MSR systems are classified into three main archetypes: pack robots (consisting of

dozens of robots within the same system); herd robots (with several hundred modules

per system); and swarm robots, whose numbers are typically many thousands of units.

The significance of each module within the system decreases proportionally with the

size of the MSR population; pack robots are the most dependent on the actions of

each individual, whereas in large swarms of robots no particular attention is paid to

single modules, much like natural swarm systems.

Another method of classifying MSR platforms is based on their reconfiguration

abilities. The robots are said to be reconfigurable if they can be

connected/disconnected and combined into different configurations; dynamically

reconfigurable, if they can disconnect/connect while modules are active (hot-

swapping); and self-reconfigurable, if they can be connected/disconnected

autonomously, without external aid.

This section introduces the general attributes of MSR systems, including their

morphology, locomotion classes, control, connector design, and communication

methods, with references to existing implementations.


Figure 1: The ATRON self-reconfigurable robot combined into a snake configuration (left), a vehicle-like configuration (right), and an intermediate configuration (back). Printed from [3]

2.2.1. Morphology: Lattice, Chain, and Hybrid

MSR systems are commonly characterized using their morphology [1], which is

primarily concerned with how the robots are assembled into structures. In chain systems, the individual robots combine end to end into chain-like shapes, trees, and, in some cases, loops. Lattice systems, such as Fracta [5] and the

Metamorphic Robot [6], are examples of predominantly planar (2D) architectures

where each robot unit is connected to two or more units, consequently requiring

multiple connection points. Robots in a lattice only connect in discrete points in the

configuration, akin to how atoms combine to form structural lattices. Hybrid

architectures, such as the ATRON [4], also exist where combinations of chain clusters

and lattice structures could be formed using the same basic building blocks [7]. Chain arrangements allow multiple legs to assemble, providing greater locomotion versatility, whereas lattice arrangements allow rigid support structures to form, improving the stability of the whole system and facilitating self-reconfiguration tasks. For these reasons, hybrid architectures tend to prevail throughout MSR designs.

Determining the intended morphology of the system is a consideration made at the module and system level alike. As shown in Figure 2, Stoy et al. have demonstrated that the MSR morphology largely dictates other aspects of the design [4]. The fact that most morphologies tend to be hybrid slightly simplifies this from a mechatronic design point of view, but may introduce a host of challenges for control design. This will be discussed later in this section.


Figure 2: The components involved in robot design and their interaction. Adapted from [8]

2.2.2. Locomotion Modes

A crucial design element of MSR platforms, and perhaps one of their most advantageous traits, is the variety of locomotion modes they inherently support through reconfiguration [3] [8]. Some MSR systems have been created with independently

[12] [13], whereas most are only capable of propulsion in clusters [4]. Many MSR

systems can form into different configurations capable of varying locomotion modes.

For example, ATRON [4] can form snake-like or “wheeled” robots, and M-TRAN

can form into multi-legged configurations. Kutzer et al. proposed an MSR design with

a hybrid morphology, whose modules were capable of self-propulsion on disc-shaped

connectors that doubled as wheels [9]. Those could be used to propel individual

modules or a large multi-robot cluster.

Another interesting prospect for MSR locomotion lies in the individual flexibility and

self-locomotion of the modules combined with the structural support and rigidity of

the MSR as a whole. Within a large cluster or multi-robot, robots could move from

the back of the configuration to the front, or holes move to the back of the robot,

creating what is known as cluster flow [14], moving the robot in a specific direction.

Several examples of cluster flow exist in MSR platforms, both physical and

simulated, some of which have been reviewed in [15]. Similar behaviour is exhibited

in real-life insect swarm systems such as ant colonies, whereby ants form into large

structures that self-propel across a terrain or conform around an obstacle to navigate

their path.


The locomotion versatility of MSR platforms proves invaluable in unstructured environments, making them ideal candidates for space exploration,

reconnaissance, or remote search and rescue tasks. However, until certain design

challenges are mostly overcome, these benefits remain largely hypothetical, save for

the few example MSR systems that have demonstrated animal-like locomotion.

Cluster flow and task-driven growth are examples of capabilities that might be realised once single modules are efficient enough.

2.2.3. Control

Self-reconfiguration control in MSR systems poses a significant multi-layer

challenge. At the module level, control of on-board mechatronics to arrive at the

desired configurations requires a complete understanding of the system kinematics,

posing a hardware challenge in itself. The obvious solution to this problem is to

restrict the DOF count on each module, rendering them simpler but less versatile.

Even controlling chains of simple 2DOF modules becomes challenging from a

software perspective. Adding an extra module increases the difficulty of docking two units together at different points, as arm, and therefore chain, poses grow less singular and more error-prone with each joint added [1], unless direction constraints are employed. This increase in complexity degrades the computational efficiency of the

controller in question. For embedded controllers, this becomes even more

challenging, as individual robots are, by design, often limited in terms of memory and

processing power.

Globally, it is not feasible to instruct all modules to converge to a certain

configuration; however, it is possible to guide them to incrementally approach one

another, thereby allowing them to reconfigure [1]. Some researchers proposed control

paradigms by which only certain modules performed the required control action; the

surrounding modules then use their internal models to “follow suit”, effectively

enabling the modules to form the desired configuration or execute cluster flow. This

leader-follower approach has been adopted by Stoy and Nagpal [16].
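The leader-follower idea can be shown with a toy model; the sketch below is purely illustrative (it is not the actual algorithm of Stoy and Nagpal), with the leader's command propagating down a chain one module per time step.

```python
# Toy leader-follower chain (illustrative only): the leader issues a
# joint command and each follower copies its predecessor's previous
# command, so the action ripples down the chain one module per step.
def propagate(actions, leader_cmd):
    """One synchronous update: the leader adopts the new command and
    every follower adopts its predecessor's old command."""
    return [leader_cmd] + actions[:-1]

chain = [0.0, 0.0, 0.0]             # three-module chain, joints at rest
for _ in range(3):
    chain = propagate(chain, 30.0)  # leader keeps commanding 30 degrees
assert chain == [30.0, 30.0, 30.0]  # command has reached every module
```

After as many steps as there are modules, the whole chain has adopted the leader's command, which is the essence of "following suit" without global coordination.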

Control of individual modules is generally only necessary in cases where precise self-reconfiguration is required, for example if the end goal is to reconfigure into a

specific shape, such as an arm or leg for locomotion. However, for some tasks such as


cluster flow locomotion and task-driven growth, it might not be entirely necessary, or

indeed feasible, to control individual robot modules.

2.2.4. Connector Design

A fundamental hardware challenge for MSR platforms lies in the design of robust

connectors capable of lifting multiple units in series, with minimal actuation time. It is equally important that the modules have a simple alignment mechanism and control implementation. Connection mechanisms can be mechanical, electromagnetic, or magneto-mechanical, though other methods have also been used; in terms of gender, genderless connectors excel in comparison to the other classes.

In [2] Parrott et al. compared three classes of connector designs: gendered, where

male connectors latch onto chassis parts by means of hooks or pins; bi-gendered

(hermaphrodite), where pins/hooks latch into connector grooves; and genderless,

where two identical hooks latch together. Table 3 presents some of the connectors that have been implemented, comparing their properties.

Table 3: ATRON module [4], SMORES [10], and HiGen [2] connectors and their properties. ATRON image printed from [1]; SMORES image printed from [17]; HiGen image printed with permission from C. Parrott.

Robot System | Connection Mechanism | Connector Gender
ATRON | Mechanical | Gendered (male hooks attach to female slots)
SMORES | Magneto-Mechanical | Bi-gendered (modules have in-built features of both genders)
HiGen | Mechanical | Genderless (connecting faces have identical interlocking hooks)
(Images of each connector face accompany the original table.)


2.2.5. Sensing

Numerous experiments conducted on MSR platforms aim to understand how capable

the modules are of cooperating under various circumstances. The limited exteroceptive sensing capabilities of MSR platforms consequently restrict their task execution abilities. Most MSR systems are restricted to a few types of exteroceptive sensors (those that measure external environmental variables such as temperature, humidity, and light), save for SWARM-bot [18], which employed various exteroceptive and proprioceptive sensors. Consequently, if the robots are to affect their external environment, some way of enhancing their awareness of that environment is crucial for cooperation, particularly in unstructured settings. Several researchers have devised non-vision solutions to tackle the problem of self-reconfiguration, one of the most challenging aspects of MSR implementation.

Payne et al. [19] propose localization based on elliptical approximations to estimate

probable robot locations. While their algorithm is fast and light, producing robust estimates of robot locations from a single infrared sensor per robot within a few samples, it is sensitive to light intensity and is therefore less effective in environments with varying lighting. Furthermore, the availability of cheap, low-power microprocessors and memory renders the computational requirements trivial, so robust embedded sensing can readily be exploited.

2.2.5.1. Vision: Motivation

Vision presents itself as a suitable candidate due to its simple implementation and

relatively well-understood requirements. A multitude of camera technologies exist that can generate medium-resolution, high-speed video or MJPEG (a sequence of JPEG images “stitched” together to form low-framerate video), rendering the task more manageable. Simple implementations of camera-based navigation use low-

level image processing to guide the robots’ decision-making with respect to the

environment, or based on the task at hand.
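Since MJPEG is simply a sequence of JPEG images, frames can be recovered from a raw byte stream by scanning for the JPEG start-of-image and end-of-image markers. The sketch below is a minimal illustration of this idea; real HTTP MJPEG streams also carry multipart boundary headers, which are omitted here.

```python
# Minimal MJPEG frame splitter: each JPEG image is delimited by the
# start-of-image (FF D8) and end-of-image (FF D9) markers.
SOI = b"\xff\xd8"   # JPEG start-of-image marker
EOI = b"\xff\xd9"   # JPEG end-of-image marker

def extract_frames(buffer):
    """Return every complete JPEG frame found in the byte buffer."""
    frames = []
    start = buffer.find(SOI)
    while start != -1:
        end = buffer.find(EOI, start + len(SOI))
        if end == -1:
            break                        # incomplete trailing frame
        frames.append(buffer[start:end + len(EOI)])
        start = buffer.find(SOI, end + len(EOI))
    return frames

# Two fake frames with junk between them (placeholder payloads):
stream = SOI + b"frame-one" + EOI + b"--boundary--" + SOI + b"frame-two" + EOI
assert [f[2:-2] for f in extract_frames(stream)] == [b"frame-one", b"frame-two"]
```

Each extracted byte string is a standalone JPEG that an image library could then decode for processing.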

2.2.5.2. Vision: Implementation

In experiments by Yim et al. [20] [21], the CKBot system used smart camera

attachments on three module clusters (each cluster comprised four modules connected

together) to form a bipedal robot. Each camera comprised a VGA imager, an Analog


Devices Digital Signal Processor, a 3-axis accelerometer, and a wide-angle signalling

LED. The camera communicated with the robots over a CAN bus, and the robots

could be fitted with Bluetooth transceivers for wireless cluster intercommunication.

The module also featured an independent power supply, extending its operating time. It was screwed to the robot cluster, as opposed to using the magnetic

connectors on-board the modules. To test the effectiveness of this vision and

signalling platform, the robot was forcefully separated (or exploded) by a kick from

the experimenter. Using visual servoing, the clusters autonomously scanned the scene,

detected LED signals from the other robots and moved towards them. The clusters

successfully regrouped, realigned and reconnected into their previous configuration.
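The LED-seeking behaviour described above rests on a simple primitive: locating the brightest region in a frame. A minimal sketch (not CKBot's actual algorithm) thresholds a grayscale image and returns the centroid of the bright pixels, which a visual-servoing loop could steer towards.

```python
# Brightness-centroid LED detector (illustrative sketch): threshold a
# grayscale image and return the mean coordinate of the bright region.
def led_centroid(image, threshold=200):
    """Return the (row, col) centroid of pixels above threshold, or None."""
    rows = cols = count = 0
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value >= threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None                      # no LED visible in this frame
    return rows / count, cols / count

# A 4x4 frame with a bright 2x2 'LED' in the lower-right corner:
frame = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 255, 255],
    [10, 10, 255, 255],
]
assert led_centroid(frame) == (2.5, 2.5)
```

A servoing controller would pan/tilt the camera to drive this centroid towards the image centre, which is the essence of moving towards a detected signal LED.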

Vision has also been implemented within the CONRO system [22]. The researchers

have devised a simple CMOS camera sensor that could be attached to some of the

robot modules. The module was capable of capturing 8-bit monochrome images at 30

fps. While the modules did not have the computational capability to capture or process the images on board at the time, it was a step forward in empowering simple MSR robots with camera sensors.

Simple 2D vision is not the end goal, however. Some experts suggest using multiple

MSR clusters to create 3D maps of their surroundings using inexpensive cameras and

IR transceivers, as opposed to expensive sonar or LiDAR (Light Detection and

Ranging) sensors. This would be practical in applications where creating a 3D map of

an inaccessible environment is required, such as in a collapsed building, inside a tank

or cave. Powerful and inexpensive computers would allow sophisticated image

processing software to be embedded, and enable the robot to do more with the data

collected.

2.2.6. Communication

Many MSR platforms employ multiple communication modes between robots to

allow them to cooperate and navigate unforeseen obstacles. Robust communication

enables efficient use of sensory data. Besides providing redundancy, which pre-empts potential failures and offers fall-back routes, particular communication methods might be required for specific tasks. For example, aside from a Controller

Area Network (CAN) bus using the Robotic Bus Protocol for data transfer between

robot modules, the CKBot modules [20] supported optional Bluetooth inter-cluster


communication. A similar principle has been adopted with the HiGen module. This

will be discussed later in the chapter.

The communication modes used are typically chosen based on factors such as desired

range, latency, implementation simplicity, and system control requirements. For task-

dependent communication, attention must be paid to how these decisions would

influence the robot’s ability to conduct its task in the real world. For example, a

particular mode of communication might have a sufficient range in open spaces, but

be susceptible to attenuation in enclosed areas or in the presence of large obstacles.

Selecting a powerful and versatile communication platform is therefore a crucial step

in MSR design, to ensure the robot can perform its task without interruption.

2.3. Heterogeneous MSR Systems

The use of swappable ‘quick change’ tools on CNC milling machines could be

loosely regarded as the first practical implementation of a reconfigurable robot system

with an add-on. In 1988, Toshio Fukuda achieved the first implementation of a self-assembling robot formed of heterogeneous modules: the CEBOT (short for cellular robot) [3]. He devised a system comprising three types of unit: Type 1 locomotion modules (joints or wheels); Type 2 structural (branching or power) modules; and Type 3 functional (tool or gripper) modules. Later robots built on this concept, attempting to combine the structural and locomotion aspects, effectively

producing modules with actuated DOFs, power sharing and multiple connector hubs

all in one. Functional modules, however, remain mostly separate and are designed to

be changeable depending on the task required.

2.3.1. MSR Systems with Hardware Attachments (Add-ons)

Stoy et al. describe the implementation of “application-oriented hardware”, or

hardware add-ons, as one of the crucial steps towards realising practical MSR

platforms [1]. It is well recognised that practical implementations that use hardware

add-ons make the robot platform less homogeneous in nature. However, as long as the

majority of the robot is composed of identical modules, barring the add-ons, the MSR

would still qualify as a homogeneous system.

The AMOEBA-I platform [23] is an example of a robot system which, while it shares some common features with MSR systems, cannot be classified as such. Instead, this


system sacrificed the prospect of homogeneous structural self-reconfiguration for

practicality. The robot’s chassis could be rearranged into different shapes, and it made

extensive use of hardware attachments to vary its suitability for certain tasks such as

search and rescue or military reconnaissance. The robot could employ different

configurations of tires or treads to increase its locomotion versatility, and had

functional add-ons which could be attached to various parts of it.

In a similar vein to Fukuda’s early visions, Akin et al. propose MORPHbots [24], an MSR with anthropomorphic manipulator arm linkages and interchangeable end-effectors, to assess the space exploration capabilities of MSR systems. The 6-DOF arm with a genderless connector had a spherical workspace. Impressively, at just

10 kg, it was capable of tasks an astronaut in a pressure suit would normally execute.

MORPHbots featured three component types: modules, providing manipulation

capabilities (such as pitch/yaw and prismatic actuators); nodes, serving as branching

points and providing additional networking hubs and computation capabilities;

and packs, providing additional services (battery packs, power generation, sensors,

tool carriers, or communications devices). These components (modules, nodes, and

packs) then combine together into entities, grouped together into a system.

Using similar principles, researchers in [25] developed Thor, a heterogeneous MSR

system with eight different module types: motors; cubic nodes (with six hub

connectors); rotation 165 (capable of rotating ±165°); angle 90; wheel; gripper;

battery; and wireless. These are some implementations resembling what Yim et al.

call a “box of stuff” [26], which could be likened to a robotic Swiss army knife.

While the idea of dedicated functional units that provide only the necessary

functionality is attractive for well-understood applications, it may prove unnecessarily

restrictive. It could result in a more complex overall implementation, as systems would

require multiple entities, each of which is responsible for one part of a multi-task

mission. A system would be rendered useless against a particular task without

precisely the right attachment(s) equipped, so it would not be suitable for stochastic

environments. Conversely, designing the components as multi-purpose tools would

increase versatility at the cost of implementation complexity.


2.3.2. Reconfigurable Manufacturing Systems (RMS)

Drawing on inspiration from MSR systems such as PolyBot [12] [13], and

particularly the work of Fukuda on CEBOT [3], Chen proposed a heterogeneous

system of workcells [27] that could reconfigure according to the demands of the

production line, as opposed to using traditional single-purpose manipulators or

manufacturing lines. The workcells comprised passive joint modules which also

contained sensors for accurate positioning and internal measurements, and various

active actuation modules, such as prismatic and rotational joints. His intended

customers were high-mix and low-volume manufacturers who required their

production facilities to quickly adapt to the market.

Chen successfully demonstrated the versatility and reconfiguration capabilities of his

RMS by constructing and showcasing a light machining prototype system at the

International Industrial Automation Exhibition in 1999. Provided they were capable

of kinematic efficiency similar to traditional industrial robots, RMSs would be

beneficial to medium-scale manufacturers – so long as their large initial investment

cost could be justified [28].

2.4. Design Challenges of MSR Platforms

According to most experts in this field, the problem of optimising MSR design,

including which features to incorporate, is dependent on the intended application of

the system [1]. It is largely accepted that a “killer application” [26] that would help

specify strict design goals for MSR platforms has not yet been discovered, making the

problem much more difficult to characterise. Few systems have therefore been created

with the goal of optimal design in mind. SuperBot [7], a University of Southern

California platform created in 2006, was an MSR whose design goals were focused

towards optimality. This prominent example, partially funded by NASA and the US

Army Research Office, featured rigid, robust modules, each of which had 3 degrees of

freedom (DOF), an array of exteroceptive sensors, and fault-tolerant adaptive control.

While many practical results have portrayed MSR platforms as ubiquitous, all-purpose

systems, it is important to acknowledge their shortcomings in certain areas.

Those are most notably computational weaknesses, design challenges, exteroceptive

deficiency, and control-related challenges, such as autonomous self-reconfiguration.


This section discusses some of those challenges and presents ways to deal with them

within MSR design. Once these have been fully addressed by the researchers

involved, implementations of MSR systems will start to verge on the ideal picture

most commonly depicted in science fiction.

2.4.1. Computational Limitations

Firstly, most such platforms are severely restricted in terms of their computational

resources. Owing to their niche application areas, MSR platforms and their

accompanying extensions do not typically feature computationally powerful

components. At most, some systems have been able to demonstrate some degree of

high-level data manipulation in the form of image processing, an arguably taxing feat

using the proposed implementations. In the case of the CONRO robot, for example,

the researchers were not capable of capturing, let alone storing, an image directly on

the robot [22].

Memory bottlenecks and fragmentation are some other likely outcomes in systems

whose programming is ad hoc. Memory mismanagement could cause frequent crashes

which are to be avoided at all costs if precise self-reconfiguration is the ultimate goal.

Furthermore, for the tasks of surveying or data collection, the robots would have very

little, if any, available storage to retain any data recorded. In a practical scenario, it

might not be feasible to live-stream data, especially under strict limitations of

available power; periodic data collection could therefore be the only viable option,

and full access to the data could only be attained on retrieval of the robot.

2.4.2. Power Sharing

Power sharing is one of the potential benefits of MSR platforms, yet it remains an

elusive goal for many researchers in the field. To date, power sharing across an

MSR system is a huge obstacle to robust and efficient self-reconfiguration. Several

implementations have used external connecting wires to emulate power sharing,

though few have been successful at embedding power-sharing functionalities within

the MSR’s connectors.

Challenges lie not only within the design of the electrical and electronic infrastructure

of the MSR itself, but also within optimising the design of the connector mechanisms

to allow for reduced energy expenditure while maintaining sufficient actuation power.


This means that connection mechanisms must be designed with maximum connection

speed as a main goal, to reduce the energy cost of reconfiguration.

Another hurdle is the availability of battery technologies capable of sustaining

prolonged operation as well as enduring repeated discharge and recharge cycles.

2.5. The HiGen Modular Robot

This section introduces the HiGen module [29], along with the connector design

which serves as the connection point between the robot and the add-on.

2.5.1. The HiGen Module and Connector

The add-on was developed for use with the HiGen module, a spherical robot

measuring 140 mm × 140 mm and featuring four HiGen connectors. As

discussed earlier, the genderless nature of this mechanism allows single-sided

connect/disconnect, and outperforms other connector implementations in terms of

actuation speed and efficiency [2]. Figure 3 shows the connector (left) and a single

robot module (right) equipped with four connectors. To ensure the add-on would be

compatible with the HiGen module, the hardware choices and implementation method

were agreed with Christopher Parrott, the PhD candidate responsible for the HiGen

module design.

Figure 3(a) and (b): standalone HiGen connector module (left) [20] and HiGen modules on

the self-reconfigurable modular robot (right) [28]. Printed with permission from C. Parrott,

2014.


2.5.2. HiGen Connector Architecture

The add-on module infrastructure is composed of three main components: the

Single-Board Computer (SBC), responsible for image acquisition, processing,

communication to the workstation, and decision-making; the connector controller

board (CC), a low-level controller to interface directly with the connector; and the

Tool Expander board, serving to mediate between the brain of the module (the SBC)

and the connector controller.

Figure 4: (left) the HiGen connector broken down into its components, showing the (a)

housing, (b) docking hooks, (c) motor and switch mount, (d) drive shaft, (e) shroud, (f)

connection board, and (g) DC geared motor; (right): the controller and its functional pins.

Both images printed with permission from C. Parrott, 2016.

Figure 4 (right) shows the CC, the circuit which contains and manages the connector

functionality. In addition to joining two connected faces via CAN bus (labelled the

communications header), the CC has been designed to be an executive controller for

low-level hardware add-ons. In essence, the circuit relays control signals to the

recipient devices.

2.5.3. Controller Area Network (CAN)

The CAN bus enables low-bandwidth, low-latency communication between robots

within a limited neighbourhood. Simple status messages of position and orientation,

in addition to robot ID, could be exchanged between robots via Bluetooth. This

network was designed to facilitate communication between robots not directly

connected to one another.


The CAN bus uses a 5 V power and ground line pair, a HIGH and LOW CAN signal

pair, and Signal Data and Clock lines.
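The low-level status exchange described above can be sketched as a payload-packing routine. The field layout and scaling below are illustrative assumptions for a standard 8-byte CAN data field, not the actual format used by the HiGen firmware.

```python
import struct

# Illustrative status layout (NOT the HiGen protocol): robot ID as an
# unsigned 16-bit integer, x/y position in millimetres as signed 16-bit
# integers, and heading in hundredths of a degree as a signed 16-bit
# integer -- exactly the 8 bytes of a standard CAN data field.

def pack_status(robot_id, x_mm, y_mm, heading_deg):
    """Pack a status message into an 8-byte CAN payload."""
    return struct.pack(">Hhhh", robot_id, x_mm, y_mm,
                       int(round(heading_deg * 100)))

def unpack_status(payload):
    """Recover (id, x, y, heading) from an 8-byte payload."""
    robot_id, x_mm, y_mm, heading_raw = struct.unpack(">Hhhh", payload)
    return robot_id, x_mm, y_mm, heading_raw / 100.0

payload = pack_status(7, 120, -45, 90.5)
assert len(payload) == 8  # fits a single classical CAN frame
```

Keeping each status update within a single frame avoids fragmentation and keeps bus latency predictable.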

2.5.4. System Architecture

An overview of the multi-robot system, comprising a robot connected to an add-on

and multiple other robots, is pictured in Figure 5. The system network encompasses

the robots, the add-on, and the workstation computer used as the point of contact

between the robot and an external user.

The robots are connected together via the HiGen connector, which enables

communication over the Controller Area Network (CAN). Connected to each HiGen

module is a Connector Controller (CC), a circuit which manages the electronic

interface between two connectors joined together. The CC therefore acts as a

communication router between one side of a connection and the other; the opposite

side could be an adjacent robot or an add-on, as shown in the figure.

Figure 5: the overall system architecture, showing the communication pathways between

different system elements. The HiGen robot modules interface with each other via the

connector controllers (CC), which connect together to form a CAN bus.

The following chapter outlines the process of designing the add-on to be integrated

with this system, using the network architecture as the basis for many of the add-on

requirements.


Chapter 3 - Hardware Design

The hardware design encompasses two main areas: the chassis design (the enclosure

for all the electronic components and the pan-tilt mechanism), and the electronics

design (the Raspberry Pi, the tool expander board, and the connector controller).

Firstly, this chapter introduces the design requirements for the hardware extension,

including functionality, performance, compatibility, and usability. In addition to

presenting early concept prototypes, this chapter outlines the process of designing the

custom-built components, presents arguments for certain design choices including

component choices, and presents the final hardware design and all of its components.

3.1. Design Requirements

From the literature review conducted in the previous chapter, it was decided to use

vision as the primary sensing method around which the add-on would be

constructed. The add-on would be used in a similar fashion to those designed by Yim

[20] and Castano [22].

Before the design process could commence, it was important to fully define the

problem to be solved. Specifically, the problem was combining the basic elements of

an add-on for the HiGen MSR to constitute a vision add-on. This vision add-on would

use a camera to detect an object and identify its location within an image frame,

following which it could track it using its on-board actuation. The add-on would be

attached to the robot through a standardised connection method, through which it

would be able to interface with the robot and hence communicate with the rest of the

system.
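The detect-then-track behaviour outlined above can be sketched as a single proportional control step: the object's pixel offset from the frame centre is converted into small pan/tilt corrections. The frame size, field of view, and gain below are assumed placeholder values, not measured parameters of the add-on.

```python
# Illustrative proportional tracking step. FRAME_*, FOV_*, and GAIN are
# assumed values for the sketch, not the final add-on's parameters.

FRAME_W, FRAME_H = 640, 480   # assumed capture resolution (pixels)
FOV_H, FOV_V = 54.0, 41.0     # assumed camera field of view (degrees)
GAIN = 0.5                    # proportional gain, tuned empirically

def track_step(obj_x, obj_y, pan_deg, tilt_deg):
    """Return updated pan/tilt angles that move the detected object
    toward the centre of the image frame."""
    # Pixel offset from frame centre, as a fraction of half the frame.
    err_x = (obj_x - FRAME_W / 2) / (FRAME_W / 2)
    err_y = (obj_y - FRAME_H / 2) / (FRAME_H / 2)
    # Convert to an angular error and apply a proportional correction.
    pan_deg += GAIN * err_x * (FOV_H / 2)
    tilt_deg -= GAIN * err_y * (FOV_V / 2)  # image y grows downwards
    return pan_deg, tilt_deg

# An object to the right of centre drives the pan angle to the right.
pan, tilt = track_step(480, 240, 0.0, 0.0)
assert pan > 0 and tilt == 0.0
```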

3.1.1. System Definition

The system being developed is the robot add-on and its subcomponents. The add-on is

a component designed to interface with the robot platform currently being designed at

the University of Sheffield. The scope of the system is the individual add-on and its

subcomponents. The system is part of a larger system comprising the robot/platform


to which the add-on is connected and multiple robots with which the add-on could be

integrated.

3.1.2. Physical Characteristics

The add-on shall comprise a hardware assembly of various mechanical

components and electronics.

The mechanical components shall comprise the add-on enclosure, the pan-tilt servo

configuration, the pan-tilt attachment, and the passive connector.

The hardware assembly, in its entirety, shall fit within a cylinder of a

maximum diameter of 140 mm. The height of the hardware assembly shall not

exceed 140 mm at the highest point.

The electronics shall comprise the on-board computer, the microcontroller, the

tool controller, the connector controller, the communication device, the

camera board, and the additional peripheral devices connected.

The electronics shall contain a full communication pathway between the robot

and the add-on.

The add-on shall contain a pan-tilt camera configuration.

3.1.3. Performance Characteristics

The pan-tilt configuration shall allow for ±180° rotation on the pan-tilt axes, to

attain the largest field-of-view (FOV) in each frame.

The add-on shall be able to recognise an object up to two metres away.

The add-on shall operate from a 5 V supply and draw no more than 2 A.

The add-on shall be able to capture still images of up to 1024 × 980 pixels.

The add-on shall be able to stream live video up to 15 fps.

3.1.4. Compatibility

The add-on should be designed to accommodate alternative modes of operation:

either at the front of an MSR robot, or attached to the side of a robot.

3.1.5. Usability

The add-on shall be designed to acquire data from the environment, extract relevant

information, and route the information to the correct target device.


3.1.6. Early Concept Prototyping

Having established the need for exteroceptive sensing, research was conducted into

how image acquisition on an embedded system could be achieved. Various resources

outlined the methodology used to acquire images from a camera connected to a

microcontroller. Several other resources were examined to identify potential

applications of this acquired imagery. In applications such as remote search and rescue,

a useful feature would be online video processing and object identification.

In addition to determining vision as the method of sensing to be used on-board the

add-on, research was conducted to determine how streaming large amounts of data

over a low-bandwidth network could be implemented. However, most resources

favoured using a high-speed Wireless Local Area Network (WLAN) to stream such

data, and limiting the other networks to basic neighbour-to-neighbour communication

for low-level data exchange. The CAN bus connecting the add-on to the robot(s)

could be used for this purpose.
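A back-of-envelope calculation supports this division of labour: even generously compressed video at the resolution and frame rate targeted for the add-on far exceeds the 1 Mbit/s ceiling of a classical CAN bus, so imagery must travel over the WLAN. The 20:1 compression ratio is an assumed round figure.

```python
# Rough bandwidth estimate for the video stream, using the resolution
# and frame rate from the performance requirements. The compression
# ratio is an assumed round figure; 1 Mbit/s is the classical CAN maximum.

width, height, fps = 1024, 980, 15
bits_per_pixel = 24                       # raw RGB888

raw_rate = width * height * bits_per_pixel * fps   # bits per second
jpeg_rate = raw_rate / 20                 # assume ~20:1 JPEG compression

CAN_RATE = 1_000_000                      # classical CAN bus, bits/s

print(f"raw video:  {raw_rate / 1e6:.0f} Mbit/s")
print(f"jpeg video: {jpeg_rate / 1e6:.1f} Mbit/s")
# Even compressed, the stream exceeds what the CAN bus can carry.
assert jpeg_rate > CAN_RATE
```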

Following this initial research, a stage-zero concept prototype was developed. This

prototype comprised a serial TTL JPEG camera bread-boarded to an Arduino Uno.

The Arduino was connected to the computer via a USB serial cable. In order to

acquire an image from the camera, a basic serial communication software package

was used. However, this method failed to acquire any useful image output. Figure

6 shows the breadboard layout of the aforementioned prototype.

Figure 6: the Arduino connected to the TTL JPEG Serial Camera via a breadboard


3.2. Chassis Design (Mechanical Design)

The process of designing the chassis required an in-depth understanding of how the

components of the add-on fit together, as well as how they integrate with the overall

MSR platform. The design process included various considerations, and underwent

several iterations, both conceptual and physical, to arrive at a final configuration.

Appendix I shows some of the early model prototypes, some of which were

abandoned partway through development due to infeasibility.

The chassis was designed and rendered in SolidWorks and manufactured using 3D-

printing. The chassis includes a variety of components: the main enclosure in which

all the subcomponents are assembled, the passive connector to which a robot

connector attaches, a top ‘lid’ on which the pan-tilt servomechanism is attached, and

the actuated pan-tilt module, comprising the pan motor, the tilt motor mount, the tilt

motor, and the camera mount. A full breakdown of the hardware components used on

the add-on is provided at the end of the chapter (section 3.8).

3.2.1. Enclosure Design

Figure 8 shows the enclosure design. The enclosure comprises a 75 × 75 mm octagonal

shell with diagonal mounting holes at the top, joined to a round base with an outer rim

at its top. The outer rim, used to align the add-on with a docking station, is a round

section measuring 81 mm across with four segments trimmed. The round base of this

component is based on an existing attachment template which dictates the mounting

hole placements and base diameter for the passive attachment. This template, shown

in Figure 7, was used by Parrott for some of his add-on designs.

Figure 7: the attachment template based on which the enclosure has been designed. Courtesy

of Parrott


Figure 8: the full add-on assembly. Not shown: camera ribbon cable or servo wires

It was desired to leave access points to the essential Raspberry Pi ports in case any

were required during programming or testing without having to remove the top lid. In

addition to the mounting holes for the top attachment and the passive connector, the

enclosure had mounting holes for the Raspberry Pi computer. The enclosure also had

three rectangular holes on two of its faces. The two left holes corresponded to the

positions of the power inlet and HDMI port on the Pi respectively, whereas the back

hole corresponded to the USB port into which the Wi-Fi adapter was plugged.

The main principle was to use a common template on which future add-ons could be

based. This greatly simplified the process of designing an attachment point for the

passive connector face, a design that is used throughout the multi-robot system. This

also allowed the electronics on-board the connector to retain their configuration

regardless of which add-on was connected to it. These considerations have played a

role in maintaining design modularity, a critical aspect spanning the system design as

a whole.

3.2.2. General Design Considerations

3.2.2.1. Material Cost

The hardware chassis was designed with the cost of 3D-printing material in mind.

This consideration prompted several changes across the design iterations, as it

was favourable to reduce the amount of ABS and support material used.

The printer deposited a soluble support material wherever gaps existed in the

model, so it was important to reduce the size of those gaps within the design, even at


the expense of potentially complicating the assembly sequence or increasing the

overall number of hardware components.

3.2.2.2. Sturdiness/Robustness

It was preferable to produce a design that used fewer fastening points, to

reduce the chances of potential breakages. The top “lid” contains the assembly points

for the pan-tilt servo unit as well as mounting holes that attach to the enclosure via

nuts and bolts at each of its four corners. The base is enclosed with the connector and

its accompanying electronics, producing a mostly enclosed unit that protects its

internal circuitry, with the option to add an internal cap to fully shield the exposed

parts. For mobile use cases, the internal components need to be well protected from

physical damage, hence the design of a roll cage to protect the camera and servos.

3.2.2.3. Expandability

A side benefit of the previous constraints was that the same basic features of the

module could be reused on other unique module types. For example,

using the Pi allows more peripherals to be connected if necessary, making it in some

ways an expandable, general-purpose module. In addition, the mounting pegs on the

tilt bracket allow more sensors (such as IR sensors/emitters, microphones, ultrasonic

sensors etc.) to be placed alongside the camera, enabling the robot to capture a more

complete perspective of its surroundings. These features add a large degree of

flexibility to the design that enables the user to define an appropriate use case and

modify the attachment accordingly.

3.2.2.4. Hardware Assembly Sequence

One of the most essential considerations was the order certain components would be

assembled in. This required a sound understanding of how to integrate the various

elements, such that the minimum number of assembly steps could be carried out to

assemble the module. It also dictated the placement of certain components with

respect to one another and guided the positioning of fastening components such as

nuts and screws. An animation of this exploded assembly is also available at the

following address: <link here>.


3.2.3. Pan-Tilt Unit Design

While a fixed camera could be used on-board an MSR attachment, it would greatly

limit the field of view (FOV) of the robot attached to it. In order to expand this FOV,

a 2-degree-of-freedom (DOF) pan-tilt unit was designed for the add-on. Figure 9

illustrates the pan-tilt unit design as well as the camera mounting arm.

Figure 9: the pan/tilt mechanism for two use cases: front attachment (left), and side

attachment (right)

The attachment was envisioned for two primary use cases: to be used at the front of a

multi-robot system, serving as the primary attachment (on an active connector); or to be

connected to the side of a robot, on a robotic arm for example, through one of the

robot’s passive connectors as a secondary attachment (a claw/gripper would be the

primary attachment in this case). It was therefore required that the attachment had a

maximum field of view in any of these cases, hence the proposed pan-tilt

arrangement. Some early design prototypes are also presented in Appendix I. Those

were discarded as inefficient, in favour of this “spherical” configuration.

3.2.4. Pan-Tilt Alignment

Figure 10 shows the angle range (approximately ±180°) of the pan-tilt mechanism.

This particular arrangement was chosen so that the two servo axes would be as close

together as possible. This adds the benefit of maintaining a

constant visual reference, by having the camera frames start from the same origin

point for all possible pan-tilt poses. This spherical joint configuration also allows for a

larger range of motion compared to those of other joint configurations.


Figure 10: pan/tilt motion arcs, showing a range of approximately ±180°
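The benefit of near-coincident axes can be made concrete: treating the camera as fixed at a single origin, its optical axis becomes a pure function of the two joint angles. The axis convention below (pan about the vertical axis, tilt up from the horizontal, with (0, 0) looking along +x) is an assumption for illustration.

```python
import math

def view_direction(pan_deg, tilt_deg):
    """Unit vector of the camera's optical axis for given pan/tilt
    angles, with the camera at a fixed origin (the coincident axes).

    Assumed convention: pan rotates about the vertical z-axis, tilt
    lifts the axis out of the horizontal plane; (0, 0) looks along +x.
    """
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(tilt) * math.cos(pan),
            math.cos(tilt) * math.sin(pan),
            math.sin(tilt))

# Straight ahead, then panned 90 degrees: the axis swings from +x to +y
# while the origin of the viewing sphere never moves.
assert view_direction(0, 0) == (1.0, 0.0, 0.0)
x, y, z = view_direction(90, 0)
assert abs(x) < 1e-9 and abs(y - 1.0) < 1e-9
```

Because the origin is (approximately) shared by both joints, every captured frame is a view from the same point on this sphere, which is the constant visual reference described above.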

3.2.4.1. Pan-Tilt Mechanism Actuation

Most pan-tilt actuation mechanisms use standard servo motors. Servo motors are

small, geared motors usually limited to ±180° of rotation, although there are many

designs that allow for continuous rotation. The servos selected for the pan-tilt

mechanism were Hextronik HXT900 9 gram servos, a common size for use on small

robots such as the HiGen module.
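As a sketch of how such servos are commanded, the function below maps an angle to a pulse width using the common 1000–2000 µs hobby-servo convention; the HXT900's actual endpoints and travel would need to be calibrated in practice.

```python
# Map a commanded angle to a servo pulse width. The 1000-2000 us range
# over 180 degrees is the common hobby-servo convention, assumed here;
# real endpoints vary between units and should be calibrated.

def angle_to_pulse_us(angle_deg, min_us=1000, max_us=2000, travel_deg=180):
    """Convert an angle in [-travel/2, +travel/2] degrees to a pulse
    width in microseconds, clamping out-of-range commands."""
    angle_deg = max(-travel_deg / 2, min(travel_deg / 2, angle_deg))
    span = max_us - min_us
    return min_us + span * (angle_deg + travel_deg / 2) / travel_deg

assert angle_to_pulse_us(0) == 1500      # centre position
assert angle_to_pulse_us(-90) == 1000    # one end stop
assert angle_to_pulse_us(90) == 2000     # other end stop
assert angle_to_pulse_us(200) == 2000    # out-of-range command clamped
```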

3.3. The Single-Board Computer

Following the initial prototyping stage, further in-depth research revealed it would be

much simpler to acquire images or video from a single-board computer (SBC)

featuring Wi-Fi communication. Several different SBCs were compared to

identify which would be suitable for use on-board the add-on. Of these, the

Raspberry Pi was deemed the most viable option for the intended use case. Figure 11

shows the four SBCs that were compared, and Appendix A presents a table

comparing these four devices.

Figure 11: four different single-board computers; Raspberry Pi 2 (left, back); ODroid C1

(right, back); HummingBoard (left, front); MIPS Creator Ci20 (right, front). Reproduced

from [30]


3.4. On-board Electronics

As part of the compatibility design requirements, an intermediate tool expander board

was to be designed. This takes the form of a custom Printed Circuit Board (PCB)

which connects to the Teensy microcontroller. This board is connected to the Tool

Controller which attaches to the connector.

This section outlines the motivation for the board and the on-board communications

design considerations, then describes the process of designing the tool expander from

schematic to finalised prototype board.

3.4.1. The Raspberry Pi

The Raspberry Pi Model A+ is a slightly smaller Raspberry Pi model, measuring

65 × 56 × 12 mm. With 256 MB of RAM, the computer not only towers over similar-

sized microcontroller boards such as the Arduino Uno, which provides only

32 KB of flash memory and 2 KB of SRAM, but also narrowly edges out competition

from similarly priced SBCs. Figure 12 (right) shows the Raspberry Pi board.

One crucial aspect of the design of the add-on is future expandability. It was therefore

important to enable the electronics configuration of the add-on to be changeable

according to the intended use case. The Raspberry Pi computer was partly chosen for

this reason; its 40-pin General-Purpose Input Output (GPIO) header enabled a

multitude of devices to be connected to it, utilising various communication protocols

such as Universal Asynchronous Receiver-Transmitter (UART) serial, Inter-

Integrated Circuit (I2C), and Serial Peripheral Interface (SPI).

Apart from ease of implementation, there are other benefits associated with the use of

single-board computers for the on-board processing software development. Firstly,

an SBC facilitates the robust development and implementation of control algorithms

using MATLAB and Simulink, as hardware support packages have been created to

interface with these platforms. Those could be used in three main use cases: running

standalone packages on the platform directly in Simulink; running hardware-in-the-

loop-type simulations with the ability to tune the simulation parameters from the

computer in real-time; and using MATLAB to run commands, scripts, or functions

directly on the device. These will be further illustrated in Chapter 4.


Another added benefit would be exploiting the potential of cloud computing. SBCs

are designed to interface with the internet, so imbuing the module with direct access

to the Internet of Things (IoT) could have the potential to make the entire MSR

system more powerful and applicable in practice, for live-streaming data and remote

operation. This will be reviewed in more depth in Chapter 6.

3.4.2. The Raspberry Pi Camera

The Raspberry Pi Camera presented itself as the primary candidate for the on-board

camera implementation, as it was readily supported by the Pi in terms of hardware as

well as software interfacing. The 5-megapixel camera unit features an

OmniVision OV5647 sensor with a resolution of 2592 × 1944 pixels. The

chosen version of the camera, the Pi NoIR (no IR) has no IR filtering. This type of

camera is commonly used in low-light conditions alongside IR emitters, making it

suitable for poorly lit environments such as caves, inside collapsed buildings, or

inside pipes. Figure 12 (left) shows the Pi NoIR camera.

3.4.3. The Teensy Microcontroller

The HiGen module uses a specialised microcontroller as its “brain”. This device, the

Teensy 3.2 Low Cost (LC) microcontroller, shown in figure 12 (centre), interfaces

with the CC which, in turn, routes the signals to/from the connector board circuit. The

reader is referred to section 2.5.2 for the functions of the CC and connector board.

The Teensy features a 96 MHz ARM Cortex-M4 microprocessor, support for various

communication protocols (SPI, UART, and I2C), and a small form factor (35.56 mm ×

17.78 mm), rendering it ideal for embedded applications.

Figure 12: (left) Raspberry Pi NoIR Camera. Retrieved from [31]; (centre) Teensy 3.2 LC

(Low Cost). Retrieved from [32]; (right): Raspberry Pi Model A+. Retrieved from [33]


Since the Wi-Fi adapter occupied the board’s only USB port, the Pi’s micro-USB

port was reserved for power, and connecting further peripherals via a powered USB

hub was infeasible, the GPIO header provided the alternative low-bandwidth

communication method required to interface with other electronics. It was therefore

important to implement the on-board electronics with this in mind, to help make the

board as expandable as possible.

3.4.4. Low-Level Communication Management

In order to allow more efficient high-level computation on board the Pi for tasks such

as real-time image processing and video streaming over Wi-Fi, it was important to

implement a solution that delegated low-level networking tasks to another device. For

this reason, an intermediate Printed Circuit Board (PCB) was designated as a

communication ‘middleman’ that regulated low-level peripheral control. In addition,

the Teensy microcontroller has built-in support for Controller Area Network (CAN)

bus communication. The CAN bus is used to enable the robots to exchange low-level

messages about each other’s status and orientation, giving the robots useful

knowledge of the other robots. From within the Pi, low-level control over peripheral

devices is well established, using a selection of open-source, third-party libraries such

as wiringPi [34] for accessing GPIO pins. These libraries, built in a similar fashion to

Arduino libraries, support standard communication protocols such as UART serial,

SPI, and I2C communication. A summary table of the various communication

protocols compatible with both devices is shown in table 3.

Initially, SPI, the protocol with the highest bandwidth, was chosen for communication between the Pi and the Teensy. In practice, however, the protocol proved too challenging to implement, as neither device natively supported running in slave mode. Several attempts were made to compile programs against a third-party library that supported SPI slave mode [teensy-master-slave], but the programs failed to compile with numerous errors. The use of SPI was therefore abandoned.


The alternative low-level communication channel between the Pi and the Teensy was the UART, which used a transmit-receive (TX-RX) line from each device connected to its counterpart on the other device (RX-TX), in addition to a common ground line. UART is popular for such applications as it offers sufficiently low latency for the intended use as well as flexibility over the message content. The serial protocol allows the Raspberry Pi to send data to the Teensy as soon as it becomes available, at intervals determined by the baud rate of the serial link. In most cases, the user sets this baud rate when initialising the serial connection; the maximum baud rate, however, is limited by the clock of the on-board UART hardware. The Teensy then processes the received data and executes commands based on its content.

| StrB | DB01 | DB02 | DB03 | DB04 | DB0… | DBn | PB | StpB | StpB |

Figure 13: an n+4-bit 'word' transmitted over UART serial, showing the start bit (StrB), data bits (DB01-DBn), parity bit (PB), and stop bits (StpB).

Table 4: Comparison of three serial data protocols: SPI, I2C, and UART

Protocol   Properties                                          Pros/Cons
--------   -------------------------------------------------   -----------------------------------------
SPI        4-wire connection: Master-In-Slave-Out (MISO),      + Less sensitive to noise
           Master-Out-Slave-In (MOSI), Chip-Enable (CE)        + Less complex to implement than I2C
           and Signal Clock (SCLK); synchronous protocol,      - Number of wires increases with added
           transmitting/receiving data every clock cycle;        devices
           highest data bandwidth (MHz range)

I2C        Synchronous, two-wire (SCL and SDA) half-duplex     - More complex than SPI or UART
           serial bus; multiple masters and slaves;            - Level-triggered, so susceptible to
           bandwidth: 100 kHz – 3.4 MHz                          corruption

UART       Asynchronous, two-wire communication: RX and TX     + Simple to implement
           lines only; predetermined interface parameters      - Wastes CPU clock cycles
           (baud rate, start/stop bits, parity bit);           - Framing overhead: each word carries
           bandwidth: 0.3 kbps – 1 Mbps                          start, stop and parity bits in
                                                                 addition to data


Figure 13 shows the breakdown of a UART serial word with a standard message size of n+4 bits; here each word comprises n+4 = 12 bits (1 start bit, 8 data bits, 1 parity bit, and 2 stop bits). Assuming a required sampling period of r = 5 ms (i.e. transmitting a new word once every 5 ms), the required data transfer rate, or baud rate, is:

B = (n + 4) × 1/r = 12 × 1/(5 × 10⁻³) = 2400 bps

Equation 1: the baud rate of the serial protocol for a 12-bit word and a 5 ms sampling period
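The trade-off between word size and sampling period can be checked with a short calculation. The Python sketch below is illustrative only; the function names are not from the dissertation:

```python
def required_baud(data_bits: int, period_s: float) -> float:
    """Minimum baud rate needed to transmit one UART word (start bit +
    data bits + parity bit + 2 stop bits = data_bits + 4 bits) once
    every `period_s` seconds, as in Equation 1."""
    return (data_bits + 4) / period_s

def framing_efficiency(data_bits: int) -> float:
    """Fraction of each transmitted word that carries payload data."""
    return data_bits / (data_bits + 4)

print(required_baud(8, 5e-3))   # ~2400 bps, matching Equation 1
print(framing_efficiency(8))    # ~0.667: 4 of every 12 bits are framing
```

Note the framing efficiency: at 2400 baud only about 1600 bps of payload data actually crosses the link.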

For example, if the coordinates of a point were identified by the image processing algorithm, the Pi could compute a set of servo motor angles to centre the image on that point. These servo angles, received over serial every 5 ms, would then be written to the signal lines of the servo motors connected to the Teensy. While this pathway introduces some latency between the Pi and the Teensy, it separates actuation and control from computation. The implementation of the actual image processing algorithm is presented in chapter 4.

While the use of the GPIO is straightforward, it is usually inadvisable to connect devices directly to the Pi unless absolutely necessary: the GPIO is rated for only a few tens of milliamperes, rendering it susceptible to potentially destructive current surges from other devices. It was therefore necessary to include low-level power regulation, either directly from the power source or via an intermediary board, i.e. the tool expander board. The full power requirements of the add-on are described in the following section (3.4.5).

3.4.5. Power Requirements

On an MSR platform, power is a scarce resource, particularly when a robot is connected to multiple other robots. To assess the full power requirements of the add-on module, it was powered from a regulated bench power supply set to 5 V with a 2 A current limit, connected to the Pi's 5 V and ground lines. The supply's measurement feature indicated the total current drawn by the hardware when fully connected. The power tests are summarised in Table 5.


Table 5: Measured current draw of multiple configurations using a 5 V, 2 A regulated power supply

Test                         Description of Test Components                             Value (mA)
Add-on boot                  Power consumption at boot with all peripherals connected   150
Teensy powering the motors   The Teensy connected to power and used to control the      350 (avg)
                             pan/tilt servos
All connected                All electronic subcomponents fully connected: the Pi,      900
                             Teensy, servos, and camera

The servos operate at 4.8-6 V, drawing an average current of 150 mA each. Were

larger, more powerful servos to be used, their greater current draw would jeopardise

the Pi’s safe operation, potentially causing electrical damage. In addition, larger servo

motors would add unnecessary weight to the add-on, which must be kept as light as

possible to maintain the efficiency of the MSR.
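As a sanity check on these figures, the measured draws can be compared against the supply limit. The sketch below is illustrative; the 600 mA aggregate for the Pi, camera, and Teensy is an assumption chosen so the total matches Table 5's 900 mA "all connected" measurement:

```python
def within_budget(currents_ma, supply_limit_ma=2000):
    """Sum the expected current draws (in mA) and compare the total
    against the bench supply's 2 A limit used in the tests above."""
    total = sum(currents_ma)
    return total, total <= supply_limit_ma

# 150 mA per servo is quoted in the text; 600 mA for the Pi, camera
# and Teensy together is an assumed aggregate, not a measured value.
total, ok = within_budget([150, 150, 600])   # (900, True)
```

A check of this kind makes explicit how little headroom would remain if larger servos (several hundred milliamperes each under load) were substituted.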

3.5. Electronics Integration: The Tool Expander

Having identified the requirements for the PCB design, including its components and

desired functionality, the board schematic and PCB layout were designed. The boards

were designed using DesignSpark PCB, a free software package for electronics board

layouts. The designs inherited many of the predefined components from the board's predecessors: the processing boards on the MSR robots and the tool controller.

3.5.1. Schematic Design

The process of designing a PCB using DesignSpark involved two stages: the

schematic design, and the PCB layout. Each of these stages required initial

prototyping to fully identify the required components and their corresponding

connections. To achieve this, prototype connections were established between the Pi

and the Teensy using a breadboard. Initially, a four-line SPI setup was created, but after numerous failed attempts to obtain results with this method, it was replaced with three-line serial.

The schematic contains five main blocks. The Teensy board block lies at the centre of

the board, acting as the communications middleman. The Pi connection header block

provides the serial RX and TX channels in addition to a common ground, 5V power,

and 2 status pins to inform the Teensy of the Pi’s state during boot. The CAN


transceiver block connects the Teensy to the tool controller (the board on which this

PCB attaches) by means of a CAN bus, comprising CAN High and Low lines, 5V

power, ground, and I2C clock and data signals.

The CAN bus interface is managed by two ICs: a TJA1055 fault-tolerant CAN transceiver, and an MCP4651 mixed-signal digital potentiometer with an I2C interface. A sensor block

connects the Teensy to four analogue inputs by means of headers into which sensors

could be plugged (via signal, power, and ground lines), and the servo block provides a

similar connection interface for two servos (connected to digital ports on the Teensy

as well as power and ground). Other components connected to the PCB are 3.3 kΩ resistors, which drop the 5 V line to 3.3 V for the CAN bus circuitry, which is only 3.3 V tolerant. In addition, header pins are designated for connection to other boards.

3.5.2. PCB layout

The PCB layout follows the above schematic design, marking the connection ports for

signal, power, sensors, and actuators (servos). Designing the PCB involved

transferring the schematic design into the PCB layout environment. A board layout

was created as a DXF file in SolidWorks to ensure the board profile was precisely

traced. The layout is shown in Figure 15.

The form factor of the board is designed to fit directly above the CC. Therefore, the

positions of mounting holes and top-to-bottom headers were traced from the CC

board. The next stage involved placing the components on the board such that the

connections to other components within the system, i.e. the Pi, the Teensy, and the

various sensors and servos, were all accounted for in terms of physical clearance. The

servo headers were measured to be about 15 mm long, so it was decided they would be

placed on the underside of the board. This was intentional as sufficient clearance

could not be left on the board topside. To connect the servo headers horizontally,

right-angled male-to-male header pins would be soldered to the underside.


Figure 14: schematic diagram for the Tool Expander PCB


Figure 15: the Tool Expander PCB design

After all components had been locked in place, individual tracks were manually

routed between the different components. Following that, the ground copper was

poured to ensure all the ground pins were connected. Once the preliminary PCB design was completed, a series of error-checking reports was run on the board. This was done to ensure that the components were all correctly connected, that the copper-pour ground could be routed to all its nets, and that no component nets were too close to one another. Following these checks and a subsequent revision of the layout, a Gerber file, a common industry-standard format for PCB manufacturing plots, was generated within DesignSpark. The Gerber plots were then sent to the manufacturer as a .zip file and a prototype version of the board was manufactured.


Figure 16: the Tool Expander PCB prototype board with surface-mounted components,

showing the Raspberry Pi interface header in the top right corner, and the right-angle sensor

header at the bottom.

3.5.3. Surface-Mounted Components

To finalise the PCB prototype, the remaining electronics components were soldered to

the PCB. Those included the CAN transceiver chip (TJA1055), the digital potentiometer chip (MCP4651), the 3.3 kΩ resistors, and the header pins

for each point of connection to the connector controller. The Teensy was then fixed in

place with soldered headers. Figure 16 shows the tool expander PCB with the surface-

mounted components soldered on. Assistance was given by C. Parrott for the surface

mounting of the CAN transceiver, the potentiometer, the resistors, and the headers.

3.6. Hardware Integration and Assembly

Once the electronics had been fully designed and manufactured, the assembly stage could commence. The first components to be assembled were those on the pan-tilt

assembly. Firstly, the pan servo was calibrated to ensure it could rotate to ±180°. It

was then mounted onto the top attachment using screws. Next, the tilt servo bracket,

fixed to the servo arm, was attached to the pan servo. The tilt motor mounting brace

was attached to the servo horn using the servo arm.

Following that, the tilt servo was attached to the bracket and screwed in place. Next, it

was calibrated to ensure it would give the desired rotation range. The servo arm and

camera tilt mount were combined, and the camera, whose ribbon cable was passed

through a slit at the top of the cap, was attached to the camera cap. The camera and

cap were then fixed to the arm via mounting pegs. An additional screw could be added to fully secure the camera to the camera bracket; however, it was not used.


Before the Raspberry Pi could be fixed in place, the connection wires from the servos

and the GPIO were passed down through to the base of the add-on. The Pi was then

fixed into the enclosure via screws. The camera connector was then attached to the Pi.

The Wi-Fi dongle was added next. To conclude the assembly, the Tool Expander was

connected to the Pi and the servo motors. A full list of the add-on components is provided in Table 6 below. Figure 17 shows the assembled add-on.

Figure 17: the fully-assembled vision add-on, with a dummy connector base template

Table 6: List of components used within the vision add-on

Component Name                        Description                                       Qty   Cost (£) (M/P)
Passive Connector                     The passive connector to which a robot unit        1    TBD (M)
                                      attaches
HXT900 9 g micro servo + servo arms   The servo motor model used for the pan-tilt        2    4.59 (P)
                                      actuation mechanism, with servo arms to attach
                                      parts to
Raspberry Pi Model A+                 The single-board computer acting as the brain      1    25.00 (P)
                                      of the add-on module
Pi NoIR Camera                        A Raspberry Pi-compatible camera module with       1    21.00 (P)
                                      no IR filtering
Wi-Fi USB Dongle                      A Wi-Fi adapter for the Raspberry Pi               1    8.50 (P)
8 GB micro SD memory card             A memory card to store the Raspberry Pi OS         1    6.50 (P)
Tool Expander Board                   The PCB designed for the add-on, including the     1    65.00 (M)
                                      Teensy microcontroller


Chapter 4 - Software Design

This chapter details the software development activities undertaken within this project, including low-level hardware code written to interface the Pi and the Tool Expander, concept testing using MATLAB to verify connectivity, and network-level code for communication between the different elements of the system: the add-ons, the robots, the workstation, and the various communication buses. In addition, code is demonstrated for some basic task execution, such as simple face and object detection. Throughout this chapter, line numbers are used to reference the corresponding files in the Appendices: e.g. (l. 1-10) marks lines 1 through 10.

4.1. Operating System

Having selected the Pi as the SBC, it was required to select a software environment to

use on-board the computer. Since the Raspberry Pi was capable of running various

operating systems, as presented in Chapter 3, there were several options to choose

from. However, during research it was discovered that MathWorks had released a hardware support package for the Raspberry Pi. Using MATLAB and Simulink greatly simplified image acquisition, source code development, and implementation on hardware. It also meant that entire control strategies could be developed and modified more easily, with Simulink's powerful tools exploited to program the add-on behaviour. This is addressed further later in the chapter.

Not only was it more convenient to use MATLAB for these purposes, but it also removed the need to build large and computationally taxing image processing libraries on the Pi, such as the Open Source Computer Vision library (OpenCV) [35]. According to one tutorial [36], the library takes in excess of 15 hours to build on the B+ version of the Pi, rendering it much too unwieldy for the less capable A+.

The support package required a specific OS to be installed onto the SD card on-board

the Pi. The OS installed was Raspbian Wheezy, a variant of Linux Debian. The full

version of the OS was installed, although for most cases the add-on would be used in

headless mode, so the default boot option was set accordingly. Using this


configuration discarded the graphical user interface in favour of a more streamlined

terminal interface, a preferred programming environment for Linux-based operating

systems. This reduced CPU overhead that would be better utilised for real-time

computation on the Pi.

The software was installed as per the MATLAB installation guide. To use the support package, however, the Pi had to be connected to a workstation PC running MATLAB, as the Pi could not run MATLAB on-board. A Secure Shell (SSH) connection was therefore established between the Pi and the workstation using PuTTY, an SSH client application.

4.2. SSH (Secure Shell)

SSH enabled remote access to the Pi over a Wi-Fi network by providing direct access to the Pi command line. This was done to simplify communication between the

Pi and the master workstation. Before this could be set up on the Pi, it was necessary

to correctly set up the network configuration files.

4.2.1. Configuring the Network Interface

Initially, the Pi was connected to the external USB hub to allow multiple peripherals,

such as a wireless keyboard-mouse and the wireless USB dongle, to be connected

simultaneously. This was required to access the terminal directly on-board the Pi,

since Wi-Fi is not enabled by default within Raspbian Wheezy. Next, the Pi was

connected to an HDMI monitor. At this stage, raspi-config was used to configure

specific settings on the Pi, such as enabling the camera board and SSH and serial

connectivity. The Pi was then restarted. Figure 18 shows the raspi-config interface.

Figure 18: the raspi-config interface


The next stage involved configuring wireless connectivity from the terminal command line. This involved modifying two of the Pi's default configuration files: the interfaces file (/etc/network/interfaces) and the supplicant configuration file (/etc/wpa_supplicant/wpa_supplicant.conf). The interfaces file manages the network configurations enabled on the device for the different interfaces, i.e. Ethernet or wireless, whereas the supplicant configuration file records the wireless network settings such as the network name (SSID), the pre-shared key (PSK), the authentication protocol, and the authentication algorithms.

Figure 19 shows the interfaces file on the Pi, modified to allow static IP address

allocation. This ensures the IP address of the Pi remains the same after every

reconnection operation. Other settings, such as the gateway ID (i.e. the router IP

address), subnet mask, and broadcast ID, would also be required here. Those were

retrieved using the terminal commands: route -n and ifconfig.


# /etc/network/interfaces
auto wlan0

iface lo inet loopback
iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet static
    address 192.168.27.167
    netmask 255.255.240.0
    gateway 192.168.28.1
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

Figure 19: the interfaces file

To automatically connect the Pi to the Wi-Fi network interface specified, the

supplicant file was also modified to include the network settings, as shown in figure 20. Within this file, multiple networks could be initialised, giving the Pi a choice of networks to connect to.



network={
    ssid="guest"
    psk="shefconfkey"
    proto=RSN
    key_mgmt=WPA-PSK
    pairwise=CCMP
    auth_alg=OPEN
}

Figure 20: the supplicant configuration file

4.3. Concept Testing and High-Level MATLAB Connectivity

This section outlines the code developed in MATLAB for interfacing the Pi and the

devices connected to it. This MATLAB code has mainly been used to test out certain

hardware features. In addition to connectivity code, a simple object detection script

was developed to execute on-board the module, to test its processing capabilities.

4.3.1. Pi MATLAB Initialisation

Figure 21 demonstrates the process of interfacing with the Pi through the MATLAB command prompt. The function raspi initialises a Raspberry Pi board object given three arguments: its IP address (ip), the username (usr), and the password (pswd).

ip = '192.168.27.167';
usr = 'pi';
pswd = 'raspberry';
mypi = raspi(ip, usr, pswd)

mypi =

Raspi with Properties:

DeviceAddress: 'mo-pi'

Port: 192.168.27.167

BoardName: 'Raspberry Pi Model A+'

AvailableLEDs: 'led0'

AvailableDigitalPins: [4 7 8 9 10 11 14 15 17 18 22 23 24 25 27 30 31]

AvailableSPIChannels:

AvailableI2CBuses: 'i2c-0' 'i2c-1'

I2CBusSpeed: 100000

Figure 21: the basic initialisation function for a Raspberry Pi board


The object declaration returns a set of properties for the Pi: the device address, the board name, the available digital pins, the available SPI channels, the available I2C buses and their speed, and the available LEDs. These properties are shown in figure 21.

4.3.2. Camera Board Initialisation

A similar procedure has been used to initialise a camera board object. The function

cameraboard was called to set up a camera board, with the target board ID (mypi) and

the camera resolution as its main arguments. Figure 22 shows this command run and

its output. The output shows numerous parameters that could be used to control

various picture settings, such as brightness, exposure, auto-white balance (AWB),

effects, as well as options for video recording.

res = '640x480';
mycam = cameraboard(mypi, 'Resolution', res)

mycam =

Cameraboard with Properties:

Name: Camera Board

Resolution: '640x480' (View available resolutions)

Quality: 10 (1 to 100)

Rotation: 0 (0, 90, 180 or 270)

HorizontalFlip: 0

VerticalFlip: 0

FrameRate: 30 (2 to 30)

Recording: 0

Picture Settings

Brightness: 50 (0 to 100)

Contrast: 0 (-100 to 100)

Saturation: 0 (-100 to 100)

Sharpness: 0 (-100 to 100)

Exposure and AWB

ExposureMode: 'auto' (View available exposure modes)

ExposureCompensation: 0 (-10 to 10)

AWBMode: 'auto' (View available AWB modes)

MeteringMode: 'average' (View available metering modes)

Effects

ImageEffect: 'none' (View available image effects)

VideoStabilization: 'off'

ROI: [0.00 0.00 1.00 1.00] (0.0 to 1.0 [top, left, width, height])

Figure 22: the cameraboard initialisation command using the board object and resolution arguments (top), and the command-line output (bottom)


Once the camera board had been initialised, some simple scripts were used to test its performance in teleoperation mode. Figure 24 shows a simple code snippet used to detect the position of a green object in space and mark its centre with a red marker.

Figure 23 shows the sample screen output when the program was run; the top view

(subplot 1) shows the true-colour image, and the bottom view (subplot 2) shows the

image after pixel thresholding was used to isolate the green colour. The script uses an

example function developed by MathWorks to demonstrate implementing image

processing algorithms on the Pi, modified to output the coordinates of the centre of

the object with respect to the current picture frame.

The function, trackball, is demonstrated in Appendix D. The original function

performs pixel thresholding on the acquired image to isolate the green pixels which

are extracted from the image as RGB colour channels using matrix assignment (l. 5-

7). A black-and-white version of the image is calculated by subtracting half of the red

and blue channels (l. 11) and then shifting the image by the threshold specified as a

function argument (l. 14). The algorithm then scans the green area, locates its

centroid, and marks it with a red dot (l. 17-26) after normalising the coordinates of the

object relative to the image centre. The function then returns the black-and-white and

colour image by re-adding the black-and-white pixels (l. 29-31).
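The core of this thresholding step can be reproduced in a few lines. The NumPy sketch below is an illustrative re-implementation of the g − r/2 − b/2 measure described above, not the MathWorks trackball code itself; all names are assumptions:

```python
import numpy as np

def green_centroid(img, threshold=40):
    """Locate the centroid of strongly green pixels in an RGB image.
    A pixel counts as green when its green channel exceeds half the
    red plus half the blue by more than `threshold`, mirroring the
    g - r/2 - b/2 measure described for the trackball function.
    Returns (bw, (row, col)) or (bw, None) when no pixel passes."""
    r, g, b = (img[..., i].astype(np.int32) for i in range(3))
    bw = (g - r // 2 - b // 2) > threshold
    if not bw.any():
        return bw, None
    rows, cols = np.nonzero(bw)
    return bw, (rows.mean(), cols.mean())

# A black 10x10 frame with a 2x2 green patch at rows/cols 2..3:
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[2:4, 2:4, 1] = 255
bw, centre = green_centroid(frame)   # centre == (2.5, 2.5)
```

The subtraction of half the red and blue channels rejects white and grey regions, which are bright in all three channels and would otherwise pass a naive green-channel threshold.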

Figure 23: true-colour JPEG frame showing the centre of the green object (top); intensity

thresholding of the image to isolate green colour from background (bottom)


for i = 1:50
    [img, bw, xm, ym] = trackball(snapshot(cam), 40);
    figure(2);
    subplot(211);
    imagesc(img);
    subplot(212);
    imagesc(bw);
    drawnow;
    pause(0.150)
end

Figure 24: sample code that uses the trackball algorithm to track the position of the green

object

4.4. Tool Expander Programming (The Teensy)

This section outlines the development of low-level code to interface the Teensy with

the Pi and the other board peripherals. As discussed in the previous chapter, the tool

expander board provided a serial communication link between the Pi and the Teensy.

This was used for two-way data streaming between the two devices: the Pi would stream motor commands down to the Teensy, whereas the Teensy would transmit back any relevant status information received from the CAN bus.

Using the serial device support for the Raspberry Pi, a serial device (the Teensy controller) could be initialised as follows (figure 25).

teensy = serialdev(mypi, '/dev/ttyAMA0', 115200)

teensy =

Serial Device with Properties:

DeviceAddress: 'mo-pi'

Port: 192.168.27.167

BoardName: 'Raspberry Pi Model A+'

AvailableLEDs: 'led0'

AvailableDigitalPins: [4 7 8 9 10 11 14 15 17 18 22 23 24 25 27 30 31]

AvailableSPIChannels:

AvailableI2CBuses: 'i2c-0' 'i2c-1'

I2CBusSpeed: 100000

Figure 25: serial device initialisation command; input arguments: host device name

(Raspberry Pi), serial port address, and baud rate


4.4.1. Serial Communication Using MiniCom

Serial communication is well supported within the Raspberry Pi knowledge base, especially through hardware interface libraries such as [34]; however, for the purposes of this project the library was not used within standalone C++ programs. To test serial communication, the Teensy was programmed to write out incrementing values, and the same values were read from the serial port on the Pi. A simple serial communication tool, MiniCom, was installed and used to read the serial output from the Teensy, while the Serial Monitor of the Arduino IDE was used to read out the data written to the Teensy serial port. MiniCom was invoked from the SSH command line, and a screenshot of the data exchange is shown in Figure 26 (left and right respectively).

minicom -b 9600 -o -D /dev/ttyAMA0

Figure 26: MiniCom used to input values to the serial port via SSH (left) and the Arduino

serial monitor echoing the data read (right).

The -b flag sets the baud rate of the serial interface (9600 in this case), the -o flag skips the modem initialisation sequence, and the device address is specified using the -D flag. The Teensy was connected to the serial port /dev/ttyAMA0, corresponding to the hardware serial port on the Raspberry Pi.

4.4.2. Pan-Tilt from Keyboard

Having verified the serial functionality with MiniCom, it was desired to test the PWM

functionality of the Teensy, to verify its ability to send pan-tilt motor commands to

the pan-tilt servos. One of the first examples used was a basic program which coded

the WASD pad (‘w’, ‘a’, ‘s’ and ‘d’ keyboard keys) to servo turning commands. An


example of how this was implemented is shown in Figure 27, which handles up and down tilting.

37. // Down TILT
38. if (input == 's') {
39.   if ((tilt + 5) < maxtilt && tilt_old != tilt) {
40.
41.     tilt_old = tilt;
42.     tilt += 5;
43.   }
44. }
45. // Up TILT
46. if (input == 'w') {
47.   if ((tilt - 5) > mintilt && tilt_old != tilt) {
48.     tilt_old = tilt;
49.     tilt -= 5;
50.   }
51. }

Figure 27: sample code snippet showing vertical (up and down) tilt commands within the Pan

Tilt Serial script

This simple Arduino script (Appendix NO) reads the serial data written to it, then

executes a series of if-else statements to determine which motor to write to and in

which direction: the ‘w’ and ‘s’ keys were mapped to ‘up’ and ‘down’ (tilt)

commands (l. 37-51), whereas the ‘a’ and ‘d’ keys were assigned to ‘left’ and ‘right’

pan commands (l. 54-68). Each key press corresponds to 5 degrees of pan/tilt in the

direction specified.

Two other input options exist: the ‘p’ key executes an incremental pan command that

sweeps the maximum pan angle (l. 71-74), and a stop command executes when the ‘o’

key is pressed (l. 77-80). Once the command is executed, the program prints out the

values and re-enters the loop.

4.4.3. Actuating the Motors using MATLAB

To test the serial interface between the Teensy and MATLAB, the Instrument Control

Toolbox was used to establish a connection to the Teensy. Once the connection was

established, a write command was executed from the interface to write out two integer

values, corresponding to pan and tilt angles, to the device. To ensure the Teensy

correctly read the message written to it, a serial print command was used to echo the

values written. The results of this read-write exchange are pictured in Figure 28,

which shows the Instrument Control Interface, along with the values written to the

servos.
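The write-then-echo exchange can be sketched as a simple framing scheme. The format below (two comma-separated integers terminated by a newline) is hypothetical, chosen only to illustrate the round trip; it is not necessarily the message format used in the project:

```python
def encode_angles(pan: int, tilt: int) -> bytes:
    """Pack two integer servo angles into one ASCII line for the
    serial link (comma-separated, newline-terminated)."""
    return f"{pan},{tilt}\n".encode("ascii")

def decode_angles(line: bytes):
    """Inverse of encode_angles: what the echoing side would parse."""
    pan_s, tilt_s = line.decode("ascii").strip().split(",")
    return int(pan_s), int(tilt_s)

# Round trip mimicking the write-then-echo verification:
msg = encode_angles(90, 45)    # b'90,45\n'
echoed = decode_angles(msg)    # (90, 45)
```

Echoing the decoded values back, as done with the serial print command above, confirms that framing and parsing agree on both ends of the link.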


Figure 28: the Instrument Control Application Interface in MATLAB, showing the data

read/write operations sent to the Teensy via serial.

4.5. Functionality Integration

The add-on was created with the goal of increasing the sensing capability of the robot

platform. Therefore, some sort of guided decision-making or task execution must be

demonstrated, ideally occurring between both components of the system; the tool

expander and the Raspberry Pi. This section illustrates one such example, in which a simple integration experiment was conducted.

The goal of this experiment was to test the effectiveness of serial communication

between the Raspberry Pi and Teensy from MATLAB. However, invoking the serial

read-write operation from the Raspberry Pi’s serial ports had no effect on the motors.

Therefore, it was decided to use the PC as a means of routing the communication

between the Teensy and the Pi. This was done by connecting the Teensy to the PC via

USB and disconnecting it from the Pi serial interface. The serial object setup outlined

in section 4.4.3. was used to write the motor commands to the Teensy.

After turning on the camera, the program scans the scene for a green object. If none is found, the program pans left and right until an object is found; failing that, it tilts up and down. If the object is still not found, the program exits and returns a message to the user. If the object is located, the program proceeds to track it.
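The search behaviour above amounts to a small ordered sequence of phases. A minimal Python sketch of that sequence (the phase names are hypothetical labels for illustration, not identifiers from the project code):

```python
def search_plan(found_at=None):
    """Yield the search phases in the order described above: check the
    current view, then sweep pan, then sweep tilt, then give up.
    `found_at` names the phase in which a hypothetical detection occurs."""
    for phase in ("initial", "pan_sweep", "tilt_sweep"):
        yield phase
        if phase == found_at:
            yield "track"
            return
    yield "give_up"

print(list(search_plan()))              # ['initial', 'pan_sweep', 'tilt_sweep', 'give_up']
print(list(search_plan("pan_sweep")))   # ['initial', 'pan_sweep', 'track']
```

Structuring the search this way keeps the cheapest check (the current view) first, and makes the fall-through to the exit message explicit.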


The MATLAB program calculated the difference between the current x- and y-position of the object centre and the previous value. A bounding box was then drawn around the image centroid (the red dot). If the new value lay outside the bounding box, the program commanded the motor to counteract the difference and re-centre the point within the box. For example, if the centroid had drifted more than 20 pixels to the left, a pan command was sent so that the point returned to the box. A sample screen output is shown in Figure 29. Appendix G details the sample code used for this experiment.
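The dead-band logic just described can be captured in a few lines. The Python sketch below is illustrative: the 5-degree corrective step is an assumed value (the actual on-board step size is not specified in this section), and only the sign logic and the 20-pixel dead band are taken from the description above.

```python
def step_command(dx: int, dy: int, deadband: int = 20, step_deg: int = 5):
    """Convert the frame-to-frame centroid displacement (dx, dy), in
    pixels, into a pan/tilt step. Displacements inside the +/-20 px
    bounding box produce no motion; larger ones command a corrective
    step in the matching direction (step_deg = 5 is an assumption)."""
    pan_step = 0 if abs(dx) <= deadband else (step_deg if dx > 0 else -step_deg)
    tilt_step = 0 if abs(dy) <= deadband else (step_deg if dy > 0 else -step_deg)
    return pan_step, tilt_step

print(step_command(3, -7))    # (0, 0): inside the bounding box, no motion
print(step_command(-35, 28))  # (-5, 5): corrective step on both axes
```

The dead band is what keeps small, noise-driven jitters in the centroid from being amplified into constant servo chatter.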

Figure 29: (left) object centre and bounding box surrounding object; (right) thresholded

version of the image


Chapter 5 - Experimentation and Results

Once the main software features of the system were established, it was necessary to ensure they could be integrated together. This chapter introduces a model developed

for tracking an object with the pan-tilt mechanism. A brief analysis of this model is

also introduced. Finally, this chapter introduces the foundations of designing the

integration and testing code. The primary chosen development environment for this

code was Simulink, where simple blocks were combined to produce a basic controller

for the add-on.

5.1. The Target Tracking Problem: Approach

Target tracking in Computer Vision (CV) takes many forms. The Visual Servoing (VS) problem, a specific subset of target tracking problems, can be modelled using three main hardware configurations: eye-in-hand VS, where the camera is attached to a 6-DOF robotic manipulator; eye-to-hand VS, where one or more cameras observe the robot or manipulator; and pan-tilt VS, where the camera is controlled using a pan-tilt servo configuration, i.e. 2 DOF. For the pan-tilt tracking case, insights derived from both eye-in-hand and pan-tilt VS approaches were studied.

5.2. The Target Tracking Problem: Framework

Computer vision makes extensive use of geometric transformations to map real-world

coordinates to equivalent points within an image plane. These coordinates could then

be used to, for example, estimate the pose of the object, or the camera with respect to

the object, or even the distance away from the object [37]. Several camera

configurations have been used for the purpose of object, as well as motion, tracking,

namely using multiple cameras simultaneously or using a system of stereo cameras to

triangulate objects. It was desired to explore using such methods to model pan-tilt

tracking. Specifically, given a position of a point in the real-world, the objective was

to find pan and tilt angles that would re-centre the view on that target point. A model

proposed in [38] introduces a mathematical set of procedures, or “forward pipeline”,

to map a 3D real-world point onto a corresponding 2D image plane, as well as an

“inverse” pipeline to obtain the reverse transformation.


The model introduces a method of describing the position of a point T with real-world coordinates (xt, yt, zt) projected onto the plane of the image, forming the point P (xp, yp, zp). The image plane is defined with a centre D (xd, yd, zd), and the vector OD is the direction vector of the virtual camera view (where O is the origin). The pan and tilt angles, φd and θd, determine the direction of OD in world coordinates. In most cases, the roll angle (the rotation about the y-axis) is ignored, as the corresponding DOF can be replaced by a software algorithm that rotates the captured image. The formulation of the model is represented in Figure 30.

To re-centre the camera view on the target point P on the image plane, the view must

rotate by the pan and tilt angles that correspond to the distances between the point P

and the centre D. These rotations are executed by the pan-tilt motors.

Figure 30: the point T (left diagram) corresponds to an equivalent point on an image plane EFGH (the top of a frustum). The equivalent real-world plane in which T lies maps out the base of the frustum, E'F'G'H' (right). The frustum EFGH-E'F'G'H' defines the projection volume. Reprinted from [38].

The angles θd and φd of the direction vector OD follow from simple trigonometry: the tilt angle θd from the arccosine of the depth zd over the vector norm, and the pan angle φd from the arctangent of yd over xd.

A set of geometrical parameters describes the frustum and is thus required to define the pan-tilt model: the vertical field-of-view (FOV) α; the vector OD and its normal distance n from the optic centre O; the aspect ratio r of the image (where r = |EF/FG|); the vector norm of OD (i.e. the distance between O and the top of the frustum); and the distance between OD and the base of the frustum.

Applying the inverse pipeline of the model developed in [38] yields a model which relates the point (xp'', yp'', zp'') to the point (x, y, z) by the inverse of the view transformation matrix Mview. To convert the new centre of the target feature (u, v) to the corresponding pan and tilt angles, the following trigonometric identities are used:

θd = arccos( zd / √(xd² + yd² + zd²) ) ;    φd = arctan( yd / xd )

Equation 2: the pan and tilt angles obtained from the inverse pipeline method described in [38]
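Equation 2 can be checked numerically. The short Python sketch below is for illustration (the project itself used MATLAB); atan2 is substituted for the plain arctangent so the pan angle keeps the correct sign in every quadrant, which is a choice made here rather than part of the cited model.

```python
import math

def pan_tilt_from_point(xd, yd, zd):
    """Equation 2: tilt from the arccosine of z_d over the vector norm,
    pan from the arctangent of y_d over x_d (atan2 keeps the sign
    correct in every quadrant)."""
    norm = math.sqrt(xd * xd + yd * yd + zd * zd)
    theta_d = math.degrees(math.acos(zd / norm))   # tilt
    phi_d = math.degrees(math.atan2(yd, xd))       # pan
    return theta_d, phi_d

# A point straight along the optic (z) axis needs no tilt away from it:
print(pan_tilt_from_point(0.0, 0.0, 1.0))
# A point in the x-y plane, 45 degrees from the x-axis:
print(pan_tilt_from_point(1.0, 1.0, 0.0))
```

The two sample points exercise the extremes: zero tilt along the optic axis, and a 90-degree tilt with a 45-degree pan for a point lying in the x-y plane.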

5.3. Target Tracking: Experiments

As mentioned in the previous chapter, one of the reasons for choosing MATLAB as the primary development environment for the add-on was its built-in hardware support for the Raspberry Pi. Through the Simulink support package for the Pi, two different modes of operation could be used: external mode, where the tuning parameters of the model can be modified in real time; and standalone mode, where the model is compiled into source code and deployed directly onto the Pi.

Before attempting to control the plant derived from the geometric model, it was desired to verify the model to ensure it would yield sound results in principle. A simple experiment was therefore designed, using a MATLAB script to locate the centroid of the tracked object and plot its changes over time.

5.3.1. Experimental Outline

Appendix G shows a MATLAB script which applied the calculated pan and tilt transformations to a set of x- and y-coordinates acquired through the Raspberry Pi camera, using the pan-tilt model. The script iterated through 500 samples, calculated the pan-tilt angles, then computed the differences between the current and previous pan and tilt angles, dφ and dθ. The target object, a green business card, was placed


a fixed distance (400 mm) away from the Raspberry Pi camera. The script was then run remotely, and after 500 iterations the graphs in Figure 31 were obtained.

Figure 31: (top left) true-colour and thresholded image with tracked centre; (top right)

variation of x- and y- coordinates of tracked centre; (bottom left) variation of tilt and pan

angles; (bottom right) rate-of-change of pan-tilt angles

5.3.2. Results

The obtained graphs show a large discrepancy between the model and the image coordinates acquired in real life. While there were elements of noise in the image capturing process, due to variations in light intensity and the low accuracy of the tracking algorithm, the effect of these factors on the pan and tilt angles would not account for such an output waveform.

The nonlinear dynamics presented in the pan output response could be a direct

consequence of the nonlinear arctangent conversion between the pan and the

coordinates of the point (xp’’, yp’’, zp’’) (Equation 2). A re-derivation of the model

would provide improved insight to guide the process of designing an adequate

controller. This will be briefly discussed in the following chapter.


The effects of noise could be further mitigated by varying the threshold specified in the colour detection algorithm. Reducing the colour threshold had a detectable effect on some initial trials of the image acquisition loop. However, several other parameters need further investigation.

In addition to re-evaluating the model, a simple simulation scheme could be used to provide inputs to the model. An initial test could supply a fixed set of image coordinates, for example (x, y) = (160, 120), and study the effect of this input in closer detail. A further experiment would input to the model a constant signal corrupted by zero-mean Gaussian noise at a low frequency, to simulate the "jittering" resulting from the image acquisition process. Once the simulation results yield closed-loop stable behaviour, further methods based on velocity or position control of the states could be implemented. A simple illustration of this simulation scheme is shown in Figure 32.

Figure 32: Simulink scheme used to simulate the behaviour of the pan-tilt model

At the centre of the Simulink scheme is the pan-tilt optic model outlined previously, implemented as a MATLAB M-function and acting as the plant to be controlled. Constant inputs of (160, 120) are supplied as the u and v values, to which "noise signals", simulated by a random number generator, are added. This effectively simulates a constant image frame in which the position of the object centre varies from sample to sample. The initial pan and tilt angles are set to zero, to simulate starting from a fixed point which the image processing block attempts to find. Unit delay blocks are added between the


random number signals and the model, and again between the model and the feedback pathway. These delays resemble real-time communication latency and prevent algebraic loops within the model, which would otherwise arise from using a feedback loop to provide new values of pan and tilt to the model.
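The loop structure of this scheme can be mimicked outside Simulink to see the effect of the injected noise. The Python sketch below is a deliberately linearised stand-in for the pan-tilt optic plant (the pixel offset from the image centre is mapped through the field of view), with the unit-delay feedback represented by carrying the previous angles into the next iteration; it illustrates the loop, not the M-function used in the project, and the noise variance is an assumed value.

```python
import math
import random

def plant(u, v, pan_prev, tilt_prev, w=320, h=240, fov_deg=41.4):
    """Toy stand-in for the pan-tilt optic plant: the pixel offset from
    the image centre becomes an angular increment via the field of view,
    added to the fed-back (delayed) angles. A linearised sketch only."""
    half = math.tan(math.radians(fov_deg) / 2)
    dpan = math.degrees(math.atan(2 * half * (u / w - 0.5)))
    dtilt = math.degrees(math.atan(2 * half * (v / h - 0.5)))
    return pan_prev + dpan, tilt_prev + dtilt

random.seed(0)
pan_z, tilt_z = 0.0, 0.0   # unit-delay states (previous outputs)
for k in range(5):
    # constant target (160, 120) corrupted by zero-mean Gaussian noise
    u = 160 + random.gauss(0, 2)
    v = 120 + random.gauss(0, 2)
    pan_z, tilt_z = plant(u, v, pan_z, tilt_z)   # fed back next sample
    print(f"k={k}: pan={pan_z:+.3f} deg, tilt={tilt_z:+.3f} deg")
```

With the target exactly at the image centre and no noise, the angles stay at zero; the noise alone produces the small sample-to-sample jitter the Simulink experiment is designed to study.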

The control scheme of choice could vary depending on the required performance

output. A basic methodology to develop such a control scheme, based on error

tracking, is outlined in the following section. Owing to time constraints, it was not

possible to implement and test a control strategy to verify the model.

5.4. Target Tracking: Controller Design

The tracking problem, in its simplest form, could be viewed as attempting to minimise

a time-varying error term,

𝑒(𝑡) = 𝑠[𝑚(𝑡), 𝑎] − 𝑠∗,

Equation 3: the visual servo (VS) problem as an error minimisation of the target

feature w.r.t. the current camera target. Reprinted from [39]

where e(t) denotes the error, s denotes the current observed feature in terms of its

pixel coordinates m(t) and additional knowledge extracted from the frame (denoted by

a), and s* denotes the desired target feature. The two general techniques that exist,

Image-Based (IBVS) and Position-Based (PBVS), differ mainly in how s is defined.

In IBVS, s is specified as a set of coordinates within the target frame, whereas in

PBVS, s corresponds to a set of 3D parameters estimated from image measurements

[39].

The interaction matrix for a given point is generated from a mapping between the

point in real world coordinates and the camera’s focal length. The parameters within

this matrix are obtained directly from the image or can be measured during the camera

calibration. The interaction matrix is defined as

Lx = ( −1/Z    0      x/Z    xy        −(1 + x²)    y
        0     −1/Z    y/Z    1 + y²    −xy         −x ) .

Equation 4: the interaction matrix of the point x. Reprinted from [39]


In this matrix, X, Y, and Z correspond to the real-world coordinates of the point, whereas x and y are its projections on the image plane.

The projections x and y are obtained using the following equations:

x = X/Z = (u − cu)/(f·α) ;    y = Y/Z = (v − cv)/f .

Equation 5: image coordinates x and y defined in terms of the pixel coordinates (u and v), the focal length (f), the camera centre (cu and cv), and the pixel ratio α. Reprinted from [39]
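Equation 5 is straightforward to express in code. A minimal Python sketch (the sample values are hypothetical round numbers chosen only for illustration):

```python
def normalize_pixel(u, v, f, cu, cv, alpha):
    """Equation 5: convert pixel coordinates (u, v) to normalised image
    coordinates (x, y) using the focal length f, principal point
    (cu, cv) and pixel ratio alpha."""
    x = (u - cu) / (f * alpha)
    y = (v - cv) / f
    return x, y

# The principal point itself maps to the image-plane origin:
print(normalize_pixel(223.2, 234.0, 800.0, 223.2, 234.0, 1.0))  # (0.0, 0.0)
```

This normalisation is what allows the interaction matrix of Equation 4 to be written independently of the particular camera's pixel grid.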

The velocity of the point (x, y), in terms of the camera's linear and angular velocity components, is therefore:

ẋ = −vx/Z + x·vz/Z + x·y·ωx − (1 + x²)·ωy + y·ωz ;
ẏ = −vy/Z + y·vz/Z + (1 + y²)·ωx − x·y·ωy − x·ωz .

Equation 6: the velocity of x in terms of the linear and angular components of the target's motion

The shorthand notation for Equation 6 is ẋ = Lx·vc, where vc is the camera's spatial velocity.

In PBVS and IBVS, the value of the depth of the target point must be known, either

through direct measurement or estimation [39]. In the estimation case, Chaumette and

several others propose Kalman filters to obtain estimates of the robot’s trajectory.

However, equivalent methods could be applied to estimate the image x- and y-coordinates. This would involve describing the states of the system as the x- and y-coordinates of the target feature point (i.e. the object's centroid), rather than the coordinates of the moving robot.
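As one concrete illustration of such an estimator, the Python sketch below implements a scalar constant-position Kalman filter to smooth one noisy centroid coordinate. It is a simplification of the trajectory estimators cited above, and the noise variances q and r are assumed values for illustration, not parameters tuned for this project.

```python
def kalman_1d(measurements, q=1e-3, r=4.0, x0=0.0, p0=1.0):
    """A scalar constant-position Kalman filter: the state is the
    (assumed static) centroid coordinate, q is the process noise
    variance and r the measurement noise variance (both assumed)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q              # predict (state assumed constant)
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy = [160, 163, 158, 161, 159, 162, 160]
print(round(kalman_1d(noisy, x0=160.0)[-1], 2))
```

A filtered centroid of this kind would feed the pan-tilt commands with a far steadier signal than the raw thresholded measurements used in the earlier experiment.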

5.5. Camera Calibration

To use either model for VS, camera calibration is essential to identify the intrinsic camera parameters: the focal length f, the principal point coordinates (cu, cv), the skew coefficient, and the pixel ratio α of the camera [37]. MATLAB provides a built-in tool to calibrate different camera configurations; it can be used with either a stereo camera, whose two lenses point towards the same point at a fixed angle, or a single camera capturing the same pattern from different angles. The latter approach was used, as only one camera was included within the add-on.

The camera calibration procedure involved repeatedly capturing the same pattern, a 64-square checkerboard, whose square size is measured beforehand. The pattern recognition algorithm marked the crossings between the black and white squares and calculated the pixel coordinates of a subset of those points, returning the indices of the rejected points and marking them with a different marker. The algorithm then calculated the distance of the camera from the pattern and generated a host of other camera parameters.

The camera calibration was conducted as follows: the checkerboard, with squares measuring 36.14 mm × 36.14 mm, was laid on the table. The add-on was then lifted to various positions above the checkerboard, and snapshots of these frames were recorded using a MATLAB script. The resulting images were imported into the calibration app, which selected a subset of them for the calibration procedure.

cameraParams1 =

  cameraParameters with properties:

   Camera Intrinsics
                 IntrinsicMatrix: [3x3 double]
                     FocalLength: [799.0569 812.6253]
                  PrincipalPoint: [223.1936 234.0062]
                            Skew: 0

   Lens Distortion
                RadialDistortion: [0.0817 -0.3076]
            TangentialDistortion: [0 0]

   Camera Extrinsics
                RotationMatrices: [3x3x3 double]
              TranslationVectors: [3x3 double]

   Accuracy of Estimation
           MeanReprojectionError: 0.1986
              ReprojectionErrors: [54x2x3 double]
               ReprojectedPoints: [54x2x3 double]

   Calibration Settings
                     NumPatterns: 3
                     WorldPoints: [54x2 double]
                      WorldUnits: 'mm'
                    EstimateSkew: 0
 NumRadialDistortionCoefficients: 2
    EstimateTangentialDistortion: 0

Figure 33: the camera calibration parameters
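The reported intrinsics can be exercised directly. The Python sketch below performs a basic pinhole projection using the focal lengths and principal point from Figure 33 (lens distortion is ignored, and the function name and camera-frame point are illustrative only):

```python
def project(point_cam, fx=799.0569, fy=812.6253, cx=223.1936, cy=234.0062):
    """Pinhole projection using the intrinsics reported by the MATLAB
    calibrator (Figure 33); lens distortion is ignored in this sketch.
    point_cam is (X, Y, Z) in camera coordinates, Z > 0."""
    X, Y, Z = point_cam
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# A point on the optic axis projects to the principal point:
print(project((0.0, 0.0, 400.0)))   # (223.1936, 234.0062)
```

Checks like this one make it easy to confirm that the calibrated parameters behave sensibly before they are embedded in a VS model.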


Figure 34: The camera calibration session in MATLAB. The checkerboard is used to train the

calibrator (top), which generates a pattern-centric view that shows the position of the camera

in the different frames (bottom).

In addition to the intrinsic matrix of the camera, the calibration app calculates lens distortion coefficients from the residual error in re-projecting the target points. The app also returns the X-Y-Z coordinates of the camera relative to the pattern. This can be visualised in two modes: pattern-centric (the pattern shown as a stationary object, with the various camera frames used to capture it) and camera-centric (the camera shown as stationary, with the pattern placed at various points).

Identifying the intrinsic parameters of the camera is an essential step towards developing an accurate VS model. Chapter 6 presents some insights on how this could be realised on-board the add-on, to improve the equipped MSR's reactiveness to its environment and ultimately achieve exteroception, fulfilling the purpose of the hardware add-on.


Chapter 6 - Conclusion

This project aimed to produce a prototype add-on for a self-reconfigurable modular

robot system. Research into Modular Self-Reconfigurable Robots was conducted.

This involved investigating existing implementations of MSR systems and identifying

their key traits. From there, MSR systems with add-ons were identified, specifically those with vision-based extensions. Next, the HiGen module was introduced, and the system architecture of the MSR was illustrated. This was used as a framework to identify hardware and software design requirements, which were then implemented and combined into a prototype device. Software was then developed to integrate the add-on components, enabling future integration with the HiGen connector. Finally, the add-on's potential as a means of collecting and processing data was explored.

6.1. Further Work

The final step to ensure compatibility with the MSR would be to fully integrate the

HiGen connector into the add-on in terms of hardware and software. This would

involve programming and testing CAN bus communication between the add-on and

the tool controller, so information could be exchanged throughout the MSR system.

As much of the infrastructure has already been developed, integrating these communication methods would be a relatively simple task. It is worth noting that, since the HiGen connector was not supplied during the project, progress on this development stage could not commence.

The initial project proposal involved exploring the potential to develop a secondary add-on based on a gripper tool that had been partially developed by C. Parrott. Although not fully implemented, this was considered throughout the design stages. For example, the Tool Expander board incorporated attachment points for various sensors and actuators that could further enhance the gripper. Load sensors and IR sensors were purchased, but the project's time budget was too limited to implement these enhancements.


Through designing the vision add-on using the techniques outlined, this research proposes the first implementation of an MSR extension that utilises a Single-Board Computer (SBC) as its on-board controller; to the best of the author's knowledge, no other MSR add-on design has yet explored this approach. The SBC's computational power allows not only much more complex decision-making, but also more robust real-time data acquisition and more powerful system-wide communication.

In addition, since MATLAB was chosen as the primary software development

environment for the add-on, a variety of powerful software packages could be readily

integrated and used on the add-on. This could be used to develop system-level control

strategies that govern the behaviour of the MSR as a whole. While this was not

demonstrated in practice, research was conducted into autonomous control design for

SBC-based robots implemented within Simulink [41]. Similar methods were attempted on-board the add-on; however, they could not be fully verified during this project.

Both MATLAB and Simulink have immense potential to further benefit this add-on.

For instance, functionality to live-stream data to the Internet of Things (IoT), through

a package named ThingSpeak [42], was briefly investigated. In addition, the prospect of controlling the entire add-on behaviour using these tools renders them an interesting progression point for the overall project. This would remove many of the software development constraints, as the user would be in full control of the controller model developed, be it at the scale of an individual add-on or in its complete integration within the system.

It is hoped that this research paves the way for potential expansions upon the HiGen system and its add-ons, including this SBC-based add-on. Much of the ground has already been covered in this development; it is up to a successor to steer this add-on, and consequently the MSR as a whole, towards real-world application.


Appendix A: Project Task Sheet

Task Sheet - Legend

Blue - Completed
Orange - Partially Completed
Red - Abandoned/aborted due to external factors
Strikethrough - Task no longer active


Appendix B: Project Gantt Chart

Gantt Chart - Legend


Appendix C: Project Resources Collected

Name | ID | Description | URL | Qty | Unit price (£) | Net (£)
Teensy | TEENSYV32 | Teensy 3.2 | http://www.hobbytronics.co.uk/teensy-v32 | 1 | 16.25 | 16.25
Jumper Wire Kit | 312 | 140-piece Jumper Wire Kit | http://www.hobbytronics.co.uk/jumper-kit-140 | 1 | 3.20 | 3.20
Male Header | 36HEAD | 36 Way Single Row Header, 0.1 inch pitch | http://www.hobbytronics.co.uk/36way-header-254 | 1 | 0.17 | 0.17
Pi Breakout | 2029 | Pi Cobbler plus Breakout Cable for Raspberry Pi B+ | http://www.hobbytronics.co.uk/raspberry-pi/rpi-connectors-cables/pi-cobbler-b-plus | 1 | 16.23 | 16.23
M/F Jumpers | PRT-12794 | Jumper Wires - Male/Female 6" (20 pack) | http://www.hobbytronics.co.uk/jumper-wires-mf-6in | 1 | 1.20 | 1.20
Raspberry Pi A+ | 2447906 | RASPBRRY-MODA+-256M SBC, Raspberry Pi, Model A+, 256MB | http://uk.farnell.com/raspberry-pi/raspbrry-moda-256m/sbc-raspberry-pi-model-a-256mb/dp/2447906 | 1 | 16.23 | 16.23
Raspberry Pi NoIR Camera | 2357308 | Raspberry Pi NoIR Camera Board | http://uk.farnell.com/raspberry-pi/rpi-noir-camera-board/raspberry-pi-noir-camera-board/dp/2357308 | 1 | 18.52 | 18.52
Raspbian Micro SD Card | 1525 | Pre-Loaded Raspbian microSD Card for Raspberry Pi B+ | https://www.coolcomponents.co.uk/pre-loaded-raspbian-microsd-card-for-raspberry-pi-b.html | 1 | 9.99 | 9.99
Mini WiFi Dongle | 1129 | Miniature WiFi (802.11b/g/n) Dongle | https://www.coolcomponents.co.uk/miniature-wifi-802-11b-g-n-dongle.html | 1 | 8.54 | 8.54
Hextronik HXT900 | HXT900 | Hextronik HXT900 9 g servo | http://www.hobbyking.com/hobbyking/store/__662__HXT900_Micro_Servo_1_6kg_0_12sec_9g.html | 2 | 4.38 | 8.76
PCB Tool Exp Ver 1 | - | Tool Expander PCB | pcbtrain.co.uk | 2 | 44.14 | 88.28
Breadboard | 720 | Breadboard Deluxe | https://www.coolcomponents.co.uk/breadboard-deluxe.html | 1 | 4.50 | 4.50


Appendix D:

/* Reading a serial ASCII-encoded string and sending motor commands
   to the Teensy. Adapted from open-source example code.
   This example code is in the public domain. */
#include <Servo.h>

#define HWSERIAL Serial1

int p0 = 90;
Servo panServo;
Servo tiltServo;

void setup() {
  // initialize serial:
  HWSERIAL.begin(9600);
  // attach the servo output pins:
  panServo.attach(6);
  tiltServo.attach(5);
  panServo.write(p0);
  tiltServo.write(p0);
  // pinMode(bluePin, OUTPUT);
}

void loop() {
  // if there's any serial available, read it:
  if (HWSERIAL.available() > 0) {
    // look for the next valid integer in the incoming serial stream:
    int pan = HWSERIAL.parseInt();
    // do it again:
    int tilt = HWSERIAL.parseInt();
    // look for the newline: that's the end of the sentence
    if (HWSERIAL.read() == '\n') {
      // constrain the values to 10 <= (theta, phi) <= 170
      pan = constrain(pan, 10, 170);
      tilt = constrain(tilt, 10, 170);
      // echo the two numbers back over serial
      HWSERIAL.print(pan, DEC);
      HWSERIAL.println(tilt, DEC);
      panServo.write(pan);
      tiltServo.write(tilt);
    } else {
      panServo.detach();
      tiltServo.detach();
    }
  }
}


Appendix E: Source Code for Trackball

function [img, bw, xm, ym] = trackball_cpy(img, thresh)
r = img(:,:,1);
g = img(:,:,2);
b = img(:,:,3);
%% Calculate green
justGreen = g - r/2 - b/2;
%% Threshold the image
bw = justGreen > thresh;
%% Find centre
[x, y] = find(bw);
if ~isempty(x) && ~isempty(y)
    xm = round(mean(x));
    ym = round(mean(y));
    %% Create a red dot
    xx = max(1, xm-5):min(xm+5, size(bw, 1));
    yy = max(1, ym-5):min(ym+5, size(bw, 2));
    bwbw = zeros(size(bw), 'uint8');
    bwbw(xx, yy) = 255;
    %% Create output image
    img(:,:,1) = uint8(r + bwbw);
    img(:,:,2) = uint8(g - bwbw);
    img(:,:,3) = uint8(b - bwbw);
else
    xm = 0;
    ym = 0;
end


Appendix F: Source Code for Teensy Serial WASD Pad

1. // CODE ADAPTED FROM OPEN SOURCE LIBRARIES
2. // AUTHOR DOES NOT CLAIM OWNERSHIP OF CODE DEVELOPED
3. //
4. #include <Servo.h>
5.
6. #define HWSERIAL Serial1
7. Servo panservo;
8. Servo tiltservo;
9.
10. const int maxpan = 150;
11. const int minpan = 10;
12. const int maxtilt = 150;
13. const int mintilt = 10;
14. int pan_old = 0;
15. int tilt_old = 0;
16. int pan = 0;
17. int tilt = 0;
18. int led = 13;
19. int flag = 0;
20.
21. void setup() {
22.
23.   panservo.attach(6);
24.   tiltservo.attach(5);
25.   panservo.write(pan);
26.   tiltservo.write(tilt);
27.
28.   Serial.begin(115200);
29.   // HWSERIAL.begin(115200);
30.   // HWSERIAL.println("ready");
31.
32.   pinMode(led, OUTPUT);
33.   digitalWrite(led, HIGH);
34. }
35.
36.
37. void loop() {
38.   if (Serial.available() > 0) {
39.     int input = Serial.read();
40.
41.     if (input == 'a') {
42.       if ((pan + 10) < maxpan && pan_old != pan) {
43.         pan_old = pan;
44.         pan += 10;
45.       }
46.     }
47.
48.
49.     if (input == 'd') {
50.       if ((pan - 10) > minpan && pan_old != pan) {
51.
52.         pan_old = pan;
53.         pan -= 10;
54.       }
55.     }
56.
57.     if (input == 's') {
58.       if ((tilt + 10) < maxtilt && tilt_old != tilt) {
59.
60.         tilt_old = tilt;
61.         tilt += 10;
62.       }
63.     }
64.
65.
66.     if (input == 'w') {
67.       if ((tilt - 10) > mintilt && tilt_old != tilt) {
68.         tilt_old = tilt;
69.         tilt -= 10;
70.       }
71.     }
72.
73.     if (input == 'p') {
74.       for (int i = minpan; i < maxpan; i++) {
75.         panservo.write(i);
76.       }
77.     }
78.
79.     if (input == 'o') {  // turn off motors
80.       panservo.detach();
81.       tiltservo.detach();
82.       // flag = 0;
83.     }
84.     panservo.write(pan);
85.     tiltservo.write(tilt);
86.     Serial.println("pan: " + (String)pan + " tilt: " + (String)tilt);
87.   }
88. }


Appendix G: MATLAB Serial with Teensy



Appendix H: Pan-Tilt Model in MATLAB

% MATLAB MODEL OF PAN-TILT TRACKING

% Adapted from work by Chen et al.

% Mohamed Marei, 29/04/2016

function [theta,phi,XYZ] = pan_tilt(u,v,theta_d,phi_d)
% Predefine n, r, alpha, w, h
n = 0.40;              % near-plane distance; left as a small value for now
w = 320;               % image width (pixels)
h = 240;               % image height (pixels)
r = w/h;               % aspect ratio
alpha = deg2rad(41.4); % field-of-view angle

% Compute theta_d and phi_d, i.e. pan and tilt angles
tdp = -pi + deg2rad(theta_d);
pdp = pi/2 - deg2rad(phi_d);

% Compute the angles
ctp = cos(tdp); % cos(theta - pi)
stp = sin(tdp); % sin(theta - pi)
cpp = cos(pdp); % cos(pi/2 - phi)
spp = sin(pdp); % sin(pi/2 - phi)

% Rotation of theta-pi about the x-axis, i.e. a tilt motion
Rxt = [1 0    0;
       0 ctp -stp;
       0 stp  ctp];

% Rotation of pi/2-phi about the z-axis
Rzp = [cpp -spp 0;
       spp  cpp 0;
       0    0   1];

% Inverse of M_view
Mview = Rxt*Rzp;
Mview_inv = inv(Mview);

% Compute image information
xp_dd = 2*n*r*(-0.5+(u/w))*tan(0.5*alpha);
yp_dd = 2*n*(-0.5+(v/h))*tan(0.5*alpha);
zp_dd = -n;

% Compute point X, Y, Z
XYZ = Mview_inv*[xp_dd yp_dd zp_dd]';
X = XYZ(1); Y = XYZ(2); Z = XYZ(3);
theta = round(rad2deg(acos(Z/sqrt(X^2+Y^2+Z^2)))); % theta_d is defined as the tilt angle
phi   = round(rad2deg(atan(Y/X)));                 % phi_d is defined as the pan angle
end
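The geometry of the model above can be checked numerically. The following Python sketch re-implements the same back-projection under the same parameter values; it is an illustrative re-derivation, not the dissertation's code. Two liberties are taken: `atan2` replaces `atan` so the optical axis does not divide by zero, and, since the product of two rotation matrices is orthogonal, the inverse view matrix is taken as the transpose rather than calling a matrix-inversion routine.

```python
import math


def pan_tilt(u, v, theta_d, phi_d, n=0.40, w=320, h=240, alpha_deg=41.4):
    """Back-project pixel (u, v) to a viewing direction (X, Y, Z) and the
    corresponding tilt (theta) and pan (phi) angles, mirroring the MATLAB
    model above. atan2 is used instead of atan for robustness."""
    r = w / h
    alpha = math.radians(alpha_deg)
    tdp = -math.pi + math.radians(theta_d)
    pdp = math.pi / 2 - math.radians(phi_d)
    ctp, stp = math.cos(tdp), math.sin(tdp)
    cpp, spp = math.cos(pdp), math.sin(pdp)
    # Rotation of theta-pi about x (tilt), then pi/2-phi about z (pan)
    Rxt = [[1, 0, 0], [0, ctp, -stp], [0, stp, ctp]]
    Rzp = [[cpp, -spp, 0], [spp, cpp, 0], [0, 0, 1]]
    Mview = [[sum(Rxt[i][k] * Rzp[k][j] for k in range(3)) for j in range(3)]
             for i in range(3)]
    # Product of rotations is orthogonal, so its inverse is its transpose
    Minv = [[Mview[j][i] for j in range(3)] for i in range(3)]
    # Pixel coordinates projected onto the near plane
    p = [2 * n * r * (-0.5 + u / w) * math.tan(0.5 * alpha),
         2 * n * (-0.5 + v / h) * math.tan(0.5 * alpha),
         -n]
    X, Y, Z = (sum(Minv[i][k] * p[k] for k in range(3)) for i in range(3))
    theta = round(math.degrees(math.acos(Z / math.sqrt(X*X + Y*Y + Z*Z))))
    phi = round(math.degrees(math.atan2(Y, X)))
    return theta, phi, (X, Y, Z)
```

A useful sanity check: with zero pan and tilt, the image centre (160, 120) back-projects onto the camera axis, so both returned angles are zero, while a corner pixel yields a tilt offset of roughly half the diagonal field of view.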


Appendix I: Source Code for Simple Tracking Controller

%% Simple code to track a green tennis ball, find its centre, then issue
%  motor commands from the Pi to an Arduino.
%  Created by Mohamed Marei on 05/04/2016
% clear mypi cam teensy
ip = '192.168.27.167';  % Raspberry Pi's IP address
usr = 'pi';             % Pi username
pswd = 'raspberry';     % Password

mypi = raspi(ip,usr,pswd);                       % initialises Pi object
cam = cameraboard(mypi,'Resolution','640x480');  % initialises camera object
if cam.Rotation == 0    % ensure the camera faces the right way up
    cam.Rotation = 180;
end
% A simple loop to track the position of the ball throughout the
% figure(1); clf reset
cam = cameraboard(mypi,'Resolution','320x240','Quality',40,'Brightness',60);
tb = 0;
pb = 0;
time = 1:500;
tlen = length(time);
tb_delta = zeros(1,tlen);
pb_delta = zeros(1,tlen);
theta_array = zeros(1,tlen);
phi_array = zeros(1,tlen);
xm_array = zeros(1,tlen);
ym_array = zeros(1,tlen);
XYZ_array = zeros(3,tlen);
%teensy = serialdev(mypi,'/dev/ttyAMA0',115200);
teensy = serial('Com6','BaudRate',9600);
% substitute for teensy = serial('Com6') with the same board parameters
fopen(teensy);          % open the serial connection before writing to it
u = 160 + 1.*rand(1,500);
v = 160 + 1.*rand(1,500);
%%
for i = 1:tlen
    [img, bw, xm, ym] = trackball_cpy(snapshot(cam),40);
    figure(1);
    subplot(211);
    imagesc(img);
    rectangle('Position',[xm-30 ym-27 60 54])
    subplot(212);
    imagesc(bw);
    drawnow;
    pause(0.05)
    % theta is the pan angle, phi is the tilt angle
    [theta,phi,XYZ] = pan_tilt(u(i),v(i),tb,pb);
    theta_array(i) = theta;
    phi_array(i) = phi;
    xm_array(i) = xm;
    ym_array(i) = ym;
    tb_delta(i) = abs(theta) - abs(tb);
    pb_delta(i) = abs(phi) - abs(pb);
    tb = theta;
    pb = phi;
    XYZ_array(:,i) = XYZ;
    % If the new coordinates are larger than 0.5x the width of the
    % rectangle, pan; similarly for tilt
    if phi*0.32 < 160-u(i) || theta*0.32 < 120-v(i)
        fwrite(teensy,[theta phi],'uint8')  % send pan and tilt angles to the Teensy
    end
end
figure(2); clf reset;
subplot(2,1,1);
plot(time,xm_array);
xlabel('time (ms)');
ylabel('x (pixels)')
subplot(2,1,2);
plot(time,ym_array);
xlabel('time (ms)');
ylabel('y (pixels)')
figure(3); clf reset;
subplot(2,1,1);
plot(time,theta_array);
xlabel('time (ms)');
ylabel('tilt (degrees)');
subplot(2,1,2);
plot(time,phi_array);
xlabel('time (ms)');
ylabel('pan (degrees)');
figure(4); clf reset;
subplot(2,1,1);
plot(time,tb_delta);
xlabel('time (ms)');
ylabel('d_\theta');
subplot(2,1,2);
plot(time,pb_delta);
xlabel('time (ms)');
ylabel('d_\phi');
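The controller's decision of when to transmit a correction can be isolated for testing. The Python sketch below mirrors the trigger condition from the loop above; the frame-centre coordinates (160, 120) and the 0.32 scale factor are taken from the listing, and the helper name is hypothetical.

```python
# Sketch of the tracking loop's command trigger: a motor command is
# issued only when the current pan/tilt correction falls short of the
# pixel error measured from the image centre.
CX, CY = 160, 120   # centre of a 320x240 frame, as in the listing
K = 0.32            # scale factor used in the listing's condition


def should_send_command(u, v, theta, phi):
    """True when the controller should transmit new pan/tilt angles."""
    return phi * K < CX - u or theta * K < CY - v
```

With the target at the image centre and zero commanded angles the condition is false and no command is sent; any leftward pixel error (u below 160) makes it true, which matches the deadband behaviour intended in the listing's comment.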


Appendix J: Design Sketches and SolidWorks Prototypes

(a) Prototype of the enclosure base. Scrapped due to the number of assembly points required.

(b) Prototype of the pan motor subassembly. Too bulky to attach to a bottom base.

(c) Prototype of the pan motor base. Scrapped, but provided the basis for subsequent designs.

(d) Top bracket assembly prototype. Scrapped due to complexity.

(e) Prototype V2 of the assembly. Modified to become octagonal, with thicker walls and a detachable top “lid”.

(f) Prototype V1 of the assembly, excluding tilt motors or cameras. Scrapped due to infeasibility.


References

[1] K. Stoy, D. Brandt and D. J. Christensen, Self-Reconfiguring Robots: An Introduction, Cambridge, Massachusetts: The MIT Press, 2010.

[2] C. Parrott, T. J. Dodd and R. Groß, “HiGen: A high-speed genderless mechanical connection mechanism with single-sided disconnect for self-reconfigurable modular robots,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, IL, 2014.

[3] T. Fukuda and S. Nakagawa, “Dynamically reconfigurable robotic system,” in Proc. IEEE Conf. Robotics and Automation, vol. 3, pp. 1581-1586, 1988.

[4] M. W. Jorgensen, E. H. Ostergaard and H. H. Lund, “Modular ATRON: Modules for a self-reconfigurable robot,” in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Sendai, Japan, 2004.

[5] S. Murata, H. Kurokawa and S. Kokaji, “Self-assembling machine,” in Proc. IEEE Int. Conf. on Robotics and Automation, San Diego, CA, 1994.

[6] G. S. Chirikjian, “Kinematics of a metamorphic robotic system,” in Proc. 1994 IEEE Conf. Robotics and Automation, San Diego, CA, 1994.

[7] B. Salemi, M. Moll and W.-M. Shen, “SuperBot: A deployable, multi-functional, and modular self-reconfigurable robotic system,” in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, Beijing, China, 2006.

[8] K. Kotay and D. Rus, “Locomotion versatility through self-reconfiguration,” Robotics and Autonomous Systems, vol. 26, pp. 217-232, 1999.

[9] M. D. Kutzer, M. S. Moses, C. Y. Brown, D. H. Scheidt, G. S. Chirikjian and M. Armand, “Design of a New Independently Mobile Reconfigurable Modular Robot,” in IEEE International Conference on Robotics and Automation, Anchorage, Alaska, 2010.

[10] J. Davey, N. Kwok and M. Yim, “Emulating self-reconfigurable robots - Design of the SMORES system,” in 2012 IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, 2012.

[11] S. Murata, E. Yoshida, A. Kamimura, H. Kurokawa, K. Tomita and S. Kokaji, “M-TRAN: Self-reconfigurable modular robotic system,” IEEE/ASME Transactions on Mechatronics, vol. 7, no. 4, pp. 432-441, 2002.

[12] M. Yim, K. Roufas, D. Duff, Y. Zhang, C. Eldershaw and S. Homans, “Modular reconfigurable robots in space applications,” Autonomous Robots, vol. 14, no. 2-3, pp. 225-237, 2003.

[13] M. Yim, D. G. Duff and K. D. Roufas, “PolyBot: a modular reconfigurable robot,” in Proceedings of the IEEE Intl. Conf. on Robotics and Automation, San Francisco, USA, 2000.

[14] E. Yoshida, S. Murata, A. Kamimura, K. Tomita, H. Kurokawa and S. Kokaji, “A self-reconfigurable modular robot: Reconfiguration planning and experiments,” The International Journal of Robotics Research, vol. 21, no. 10-11, pp. 903-915, 2002.

[15] A. Shokri and E. Masehian, “A meta-module approach for cluster flow locomotion of modular robots,” in Robotics and Mechatronics (ICROM), 2015 3rd RSI International Conference on, 2015.

[16] K. Stoy and R. Nagpal, “Self-reconfiguration using directed growth,” in Proc. 7th Int. Symp. on Distributed Autonomous Robotic Systems, 2004.

[17] “MODLAB - the modular robotics laboratory at the University of Pennsylvania,” MODLAB, 2012. [Online]. [Accessed 11 November 2015].

[18] R. Gross, M. Bonani, F. Mondada and M. Dorigo, “Autonomous self-assembly in swarm-bots,” IEEE Transactions on Robotics, vol. 22, no. 6, pp. 1115-1130, 2006.

[19] K. Payne, J. Everist, F. Hou and W.-M. Shen, “Single-sensor probabilistic localization on the SeReS self-reconfigurable robot,” IAS, 2006.

[20] B. Shirmohammadi, C. J. Taylor, M. Yim, J. Sastra and M. Park, “Using smart cameras to localize self-assembling modular robots,” in Distributed Smart Cameras, 2007. ICDSC ’07. First ACM/IEEE International Conference on, pp. 76-80, 2007.

[21] B. Shirmohammadi, C. J. Taylor, M. Yim, J. Sastra and M. Park, “Towards robotic self-reassembly after explosion,” Departmental Papers (MEAM), p. 147, 2007.

[22] A. Castano, A. Behar and P. M. Will, “The Conro modules for reconfigurable robots,” IEEE/ASME Transactions on Mechatronics, vol. 7, no. 4, p. 403, 2002.

[23] B. Li, S. Ma, J. Liu, M. Wang, T. Liu and Y. Wang, “AMOEBA-I: A shape-shifting modular robot for urban search and rescue,” Advanced Robotics, vol. 23, pp. 1025-1056, 2009.

[24] D. L. Akin, B. Roberts, S. Roderick, W. Smith and J.-M. Henriette, “MORPHbots: Lightweight modular self-reconfigurable robotics for space assembly, inspection, and servicing,” in Space, San Jose, California, 2006.

[25] A. Lyder, R. F. M. Garcia and K. Stoy, “Genderless connection mechanism for modular robots introducing torque transmission between modules,” in Proceedings of the ICRA Workshop on Modular Robots, State of the Art, 2010.

[26] M. Yim, W.-M. Shen, B. Salemi, D. Rus, M. Moll, H. Lipson and G. S. Chirikjian, “Modular self-reconfigurable robot systems [grand challenges of robotics],” IEEE Robotics & Automation Magazine, vol. 14, no. 1, pp. 43-52, 2007.

[27] I.-M. Chen, “Rapid response manufacturing through a rapidly reconfigurable robotic workcell,” Robotics and Computer Integrated Manufacturing, vol. 17, pp. 199-213, 2000.

[28] Y. Koren and M. Shpitalni, “Design of reconfigurable manufacturing systems,” Journal of Manufacturing Systems, vol. 29, no. 4, pp. 130-141, 2010.

[29] C. Parrott, T. J. Dodd and R. Gross, “Towards a 3-DOF mobile and reconfigurable modular robot,” 2014.

[30] G. Sims, “Showdown: Raspberry Pi 2 vs ODROID C1 vs HummingBoard vs MIPS Creator CI20 (updated),” 30 April 2016. [Online]. Available: http://www.androidauthority.com/raspberry-pi-2-vs-odroid-c1-vs-hummingboard-vs-mips-creator-ci20-599418/.

[31] “Pi NoIR Camera,” Raspberry Pi Foundation, 2014. [Online]. Available: https://www.raspberrypi.org/products/pi-noir-camera/. [Accessed 24 November 2015].

[32] “Teensy USB Development Board,” PJRC. [Online]. Available: https://www.pjrc.com/teensy/. [Accessed 12 December 2015].

[33] “Raspberry Pi Model A+ on sale,” Raspberry Pi Foundation, November 2014. [Online]. Available: https://www.raspberrypi.org/blog/raspberry-pi-model-a-plus-on-sale/. [Accessed 11 November 2015].

[34] G. Henderson, “WiringPi Serial Library,” wiringPi. [Online]. Available: http://wiringpi.com/reference/serial-library/. [Accessed 26 January 2016].

[35] “OpenCV: an open source computer vision library,” OpenCV, 2008. [Online]. Available: http://opencv.org/. [Accessed 11 February 2016].

[36] A. Rosebrock, “Install OpenCV and Python on your Raspberry Pi 2 and B+,” PyImageSearch. [Online]. Available: http://www.pyimagesearch.com/2015/02/23/install-opencv-and-python-on-your-raspberry-pi-2-and-b/. [Accessed 11 February 2016].

[37] D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Essex: Pearson Education Limited, 2012.

[38] G. Chen, P.-L. St-Charles, W. Bouachir, G.-A. Bilodeau and R. Bergevin, “Reproducible evaluation of pan-tilt-zoom tracking,” in International Conference on Image Processing (ICIP), pp. 2055-2059, 2015.

[39] F. Chaumette and S. Hutchinson, “Visual servoing and visual tracking,” in Springer Handbook of Robotics, Heidelberg, Germany: Springer-Verlag, 2008, pp. 563-584.

[41] “Raspberry Pi Robot,” MathWorks, 30 September 2014. [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/47376-control-a-raspberry-pi-powered-robot-with-matlab-and-simulink. [Accessed 28 April 2016].

[42] “Internet of Things - ThingSpeak,” MathWorks, 2016. [Online]. Available: https://thingspeak.com/. [Accessed 29 April 2016].