
Copyright © 2014 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.

Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries may photocopy beyond the limits of US copyright law, for private use of patrons, those articles in this volume that carry a code at the bottom of the first page, provided that the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. Other copying, reprint, or republication requests should be addressed to: IEEE Copyrights Manager, IEEE Service Center, 445 Hoes Lane, P.O. Box 133, Piscataway, NJ 08855-1331. The papers in this book comprise the proceedings of the meeting mentioned on the cover and title page. They reflect the authors’ opinions and, in the interests of timely dissemination, are published as presented and without change. Their inclusion in this publication does not necessarily constitute endorsement by the editors, the IEEE Computer Society, or the Institute of Electrical and Electronics Engineers, Inc.

IEEE Computer Society Order Number P5448

ISBN-13: 978-1-4799-7646-1 BMS Part # CFP1475H-PRT

Additional copies may be ordered from:

IEEE Computer Society
Customer Service Center
10662 Los Vaqueros Circle
P.O. Box 3014
Los Alamitos, CA 90720-1314
Tel: + 1 800 272 6657
Fax: + 1 714 821 4641
http://computer.org/cspress
[email protected]

IEEE Service Center
445 Hoes Lane
P.O. Box 1331
Piscataway, NJ 08855-1331
Tel: + 1 732 981 0060
Fax: + 1 732 981 9667
http://shop.ieee.org/store/
[email protected]

IEEE Computer Society
Asia/Pacific Office
Watanabe Bldg., 1-4-2 Minami-Aoyama
Minato-ku, Tokyo 107-0062
JAPAN
Tel: + 81 3 3408 3118
Fax: + 81 3 3408 3553
[email protected]

Individual paper REPRINTS may be ordered at: <[email protected]>

Editorial production by Juan E. Guerrero Cover art production by Mark Bartosik

Printed in the United States of America by Applied Digital Imaging

IEEE Computer Society

Conference Publishing Services (CPS) http://www.computer.org/cps

UIC-ATC-ScalCom 2014 Sponsors

2014 IEEE 11th Intl Conf on Ubiquitous Intelligence & Computing and 2014 IEEE 11th Intl Conf on Autonomic & Trusted Computing and 2014 IEEE 14th Intl Conf on Scalable Computing and Communications and Associated Symposia/Workshops

Denpasar, Bali, Indonesia

9-12 December 2014

Conference Information
Copyright Page
Message from the UIC 2014 General Chairs
Message from the UIC 2014 Program Chairs
Message from the UIC 2014 Steering Chairs
Message from the UIC 2014 Workshop/Symposium Chairs
Message from the ATC 2014 General Chairs and Program Chairs
Message from the ATC 2014 Steering Chairs
Message from the ScalCom 2014 General Chairs
Message from the ScalCom 2014 Program Chairs
Message from the ScalCom 2014 Steering Chairs
Message from the ScalCom 2014 Workshop Chairs
Message from the BusinessClouds 2014 Workshop Chairs
Message from the FUSION 2014 Workshop Chairs
Message from the PUDA 2014 Workshop Chairs
Message from the UFirst 2014 Symposium Chairs
Message from the USDE 2014 Symposium Chairs
UIC 2014 Conference Organization
UIC 2014 Program Committee
ATC 2014 Conference Organization
ATC 2014 Program Committee
ScalCom 2014 Conference Organization
ScalCom 2014 Program Committee
Keynotes
Author Index

Papers By Session

UIC 2014 – Regular Papers

IoT Link: An Internet of Things Prototyping Toolkit
by Ferry Pramudianto, Carlos Alberto Kamienski, Eduardo Souto, Fabrizio Borelli, Lucas L. Gomes, Djamel Sadok, Matthias Jarke

Situation Inference by Fusion of Opportunistically Available Contexts
by Jiangtao Wang, Yasha Wang, Hongru Ren, Daqing Zhang

Detecting Cruising Flagged Taxis' Passenger-Refusal Behaviors Using Traffic Data and Crowdsourcing
by Li Jin, Ming Han, Gangli Liu, Ling Feng

Handling Influence among Multiple Applications in a Smart Space
by Ma Jun, Tao Xianping, Cao Chun, Lu Jian

Implementation of On-Demand Indoor Location-Based Service Using Ad Hoc Wireless Positioning Network
by Shigemi Ishida, Koki Tomishige, Akira Izumi, Shigeaki Tagashira, Yutaka Arakawa, Akira Fukuda

Shutter: Preventing Information Leakage Based on Domain Gateway for Social Networks
by Tao Wu, Jianxin Li, Nannan Wu, Tao Ou, Borui Yang, Bo Li

Scalable Security Analysis Using a Partition and Merge Approach in an Infrastructure as a Service Cloud
by Jin B. Hong, Taehoon Eom, Jong Sou Park, Dong Seong Kim

RPC: A Localization Method Based on Regional Partition and Cooperation
by Dan Xu, Xiaojiang Chen, Weike Nie, Zhanyong Tang, Zhanglei Li, Dingyi Fang, Na An

Recognizing Semantic Locations from Smartphone Log with Combined Machine Learning Techniques
by Hu Xu, Sung-Bae Cho

A Run-Time Generic Decision Framework for Power and Performance Management on Mobile Devices
by Martin Peres, Mohamed Aymen Chalouf, Francine Krief

Muclouds: Parallel Simulator for Large-Scale Cloud Computing Systems
by Jinzhao Liu, Yuezhi Zhou, Di Zhang, Yujian Fang, Wei Han, Yaoxue Zhang

A Management System for Cyber Individuals and Heterogeneous Data
by Jun Ren, Jianhua Ma, Runhe Huang, Qun Jin, Zhigang Chen

Nanonetwork Minimum Energy Coding
by Muhammad Agus Zainuddin, Eugen Dedu, Julien Bourgeois

Design of Lower Limb Chair Exercise Support System with Depth Sensor
by Toshiya Watanabe, Naohiro Ohtsuka, Susumu Shibusawa, Masaru Kamada, Tatsuhiro Yonekura

Indoor Localization Utilizing Tracking Scanners and Motion Sensors
by Takumi Takafuji, Kazuhisa Fujita, Takamasa Higuchi, Akihito Hiromori, Hirozumi Yamaguchi, Teruo Higashino

AR Go-Kon: A System for Facilitating a Smooth Communication in the First Meeting
by Yuma Akaike, Jun Komeda, Yuka Kume, Satoshi Kanamaru, Yutaka Arakawa

An Anypath Routing Protocol for Multi-hop Cognitive Radio Ad Hoc Networks
by Chih-Min Chao, Hsiang-Yuan Fu, Li-Ren Zhang

Intuitive Appliance Control System Based on a High-Accuracy Indoor Positioning System
by Jun Komeda, Yutaka Arakawa, Morihiko Tamai, Keiichi Yasumoto

Discovering Latent Structures for Activity Recognition in Smart Environments
by Jiahui Wen, Jadwiga Indulska, Zhiying Wang

An Opportunistic Music Sharing System Based on Mobility Prediction and Preference Learning
by Fei Yi, Zhiwen Yu, Hui Wang, Bin Guo, Xingshe Zhou

Who Move the Treasures: A RFID-based Approach for the Treasures
by Tianzhang Xing, Binbin Xie, Zhanyong Tang, Xia Zheng, Liqing Ren, Xiaojiang Chen, Dingyi Fang, Na An

Behaviors and Communications in Working Support through First Person Vision Communication
by Yuichi Nakamura, Takahiro Koizumi, Kanako Obata, Kazuaki Kondo, Yasuhiko Watanabe

Human-Assisted Rule Satisfaction in Partially Observable Environments
by Viktoriya Degeler, Edward Curry

A Cross-Space, Multi-interaction-Based Dynamic Incentive Mechanism for Mobile Crowd Sensing
by Wenqian Nan, Bin Guo, Shenlong Huangfu, Zhiwen Yu, Huihui Chen, Xingshe Zhou

GFRT-chord: Flexible Structured Overlay Using Node Groups
by Hiroya Nagao, Shudo Kazuyuki

Influencing Factors Analysis of People's Answering Behaviours on Social Network Based Questions
by Zhiwei Sun, Wenge Rong, Yikang Shen, Yuanxin Ouyang, Chao Li, Zhang Xiong

An Autonomic Approach to Real-Time Predictive Analytics Using Open Data and Internet of Things
by Wassim Derguech, Eanna Bruke, Edward Curry

A Multi-armed Bandit Approach to Online Spatial Task Assignment
by Umair Ul Hassan, Edward Curry

Palantir: Crowdsourced Newsification Using Twitter
by Prthvi Raj, Sumi Helal

Historical Trajectories Based Location Privacy Protection Query
by Liu Cao, Yuqing Sun, Haoran Xu

Near-Optimal Activity Prediction through Efficient Wavelet Modulus Maxima Partitioning and Conditional Random Fields
by Roland Assam, Thomas Seidl

Taxi Exp: A Novel Framework for City-Wide Package Express Shipping via Taxi Crowdsourcing
by Chao Chen, Daqing Zhang, Leye Wang, Xiaojuan Ma, Xiao Han, Edwin Sha

Towards Energy Optimization Based on Delay-Sensitive Traffic for WiFi Network
by Bo Chen, Xi Li, Xuehai Zhou, Tengfu Liu, Zongwei Zhu

BodyRC: Exploring Interaction Modalities Using Human Body as Lossy Signal Transmission Medium
by Yuntao Wang, Chun Yu, Lin Du, Jin Huang, Yuanchun Shi

Multi-source Broadcast Scheduling Algorithm of Barrage Relay Network in Tactical MANET
by Junhua Yan, Chen Tian, Wenyu Liu, Lai Tu, Benxiong Huang

Discovering People's Life Patterns from Anonymized WiFi Scanlists
by Shao Zhao, Zhe Zhao, Yifan Zhao, Runhe Huang, Shijian Li, Gang Pan

uStitchHub: Stitching Multi-touch Trajectories on Tiled Very Large Tabletops
by Yongqiang Qin, Yue Shi, Yuanchun Shi

Design of a Sensing Service Architecture for Internet of Things with Semantic Sensor Selection
by Yao-Chung Hsu, Chi-Han Lin, Wen-Tsuen Chen

Identifying Hot Lines of Urban Spatial Structure Using Cellphone Call Detail Record Data
by Shu Chen, Hongwei Wu, Lai Tu, Benxiong Huang

Study on Complex Event Processing for CPS: An Event Model Perspective
by Yuying Wang, Xingshe Zhou, Lijun Shan, Kejian Miao

Model-Based Solution for Personalization of the User Interaction in Ubiquitous Computing
by Rui Neves Madeira, Pedro Albuquerque Santos, André Vieira, Nuno Correia

UIC 2014 - Short Papers

Task Scheduling in Cyber-Physical Systems
by Chunyao Liu, Lichen Zhang, Daqiang Zhang

Data Collection Oriented Topology Control for Predictable Delay-Tolerant Networks
by Hongsheng Chen, Ke Shi, Yao Lin

A Novel Email Virus Propagation Model with Local Group
by Qiguang Miao, Xing Tang, Yining Quan

Obstacle Avoidance for Visually Impaired Using Auto-Adaptive Thresholding on Kinect's Depth Image
by Muhamad Risqi Utama Saputra, Widyawan, Paulus Insap Santosa

Making Business Environments Smarter: A Context-Adaptive Petri Net Approach
by Estefanía Serral, Johannes De Smedt, Jan Vanthienen

Subtractive Clustering as ZUPT Detector
by Mohd Nazrin Muhammad, Zoran Salcic, Kevin I-Kai Wang

An Indoor Location-Tracking Using Wireless Sensor Networks Cooperated with Relative Distance Finger Printing
by Youn-Sik Hong, Sung-Hyun Han, Saemina Kim

Defining and Analyzing a Gesture Set for Interactive TV Remote on Touchscreen Phones
by Yuntao Wang, Chun Yu, Yuhang Zhang, Jin Huang, Yuanchun Shi

Privacy Perceptive Wireless Service Game Model
by Weiwei Li, Yuqing Sun

Development of Collaborative Video Streaming for Mobile Networks: From Overview to Prototype
by Mingyang Zhong, Jadwiga Indulska, Peizhao Hu, Marius Portmann, Mohan J. Kumar

MSR: Minimum-Stop Recharging Scheme for Wireless Rechargeable Sensor Networks
by Lyes Khelladi, Djamel Djenouri, Noureddine Lasla, Nadjib Badache, Abdelmadjid Bouabdallah

Service Selection in Ubiquitous Environments: A Novel Approach Using CBR and Skyline Computing
by Rim Helali, Nadia Ben Azzouna, Khaled Ghedira

Remote Sensing of Forest Stand Parameters for Automated Selection of Trees in Real-Time Mode in the Process of Selective Cutting
by Igor Petukhov, Luydmila Steshina, Ilya Tanryverdiev

The Device Cloud - Applying Cloud Computing Concepts to the Internet of Things
by Thomas Renner, Andreas Kliem, Odej Kao

Studying Accessible States of User Interfaces on Tabletops
by Qiang Yang, Jie Liu, Yongqiang Qin, Chun Yu, Qing Yuan, Yuanchun Shi

Tailor-Made Gaussian Distribution for Intrusion Detection in Wireless Sensor Networks
by Amrita Ghosal, Subir Halder

Segmentation of Urban Areas Using Vector-Based Model
by Si Zhao, Hongwei Wu, Lai Tu, Benxiong Huang

ATC 2014 - Regular Papers

A Machine Learning Approach for Self-Diagnosing Multiprocessors Systems under the Generalized Comparison Model
by Mourad Elhadef

Failure Prediction for Cloud Datacenter by Hybrid Message Pattern Learning
by Yukihiro Watanabe, Hiroshi Otsuka, Yasuhide Matsumoto

Self-Adaptive Containers: Interoperability Extensions and Cloud Integration
by Wei-Chih Huang, William Knottenbelt

Automatically Generating External OS Kernel Integrity Checkers for Detecting Hidden Rootkits
by Hiromasa Shimada, Tatsuo Nakajima

A Privacy-Aware Architecture for Energy Management Systems in Smart Grids
by Fabian Rigoll, Christian Hirsch, Sebastian Kochanneck, Hartmut Schmeck, Ingo Mauser

Awareness and Control of Personal Data Based on the Cyber-I Privacy Model
by Li Tang, Jianhua Ma, Runhe Huang, Bernady O. Apduhan, He Li, Shaoyin Cheng

SW-POR: A Novel POR Scheme Using Slepian-Wolf Coding for Cloud Storage
by Tran Phuong Thao, Lee Chin Kho, Azman Osman Lim

Distributed Routing Protocol Based on Biologically-Inspired Attractor Selection with Active Stochastic Exploration and a Short-Term Memory
by Tomohiro Nakao, Jun-Nosuke Teramae, Naoki Wakamiya

Towards a Trust Model for Trust Establishment and Management in Business-to-Consumer E-Commerce
by Cong Cao, Jun Yan

An Efficient Trust-Oriented Trip Planning Method in Road Networks
by Junqiang Dai, Guanfeng Liu, Jiajie Xu, An Liu, Lei Zhao, Xiaofang Zhou

Trust-E: A Trusted Embedded Operating System Based on the ARM Trustzone
by Xia Yang, Peng Shi, Bo Tian, Bing Zeng, Wei Xiao

An Adaptivity-Enhanced Multipath Routing Method for Secure Dispersed Data Transfer Method in Ad Hoc Networks with Varying Node Density
by Tetsuya Murakami, Eitaro Kohno, Yoshiaki Kakuda

Secure Third Party Auditor for Ensuring Data Integrity in Cloud Storage
by Salah H. Abbdal, Hai Jin, Deqing Zou, Ali. A. Yassen

A Routing Scheme Based on Autonomous Clustering and P2P Overlay Network in MANETs
by Shoma Nakahara, Tomoyuki Ohta, Yoshiaki Kakuda

Privacy Protection against Query Prediction in Location-Based Services
by Zhengang Wu, Liangwen Yu, Jiawei Zhu, Huiping Sun, Zhi Guan, Zhong Chen

Forwarding Impact Aware Routing Protocol for Delay Tolerant Network
by Qaisar Ayub, M. Soperi Mohd Zahid, Sulma Rashid, Abdul Hanan Abdullah

An Architecture for Virtualization-Based Trusted Execution Environment on Mobile Devices
by Young-Woo Jung, Hag-Young Kim, Sang-Wook Kim

An Autonomic Container for the Management of Component-Based Applications in Pervasive Environments
by Imen Ben Lahmar, Djamel Belaïd

An Efficient Algorithm for Deriving Mobility Scenarios from New Mobility Model Representing Spatially and Temporally Biased Change of Node Mobility and Node Density for Mobile Ad Hoc Networks
by Takahiro Shigeta, Eitaro Kohno, Yoshiaki Kakuda

Compatibility in Service-Oriented Revision Control Systems
by Jameel Almalki, Haifeng Shen

ATC 2014 - Short Papers

International Center for Monitoring Cloud Computing Providers (ICMCCP) for Ensuring Trusted Clouds
by Mohssen M.Z.E. Mohammed, Al-Sakib Khan Pathan

Virtual Machine Migration Methods for Heterogeneous Power Consumption
by Satoru Ohta, Atsushi Sakai

Partial Least Squares Improvement and Research Principal Component Regression Extraction Methods
by Wangping Xiong, Jianqiang Du, Wang Nie

Toward Data-Centric Software Architecture for Automotive Systems - Embedded Data Stream Processing Approach
by Yukikazu Nakamoto, Akihiro Yamaguchi, Kenya Sato, Shinya Honda, Hiroaki Takada

An Anonymous Remote Attestation Protocol to Prevent Masquerading Attack
by Anna Lan, Zhen Han, Dawei Zhang, Yichen Jiang, Tianhua Liu, Meihong Li

SecPlace: A Security-Aware Placement Model for Multi-tenant SaaS Environments
by Eyad Saleh, Johannes Sianipar, Ibrahim Takouna, Christoph Meinel

Analysis of Virtual Machine Monitor as Trusted Dependable Systems
by Ganis Zulfa Santoso, Young-Woo Jung, Hag-Young Kim

On the Applicability of the Tree-Based Group ID Reassignment Routing Method for MANETs
by Hiroaki Yagi, Eitaro Kohno, Yoshiaki Kakuda

Delay- and Disruption-Tolerant Bluetooth MANET-Based Dual-Purpose Systems for Normal and Disaster Situations
by Yuya Minami, Yuya Kitaura, Eitaro Kohno, Shinji Inoue, Tomoyuki Ohta, Yoshiaki Kakuda

A Study on Providing the Reliable and Secure SMS Authentication Service
by Jaesik Lee, Youngseok Oh

Vacuuming XML
by Curtis E. Dyreson

ScalCom 2014

A Discrete Time Financial Option Pricing Model for Cloud Services
by David Allenotor, Ruppa K. Thulasiram

Proposal of a Distributed Cooperative M2M System for Flood Disaster Prevention
by Shiji Kitagami, Yohtaro Miyanishi, Yoshiyori Urano, Norio Shiratori

Trust in Mobile Cloud Computing with LTE-based Deployment
by Mohammed Hussain, Basel Mohamed Almourad

CAVE: Hybrid Approach for In-Network Content Caching
by Khaled Bakhit, Sirine Taleb, Ayman Kayssi, Imad Elhajj, Ali Chehab

Monitoring Hadoop by Using IEEE1888 in Implementing Energy-Aware Thread Scheduling
by Hiroaki Takasaki, Samih M. Mostafa, Shigeru Kusakabe

Estimation of Sleep Quality of Residents in Nursing Homes Using an Internet-Based Automatic Monitoring System
by Xin Zhu, Xina Zhou, Wenxi Chen, Kei-Ichiro Kitamura, Tetsu Nemoto

A Fine-Grained Cross-Domain Access Control Mechanism for Social Internet of Things
by Jun Wu, Mianxiong Dong, Kaoru Ota, Jianhua Li, Bei Pei

UFirst 2014

Intelligent Human Fall Detection for Home Surveillance
by Hong Lu, Bohong Yang, Rui Zhao, Pengliang Qu, Wenqiang Zhang

More than Meets the Eye in Smart City Information Security: Exploring Security Issues Far beyond Privacy Concerns
by Felipe Silva Ferraz, Carlos André Guimarães Ferraz

Interactive Design and Simulation System for Deploying Wireless Sensor Networks Based on Centriod Multi-touch Screen
by Jingru Wei, Lin Lu, Chenglei Yang, Xu Yin, Xiangxu Meng

Case Study of Constructing Weather Monitoring System in Difficult Environment
by Masato Yamanouchi, Hideya Ochiai, Y.K. Reddy, Hiroshi Esaki, Hideki Sunahara

Sensor Based Vehicle Environment Perception Information System
by Kashif Naseer Qureshi, Abdul Hanan Abdullah, Ghufran Ullah

Vision Based Mapping and Localization in Unknown Environment for Intelligent Mobile Robot
by Xiaoxin Qiu, Hong Lu, Wenqiang Zhang, Yunhan Bai, Qianzhong Fu

Anti-copy of 2D Barcode Using Multi-encryption Technique
by Suwilai Phumpho, Poomyos Payakkawan, Anurak Jansri, Direk Tongaram, Chirasak Promprayoon, Pithuk Keattipun, Boonchu Ruengpongsrisuck, Chanachai Punnua, Satree Areejit, Pitikhate Sooraksa

A Novel Automated Software Test Technology with Cloud Technology
by Zhenyu Liu, Mingang Chen, Lizhi Cai

Restful Design and Implementation of Smart Appliances for Smart Home
by Sehoon Kim, Jin-Young Hong, Seil Kim, Sung-Hoon Kim, Jun-Hyung Kim, Jake Chun

The Latent Appreciation Effect of Interactive Design in Internet Communication
by Wang Ning

The Key Features and Applications of Newmedia Interactive Design
by Wang Ning

A Realtime Framework for Video Object Detection with Storm
by Weishan Zhang, Pengcheng Duan, Qinghua Lu, Xin Liu

A Service Composition Environment Based on Enterprise Service Bus
by Fu Ning, Duan Junhua, Guo Yan

A Secure and Efficient Electronic Service Book Using Smart Cards
by Hippolyte Djonon Tsague, Johan Van Der Merwe, Samuel Lefophane

USDE 2014

A Wearable Internet of Things Mote with Bare Metal 6LoWPAN Protocol for Pervasive Healthcare
by Kevin I-Kai Wang, Ashwin Rajamohan, Shivank Dubey, Samuel A. Catapang, Zoran Salcic

Private Smart Space: Cost-Effective ADLs (Activities of Daily Livings) Recognition Based on Superset Transformation
by Xiaohu Fan, Hao Huang, Changsheng Xie, Zhigang Tang, Jing Zeng

A Proactive Approach for Information Sharing Strategies in an Environment of Multiple Connected Ubiquitous Devices
by Remus-Alexandru Dobrican, Denis Zampunieris

Shadow VoD: Performance Evaluation as a Capability in Production P2P-CDN Hybrid VoD Networks
by Hanzi Mao, Chen Tian, Jingdong Sun, Junhua Yan, Weimin Wu, Benxiong Huang

Estimate Dynamic Road Travel Time Based on Uncertainty Feedback
by Xiao Zhang, Yufeng Dou, Junfeng Zhan, Yinchuang Xie, Xuejin Wan

Mobiscan3D: A Low Cost Framework for Real Time Dense 3D Reconstruction on Mobile Devices
by Brojeshwar Bhowmick, Apurbaa Mallik, Arindam Saha

Computing Realistic Images for Audience Interaction in Projection-Based Multi-view Display System
by Wei Gai, Lin Lu, Chenglei Yang, Shuo Feng, Tingting Cui, Xiangxu Meng

FUSION 2014

Human Computer Interaction Advancement by Usage of Smart Phones for Motion Tracking and Remote Operation
by Jega Anish Dev

Motion Detection and Evaluation of Chair Exercise Support System with Depth Image Sensor
by Toshiya Watanabe, Naohiro Ohtsuka, Susumu Shibusawa, Masaru Kamada, Tatsuhiro Yonekura

Wireless Multihop Transmissions for Secret Sharing Communication
by Tetsuya Kanachi, Hiroaki Higaki

DTN Data Message Transmission by Inter-vehicle Communication with Help of Road Map and Statistical Traffic Information in VANET
by Hiroki Hanawa, Hiroaki Higaki

BDMap: A Heuristic Application Mapping Algorithm for the Big Data Era
by Thomas Canhao Xu, Jussi Toivonen, Tapio Pahikkala, Ville Leppänen

PUDA 2014

Network Traffic Prediction Based on LSSVM Optimized by PSO
by Yi Yang, Yanhua Chen, Caihong Li, Xiangquan Gui, Lian Li

Cloud Wave Smart Middleware for DevOps in Clouds
by Boris Moltchanov, Oscar Rodríguez Rocha

Entity Linking and Name Disambiguation in Chinese Micro-Blogs
by Li Li, YunLong Guo, Yu Xiang, Xiao Xu, WeiGang Zeng

Detecting Suicidal Ideation in Chinese Microblogs with Psychological Lexicons
by Xiaolei Huang, Lei Zhang, David Chiu, Tianli Liu, Xin Li, Tingshao Zhu

Personalized Activity Recognition Using Molecular Complex Detection Clustering
by Jun Zhong, Li Liu, Ye Wei, Dashi Luo, Letain Sun, Yonggang Lu

Improving the Architecture of an Autoencoder for Dimension Reduction
by Changjie Hu, Xiaoli Hou, Yonggang Lu

BusinessClouds 2014

New Methods to Ensure Security to Increase User's Sense of Safety in Cloud Services
by Yohtaro Miyanishi, Akira Kanaoka, Fumiaki Sato, Xiaogong Han, Shinji Kitagami, Yoshiyori Urano, Norio Shiratori

A Trade Gap Scalability Model for the Forex Market
by David Ademola Oyemade, David Allenotor

Multi-tenant Oriented Elastic Data-Centric Cloud Service Based on Resource Meta-Model
by Hongyun Yu, Hongming Cai, Cheng Xie, Lihong Jiang


Message from the UIC 2014

General Chairs UIC-ATC-ScalCom 2014

It is our great pleasure to welcome you to the 11th IEEE International Conference on Ubiquitous

Intelligence and Computing (IEEE UIC-14), held on December 9-12, 2014, in Ayodya Resort, Bali,

Indonesia. IEEE UIC-14 is the 11th edition of this conference series, previously held as USW-05 (Taipei,

Taiwan), UISW-05 (Nagasaki, Japan), UIC-06 (Wuhan, China), UIC-07 (Hong Kong, China), UIC-08

(Oslo, Norway), UIC-09 (Brisbane, Australia), UIC-10 (Xi’an, China), UIC-11 (Banff, Canada), UIC-12

(Fukuoka, Japan), and UIC-13 (Vietri sul Mare, Italy).

UIC is recognized for its unique coverage of both ubiquitous computing and machine learning under

one carefully integrated research forum. Its coverage encompasses multiple dimensions of ubiquitous

intelligent computing, smart environment and systems, smart objects and social- and cyber-physical

systems. UIC-14 consists of a main conference and three symposia/workshops, with contributions from more than 20 countries around the world. In addition, the conference program includes distinguished keynotes and one

panel.

For the successful organization of the conference, we owe an enormous debt of thanks. We would like

to sincerely thank Jianhua Ma and Laurence T. Yang, the Steering Chairs for giving us the opportunity to

organize the conference and for their support and guidance. We would like to express our appreciation to

Daqing Zhang for accepting our invitation to be the keynote speaker. We would like to give our special

thanks to the Program Chairs Yu Zheng, Takahiro Hara, and Gregor Schiele, as well as the Program Vice

Chairs, Wen-Chih Peng and Chiu C. Tan, for their excellent work and great efforts in organizing an

outstanding program committee, conducting a rigorous reviewing process and selecting high quality

papers from a large number of submissions, and for preparing an excellent conference program. We are

also indebted to the members of the program committee, who have put in hard work and long hours to

review each paper in a professional way. We are grateful to the workshop chairs Mike Chieh-Jan Liang

and Yuqing Sun, who attracted three interesting symposia and workshops. Special thanks go to Bernady O.

Apduhan and Abdul Hanan Abdullah, executive chairs, for their great help in many of the critical details.

We hope that you will thoroughly enjoy your conference experience with us in Bali, Indonesia at IEEE

UIC-14.

Zhiwen Yu, Northwestern Polytechnical University, China

Christian Becker, University of Mannheim, Germany

Kenji Mase, Nagoya University, Japan

UIC 2014 General Chairs


Message from the UIC 2014 Program Chairs

UIC-ATC-ScalCom 2014

It is our great pleasure to welcome you to the 11th IEEE International Conference on Ubiquitous Intelligence and Computing (UIC-14) held on December 9-12, 2014, in Ayodya Resort, Bali, Indonesia. The UIC-14 conference, sponsored by the Institute of Electrical and Electronics Engineers (IEEE), the IEEE Computer Society, and the IEEE Technical Committee on Scalable Computing (TCSC), is well established in its 11th edition as a highly reputed conference in the field.

This year, the conference theme is “Building Smart Worlds in Real and Cyber Spaces”, covering four sub-topics: ubiquitous intelligent/smart systems, ubiquitous intelligent/smart environments, ubiquitous intelligent/smart objects, and personal/social/physical aspects. There were many high quality paper submissions from Asia, the Asia Pacific, Europe, and all over the world. A rigorous review process was carried out by the highly qualified program committee members, and each paper was reviewed by at least three independent reviewers (with about four review reports on average). The main program of UIC-14 includes 41 high quality regular papers, as well as 17 short papers. We would like to thank the authors of all submitted papers for choosing UIC-14 as the venue to present their high quality research work.

First of all, we are fortunate and delighted to work in coordination with the members of the

Steering Committee, Jianhua Ma (chair, Hosei University, Japan), Laurence T. Yang (chair, St. Francis Xavier University, Canada), Sumi Helal (University of Florida, USA), Theo Ungerer (University of Augsburg, Germany), Jadwiga Indulska (University of Queensland, Australia), and Daqing Zhang (Institut Telecom SudParis, France and Peking University, China) for the success of UIC-14. We sincerely appreciate their constant support and guidance. Additionally, we would like to thank the General Chairs, Zhiwen Yu (Northwestern Polytechnical University, China), Christian Becker (University of Mannheim, Germany) and Kenji Mase (Nagoya University, Japan), as well as the General Executive Chairs, Bernady O. Apduhan (Kyushu Sangyo University, Japan) and Abdul Hanan Abdullah (UTM, Malaysia), for their full support. It was a great pleasure to work with such an excellent team.

Next, we would like to express our gratitude to the local team for managing the program information on the conference website, and to Sazzad Hussain (St Francis Xavier University, Canada) for his efficient assistance in managing the web-based submission and reviewing system in Canada. They all greatly assisted us during the conference organization, from setting up the program committees, staffed with reputed experts in their fields, to the reviewing process and paper selection for the conference program.

Finally, we would like to extend our thanks to the program vice-chairs, Wen-Chih Peng (National Chiao Tung University, Taiwan) and Chiu C. Tan (Temple University, USA), to the program committee members, and to the additional reviewers for providing tremendously valuable expertise and constructive comments, and for taking responsibility for the quality of the paper reviewing process on a tight schedule.

The UIC-14 conference is a highly stimulating event to foster interesting discussions as well as

useful interaction between researchers, and provides an excellent forum for exchanging and developing new ideas regarding advancements in the state of the art and practice of ubiquitous computing and communications and intelligent computing, as well as for identifying emerging research topics and defining the future of ubiquitous and intelligent computing. We hope that all participants will take part in enjoyable research discussions at UIC-14 and will have a lovely time during their stay on Bali Island, Indonesia.

Yu Zheng, Microsoft Research, China
Takahiro Hara, Osaka University, Japan
Gregor Schiele, DERI, Ireland

UIC 2014 Program Chairs


Message from the UIC 2014

Steering Chairs UIC-ATC-ScalCom 2014

Welcome to the 11th IEEE International Conference on Ubiquitous Intelligence and Computing

(UIC-14). It is our great pleasure and honor to hold UIC-14 in Ayodya Resort, Bali, Indonesia,

December 9-12, 2014. On behalf of the UIC steering committee and the UIC-14 organizing committee,

we would like to express to all the participants, especially our visitors from different countries, our

cordial welcome and high respect.

Ubiquitous sensors, devices, networks and information are paving the way towards a smart world in

which computational intelligence is distributed throughout the physical environment to provide reliable

and relevant services to people. This ubiquitous intelligence will change the computing landscape

because it will enable new breeds of applications and systems to be developed and the realm of

computing possibilities will be significantly extended. By enhancing everyday objects with

intelligence, many tasks and processes could be simplified, and the physical spaces where people interact, such as workplaces and homes, could become more efficient, safer, and more enjoyable. Ubiquitous

computing, or pervasive computing, uses these many "smart things or u-things" to create smart

environments, services and applications.

A smart thing can be endowed with different levels of intelligence, and may be context-aware,

active, interactive, reactive, proactive, assistive, adaptive, automated, sentient, perceptual, cognitive,

autonomic and/or thinking. Research on ubiquitous intelligence is an emerging research field covering

many disciplines. A series of grand challenges exist to move from the current level of computing

services to the smart world of adaptive and intelligent services. UIC-14 is the event in a series of highly

successful International Conferences on Ubiquitous Intelligence and Computing (UIC), previously held

as USW-05 (Taipei, Taiwan, March 2005), UISW-05 (Nagasaki, Japan, December 2005), UIC-06

(Three Gorges and Wuhan, China, September 2006), UIC-07 (Hong Kong, China, July 2007), UIC-08

(Oslo, Norway, July 2008), UIC-09 (Brisbane, Australia, July 2009), UIC-10 (Xian, China, October

2010), UIC-11 (Banff, Canada, September 2011), UIC-12 (Fukuoka, Japan, September 2012) and UIC-

13(Vietri sul Mare, Italy, December 2013).

An international conference can only be organized with the support and great voluntary efforts of many people and organizations, and our main responsibility is to coordinate the various tasks carried out by other

willing and talented volunteers. We would like to thank the General Chairs: Zhiwen Yu (Northwestern

Polytechnical University, China), Christian Becker (University of Mannheim, Germany) and Kenji

Mase (Nagoya University, Japan), as well as General Executive Chairs: Bernady O. Apduhan (Kyushu

Sangyo University, Japan) and Abdul Hanan Abdullah (Universiti Teknologi Malaysia (UTM),

Malaysia) for the very successful organization of UIC-14. We would like to express our special thanks to

the Program Chairs Yu Zheng (Microsoft Research, China), Takahiro Hara (Osaka University, Japan)

and Gregor Schiele (DERI, Ireland) with the Program Vice-chairs: Wen-Chih Peng (National Chiao

Tung University, Taiwan) and Chiu C. Tan (Temple University, USA) for putting together an excellent technical program.

We greatly appreciate the Workshop/Symposium Chairs: Mike Chieh-Jan Liang

(Microsoft Research, China) and Yuqing Sun (Shandong University, China) for organizing excellent

UIC-14 workshops/symposia. We also thank all chairs and committee members of the UIC-14

workshops/symposia. Without workshops/symposia, we could not successfully organize UIC-14. We


also would like to express our appreciation for the excellent local team under the leadership of Bernady

O. Apduhan (Kyushu Sangyo University, Japan), I Made Sarjana and Ni Ketut Dewi Ari Jayanti

(STIKOM Bali, Indonesia) for the perfect local arrangements, as well as for the hard work from Sazzad

Hussain (St Francis Xavier University, Canada) for his maintenance and help on UIC-14 submission

system and web site.

Finally, we also would like to take this opportunity to thank all the members of the organizing committee,

the program vice-chairs and technical program committee as well as all of the authors who submitted

papers, and the reviewers who reviewed a huge number of papers.

Jianhua Ma, Hosei University, Japan

Laurence T. Yang, St Francis Xavier University, Canada

UIC 2014 Steering Chairs


Message from the UIC 2014

Workshop/Symposium Chairs UIC-ATC-ScalCom 2014

Welcome to the Workshops and Symposia held in conjunction with the 11th IEEE International

Conference on Ubiquitous Intelligence and Computing (UIC-14), December 09-12, 2014, Bali, Indonesia.

This year’s program includes two symposia and one workshop which comprise a total of 41 papers that

cover a wide range of hot research topics addressing issues at the frontiers of ubiquitous intelligence related research, including systems, services, security, networking, green issues, information processing and management, embedded systems, and applications.

• The 2014 International Symposium on UbiCom Frontiers - Innovative Research, Systems and Technologies (UFirst-14)

• The 2014 International Symposium on Ubiquitous Systems and Data Engineering (USDE-14)

• The 2014 International Workshop on Pervasive and Ubiquitous Data Analytics (PUDA-14)

The workshops were selected by considering the quality of the proposal, its adequacy to the scope of UIC-14, and the CVs of the organizers. The workshops provide a focused treatment of specific topics in the UIC-14

field. We believe these workshops are an excellent complement to the overall scope of the main

conference, and give additional value and interest to UIC-14. We hope that all the selected papers will

have significant impact for future research in the respective research areas. We sincerely appreciate the

hard work of the workshop organizers in designing the call for papers, assembling the program

committee, managing the peer-review process for the selection of papers, and planning the workshop

program. Thanks are extended to the workshop program committees, external reviewers, session chairs,

contributing authors and attendees.

We also wish to acknowledge the support and guidance from our links with the conference

organization, specially the Steering Committee: Jianhua Ma (Hosei University, Japan), Laurence T. Yang

(St Francis Xavier University, Canada), Sumi Helal (University of Florida, USA), Daqing Zhang (Institut

Telecom SudParis, France), Jadwiga Indulska (University of Queensland, Australia) and Theo Ungerer

(University of Augsburg, Germany). It has been a pleasure to work with them. In addition, we would like

to express our deepest appreciation to Bernady O. Apduhan and Abdul Hanan Abdullah for arranging all matters of this great event. With their help, we believe that you will enjoy UIC-14 and that the event will be a success.

Finally, we hope that you find the symposium and the workshops quite interesting and stimulating. Also

enjoy UIC-14 and the nice island of Bali, Indonesia.

Mike Chieh-Jan Liang, Microsoft Research, China

Yuqing Sun, Shandong University, China

UIC 2014 Workshop/Symposium Chairs


UIC 2014 Conference Organization UIC-ATC-ScalCom 2014

Honorary Chairs

Dadang Hermawan, STIKOM-Bali, Indonesia Xingshe Zhou, Northwestern Polytechnical University, China

General Chairs

Zhiwen Yu, Northwestern Polytechnical University, China Christian Becker, University of Mannheim, Germany

Kenji Mase, Nagoya University, Japan

Executive Chairs Bernady O. Apduhan, Kyushu Sangyo University, Japan

Abdul Hanan Abdullah, UTM, Malaysia

Program Chairs Yu Zheng, Microsoft Research, China

Takahiro Hara, Osaka University, Japan Gregor Schiele, DERI, Ireland

Program Vice-Chairs

Wen-Chih Peng, National Chiao Tung University, Taiwan Chiu C. Tan, Temple University, USA

Workshop Chairs

Mike Chieh-Jan Liang, Microsoft Research, China Yuqing Sun, Shandong University, China

Demo/Exhibit Chairs

Sozo Inoue, Kyushu Institute of Technology, Japan Feilong Tang, Shanghai Jiao Tong University, China

Advisory Committee

Stephen S. Yau (Chair), Arizona State University, USA Beniamino Di Martino, Second University of Naples, Italy

Ahhwee Tan, Nanyang Technological University, Singapore Chung-Ming Huang, National Cheng Kung University, Taiwan Hai Jin, Huazhong University of Science & Technology, China Jiannong Cao, Hong Kong Polytechnic University, Hong Kong

Max Muehlhaeuser, Darmstadt University of Technology, Germany Mohan Kumar, University of Texas at Arlington, USA

Yuanchun Shi, Tsinghua University, China Zhaohui Wu, Zhejiang University, China


Steering Committee Jianhua Ma (Chair), Hosei University, Japan

Laurence T. Yang (Chair), St. Francis Xavier University, Canada Sumi Helal, University of Florida, USA

Daqing Zhang, Institut Telecom SudParis and Peking University, France Jadwiga Indulska, University of Queensland, Australia

Theo Ungerer, University of Augsburg, Germany

Publicity Chairs Zhiyong Yu, Fuzhou University, China

Al-Sakib Khan Pathan, International Islamic University, Malaysia Konstantinos Pelechrinis, University of Pittsburgh, USA

Armin Lawi, Hasanuddin University, Indonesia

Panel Chairs Robert C. Hsu, Chung Hua University, Taiwan

Sung-Bae Cho, Yonsei University, Korea

Award Chairs Frode Eika Sandnes, Oslo University College, Norway

Runhe Huang, Hosei University, Japan

International Liaison Chairs Yo-Ping Huang, National Taipei University of Technology, Taiwan

Yuichi Nakamura, Kyoto University, Japan Guanling Chen, University of Massachusetts Lowell, USA

Artur Lugmayr, Tampere University of Technology, Finland Antonio Liotta, Eindhoven University of Technology, the Netherlands

Industrial Liaison Chair

Gang Pan, Zhejiang University, China

Local Arrangements Committee I Made Sarjana (Chair), STIKOM Bali, Indonesia Ni Luh Putri Srinadi, STIKOM-Bali, Indonesia Dedy Panji Agustino, STIKOM-Bali, Indonesia

A.A. Sg. Putri Rani Prihastini, STIKOM-Bali, Indonesia I Gede Harsemadi, STIKOM-Bali, Indonesia

Ni Ketut Dewi Ari Jayanti, STIKOM-Bali, Indonesia Candra Ahmadi, STIKOM-Bali, Indonesia

Ricky Aurelius Nurtanto Diaz, STIKOM-Bali, Indonesia I Gede Putu Krisna Juliharta, STIKOM-Bali, Indonesia

I Ketut Dedy Suryawan, STIKOM-Bali, Indonesia Ni Kadek Sumiari, STIKOM-Bali, Indonesia

Dian Rahmani Putri, STIKOM-Bali, Indonesia Amos Lillo, STIKOM-Bali, Indonesia

Web Chair

Sazzad Hussain, St. Francis Xavier University, Canada


UIC 2014 Program Committee UIC-ATC-ScalCom 2014

Jie Bao, Microsoft Research, China

Miriam Capretz, University of Western Ontario, Canada Ben-Jye Chang, National Yunlin University of Science and Technology, Taiwan

Chin-Chih Chang, Chung Hua University, Taiwan Yi-Chun Chang, Hungkuang University, Taiwan Chao Chen, Institut TELECOM SudParis, France

Tzung-Shi Chen, National University of Tainan, Taiwan Yen-Da Chen, Lunghwa University of Science and Technology, Taiwan

Andreas Emrich, DFKI, Germany Weiwei Fang, Beijing Jiaotong University, China

Nicolas Ferry, SINTEF, Norway Giancarlo Fortino, University of Calabria, Italy

Qiang Fu, Victoria University of Wellington, New Zealand Bin Guo, Northwestern Polytechnical University, China

Fei Hao, Huazhong University of Science and Technology, China Amir Hoseinitabatabaei, University of Surrey, United Kingdom

Bin Hu, Lanzhou University, China Chi-Fu Huang, National Chung Cheng University, Taiwan

Yu Huang, Nanjing University, China Che-Lun (Allen) Hung, Providence University, Taiwan

Jason Hung, Overseas Chinese University, Taiwan Ren-Hung Hwang, Taiwan Network Information Center, Taiwan

Fuyuki Ishikawa, National Institute of Informatics, Japan Beihong Jin, Chinese Academy of Sciences, China

Yuka Kato, Tokyo Woman's Christian University, Japan Youngho Lee, Mokpo National University, South Korea

Hong Va Leong, Hong Kong Polytechnic University, China Jian-Wei Li, Chaoyang University of Technology, Taiwan

Chiu-Kuo Liang, Chung Hua University, Taiwan Yunji Liang, University of Arizona, USA

Antonio Liotta, Technische Universiteit Eindhoven, the Netherlands Yong Liu, Microsoft, USA

Seng Loke, La Trobe University, Australia Sanjay Madria, Missouri University of Science and Technology, USA

Alessandra Mileo, Insight, National University of Ireland, Galway, Ireland Rajib Rana, CSIRO, Australia

Choonsung Shin, Carnegie Mellon University, USA Francois Siewe, De Montfort University, United Kingdom

Stephan Sigg, Technische Universität Braunschweig, Germany Takuo Suganuma, Tohoku University, Japan

Guangzhong Sun, University of Science and Technology of China, China Chiu Chiang Tan, Temple University, USA

Vahid Taslimi, Wright State University, USA Jilei Tian, Nokia Research Center, China

Akira Uchiyama, Osaka University, Japan Athanasios Vasilakos, National Technical University of Athens, Greece


Chun-Hsin Wang, Chung Hua University, Taiwan Yufeng Wang, Nanjing University of Posts & Telecommunications, China

Zhu Wang, Northwestern Polytechnical University, China I-Chen Wu, National Chiao Tung University, Taiwan

Lei Xie, Nanjing University, China Hirozumi Yamaguchi, Osaka University, Japan

Xinqing Yan, North China University of Water Resources and Electric Power, China Chao-Tung Yang, Tunghai University, Taiwan

Dingqi Yang, Institut TELECOM SudParis, France Muhammad Younas, Oxford Brookes University, United Kingdom

Zhiyong Yu, Institut TELECOM SudParis, France Chang-Wu Yu (James), Chung Hua University, Taiwan

Nicholas Jing Yuan, Microsoft Research Asia, China Daqiang Zhang, Nanjing Normal University, China

Zhong Zhang, University of Texas at Arlington, USA


Obstacle Avoidance for Visually Impaired Using Auto-adaptive Thresholding on Kinect’s Depth Image

Muhamad Risqi Utama Saputra, Widyawan, Paulus Insap Santosa Department of Electrical Engineering and Information Technology

Universitas Gadjah Mada, Yogyakarta, Indonesia [email protected], [email protected], [email protected]

Abstract—Visually impaired people need assistance to navigate safely, especially in indoor environments. This research developed an obstacle avoidance system for the visually impaired using the Kinect depth camera as the main vision device. A new approach called auto-adaptive thresholding is proposed to detect obstacles and to calculate their distance from the user. The proposed method divides a depth image equally into three areas. It finds the optimal threshold value automatically (auto), and this value varies among those areas (adaptive). Based on that threshold value, the distance of the closest obstacle in each area is determined by an averaging function. To respond to the presence of an obstacle, the system gives sound and voice feedback to the user through an earphone. The experimental results show that the execution time and the error of the system in calculating the obstacle distance are 12.24 ms and 130.796 mm, respectively. Evaluation with blindfolded persons indicates that the system could successfully guide them to avoid obstacles in real time.

Keywords—obstacle avoidance; visually impaired; auto-adaptive thresholding; assistive technology; wearable device;

I. INTRODUCTION

According to the World Health Organization (2012), there are 285 million visually impaired people in the world, 39 million of whom are completely blind [1]. These people greatly need assistance to carry out daily activities. One of the most difficult activities that must be carried out by the visually impaired is indoor navigation. In an indoor environment, the visually impaired should be aware of obstacles in front of them and be able to avoid them. To navigate safely, most of them use a white cane or a guide dog. A white cane can explore the environment at an average distance of 1.5 meters [2]. However, the white cane has some drawbacks: (1) its ability to explore the environment is limited to its length and to the position where the user points the cane (every other position is a blind spot), and (2) it cannot automatically show which path is free from obstacles. On the other hand, a guide dog can show the user where the free path is, but it is expensive [3] and needs to be trained. For these reasons, the need to develop Assistive Technology (AT) to aid the visually impaired in avoiding obstacles is quite high.

In many studies, researchers have developed AT based on a laser or ultrasonic sensor fused with a cane. Wahab, et al. [4] combine a cane with an ultrasonic sensor and a voice alert system to create the “Smart Cane”. The Smart Cane emits an ultrasonic signal and calculates the time interval between sending the signal and receiving the echo to determine the distance of an obstacle. The system then alerts the user using voice feedback based on three different types of distance: far, medium, and close. However, the voice feedback can become too repetitive if it is not handled properly, which confuses the user. To avoid this shortcoming, instead of using voice feedback, Mitsuhiro Okayasu [5] uses a vibrator to alert the user when an obstacle is detected.

Another study on AT for the visually impaired was conducted by Benjamin [6]. He developed a tool called the “C-5 Laser Cane”, a cane that is equipped with laser technology. The C-5 Laser Cane emits pulses of infrared light and catches the reflected signal with a photodiode placed behind a receiving lens. Based on the angle made by the diffusely reflected ray, triangulation is used to calculate the distance of the object. To notify the user about the detection of an obstacle, the system signals the user with a high-pitched “beep”. Resembling what Benjamin did, Ahlmark, et al. [7] also use a laser to determine the distance of the object, but the system is incorporated into a wheelchair to help blind people with motion impairment. The laser rangefinder itself is viewed as a virtual white cane. In addition, the system utilizes haptic technology (technology of the sense of touch) to inform the user about the environment that has been explored. Many other researchers have also used laser and ultrasonic sensors, but instead of combining them with a cane, they merged them with a wearable device [8], [9] or a robot [10].

Besides laser and ultrasonic technology, researchers have recently started to use depth cameras as an AT option. This has been happening since affordable depth cameras, such as the Microsoft Kinect and Asus Xtion, became available on the market. A depth camera can be used to detect obstacles because it provides an image that contains depth information in each pixel, but further processing is needed to do so. Steve Mann, et al. [11] use a 1-to-1 center-weighted mapping on the depth image to develop a collision avoidance system. The system divides the depth image into 6 areas and maps the weighted distance of each area to a vibration to inform the user about the environment. The closer the distance, the stronger the vibration, so the user can avoid


obstacles when the vibration in a specific area becomes stronger than the others. A similar study conducted by D. Bernabei, et al. [12] utilizes a depth camera to find the farthest point reachable by the visually impaired user, considering his/her height and width. The system produces a low-resolution depth map containing a quantization of the space in front of the user into the objects in the scene. Based on that map, the system decides what feedback to give to the user, for example to change direction or to stop.

Using a different approach, Atif Khan, et al. [3] also use a depth camera, but their system splits the depth image into 15 areas and views each area as an obstacle metric. He created a recommendation system based on the smallest probability that an area has an obstacle. To improve efficiency, the system transforms the 640x480-pixel depth image into 32x40 pixels by calculating the average depth value in each block. Then, the feedback is given to the user through voice messages using text-to-speech technology. In order to find the closest object, Zollner, et al. [13] use a moving depth window to examine whether an area in the depth histogram exceeds a certain threshold area or not. If the observed area surpasses 4% of a region, the system informs the user through a vibrotactile waist belt. Another method implements the marching squares algorithm on the depth image to detect obstacles [14]. The system works continuously by down-sampling the depth data into a low-resolution image, splitting the depth data into isolated structures at different depth levels, and sonifying obstacle information to the user's headphones.

The objective of this research is similar to that of the work mentioned above: to develop a collision avoidance system for the visually impaired. However, this research proposes a new approach to detect obstacles and calculate their distance using auto-adaptive thresholding on the depth image. A depth camera was chosen in this research because it is inexpensive and easy to work with. The design and configuration of the system are covered in the next section. The third section explains the collision avoidance method. The fourth section then shows the experimental results. Finally, the last section sums up the results and discusses further research.

II. SYSTEM CONFIGURATION

The components of the system consist of 3 main parts: (1) a depth camera, (2) a notebook/computer tablet with a USB hub, and (3) an earphone. The depth camera used in this research is the Microsoft Kinect. The Kinect provides a depth image with 640x480 resolution at 30 frames per second. The physical limits of the Kinect for measuring depth information are within a range of 0.8 meter to 4 meters, with horizontal and vertical angles of vision of 57.5° and 43.5°, respectively [15]. These physical limits are considered suitable for developing a collision avoidance system because they give the visually impaired user enough time to avoid an obstacle once it is detected within that range. In order to alert the user about the obstacle, the system informs him or her through an earphone. Fig. 1 depicts how these 3 main parts are used by the visually impaired. The Kinect is worn in front of the stomach like a belt so that it can capture objects both at floor level and at human height.


Fig. 1. The prototype of the system consists of 3 main parts: (1) Microsoft Kinect, (2) laptop, and (3) earphone. The blindfolded man in this picture is wearing the system while conducting an experiment.

Fig. 2 describes the data flow between the Kinect, the laptop, and the user. The Kinect transmits raw depth data to the system on the computer. The system then processes the raw depth data and converts it into meaningful information, namely sound alert notifications and voice recommendations. Finally, the system sends this information to the user through an earphone.


Fig. 2. Data flow between Kinect, computer, and visually impaired.
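To make this data flow concrete, here is a minimal, illustrative Python sketch of the main loop; it is not code from the paper. The names kinect, audio_out, estimate_distances, and the 1000 mm "stop" threshold are hypothetical stand-ins for the real Kinect driver, audio output, and the per-area distance estimator described in Section III. The twice-per-second decision rate and the "every 15th frame" check come from the paper's own description of the method.

```python
import time

def choose_feedback(distances_mm):
    """Pick the area (left/middle/right) with the most free space in front of it.
    The 1000 mm 'stop' threshold is an illustrative assumption, not a value from the paper."""
    areas = ("left", "middle", "right")
    best = max(range(3), key=lambda i: distances_mm[i])
    return "stop" if distances_mm[best] < 1000 else "move " + areas[best]

def run_obstacle_avoidance(kinect, audio_out, estimate_distances,
                           fps=30, decisions_per_second=2):
    """Hypothetical main loop: read Kinect depth frames and, twice per second
    (every 15th frame at 30 fps), convert them into voice feedback for the user."""
    frame_count = 0
    step = fps // decisions_per_second
    while True:
        depth_frame = kinect.get_depth_frame()            # raw depth data, mm per pixel
        frame_count += 1
        if frame_count % step == 0:
            distances = estimate_distances(depth_frame)   # closest obstacle per area (Section III)
            audio_out.say(choose_feedback(distances))     # delivered through the earphone
        time.sleep(1.0 / fps)
```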

III. OBSTACLE AVOIDANCE

A. Basic Principle


Fig. 3. The correlation between (1) colour image, (2) depth image, and (3) depth histogram. The area of depth histogram surrounded by black circle indicates the object (chair) in depth image.

The idea behind the proposed method is depicted in Fig. 3. As previously mentioned, a depth image provides depth information in each pixel, and it can be viewed as a depth histogram. Fig. 3 describes the correlation between the colour image, the depth image, and the depth histogram. In the picture, there are 2 main objects at different distances (indicated by different colours in the depth image): a wall and a chair. If an image composed of those 2 objects is converted into a depth histogram, the result is 2 peaks with different local maxima, one corresponding to each object. The point is that a local maximum in the depth histogram generally means an object. In order to find the closest object, the process is simply to set a threshold value that separates the closest object from the other objects behind it. However, this method does not always work when the scene in the picture is too complicated and cluttered. Therefore, this research uses the auto-adaptive thresholding method to deal with it.
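As a rough illustration of this principle (not code from the paper), the sketch below takes the depth values of one area and a chosen threshold, and estimates the closest object's distance as the average depth of the pixels below that threshold, following the "average function" mentioned in the abstract; the function name is ours.

```python
import numpy as np

def closest_object_distance(depth_values_mm: np.ndarray, threshold_mm: float) -> float:
    """Treat every valid pixel nearer than the threshold as part of the closest object
    and return its distance as the average depth of those pixels ('average function').
    Returns NaN when no pixel lies below the threshold."""
    near = depth_values_mm[(depth_values_mm > 0) & (depth_values_mm <= threshold_mm)]
    return float(near.mean()) if near.size else float("nan")
```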

B. Auto-adaptive Thresholding

Auto-adaptive thresholding on a depth image refers to a method that generates the optimal threshold value automatically (auto), and the value varies among the different areas of the depth image (adaptive). This method yields more than one threshold value because it is sometimes too difficult to isolate the closest object with only a single threshold value. Therefore, the depth image is divided into several areas and each area has its own threshold value. The complete process is described in Fig. 4.

[Fig. 4 flowchart: Depth Image Acquisition → Dividing Depth Image into 3 Areas → Down-sampling and Depth Histogram Conversion → Peak Detection and Selection → Threshold Selection Using the Otsu Method → Distance Calculation; the pipeline is executed whenever the number of frames is divisible by 15.]

Fig. 4. The flowchart of the auto-adaptive thresholding method. The output of this method is the distance of the closest object from the user (in millimeters).

1) Depth Image Acquisition

Auto-adaptive thresholding begins by acquiring the depth image from the Kinect. The raw data from the Kinect contain depth information for each pixel, and they need to be converted into an 8-bit grayscale image to be visualized. The conversion is performed by equation (1):

i_n = 255 - \frac{255 \times \max(d_n - 800,\ 0)}{3200} \qquad (1)

where i_n is the nth pixel of the grayscale image and d_n is the nth depth value of the depth image. To reduce the high error caused by the Kinect's low accuracy at ranges closer than 800 mm and farther than 4000 mm, values beyond that range are set to 0 mm by equation (2):

d_n = \begin{cases} d_n, & 800 \le d_n \le 4000 \\ 0, & d_n < 800 \ \text{or}\ d_n > 4000 \end{cases} \qquad (2)
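As an illustration, equations (1) and (2) might be implemented as in the following Python sketch (the use of NumPy and the function name are assumptions for illustration; the paper does not publish code):

import numpy as np

def depth_to_grayscale(depth_mm):
    """Literal implementation of equations (1) and (2): zero out depth
    readings outside the reliable 800-4000 mm range, then map the
    remaining values to an 8-bit grayscale image (nearer = brighter)."""
    d = np.asarray(depth_mm, dtype=np.float32)
    # Equation (2): discard unreliable readings outside 800-4000 mm.
    d = np.where((d >= 800) & (d <= 4000), d, 0)
    # Equation (1): 800 mm maps to 255, 4000 mm maps to 0.
    # Note: pixels zeroed by equation (2) also map to 255 under the literal formula.
    gray = 255 - 255 * np.maximum(d - 800, 0) / 3200
    return gray.astype(np.uint8)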

2) Dividing Depth Image into 3 Areas

The second step is dividing the depth image into 3 equal areas (as shown in Fig. 5): a left area, a middle area, and a right area. Each area of the depth image is associated with a path that the visually impaired user can traverse when the system gives an obstacle avoidance recommendation. For example, when there is an obstacle in the middle area, the system recommends moving to the right area to avoid it. The depth image is divided into only 3 areas because the movements of the visually impaired are imprecise [12]; with only 3 options, the user can easily understand and execute the commands from the system. This division and the processes that follow it are carried out twice per second, so as not to burden the system excessively. As previously mentioned, each area has its own threshold value, so the next steps are applied to each area separately.

Fig. 5. The depth image is divided into 3 areas: left, middle, and right. Each area represents a path that the visually impaired user can traverse.

3) Down-sampling and Depth Histogram Conversion

Down-sampling is conducted to accelerate the computation and to avoid processing unnecessary detail in the depth image. For each 2x2 pixel block of the depth image, only 1 pixel is used, so the total number of pixels to be processed is about 19200. The data from each depth image area are then converted into a depth histogram by classifying them into 100 groups (the interval between groups is 40 mm).
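A minimal sketch of steps 2) and 3) combined, splitting the frame into three areas, 2x2 down-sampling, and 100-bin histogram conversion, could look like this (NumPy-based, with illustrative names):

import numpy as np

def area_histograms(depth_mm):
    """Split a depth frame into left, middle, and right thirds, keep one
    pixel per 2x2 block, and bin each third into a 100-bin histogram with
    40 mm intervals covering 0-4000 mm."""
    d = np.asarray(depth_mm, dtype=np.float32)
    d = d[::2, ::2]                        # 2x2 down-sampling: one pixel per block
    left, middle, right = np.array_split(d, 3, axis=1)
    bins = np.arange(0, 4001, 40)          # 101 edges -> 100 intervals of 40 mm each
    return [np.histogram(a, bins=bins)[0] for a in (left, middle, right)]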

4) Peak Detection and Selection

To discover objects in each area of the depth image, the system must be able to find the local maxima in each depth histogram. Each local maximum can be found by using the contrast function shown in equation (3):

contrast(i, n) = \sum_{k=i-n}^{i+n} p_k \;-\; \sum_{k=i-2n}^{i-n-1} p_k \;-\; \sum_{k=i+n+1}^{i+2n} p_k \qquad (3)

where i is the observed position, n is the parameter for adding up the contrast of the peak and its neighbours, and p_k is the value at position k in the depth histogram. The parameter n is used to filter out noise and unexpected local peak positions [16]. The value of n used in this research is 1.25, determined experimentally.
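A sketch of this contrast computation, assuming an integer neighbourhood width n (the fractional n = 1.25 reported above would require a scaled or interpolated window), is:

import numpy as np

def contrast(hist, i, n):
    """Contrast of histogram bin i against its neighbourhood (equation (3)):
    the mass of the peak window [i-n, i+n] minus the mass of the two
    flanking windows of width n on either side of it."""
    p = np.asarray(hist, dtype=np.float32)
    peak  = p[max(i - n, 0):i + n + 1].sum()
    left  = p[max(i - 2 * n, 0):max(i - n, 0)].sum()
    right = p[i + n + 1:i + 2 * n + 1].sum()
    return float(peak - left - right)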


Not all local maxima discovered by the contrast function represent an object. To be considered an object, the contrast value of a local maximum has to meet these criteria:

• The minimum distance between one local maximum and another in the depth histogram is 4 bins (positive or negative); otherwise it is treated as part of the same object.

• The minimum contrast value is 50; otherwise the system considers it too small to be an obstacle. This number is determined based on observation.

After all local maxima are found, the system chooses the two closest to the Kinect (as shown in Fig. 6). Only these two local maxima are processed in the next step.
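Combining the contrast function with the two criteria above, peak detection and selection might be sketched as follows (this reuses the contrast() sketch given earlier; parameter names are illustrative):

def select_two_closest_peaks(hist, n=1, min_separation=4, min_contrast=50):
    """Keep local maxima whose contrast is at least 50 and which lie at
    least 4 bins from an already accepted peak, then return the two peaks
    nearest the Kinect (i.e. with the smallest bin index / depth)."""
    peaks = []
    for i in range(1, len(hist) - 1):
        if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]:
            c = contrast(hist, i, n)
            if c >= min_contrast and all(abs(i - j) >= min_separation for j, _ in peaks):
                peaks.append((i, c))
    return peaks[:2]   # bins are scanned near-to-far, so the first two are the closest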

Fig. 6. The two local maxima closest to the Kinect are marked with red circles; the others are marked with blue circles. Only these two closest local maxima are processed in the next step.

5) Threshold Selection Using Otsu Method

To find the most optimal threshold value automatically, the system uses the Otsu thresholding method, one of the oldest and best automatic thresholding methods [17]. It is an unsupervised method developed by Nobuyuki Otsu in 1979. In the Otsu method, the best threshold value is determined by a discriminant criterion, namely maximizing the separability between the two classes produced by the threshold. This is done by finding the maximum variance between the two classes (between-class variance) over all possible threshold values [18]. Given two classes C0 and C1, the Otsu method maximizes the between-class variance using equation (4):

\sigma^2(k^*) = \max_{1 \le k < L} \sigma^2(k) \qquad (4)

where k* is the threshold value that maximizes the between-class variance, σ²(k) is the between-class variance for threshold value k, and k ranges over every possible threshold value between 1 and the maximum level L in the image. The between-class variance itself is computed by equation (5):

\sigma^2 = \omega_0 (\mu_0 - \mu_T)^2 + \omega_1 (\mu_1 - \mu_T)^2 \qquad (5)

where σ² is the between-class variance, ω₀ is the probability of class C0, ω₁ is the probability of class C1, μ₀ is the mean of class C0, and μ₁ is the mean of class C1. The value of μ_T is easily computed by equation (6) because μ_T is the overall mean of the whole image:

\mu_T = \omega_0 \mu_0 + \omega_1 \mu_1 \qquad (6)

The Otsu method is used to determine the best threshold value between the two local maxima generated by the peak detection and selection step described above. In this case, class C0 most likely corresponds to the part of the depth histogram that belongs to the closest object, whereas class C1 belongs to the objects behind it. After this step, the system is able to separate the closest object in each area of the depth image from the other objects behind it.
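A sketch of this Otsu search, restricted to the histogram bins lying between the two selected peaks, is given below (function and variable names are illustrative):

import numpy as np

def otsu_threshold_between(hist, k_lo, k_hi):
    """Return the bin between the two selected peaks that maximizes the
    between-class variance of equations (4)-(6)."""
    p = np.asarray(hist[k_lo:k_hi + 1], dtype=np.float64)
    if p.sum() == 0:
        return k_lo
    p = p / p.sum()                                   # class probabilities
    levels = np.arange(k_lo, k_hi + 1, dtype=np.float64)
    best_k, best_var = k_lo, -1.0
    for split in range(1, len(p)):
        w0, w1 = p[:split].sum(), p[split:].sum()     # omega_0, omega_1
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:split] * p[:split]).sum() / w0
        mu1 = (levels[split:] * p[split:]).sum() / w1
        mu_t = w0 * mu0 + w1 * mu1                    # equation (6)
        var = w0 * (mu0 - mu_t) ** 2 + w1 * (mu1 - mu_t) ** 2  # equation (5)
        if var > best_var:                            # equation (4): keep the maximum
            best_var, best_k = var, int(levels[split])
    return best_k   # bin index; multiply by the 40 mm bin width to get millimeters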

Fig. 7. Example of the calculated threshold value (in millimeters) and the distance of the closest object (in millimeters). The gold-coloured area in the depth image indicates pixels closer than the threshold value. In the left picture, the colour of the left area of the depth image and the distance value turn red when the distance is closer than 1000 mm.

6) Distance Calculation

Finally, the distance of the closest object in each area of the depth image is determined by calculating the average of the depth values that are smaller than the threshold value. This is done using equation (7):

x_j = \frac{1}{n} \sum_{k=1}^{n} i_k, \qquad i_k < t_j \qquad (7)

where x_j is the average distance of the closest object in depth image area j, i_k is the depth value at pixel position k in area j, t_j is the threshold value of area j, and n is the number of pixels i_k below the threshold. This equation is computed for each area of the depth image, so the result is three distance values, one per area. Fig. 7 shows an example of the calculated threshold value and the distance of the closest object in millimeters.
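Equation (7) amounts to a simple masked average, for example:

import numpy as np

def closest_object_distance(depth_area_mm, threshold_mm):
    """Equation (7): the estimated distance (in millimeters) of the closest
    object in one area, i.e. the mean of all valid depth values in that
    area that fall below the area's threshold."""
    d = np.asarray(depth_area_mm, dtype=np.float32)
    below = d[(d > 0) & (d < threshold_mm)]   # ignore zeroed (invalid) pixels
    return float(below.mean()) if below.size else None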


C. Feedback Mechanism


Fig. 8. The system sends a voice recommendation or a sound notification to the user only when the obstacle reaches a specific distance from the user.

The closest object obtained from the auto-adaptive thresholding method is considered an obstacle if its distance is less than 1500 mm. The system gives a "beep" sound notification every 1.5 seconds when the distance of the closest object is between 1000 mm and 1500 mm. This is based on data on the walking speed of the visually impaired, which is 0.4 m/s when the person knows that there is an obstacle ahead [19]. So, if there is an obstacle at a distance of 1500 mm, there is enough time and space for the visually impaired user to avoid it before a collision happens. Then, when the distance to the obstacle reaches 1000 mm or less, the system gives a voice recommendation using text-to-speech technology. Fig. 8 depicts when the system sends the sound notification and the voice recommendation. Table I shows the voice feedback sent to the visually impaired user for each possible condition; a minimal sketch of this feedback logic follows the table.

TABLE I. VOICE RECOMMENDATION FOR EACH CONDITION

Condition | Voice Feedback
There is no obstacle in the middle area | "Go straight"
There is an obstacle in the middle area, but the right area is free | "Move away to the right"
There is an obstacle in the middle area, but the left area is free | "Move away to the left"
No area is free from obstacles | "Stop"
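A minimal sketch of this feedback logic, assuming an area counts as free when its closest object is at least 1500 mm away, is:

def feedback(left_mm, middle_mm, right_mm):
    """Map the three per-area distances to the feedback of Fig. 8 and Table I:
    a beep while the nearest obstacle is between 1000 and 1500 mm away, and a
    spoken recommendation once it is 1000 mm or closer."""
    def blocked(d):                       # assumption: an area is blocked below 1500 mm
        return d is not None and d < 1500

    distances = [d for d in (left_mm, middle_mm, right_mm) if d is not None]
    if not distances or min(distances) > 1500:
        return None                       # no obstacle close enough to report
    nearest = min(distances)
    if nearest > 1000:
        return "beep"                     # repeated every 1.5 s by the caller
    if not blocked(middle_mm):
        return "Go straight"
    if not blocked(right_mm):
        return "Move away to the right"
    if not blocked(left_mm):
        return "Move away to the left"
    return "Stop"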

IV. EXPERIMENTAL RESULT

An experiment was conducted to measure (1) the execution time and (2) the error between the distance obtained by the system and the real distance of the object in millimeters. The measurement was conducted at 9 positions (3 for each area of the depth image), as shown in Fig. 9. For each position, 6 types of obstacle were used, i.e., human, chair type 1, chair type 2, trash type 1, trash type 2, and walking direction. Table II shows the average execution time. As shown in that table, the average execution time is 12.24 ms with a standard deviation of 4.295. This result indicates that the execution time is fast enough for a real-time application.

[Fig. 9 layout: nine measurement positions in front of the Kinect, three per area (left, middle, right) at distances of 1, 2, and 3 meters, within the sensor's 800-4000 mm range.]

Fig. 9. The measurement positions used in the experiment. There are 9 positions in total, 3 in each area of the depth image. For each position, 6 types of obstacle were used.

TABLE II. AVERAGE EXECUTION TIME

Information | Value
Average execution time (ms) | 12.24
Fastest execution time (ms) | 7
Slowest execution time (ms) | 28
Standard deviation | 4.295

Fig. 10 shows the average error of the distance calculation. As seen in the figure, at distances of 1-2 meters the average error is less than 50 mm, but at a distance of 3 meters it increases to nearly 300 mm. This happens because of two contributing factors:

• The pixel accuracy of the Kinect's depth image decreases as the distance between the scene and the sensor increases, from a few millimeters at close range to about 4 cm at the sensor's maximum distance [20]. So, basically, the farther the distance, the higher the error.

• The method proposed in this research still cannot perfectly differentiate between the object and the floor, so at distances beyond 2500 mm both the floor and the object are detected. This decreases the accuracy of the calculation, but it is not dangerous for the visually impaired user because the object is still far away.

Fig. 10. The average error of the distance calculation for each measurement position.


Based on the data in Fig. 10, the average error of the system in calculating the distance of the closest object across all measurements is 130.796 mm. This average error is considered small for the visually impaired because their movement is imprecise. Besides that, the system gives a sound notification at a distance of 1500 mm, so even if the system misjudges the distance by a margin of error of about 130 mm, the visually impaired user still has adequate time and space to avoid the obstacle.

To evaluate the overall system under real-time conditions, an initial trial was conducted with 10 blindfolded persons aged 20 to 40 years. The trial was carried out in a walking corridor on the third floor of the Department of Electrical Engineering and Information Technology, Universitas Gadjah Mada. First, the participants were informed about how the system works. Then they were asked to walk indoors from one point to another and follow the instructions from the system. While they were walking, an obstacle blocked their navigation path, so the system could be tested for whether it worked correctly. The result of this initial evaluation was promising: all of the blindfolded persons could avoid the obstacle without colliding with it. Following the instructions from the system, 7 persons took the left path to avoid the obstacle, whereas the other 3 were guided to the right path. So, basically, the system successfully guided them even though they were directed along different paths. For further development, we will try to improve the algorithm and the accuracy of the system in detecting obstacles and calculating their distance. We will also add a marker detection system so that the visually impaired user can recognize points of interest in the indoor environment.

V. CONCLUSION

In this paper, an Assistive Technology (AT) to help the visually impaired avoid obstacles has been developed based on the Kinect's depth image technology. A new approach called auto-adaptive thresholding is proposed to calculate the distance of the closest object. Auto-adaptive thresholding searches for the most optimal threshold value automatically (auto) and varies it among the different areas of the depth image (adaptive). Based on that threshold value, the distance of the closest object is determined by an averaging function. The system then gives a sound notification and a voice recommendation when the obstacle is at a distance below 1500 mm. The experimental results show that the execution time for determining the closest object is 12.24 ms and that the average error in calculating the distance of the closest object is 130.796 mm. Moreover, an evaluation with 10 blindfolded persons indicates that the system can successfully guide them to avoid obstacles in real time.

REFERENCES

[1] World Health Organization, “Visual impairment and blindness, Fact Sheet N°282,” 2012. [Online]. Available: http://www.who.int/mediacentre/factsheets/fs282/en/.

[2] H. Takizawa, S. Yamaguchi, M. Aoyagi, N. Ezaki, and S. Mizuno, “Kinect cane: An assistive system for the visually impaired based on three-dimensional object recognition,” in IEEE/SICE International Symposium on System Integration (SII), Dec. 2012, pp. 740–745.

[3] A. Khan, F. Moideen, and J. Lopez, “KinDectect: Kinect Detecting Objects,” in ICCHP’12 Proceedings of the 13th International Conference on Computers Helping People with Special Needs - Volume Part II, 2012, vol. 7383, pp. 588–595.

[4] M. H. A. Wahab, A. A. Talib, H. A. Kadir, A. Johari, A. Noraziah, R. M. Sidek, and A. A. Mutalib, “Smart Cane : Assistive Cane for Visually-impaired People,” International Journal of Computer Science, vol. 8, no. 4, pp. 21–27, 2011.

[5] M. Okayasu, “Newly developed walking apparatus for identification of obstructions by visually impaired people,” Journal of Mechanical Science and Technology, vol. 24, no. 6, pp. 1261–1264, Jun. 2010.

[6] J. M. Benjamin, “The Laser Cane,” Journal of Rehabilitation Research & Development, vol. BPR 10–22, pp. 443–450, 1974.

[7] D. I. Ahlmark, H. Fredriksson, and K. Hyyppä, “Obstacle Avoidance Using Haptics and a Laser Rangefinder,” in IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), 2013.

[8] E. B. Kaiser and M. Lawo, “Wearable Navigation System for the Visually Impaired and Blind People,” in IEEE/ACIS 11th International Conference on Computer and Information Science, 2012, no. 1, pp. 230–233.

[9] S. Koley and R. Mishra, “Voice Operated Outdoor Navigation System For Visually Impaired Persons,” International Journal of Engineering Trends and Technology, vol. 3, no. 2, pp. 153–157, 2012.

[10] V. Kulyukin, C. Gharpure, J. Nicholson, and S. Pavithran, “RFID in Robot-Assisted Indoor Navigation for the Visually Impaired,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), 2004, vol. 2, pp. 1979–1984.

[11] S. Mann, J. Huang, R. Janzen, R. Lo, V. Rampersad, A. Chen, and T. Doha, “Blind Navigation With a Wearable Range Camera and Vibrotactile Helmet,” in Proceedings of the 19th ACM International Conference on Multimedia - MM ’11, 2011, p. 1325.

[12] D. Bernabei, F. Ganovelli, M. Di Benedetto, M. Dellepiane, and R. Scopigno, “A Low-Cost Time-Critical Obstacle Avoidance System for the Visually Impaired,” in International Conference on Indoor Positioning and Indoor Navigation (IPIN), Portugal, 2011, no. September, pp. 21–23.

[13] M. Zöllner, S. Huber, H. Jetter, and H. Reiterer, “NAVI – A Proof-of-Concept of a Mobile Navigational Aid for Visually Impaired Based on the Microsoft Kinect,” in Human-Computer Interaction – INTERACT 2011, 2011, pp. 584–587.

[14] M. Brock and P. O. Kristensson, “Supporting Blind Navigation using Depth Sensing and Sonification,” in ACM Conference on Pervasive and Ubiquitous Computing, 2013, pp. 255–258.

[15] Microsoft, Kinect for Windows, Human Interface Guidelines. 2013, pp. 1–135.

[16] L. Chen, X. Nguyen, and C. Liang, “Object segmentation method using depth slicing and region growing algorithms,” in International Conference on 3D Systems and Applications General, Tokyo, 2010, pp. 4–7.

[17] F. A. Jassim and F. H. Altaani, “Hybridization of Otsu Method and Median Filter for Color Image Segmentation,” International Journal of Soft Computing and Engineering (IJSCE), vol. 3, no. 2, pp. 69–74, 2013.

[18] N. Otsu, “A Threshold Selection Method from Gray-Level Histograms,” IEEE Transactions On Systems, Man, And Cybernetics, vol. SMC-9, no. 1, pp. 62–66, 1979.

[19] A. Riener and H. Hartl, “‘Personal Radar’: A Self-governed Support System to Enhance Environmental Perception,” in Proceedings of the 26th Annual BCS Interaction Specialist Group Conference on People and Computers, 2012, pp. 147–156.

[20] J. Han, L. Shao, D. Xu, and J. Shotton, “Enhanced computer vision with Microsoft Kinect sensor: a review,” IEEE Transactions on Cybernetics, vol. 43, no. 5, pp. 1318–1334, Oct. 2013.
