Research Article
New Bargaining Game Model for Collaborative Vehicular Network Services

Sungwook Kim

Department of Computer Science, Sogang University, 35 Baekbeom-ro (Sinsu-dong), Mapo-gu, Seoul 121-742, Republic of Korea

Correspondence should be addressed to Sungwook Kim; swkim01@sogang.ac.kr

Received 4 June 2018; Revised 14 July 2018; Accepted 7 February 2019; Published 7 March 2019

Academic Editor: Michael Vassilakopoulos

Copyright © 2019 Sungwook Kim. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The wireless industry's evolution from fourth generation (4G) to fifth generation (5G) will lead to extensive progress in new vehicular network environments, such as crowdsensing, cloud computing, and routing. Vehicular crowdsensing exploits the mobility of vehicles to provide location-based services, whereas vehicular cloud computing is a new hybrid technology that instantly uses vehicular network resources, such as computing, storage, and Internet for decision-making. In this study, novel crowdsensing, cloud computing, and routing algorithms are developed for a next-generation vehicular network, and they are combined into a hybrid scheme. To provide an effective solution toward an appropriate interactive vehicular network process, we focus on the paradigm of learning algorithms and game theory. Based on individual cooperation, our proposed scheme is designed as a triple-plane game model to adapt to the dynamics of a vehicular network system. The primary advantage of our game-based approach is to provide self-adaptability and responsiveness to current network environments. The performance of our hybrid scheme is evaluated and analyzed using simulation experiments in terms of the cloud service success ratio, normalized dissemination throughput, and crowdsensing success probability.

1. Introduction

With the rapid development of mobile communication and intelligent transportation technologies, the Vehicular Ad hoc NETwork (VANET) has emerged as an important part of the intelligent transportation system (ITS). VANET is a unique type of Mobile Ad hoc NETwork (MANET). The similarity between VANET and MANET is characterized by the movement and self-organization of nodes. However, the primary difference is that the nodes in VANETs are mobile vehicles; they move at much higher speeds and possess substantial power resources, which is an advantage over traditional MANETs. As a newly introduced technology, the goal of VANET research is to develop vehicle-to-vehicle (V2V) communications to enable quick and cost-efficient data sharing for enhanced vehicle safety and comfort [1–3].

Typically, vehicles send and receive messages using an on-board unit (OBU), which is a portable data collector hardware installed on the vehicle. By a combination of information technology and transport infrastructure, VANETs support a wide range of applications including traffic control, road safety, route optimization, multimedia entertainment, and social interactions. With the advancement in the latest vehicular technology, automobile manufacturers and academia are heavily engaged in the blueprint of VANETs. Currently, 10% of moving vehicles on the road are wirelessly connected. By 2020, it is reported that 90% of vehicles will be connected [1, 4, 5].

In VANET systems, each vehicle can exchange messages in V2V communications while acting as a sender, receiver, and router. In addition, these vehicles perform vehicle-to-roadside (V2R) communications with roadside units (RSUs) [6]. The RSU operates as an access point to provide services to moving vehicles at anytime and anywhere while connecting to the backbone network. As Internet access points, RSUs provide the communication connectivity to passing vehicles. In VANET environments, RSUs and OBUs in vehicles repeatedly interact with each other and perform control decisions individually without centralized coordination [1, 4–9].

During VANET operations services can be categorizedinto three fundamental classes vehicular cloud service (VCS)vehicular sensing service (VSS) and vehicular routing service

HindawiMobile Information SystemsVolume 2019 Article ID 6269475 11 pageshttpsdoiorg10115520196269475

(VRS) Vehicular cloud service is a new paradigm that exploitscloud computing to serve VANETs with several computa-tional services [10] Currently computing requirements forvehicular applications are increasing rapidly particularly withthe growing interest in embedding a new class of interactiveapplications and services Within the communication rangeof each RSU the RSU controls the VCS through interactionswith the vehicles and provides computation capacity storageand data resources Vehicles requiring resources can obtainthe available resources via interconnected RSUs ereforedepending on the roadside infrastructure the VCS provides asignificant market opportunity and overcomes challengesmaking the technology more cost-effective [10 11]

With the rapid development of VANETs, VSSs have broad potential for enhancing the public sensing domain. Vehicles with OBUs can be considered as mobile sensors that collect the local information requested by the RSUs. As a server, the RSU pays the VSS's participating vehicles according to their sensing qualities. As a type of vehicle-to-infrastructure communication, examples of VSSs are location-based road monitoring, real-time traffic condition reporting, pollution level measurements, and public information sharing. Owing to the mobility of the vehicles, the sensing coverage of location-based services can be expanded, and VSSs will benefit a wide scope of consumers [7, 8, 12].

In VANETs, vehicles are equipped with OBUs that can communicate with each other, which is also known as the VRS. The VRS requires a suitable V2V communication technology, especially the routing method, to address various challenges of VANETs. Typically, VANET is considered a subclass of MANET, and MANET-based routing protocols have been extensively studied. However, the VANET has several characteristics that distinguish it from MANET. To obtain adaptive vehicular routing protocols, high mobility, rapid network topology change, and an unlimited power supply must be considered to design an effective VRS algorithm [13–15].

In this study, we propose a new VANET control scheme by considering the issues of VCS, VRS, and VSS. Based on the joint design of different vehicular operations, our integrated approach can obtain a synergy effect while attaining an appropriate performance balance. However, it is an extremely challenging task to combine the VCS, VRS, and VSS algorithms into a holistic scheme. Therefore, a new control paradigm is required. Currently, game theory has been widely applied to analyze the performance of VANET systems. More specifically, VANET system agents are assumed to be game players, and the interaction among players can be formulated as a game model [6].

1.1. Motivation and Contribution. The aim of this study is to propose novel VCS, VRS, and VSS algorithms based on the different game models. For the VANET operations, game theory serves as an effective model of interactive situations among rational VANET agents. In the VCS algorithm, vehicles share the cloud resource in the RSU to increase the quality of service. The interactions among vehicles and RSUs are modeled as a non-cooperative game model. In the VRS algorithm, vehicles attempt to increase the quality and reliability of routing services. In the VSS algorithm, vehicles act as mobile sensors to collect the required information. As system servers, RSUs operate to share and process several types of collected information. To collect information, RSUs use a learning-based incentive mechanism, and individual vehicles adopt the cooperative bargaining approach among their VCSs, VSSs, and VRSs to balance their performance.

The proposed VCS, VRS, and VSS algorithms may need to coexist and complement each other to meet the diverse requirements of VANETs. To investigate the strategic interactions among different control algorithms, we develop a new triple-plane game model across the VCS, VRS, and VSS algorithms. For effective VANET management, vehicles in the VSS algorithm select their strategies from the viewpoint of a bargaining solution. In a distributed manner, the idea of the classical Nash bargaining solution is used to implement a triple-plane bargaining process in each vehicle. The proposed triple-plane bargaining approach, imitating the interactive sequential game process, is suitable to negotiate their conflicts of interest while ensuring the system practicality.

The main contributions of this study can be summarized as follows: (i) we developed novel VCS, VRS, and VSS algorithms for VANETs, (ii) we employed a new game model to provide satisfactory services, (iii) we adopted a distributed triple-plane bargaining approach to balance contradictory requirements, (iv) self-adaptability is effectively implemented in each control decision process, and (v) we captured the dynamic interactions of game players depending on their different viewpoints. According to the reciprocal combination of optimality and practicality, our approach can provide a desirable solution for real-world VANET applications. To the best of our knowledge, little research has been conducted on integrated-bargaining-based algorithms for VANET control problems.

1.2. Organization. The remainder of this article is organized as follows. In the next section, we review some related VANET schemes and their problems. In Section 3, we provide a detailed description of the VCS, VRS, and VSS algorithms before defining the triple-plane bargaining game model. In particular, this section provides fresh insights into the design benefit of our game-based approach. For convenience, the primary steps of the proposed scheme are listed. In Section 4, we validate the performance of the proposed scheme by means of a comparison with some existing methods. Finally, we present our conclusion and discuss some ideas for future work.

2. Related Work

There has been considerable research into the design of VANET control schemes. Cheng et al. [16] investigate the opportunistic spectrum access for cognitive radio vehicular ad hoc networks. In particular, the spectrum access process is modeled as a non-cooperative congestion game. Therefore, the proposed scheme in [16] opportunistically accesses licensed channels in a distributed manner and is designed to achieve a pure Nash equilibrium with high efficiency and fairness. By using the statistics of the channel availability, this approach can exploit the spatial and temporal access opportunities of the licensed spectrum. Through the simulation results, they confirm that the proposed approach achieves higher utility and fairness compared to a random access scheme [16].

The paper [17] focuses on how to stimulate message forwarding in VANETs. To implement this mechanism, it is crucial to make sure that vehicles have incentives to forward messages for others. In particular, this study adopts coalitional game theory to solve the forwarding cooperation problem and rigorously analyze it in VANETs. The main goal is to ensure that, whenever a message needs to be forwarded in a VANET, all involved vehicles get their incentives to form a grand coalition. In addition, this approach is extended to take into account the limited storage space of each node. Experiments on testbed trace data verify that the proposed method is effective for stimulating cooperation in message forwarding in VANETs [17].

In [18], the main focus is to achieve a higher throughput by selecting the best data rate and the best network or channel based on the cognitive radio VANET mechanism. By changing wireless channels and data rates in heterogeneous wireless networks, the scheme in [18] is designed as a game-theoretic model to achieve a higher throughput for vehicular users. In addition, a new idea is adopted to find the optimal number of APs for given VANET scenarios [18]. Even though the papers [16–18] have considered some game theory-based VANET control algorithms, they are designed as one-sided protocols while focusing on a specific control issue. Therefore, the system interaction process is not fully investigated.

Recently, the vehicular microcloud technique has been considered one of the solutions to address the challenges and issues of VANETs. It is a new hybrid technology that has a remarkable impact on traffic management and road safety. By clustering instantly, cars help aggregate collected data that are transferred to some backend [19, 20]. The paper [19] introduces the concept of the vehicular microcloud as a virtual edge server for efficient connections between cars and backend infrastructure. To realize such virtual edge servers, a map-based clustering technique is needed to cope with the dynamicity of vehicular networks. This approach can optimize the data processing and aggregation before sending them to a backend [19].

F. Hagenauer et al. propose an idea in which vehicular microcloud clusters of parked cars act as RSUs, by investigating the virtual vehicular network infrastructure [20]. In particular, they focus on two control questions: (i) the selection of gateway nodes to connect moving cars to the cluster and (ii) a mechanism for seamless handover between gateways. For the first question, they select only a subset of all parked cars as gateways; this gateway selection helps significantly to reduce the channel load. For the second question, they enable driving cars to find better suited gateways while allowing the driving car to maintain connections [20]. The ideas in [19, 20] are very interesting. However, they can be used only for special situations. Therefore, it is difficult to apply these ideas in general VANET operation scenarios.

During VANETs' operations, there are two fundamental techniques to disseminate data for vehicular applications: vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communications. Recently, Kim proposed novel V2I and V2V algorithms for effective VANET management [6]. For V2I communications, a novel vertical game model is developed as an incentive-based crowdsensing algorithm. For V2V communications, a new horizontal game model is designed to implement a learning-based intervehicle routing protocol. Under dynamically changing VANET environments, the developed scheme in [6] is formulated as a dual-plane vertical-horizontal game model. By considering the real-world VANET environments, this approach imitates the interactive sequential game process by using self-monitoring and distributed learning techniques. It is suitable to ensure the system practicality while effectively maximizing the expected benefits [6].

In this paper, we develop a novel control scheme for vehicular network services. Our proposed scheme also includes sensing and routing algorithms for the operations of VANETs. Therefore, our proposed scheme may look similar to the method in [6]. First of all, the resemblance can be summarized as follows: (i) in the vehicular routing algorithm, the link status and path cost estimation methods and each vehicle's utility function are designed in the same manner, (ii) in the crowdsensing algorithm, the utility functions of vehicle and RSU are defined similarly, and (iii) in the learning process, strategy selection probabilities are estimated based on the Boltzmann distribution equation. While both schemes have some similarities, there are several key differences. First, the scheme in [6] is designed as a dual-plane game model, but our proposed scheme is extended as a triple-plane game model. Second, a new cloud computing algorithm is developed to share the cloud resource in the RSU while increasing the quality of service. Third, to reduce computation complexity, the entropy-based routing decision mechanism is replaced by an online route decision mechanism. Fourth, the scheme in [6] is modeled as a non-cooperative game approach; however, our proposed scheme adopts a dynamic bargaining solution. Therefore, in our proposed sensing algorithm, each individual vehicle decides its strategy in a cooperative game manner. Fifth, our proposed scheme fully considers cloud computing, sensing, and routing payoffs and calculates the bargaining powers to respond to current VANET situations. Most of all, the main difference between the scheme in [6] and our proposed scheme is the control paradigm. Simply put, in the paper [6], the V2I and V2V algorithms are combined merely based on the competitive approach. However, in this paper, the sensing, cloud computing, and routing algorithms work together in a collaborative fashion and act cooperatively with each other based on the interactive feedback process.

The Multiobjective Vehicular Interaction Game (MVIG) scheme [21] proposes a new game-theoretic scheme to control the on-demand service provision in a vehicular cloud. Based on a game-theoretic approach, this scheme can balance the overall game while enhancing vehicles' service costs. The game system in the MVIG scheme differs from other conventional models as it allows vehicles to prioritize their preferences. In addition, a quality-of-experience framework is also developed to provision various types of services in a vehicular cloud; it is a simple but effective model to determine vehicle preferences while ensuring fair game treatment. The simulation results show that the MVIG scheme outperforms other conventional models [21].

The Prefetching-Based Vehicular Data Dissemination (PVDD) scheme [22] devises a vehicle route-based data prefetching approach while improving the data transmission accessibility under dynamic wireless connectivity and limited data storage environments. To put it more concretely, the PVDD scheme develops two control algorithms to determine how to prefetch a data set from a data center to roadside wireless access points. Based on the greedy approach, the first control algorithm is developed to solve the dissemination problem. Based on the online learning manner, the second control algorithm gradually learns the success rate of unknown network connectivity and determines an optimal binary decision matrix at each iteration. Finally, this study proves that the first control algorithm can find a suboptimal solution in polynomial time and that the optimal solution of the second control algorithm converges to a globally optimal solution in a certain number of iterations using regret analysis [22].

The Cooperative Relaying Vehicular Cloud (CRVC) scheme [23] proposes a novel cooperative vehicular relaying algorithm over a long-term evolution-advanced (LTE-A) network. To maximize the vehicular network capacity, this scheme uses vehicles as cooperative relaying nodes in broadband cellular networks. With new functionalities, the CRVC scheme can (i) reduce power consumption, (ii) provide a higher throughput, (iii) lower operational expenditures, (iv) ensure more flexibility, and (v) increase the coverage area. In a heavily populated urban area, the CRVC scheme is useful owing to the large number of relaying vehicles. Finally, the performance improvement is demonstrated through simulation analysis [23]. In this paper, we compare the performance of our proposed scheme with the existing MVIG [21], PVDD [22], and CRVC [23] schemes.

3. The Proposed VCS, VRS, and VSS Algorithms

In this section, the proposed VCS, VRS, and VSS algorithms are presented in detail. Based on the learning algorithm and the game-based approach, these algorithms form a new triple-plane bargaining game model to adapt to the fast-changing VANET environments.

3.1. Game Models for the VCS, VRS, and VSS Algorithms. For the operation of a VANET system, we develop three different game models for the VCS, VRS, and VSS algorithms. As game players, vehicles and RSUs select their strategies based on the interactions of other players. In our proposed scheme, the VCS and VRS algorithms are formulated as non-cooperative game models, and the VSS algorithm is formulated as a triple-plane bargaining model by cooperation, coordination, and collaboration of the VCS, VRS, and VSS processes. First, for the VCS algorithm, we formally define the game model $G_{VCS} = \{N, C, S^R, S^V_{V_i \in V}, U^R, U^V_{V_i \in V}, \{L_{P^R_j \in S^R}\}, T\}$ at each time period of gameplay.

(i) $N$ is the finite set of VCS game players, $N = \{R, V\}$, where $R$ represents a RSU and $V = \{V_1, \ldots, V_i, \ldots, V_k\}$ is a set of multiple vehicles in the $R$'s coverage area.

(ii) $C$ is the cloud computation capacity, i.e., the number of CPU cycles per second, in the $R$.

(iii) $S^R = \{P^R_{\min}, \ldots, P^R_j, \ldots, P^R_{\max}\}$ is a set of $R$'s strategies, where $P^R_j$ means the price level to execute one basic cloud service unit (BCSU). $S^V_{V_i \in V}$ is the $V_i$'s set of available strategies, $S^V_{V_i} = [r^{V_i}_{\min}, \ldots, r^{V_i}_k, \ldots, r^{V_i}_{\max}]$, where $r^{V_i}$ represents the amount of cloud services of the $V_i$ and is specified in terms of BCSUs.

(iv) $U^R$ is the payoff received by the RSU, and $U^V_{V_i \in V}$ is the payoff received by $V_i$ during the VCS process.

(v) $L_{P^R_j \in S^R}$ is the $R$'s learning value for the strategy $P^R_j$; $L$ is used to estimate the probability distribution ($\mathcal{P}^R$) for the next $R$'s strategy selection.

(vi) $T = \{H_1, \ldots, H_t, H_{t+1}, \ldots\}$ denotes time, which is represented by a sequence of time steps with imperfect information for the VCS game process.

For the VRS algorithm, we define the game model $G_{VRS} = \{v, N_{V_i}, A_{V_i \in v}, U_{V_i \in v}, T\}$ at each time period of $G_{VRS}$ gameplay. $G_{VRS}$ can formulate the interactions of vehicles for VANET routing operations.

(i) $v$ is the finite set of game players, $v = \{V_1, \ldots, V_n\}$, where $n$ is the number of vehicles for the $G_{VRS}$ game.

(ii) $N_{V_i}$ is the set of $V_i$'s one-hop neighboring vehicles.

(iii) $A_{V_i \in v} = \{a^{V_m}_{V_i} \mid V_m \in N_{V_i}\}$ is the finite set of $V_i$'s available strategies, where $a^{V_m}_{V_i}$ represents the selection of $V_m$ to relay a routing packet.

(iv) $U_{V_i \in v}$ is the payoff received by the $V_i$ during the VRS process.

(v) $T = \{H_1, \ldots, H_t, H_{t+1}, \ldots\}$ is the same as the $T$ in $G_{VCS}$.

For the VSS algorithm, we formally define the game model $G_{VSS} = \{N, I, S^R, S^V_{V_i \in V}, u^R, u^V_{V_i \in V}, \{Z_{P_i \in \eta^{I_s}_R}\}, T\}$ at each time period of gameplay.

(i) $N = \{R, V\}$ is the finite set of game players for the $G_{VSS}$ game; $N$ is the same as $N$ in $G_{VCS}$.

(ii) $I$ is the finite set of sensing tasks, $I = \{I_1, \ldots, I_s\}$, in $R$, where $s$ is the number of total sensing tasks.

(iii) $S^R = [\eta^{I_1}_R, \ldots, \eta^{I_j}_R, \ldots, \eta^{I_s}_R]$ is a vector of $R$'s strategies, where $\eta^{I_j}_R$ represents the strategy set for the $I_j$. In the $G_{VSS}$ game, $\eta^{I_j}_R$ means the set of price levels for the crowdsensing work for the task $I_j$ during a time period ($H \in T$). As in the $G_{VCS}$ game, $\eta^{I_j}_R$ is defined as $\eta^{I_j}_R = \prod_{I_j} \{P^{I_j}_i \mid P^{I_j}_i \in [P^{I_j}_{\min}, \ldots, P^{I_j}_l, \ldots, P^{I_j}_{\max}]\}$.

(iv) $S^V_{V_i} = [\mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i}]$ is the vector of $V_i$'s strategies, where $\mu^{I_j}_{V_i}$ represents the strategy for $I_j$. In the $G_{VSS}$ game, $\mu^{I_j}_{V_i}$ means the $V_i$'s active VSS participation, i.e., $\mu^{I_j}_{V_i} = 1$, or not, i.e., $\mu^{I_j}_{V_i} = 0$, for the $I_j$ during $H$.

(v) $u^R$ is the payoff received by the RSU, and $u^V_{V_i \in V}$ is the payoff received by $V_i$ during the VSS process.

(vi) $Z_{P^{I_j}_i \in \eta^{I_j}_R}$ is the learning value for the strategy $P^{I_j}_i$; $Z$ is used to estimate the probability distribution ($\mathcal{P}^R$) for the next $R$'s strategy selection.

(vii) $T = \{H_1, \ldots, H_t, H_{t+1}, \ldots\}$ is the same as $T$ in $G_{VCS}$ and $G_{VRS}$.
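To make the structure of these game tuples concrete, the following minimal Python sketch collects the $G_{VCS}$ components into one object; the class and field names are illustrative assumptions and are not part of the original model definition. The $G_{VRS}$ and $G_{VSS}$ tuples can be represented in the same way.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VCSGame:
    """Container for the G_VCS tuple; names mirror the symbols above (illustrative only)."""
    rsu_id: str                                  # R, the RSU acting as proposer
    vehicles: List[str]                          # V = {V_1, ..., V_k} in the RSU's coverage area
    capacity_C: float                            # cloud computation capacity (CPU cycles per second)
    price_levels: List[float]                    # S^R: price levels per BCSU, P^R_min ... P^R_max
    service_amounts: List[int]                   # S^V: admissible cloud-service amounts in BCSUs
    learning_L: Dict[float, float] = field(default_factory=dict)  # L value per price strategy

# Example instantiation with the parameter ranges used later in Table 1.
game = VCSGame("R1", ["V1", "V2"], capacity_C=5e9,
               price_levels=[0.2, 0.4, 0.6, 0.8, 1.0],
               service_amounts=[1, 2, 3, 4, 5])
```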

3.2. The VCS Algorithm Based on a Non-Cooperative Game Model. In the next-generation VANET paradigm, diverse and miscellaneous applications require significant computation resources. However, in general, the computational capabilities of the vehicles are limited. To address this resource restriction problem, vehicular cloud technology is widely considered as a new paradigm to improve the VANET service performance. In the VCS process, applications can be offloaded to the remote cloud server to improve the resource utilization and computation performance [24].

Vehicular cloud servers are located in RSUs and execute the computation tasks received from vehicles. However, the service area of each RSU may be limited by the radio coverage. Due to the high mobility, vehicles may pass through several RSUs during the task offloading process. Therefore, service results must be migrated to another RSU, which can reach the corresponding vehicle. From the viewpoints of RSUs and vehicles, payoff maximization is their main concern. To reach a win-win situation, they rationally select their strategies. During the VCS process, each RSU is a proposer and individual vehicles are responders; they interact with each other for their objectives during the VCS process.

To design our VCS algorithm, we consider the VCS platform consisting of one RSU ($R$) and a set of mobile vehicles ($V$); they formulate the game model $G_{VCS}$. As a proposer, the $R$'s strategy ($P^R_j \in S^R$) indicates the offered price for one BCSU process. $R$ has its own utility function, which represents its net gain while providing the VCS process. The $R$'s utility function with the $P^R_j$ strategy at the $t$'th time step $H_t$ ($U^R(V, P^R_j(H_t))$) is given by

$$U^R\left( V, P^R_j(H_t) \right) = \sum_{V_i \in V} \xi_{V_i}(H_t) \times \left[ \left( P^R_j(H_t) \times r^{V_i}_k(H_t) \right) - \mathcal{C}_R\left( r^{V_i}_k(H_t) \right) \right],$$
$$\text{s.t.}\quad P^R_j(H_t) \in S^R,\quad \mathcal{C}_R\left( r^{V_i}_k(H_t) \right) = \Theta^R_{BCSU} \times r^{V_i}_k(H_t),\quad \Theta^R_{BCSU} = \frac{(\theta \times M)}{C},$$
$$\xi_{V_i}(H_t) = \begin{cases} 1, & \text{if } V_i \text{ selects } r^{V_i}_k \in S^V_{V_i} \text{ at } H_t, \\ 0, & \text{otherwise (i.e., no VCS of } V_i\text{)}, \end{cases} \qquad (1)$$

where $\mathcal{C}_R(r^{V_i}_k(H_t))$ is the cost function to execute the $V_i$'s cloud service ($r^{V_i}_k(H_t)$) at time $H_t$, $\Theta^R_{BCSU}$ is the processing cost to execute one BCSU, and $M$ is the currently used capacity of $C$; $\theta$ is the coefficient factor of cost calculation. In practice, the actual cost $\mathcal{C}_R(\cdot)$ is usually unknown by vehicles.

As a responder, the vehicle $V_i$'s strategy ($S^V_{V_i}$) represents the amount of cloud service. The payoff of $V_i$ can be defined as a function of the task offload level ($r^{V_i}$) and the service price ($P^R$). Therefore, the $V_i$'s utility function with the $P^R_j$ and $r^{V_i}_k$ strategies at time $H_t$ ($U^V_{V_i \in V}(P^R_j(H_t), r^{V_i}_k(H_t))$) is computed as follows:

$$U^V_{V_i \in V}\left( P^R_j(H_t), r^{V_i}_k(H_t) \right) = \vartheta_{V_i}(H_t) \times \left[ \left( H^{V_i} \times r^{V_i}_k(H_t) \right) - \left( P^R_j(H_t) \times r^{V_i}_k(H_t) \right) \right],$$
$$\text{s.t.}\quad \vartheta_{V_i}(H_t) = \begin{cases} 1 \text{ and } \Gamma^{V_i}_{H_{t+1}} = \Gamma^{V_i}_{H_t} - \left( P^R_j(H_t) \times r^{V_i}_k(H_t) \right), & \text{if } \left( \Gamma^{V_i}_{H_t} \ge P^R_j(H_t) \times r^{V_i}_k(H_t) \right) \text{ and } \left( r^{V_i}_k \in S^V_{V_i} \text{ is selected} \right), \\ 0, & \text{otherwise (i.e., } \Gamma^{V_i}_{H_t} < P^R_j(H_t) \times r^{V_i}_k(H_t)\text{)}, \end{cases} \qquad (2)$$

where $H^{V_i}$ is the $V_i$'s profit factor if one BCSU is processed, and $\Gamma^{V_i}_{H_t}$ is the amount of $V_i$'s virtual money at time $H_t$; if $V_i$ has enough money to pay the VCS price, $V_i$ can request its cloud service ($r^{V_i}_k$). As the $G_{VCS}$ game players, the RSU and vehicles attempt to maximize their utility functions. Interactions among game players continue repeatedly during the VCS process over time. In particular, the RSU should consider the reactions from vehicles to determine the price strategy ($P^R$). In this study, we develop a new learning method to decide an effective price policy for cloud services. If the strategy $P^R_j$ is selected at time $H_{t-1}$, the RSU updates the strategy $P^R_j$'s learning value for the next time step ($L^{P^R_j}_{H_t}$) according to the following method:

$$L^{P^R_j}_{H_t} = \max\left\{ \left[ \left( (1-\chi) \times L^{P^R_j}_{H_{t-1}} \right) + \left( \chi \times \log_2\left( \frac{U^R\left( V, P^R_j(H_{t-1}) \right)}{\sum_{P^R_k \in S^R} \left( L^{P^R_k}_{H_{t-1}} / |S^R| \right)} \right) \right) \right],\ 0 \right\}, \qquad (3)$$

where $|S^R|$ is the cardinality of $S^R$ and $\chi$ is a learning rate that models how the $L$ values are updated. To implement the price learning mechanism in the RSU, a strategy selection distribution ($P^R \in \mathcal{P}^R$) for the VCS is defined based on the $L(\cdot)$ values. During the $G_{VCS}$ game process, we sequentially determine $\mathcal{P}^R(H_t) = \{\mathcal{P}^R_{P^R_{\min}}(H_t), \ldots, \mathcal{P}^R_{P^R_j}(H_t), \ldots, \mathcal{P}^R_{P^R_{\max}}(H_t)\}$ as the probability distribution of $R$'s strategy selection at time $H_t$. The $P^R_j$ strategy selection probability at time $H_t$ ($\mathcal{P}^R_{P^R_j}(H_t)$) is defined as follows:

$$\mathcal{P}^R_{P^R_j}(H_t) = \frac{\exp\left( L^{P^R_j}_{H_{t-1}} \right)}{\sum_{k=\min}^{\max} \exp\left( L^{P^R_k}_{H_{t-1}} \right)}. \qquad (4)$$
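A minimal Python sketch of this price-learning rule follows; it assumes the learning values and the observed payoff are strictly positive (consistent with the non-negative floor in (3) and the equal initialization in Step 2 of Section 3.5), and the function names themselves are illustrative, not part of the original scheme.

```python
import math

def update_price_learning(L, played_price, payoff, chi=0.2):
    """Update the learning value of the price strategy played at H_{t-1}, following eq. (3)."""
    avg_L = sum(L.values()) / len(L)                 # (1/|S^R|) * sum over all price strategies
    blended = (1 - chi) * L[played_price] + chi * math.log2(payoff / avg_L)
    L = dict(L)                                      # keep the update functional
    L[played_price] = max(blended, 0.0)              # learning values are floored at zero
    return L

def price_selection_distribution(L):
    """Boltzmann-style selection probabilities over the price strategies, following eq. (4)."""
    exps = {p: math.exp(v) for p, v in L.items()}
    total = sum(exps.values())
    return {p: e / total for p, e in exps.items()}

# Example: five price levels, equal initial learning values, one observed payoff.
L = {p: 1.0 for p in (0.2, 0.4, 0.6, 0.8, 1.0)}
L = update_price_learning(L, played_price=0.6, payoff=2.5, chi=0.2)
probs = price_selection_distribution(L)              # distribution used to pick the next price
```

The same two-step pattern, blending the observed payoff into a per-strategy value and then selecting the next strategy through a softmax over those values, reappears in the VSS learning rule of equations (12) and (13).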

3.3. The VRS Algorithm Based on a Non-Cooperative Game Model. The main goal of the VRS is to transmit data from a source vehicle to a destination vehicle via wireless multihop transmission techniques. In the wireless multihop transmission technique, the intermediate vehicles in a routing path should relay data as soon as possible from the source to the destination [6, 13, 25]. In this section, we develop a non-cooperative game model ($G_{VRS}$) for vehicular routing services. As game players in $G_{VRS}$, vehicles dynamically decide their routing routes. To configure the routing topology, a link cost ($LC$) is defined to relatively handle dynamic VANET conditions. In this study, we define a wireless link status ($LC^{V_j}_{V_i}(H_t)$) between $V_i$ and $V_j$ at time $H_t$ as follows:

$$LC^{V_j}_{V_i}(H_t) = \left( (1-\alpha) \times \frac{d^{V_j}_{V_i}(H_t)}{d^M_{V_i}} \right) + \left( \alpha \times \frac{\left| \vec{v}_j(H_t) \right|}{\max_{V_h \in M^{H_t}_{V_i}} \left| \vec{v}_h(H_t) \right|} \right), \qquad (5)$$

where $d^M_{V_i}$ is the maximum coverage range of $V_i$, and $d^{V_j}_{V_i}(H_t)$ is the relative distance between $V_i$ and $V_j$ at time $H_t$. Let $M^{H_t}_{V_i}$ be the set of the neighboring vehicles of $V_i$ at time $H_t$, and $\vec{v}_j(H_t)$ is the relative velocity of $V_j$ at time $H_t$. For the adaptive $LC^{V_j}_{V_i}(H_t)$ estimation, the parameter $\alpha$ controls the relative weight between the distance and the velocity of the corresponding vehicle.
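A short sketch of this link-cost estimate, with both terms normalized as in (5), is given below; the argument names are illustrative.

```python
def link_cost(dist_ij, max_range_i, speed_j, neighbor_speeds, alpha=0.5):
    """Wireless link status LC between V_i and V_j at time H_t (eq. (5))."""
    distance_term = dist_ij / max_range_i                # relative distance d / d^M
    velocity_term = speed_j / max(neighbor_speeds)       # speed of V_j relative to the fastest neighbor
    return (1 - alpha) * distance_term + alpha * velocity_term

# Example: a neighbor 200 m away (500 m range) moving at 20 m/s among neighbors up to 30 m/s.
cost = link_cost(200.0, 500.0, 20.0, [10.0, 20.0, 30.0], alpha=0.5)   # -> about 0.533
```

Lower values indicate closer, slower-moving (and therefore more stable) neighbors, so minimizing the summed link costs in (6) below favors stable multihop paths.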

For the VRS, a source vehicle configures a multihop routing path by using the Bellman–Ford algorithm. As a routing game player, each source vehicle attempts to minimize the total path cost ($PC$). From the source vehicle $V_s$ to the destination vehicle $V_d$, the total path cost ($PC^{V_d}_{V_s}(H_t)$) at time $H_t$ is computed as the sum of all relay link costs as follows:

$$PC^{V_d}_{V_s}(H_t) = \min\left\{ \sum_{LC_{Veh}(H_t) \in Path^{V_d}_{V_s}} LC_{Veh}(H_t) \right\}, \qquad (6)$$

where $Path^{V_d}_{V_s}$ is the routing path from $V_s$ to $V_d$. Usually, a vehicle acting as a relay node has to sacrifice its energy and bandwidth. Therefore, an incentive payment algorithm should be developed to guide selfish vehicles toward the cooperative intervehicle routing paradigm [6, 26]. By paying the incentive cost to relaying vehicles, the developed routing algorithm stimulates cooperative actions among selfish relay vehicles. As a $G_{VRS}$ game player, a source vehicle pays its virtual money ($\Gamma$) to disseminate the routing packet. If the source vehicle $V_i$ selects the $Path^{V_d}_{V_s}$ according to (6), its utility function at time $H_t$ ($U_{V_i}(Path^{V_d}_{V_s}, H_t)$) can be defined as

$$U_{V_i}\left( Path^{V_d}_{V_s}, H_t \right) = \begin{cases} Q^{V_i}_{H_t} - \left( J \times PC^{V_d}_{V_s}(H_t) \right) \text{ and } \Gamma^{V_i}_{H_{t+1}} = \Gamma^{V_i}_{H_t} - \left( J \times PC^{V_d}_{V_s}(H_t) \right), & \text{if } \Gamma^{V_i}_{H_t} \ge \left( J \times PC^{V_d}_{V_s}(H_t) \right), \\ 0, & \text{otherwise (i.e., } \Gamma^{V_i}_{H_t} < \left( J \times PC^{V_d}_{V_s}(H_t) \right)\text{)}, \end{cases} \qquad (7)$$

where $Q^{V_i}_{H_t}$ and $\Gamma^{V_i}_{H_t}$ are the outcome of $V_i$'s routing and the amount of $V_i$'s virtual money at time $H_t$, respectively. $J$ is the coefficient factor to estimate the incentive payment for relaying vehicles.
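The sketch below pairs a standard Bellman–Ford relaxation (to obtain the minimum summed link cost of equation (6)) with the affordability check of equation (7); the helper names are illustrative.

```python
def min_path_cost(links, source, dest, nodes):
    """Minimum total path cost from source to dest over weighted links (eq. (6)).

    links : iterable of (u, v, lc) edges, where lc is the link cost from eq. (5)
    nodes : iterable of all vehicle identifiers
    """
    cost = {n: float("inf") for n in nodes}
    cost[source] = 0.0
    for _ in range(len(cost) - 1):              # Bellman-Ford: |nodes|-1 relaxation rounds
        for u, v, lc in links:
            if cost[u] + lc < cost[v]:
                cost[v] = cost[u] + lc
    return cost[dest]

def routing_payoff(Q, J, path_cost, money):
    """Source vehicle's routing payoff and virtual-money update (eq. (7))."""
    charge = J * path_cost                      # incentive paid to the relaying vehicles
    if money >= charge:
        return Q - charge, money - charge       # (payoff, remaining virtual money Gamma)
    return 0.0, money                           # routing is skipped if it is not affordable
```

With the Table 1 settings ($Q = 5$, $J = 0.1$), a path whose summed link cost is 2.0 costs the source vehicle 0.2 units of virtual money and yields a payoff of 4.8.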

3.4. The VSS Algorithm Based on the Triple-Plane Bargaining Game. Recently, the VSS has attracted great interest and has become one of the most valuable features in the VANET system. Some VANET applications involve the generation of a huge amount of sensed data. With the advance of vehicular sensor technology, vehicles that are equipped with OBUs are expected to effectively monitor the physical world. In VANET infrastructures, RSUs request a number of sensing tasks while collecting the local information sensed by vehicles. According to the requested tasks, OBUs in vehicles sense the requested local information and transmit their sensing results to the RSU. Although some excellent research has been done on the VSS process, there are still significant challenges that need to be addressed. Most of all, conducting the sensing task and reporting the sensing data usually consume the resources of vehicles. Therefore, selfish vehicles should be paid by the RSU as compensation for their VSSs. To recruit an optimal set of vehicles to cover the monitoring area while ensuring that vehicles provide their full sensing efforts, the RSU stimulates sufficient vehicles with proper incentives to fulfill VSS tasks [6, 8, 27].

In this sense, we design our VSS algorithm to determine the incentive for vehicles to complete their VSS tasks. Since vehicles may make different contributions of sensing work, the RSU may issue appropriate incentives in return for the collected sensing data [6]. The major goal of our algorithm is that the interactive trading between the RSU and vehicles should benefit both of them. This game situation can be modeled effectively through $G_{VSS}$. In the VSS algorithm, we consider the VSS platform consisting of one RSU ($R$), a set of mobile vehicles ($V$), and sensing tasks ($I$); they formulate the sensing game model $G_{VSS}$. To capture its own heterogeneity, a vehicle's strategy ($\mu \in S^V_{V_i}$) indicates the contribution for a specific task ($I_j \in I$). For example, $V_i$ can actually contribute for the task $I_s$ during $H_t$, i.e., $\mu^{I_s}_{V_i} = 1$, or not, i.e., $\mu^{I_s}_{V_i} = 0$, where $\mu^{I_s}_{V_i} \in S^V_{V_i}$. Each vehicle has its own utility function, which represents its net gain. The $V_i$'s utility function at time $H_t$ ($u^V_{V_i}(S^V_{V_i}, H_t)$) is given by

$$u^V_{V_i}\left( S^V_{V_i} = \left[ \mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i} \right], H_t \right) = \sum_{I_j = I_1}^{I_s} \left( \mu^{I_j}_{V_i}(H_t) \times \left( \eta^{I_j}_R(H_t) - \mathcal{C}^{I_j}_{V_i}(H_t) \right) \right),$$
$$\text{s.t.}\quad \eta^{I_j}_R(H_t) \in S^R,\quad \mu^{I_j}_{V_i}(H_t) = \begin{cases} 1, & \text{if } V_i \text{ contributes for } I_j \text{ at } H_t, \\ 0, & \text{otherwise}, \end{cases} \qquad (8)$$

where $\mathcal{C}^{I_j}_{V_i}(H_t)$ and $\eta^{I_j}_R(H_t)$ are the $V_i$'s cost and the RSU's incentive payment for the $I_j$ sensing at time $H_t$, respectively. Usually, game players simply attempt to maximize their utility functions. However, vehicles in the $G_{VSS}$ game should consider not only the VSS but also the VCS and VRS. From the point of view of vehicles, this situation can be modeled effectively through a cooperative bargaining process. In this study, we adopt a well-known bargaining solution concept, called the Nash bargaining solution (NBS), to effectively design the vehicle's VSS strategy decision mechanism; the NBS is an effective tool to achieve a mutually desirable solution among conflicting requirements.

In our proposed scheme, vehicles can earn the virtual money ($\Gamma$) from the VSS incentive payment and spend their $\Gamma$ for their VCS and VRS. For this reason, the strategy decision in the VSS algorithm might directly affect the payoffs of $G_{VCS}$ and $G_{VRS}$. During VANET operations, the individual vehicle $V_i$ decides its VSS strategy ($S^V_{V_i}$) to maximize the combined payoff ($CP_{V_i}(S^V_{V_i}, H_t)$) at time $H_t$:

$$\max_{S^V_{V_i}} CP_{V_i}\left( S^V_{V_i}, H_t \right) = \max_{S^V_{V_i} = \left[ \mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i} \right]} \prod_{1 \le j \le 3} \left( X_j - d_j \right)^{\psi_j},$$
$$\text{s.t.}\quad \sum_{j=1}^{3} \psi_j = 1,\quad X_1 = U^V_{V_i \in V}\left( P^R_j(H_t), r^{V_i}_k(H_t) \right),\quad X_2 = U_{V_i}\left( Path^{V_d}_{V_s}, H_t \right),\quad X_3 = u^V_{V_i}\left( S^V_{V_i} = \left[ \mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i} \right], H_t \right), \qquad (9)$$

where $d_j$ is a disagreement point; a disagreement point $d_{i, 1 \le i \le n}$ represents the minimum payoff of each game model. Therefore, $d$ is the least guaranteed payoff for game players (i.e., zero in our model). $\psi_{j, 1 \le j \le 3}$ is a bargaining power, which is the relative ability to exert influence over the bargaining process. Usually, the bargaining solution is strongly dependent on the bargaining powers. In the $G_{VSS}$ game, $\psi_{j, 1 \le j \le 3}$ can be estimated as follows:

$$\psi_{j, 1 \le j \le 3} = \frac{Y_j}{\left( Y_1 + Y_2 + Y_3 \right)},$$
$$\text{s.t.}\quad Y_1 = \max\left\{ \left( R^{\Gamma_{VCS}(V_i)}_{H_t} - U^{\Gamma_{VCS}(V_i)}_{H_t} \right),\ \Gamma^{V_i}_{H_t} \right\},\quad Y_2 = \max\left\{ \left( R^{\Gamma_{VRS}(V_i)}_{H_t} - U^{\Gamma_{VRS}(V_i)}_{H_t} \right),\ \Gamma^{V_i}_{H_t} \right\},\quad Y_3 = \Gamma^{V_i}_{H_t}, \qquad (10)$$

where $R^{\Gamma_{VCS}(V_i)}_{H_t}$ and $R^{\Gamma_{VRS}(V_i)}_{H_t}$ are the requested virtual money to maximize the $V_i$'s payoffs in $G_{VCS}$ and $G_{VRS}$ at time $H_t$, respectively, and $U^{\Gamma_{VCS}(V_i)}_{H_t}$ and $U^{\Gamma_{VRS}(V_i)}_{H_t}$ are the used virtual money by $V_i$ in $G_{VCS}$ and $G_{VRS}$ at time $H_t$, respectively. Finally, individual vehicles select their strategies $S^V$ according to (9) and (10). In a distributed online fashion, each vehicle ensures a well-balanced performance among the VCS, VRS, and VSS processes.
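A compact sketch of this bargaining step follows. It enumerates the binary participation vectors, evaluates the weighted Nash product of equation (9) with the bargaining powers of equation (10), and returns the best vector; the payoff functions are supplied by the caller, and all names here are illustrative.

```python
from itertools import product

def bargaining_powers(req_vcs, used_vcs, req_vrs, used_vrs, money):
    """Bargaining powers psi_1..psi_3 following eq. (10)."""
    y1 = max(req_vcs - used_vcs, money)
    y2 = max(req_vrs - used_vrs, money)
    y3 = money
    total = y1 + y2 + y3
    return (y1 / total, y2 / total, y3 / total)

def nash_product(payoffs, psis, d=(0.0, 0.0, 0.0)):
    """Weighted Nash product over the three game payoffs (eq. (9))."""
    value = 1.0
    for x, dj, psi in zip(payoffs, d, psis):
        value *= max(x - dj, 0.0) ** psi          # d is the disagreement point (zero here)
    return value

def best_vss_strategy(num_tasks, payoff_fn, psis):
    """Pick the participation vector mu = (mu_I1, ..., mu_Is) maximizing the Nash product.

    payoff_fn : callable mu -> (X1, X2, X3), the VCS, VRS, and VSS payoffs under mu
    """
    candidates = product((0, 1), repeat=num_tasks)
    return max(candidates, key=lambda mu: nash_product(payoff_fn(mu), psis))
```

Because the number of tasks per RSU is small (s = 4 in the simulations), enumerating all 2^s participation vectors is inexpensive, and the vehicle simply repeats this choice at every period H with refreshed bargaining powers.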

In the VSS algorithm, the vehicles' actual sensing costs $\mathcal{C}^{I}_{V}(\cdot)$ and each individual vehicle's situation are usually unknown by the RSU. Under this asymmetric information situation, the RSU should learn the vehicles' circumstances during the VSS process. As in the VCS process, each RSU is a proposer and individual vehicles are responders, and they also interact with each other for their objectives based on the collaborative feedback procedure. For the RSU, the payoff can be defined as a function of the price levels for tasks and the vehicles' responses. Therefore, the RSU's utility function ($u^R(S^R, H_t)$) at time $H_t$ is computed as the sum of each task's outcome:

$$u^R\left( S^R = \left[ \eta^{I_1}_R, \ldots, \eta^{I_j}_R, \ldots, \eta^{I_s}_R \right], H_t \right) = \sum_{I_h = I_1}^{I_s} \left( \left( T^{I_h}(H_t) \times \Phi^{I_h} \right) - \sum_{V_i \in V} \left( \mu^{I_h}_{V_i}(H_t) \times \eta^{I_h}_R(H_t) \right) \right),$$
$$\text{s.t.}\quad T^{I_h}(H_t) = \begin{cases} 1, & \text{if } \sum_{V_i \in V} \left( \mu^{I_h}_{V_i}(H_t) \right) \ge L^{I_h}_R, \\ 0, & \text{otherwise}, \end{cases} \qquad (11)$$

where $\Phi^{I_h}$ is the profit of $I_h$ when the VSS for $I_h$ is successfully completed, i.e., if $\sum_{V_i \in V}(\mu^{I_h}_{V_i}(H_t)) \ge L^{I_h}_R$, $\Phi^{I_h}$ is obtained. $L^{I_h}_R$ is the predefined VSS requirement to complete the task $I_h$.
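The RSU-side accounting in equation (11) reduces to a per-task sum of completion profit minus paid incentives, as in the short sketch below (dictionary keys and names are illustrative).

```python
def rsu_vss_payoff(prices, participation, profits, requirements):
    """RSU payoff over all sensing tasks, following eq. (11).

    prices        : {task: incentive price eta paid to each participating vehicle}
    participation : {task: list of 0/1 participation flags mu, one entry per vehicle}
    profits       : {task: profit Phi obtained if the task is completed}
    requirements  : {task: minimum number of participants L needed to complete it}
    """
    total = 0.0
    for task, mus in participation.items():
        completed = sum(mus) >= requirements[task]        # indicator T^{I_h}(H_t)
        revenue = profits[task] if completed else 0.0
        incentives = sum(mus) * prices[task]              # payments to participating vehicles
        total += revenue - incentives
    return total

# Example with the Table 1 settings: Phi = 10 and L = 5 for every task.
payoff = rsu_vss_payoff(prices={"I1": 0.4}, participation={"I1": [1, 1, 1, 1, 1, 0]},
                        profits={"I1": 10.0}, requirements={"I1": 5})   # -> 10 - 5*0.4 = 8.0
```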

As $G_{VSS}$ game players, the RSU considers the interrelationship among tasks and the vehicles' reactions to determine its incentive payment strategy. To select the best strategy, a novel learning method is needed. In this study, we develop a new learning method to determine an effective payment policy for each task. For the RSU, the strategy vector $S^R(H_t) = [\eta^{I_1}_R(H_t), \ldots, \eta^{I_s}_R(H_t)]$ represents each price strategy for each task at time $H_t$. Let $Z^{I_s}_{H_t}(P^{I_s}_i, \eta^{-I_s}_R)$ be the learning value of taking strategy $P^{I_s}_i \in \eta^{I_s}_R$ with a vector of payment strategies except the strategy $P^{I_s}_i$ ($\eta^{-I_s}_R$). The RSU updates its $Z(\cdot)$ value over time according to the following method:

$$Z^{P^{I_s}_i}_{H_{t+1}}\left( P^{I_s}_i \in \eta^{I_s}_R, \eta^{-I_s}_R \right) = \left( (1-\beta) \times Z^{I_s}_{H_t}\left( P^{I_s}_i, \eta^{-I_s}_R \right) \right) + \left( \beta \times \left( \left( \left( 1 - c^R \right) \times z^{I_s}_{H_t} \right) + \left( c^R \times \sum_{I_h \in I - I_s} \frac{z^{I_h}_{H_t}}{(s-1)} \right) \right) \right),$$
$$\text{s.t.}\quad z^{I_s}_{H_t} = \left( T^{I_s}(H_t) \times \Phi^{I_s} \right) - \sum_{V_i \in V} \left( \mu^{I_s}_{V_i}(H_t) \times \eta^{I_s}_R(H_t) \right). \qquad (12)$$

During the $G_{VSS}$ game process, the RSU adaptively learns the current VSS state and sequentially adjusts the $Z(\cdot)$ values for each VSS task. Based on the $Z(\cdot)$ values, we can determine $\mathcal{P}^R_{I_s \in I}(H_t) = \{\mathcal{P}^{I_s}_{P^{I_s}_{\min}}(H_t), \ldots, \mathcal{P}^{I_s}_{P^{I_s}_i}(H_t), \ldots, \mathcal{P}^{I_s}_{P^{I_s}_{\max}}(H_t)\}$ as the probability distribution of $R$'s strategy selection for the $I_s$ at time $H_t$. The $P^{I_s}_i$ strategy selection probability for $I_s$ at time $H_{t+1}$ ($\mathcal{P}^{I_s}_{P^{I_s}_i}(H_{t+1})$) is defined as

$$\mathcal{P}^{I_s}_{P^{I_s}_i}\left( H_{t+1} \right) = \frac{\exp\left( Z^{P^{I_s}_i}_{H_t}\left( P^{I_s}_i \in \eta^{I_s}_R, \eta^{-I_s}_R \right) \right)}{\sum_{j=\min}^{\max} \exp\left( Z^{P^{I_s}_j}_{H_t}\left( P^{I_s}_j \in \eta^{I_s}_R, \eta^{-I_s}_R \right) \right)}. \qquad (13)$$
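The VSS learning rule mirrors the VCS one in Section 3.2, with an extra cross-task term weighted by the discount factor $c^R$; the sketch below keeps one learning value per (task, price) pair and reuses the softmax selection of equation (13). Names are illustrative.

```python
import math

def update_task_learning(Z, task, played_price, outcomes, beta=0.3, c=0.3):
    """Update the learning value of the price played on one task, following eq. (12).

    Z        : {task: {price: learning value}}
    outcomes : {task: per-task outcome z observed at H_t (one summand of eq. (11))}
    c        : discount factor c^R spreading credit to the other tasks' outcomes
    """
    others = [z for t, z in outcomes.items() if t != task]
    cross = sum(others) / len(others) if others else 0.0     # (1/(s-1)) * sum of other tasks
    blended = (1 - c) * outcomes[task] + c * cross
    Z[task][played_price] = (1 - beta) * Z[task][played_price] + beta * blended
    return Z

def task_price_distribution(Z, task):
    """Softmax selection probabilities over one task's price levels, following eq. (13)."""
    exps = {p: math.exp(v) for p, v in Z[task].items()}
    total = sum(exps.values())
    return {p: e / total for p, e in exps.items()}
```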

3.5. Main Steps of the Proposed Triple-Plane Bargaining Algorithm. In this study, we design a novel triple-plane bargaining game model through a systematic interactive game process. In the proposed VCS, VRS, and VSS algorithms, the RSUs and vehicles are game players that maximize their payoffs. In particular, the RSUs learn the current VANET situation based on the learning methods, and the vehicles determine their best strategies while balancing their VCS, VRS, and VSS requirements. The primary steps of the proposed scheme are described as follows.

Step 1. Control parameters are determined by the simulation scenario (Table 1).

Step 2. At the initial time, the $Z$ and $L$ learning values in the RSUs are equally distributed. This starting estimation guarantees that each RSU's strategy benefits similarly at the beginning of the $G_{VCS}$ and $G_{VSS}$ games.

Step 3. During the $G_{VCS}$ game, the proposer RSU selects its strategy $P^R \in S^R$ to maximize its payoff ($U^R$) according to (1), (3), and (4). As responders, vehicles select their strategy $r \in S^V$ to maximize their payoffs ($U^V$) according to (2) while considering their current virtual money ($\Gamma$).

Step 4. At every time step ($H$), the RSU adjusts the learning values ($L(\cdot)$) and the probability distribution ($\mathcal{P}^R$) based on equations (3) and (4).

Step 5. During the $G_{VRS}$ game, individual vehicles estimate the wireless link states ($LC$) according to equation (5). At each time period, the $LC$ values are estimated online based on the vehicle's relative distance and speed.

Step 6. During the $G_{VRS}$ game, the source vehicle configures a multihop routing path using the Bellman–Ford algorithm based on equation (6). The source vehicle's payoff ($U$) is decided according to (7) while considering its current virtual money ($\Gamma$).

Step 7. During the $G_{VSS}$ game, the proposer RSU selects its strategy $S^R$ to maximize its payoff ($u^R$) according to (11). As responders, vehicles select their strategy $S^V$ to maximize their combined payoff ($CP$) according to (9) while adjusting each bargaining power ($\psi$) based on equation (10).

Step 8. At every time step ($H$), the RSU adjusts the learning values ($Z(\cdot)$) and the probability distribution ($\mathcal{P}^R$) according to equations (12) and (13).

Step 9. Based on the interactive feedback process, the dynamics of our $G_{VCS}$, $G_{VRS}$, and $G_{VSS}$ games cause a cascade of interactions among the game players, who choose their best strategies in an online distributed manner.

Step 10. Under the dynamic VANET environment, the individual game players constantly self-monitor for the next game process; go to Step 3.
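The following skeleton shows how Steps 3 to 10 can be chained into one gameplay period; every object and method name here is a hypothetical placeholder for the entities defined in Sections 3.2 to 3.4, not part of the original specification.

```python
def run_time_step(rsu, vehicles, links):
    """One gameplay period H_t following Steps 3-10 (illustrative skeleton)."""
    # Step 3: VCS - the RSU proposes a cloud price, vehicles respond with offload amounts.
    price = rsu.sample_cloud_price()                  # drawn from the eq. (4) distribution
    offloads = {v: v.choose_offload(price) for v in vehicles}
    # Step 4: the RSU updates its price learning values (eqs. (3)-(4)).
    rsu.update_cloud_learning(price, offloads)
    # Steps 5-6: VRS - vehicles refresh link costs and build Bellman-Ford routes.
    for v in vehicles:
        v.refresh_link_costs(links)                   # eq. (5)
        v.route_and_pay()                             # eqs. (6)-(7)
    # Step 7: VSS - the RSU posts task incentives, vehicles bargain over participation.
    incentives = rsu.sample_task_prices()             # per-task eq. (13) distribution
    reports = {v: v.choose_participation(incentives) for v in vehicles}   # NBS of eqs. (9)-(10)
    # Step 8: the RSU updates its task learning values (eqs. (12)-(13)).
    rsu.update_task_learning(incentives, reports)
    # Steps 9-10: the players self-monitor; the caller repeats this function for H_{t+1}.
```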


4. Performance Evaluation

4.1. Simulation Setup. In this section, we evaluate the performance of our proposed protocol and compare it with that of the MVIG, PVDD, and CRVC schemes [21–23]. To ensure a fair comparison, the following assumptions and system scenario were used:

(i) The simulated system was assumed to be a TDMA packet system for VANETs.

(ii) The number of vehicles that passed over a RSU was the rate of the Poisson process (ρ). The offered range was varied from 0 to 3.0.

(iii) Fifty RSUs were distributed randomly over the 100 km road area, and the velocity of each mobile vehicle was randomly selected to be 36 km/h, 72 km/h, or 108 km/h.

(iv) The maximum wireless coverage range of each vehicle was set to 500 m.

(v) The cloud computation capacity ($C$) is 5 GHz, and one BCSU is the minimum amount (e.g., 20 MHz/s in our system) of the cloud service unit.

(vi) The number ($s$) of sensing tasks in each $R$ is 4, i.e., $I = \{I_1, I_2, I_3, I_4\}$.

(vii) The source and destination vehicles were randomly selected. Initially, the virtual money ($\Gamma$) in each vehicle was set to 100.

(viii) At the source node, data dissemination was generated at a rate of λ (packets/s). According to this assumption, the time duration $H$ in our simulation model is one second.

(ix) Network performance measures obtained on the basis of 100 simulation runs are plotted as functions of the vehicle distribution (ρ).

To demonstrate the validity of our proposed method, we measured the cloud service success ratio, normalized dissemination throughput, and crowdsensing success probability. Table 1 shows the control factors and coefficients used in the simulation. Each parameter has its own characteristics [6].

4.2. Results and Discussion. Figure 1 compares the cloud service success ratio of each scheme. In this study, the cloud service success ratio represents the rate of cloud services that were completed successfully. This is a key performance evaluation factor in the VCS operation. As shown in Figure 1, the cloud service success ratios of all schemes are similar to each other; however, the proposed scheme adopts an interactive environmental feedback mechanism, and the RSUs in our scheme adaptively adjust their VCS costs. This approach can improve the VCS performance relative to the existing MVIG, PVDD, and CRVC schemes. Therefore, we outperformed the existing methods from low to high vehicle distribution intensities.

Figure 2 compares the normalized dissemination throughput in VANETs. Typically, the network throughput is measured as bits per second of network access. In this study, the dissemination throughput is defined as the ratio of the data amount successfully received by the destination vehicles to the total data amount generated by the source vehicles. The throughput improvement achieved by the proposed scheme is a result of our $G_{VRS}$ game paradigm. During the VRS operations, each vehicle in the proposed scheme can select the most efficient routing path with real-time adaptability and self-flexibility. Hence, we attained a higher dissemination throughput compared to other existing approaches, which are designed as lopsided and one-way methods and do not effectively adapt to the dynamic and diversified VANET conditions.

The crowdsensing success probability, which is shown in Figure 3, represents the efficiency of the VANET system. In the proposed scheme, we employed the learning-based triple-plane game model to perform control decisions in a distributed online manner. According to the interactive operations of the VCS, VSS, and VRS, our game-based approach can improve the crowdsensing success probability more effectively than the other schemes. The simulation results shown in Figures 1 to 3 demonstrate that the proposed scheme can attain an appropriate performance balance; in contrast, the MVIG [21], PVDD [22], and CRVC [23] schemes cannot offer this outcome under widely different and diversified VANET situations.

Table 1: System parameters used in the simulation experiments.

Parameter | Value | Description
[P^R_min, ..., P^R_j, ..., P^R_max] | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for cloud service (min = 1, ..., max = 5)
[r_min, ..., r_k, ..., r_max] | 1, 2, 3, 4, 5 | The amount of cloud services in terms of BCSUs
[P^I_min, ..., P^I_l, ..., P^I_max] | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for sensing service (min = 1, ..., max = 5)
θ | 0.2 | A coefficient factor of cost calculation
H | 0.8 | A profit factor of a vehicle if one BCSU is processed
α | 0.5 | The weight control factor between the distance and velocity
χ | 0.2 | A learning rate to update the L values
Q | 5 | The routing outcome at each time period
J | 0.1 | A coefficient factor to estimate the incentive payment
C^{I1}, C^{I2}, C^{I3}, C^{I4} | 0.1, 0.2, 0.3, 0.4 | Predefined cost for each task sensing
Φ^{I1}, Φ^{I2}, Φ^{I3}, Φ^{I4} | 10, 10, 10, 10 | Predefined profit for each sensing task
L1, L2, L3, L4 | 5, 5, 5, 5 | Predefined requirement for each sensing task
β | 0.3 | A control parameter to estimate the learning value Z
c^R | 0.3 | A discount factor to estimate the learning value Z


5. Summary and Conclusions

The VANET using vehicle-based sensory technology is becoming more popular. It can provide vehicular sensing, routing, and cloud services for 5G network applications. Therefore, the design of next-generation VANET management schemes is important to satisfy the new demands. Herein, we focused on the paradigm of a learning algorithm and game theory to design the VCS, VRS, and VSS algorithms. By combining the VCS, VRS, and VSS algorithms, a new triple-plane bargaining game model was developed to provide an appropriate performance balance. During the VANET operations, the RSUs learned their strategies better under dynamic VANET environments, and the vehicles considered the mutual-interaction relationships of their strategies. As game players, they considered the obtained information to adapt to the dynamics of the VANET environment and performed control decisions intelligently by self-adaptation. According to the unique features of VANETs, our joint design approach is suitable to provide satisfactory services under incomplete information environments. In the future, we would like to consider privacy issues, such as differential privacy, during the VANET operation. Furthermore, we will investigate probabilistic algorithms to estimate the service quality of sensing, routing, and cloud services. In addition, we plan to investigate smart city applications, where the different sensory information of a given area can be combined to provide a complete view of the smart city development.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-2018-0-01799) supervised by the IITP (Institute for Information and Communications Technology Promotion), and by a grant funded by the Korea government (MSIT) (no. 2017-0-00498, A Development of Deidentification Technique Based on Differential Privacy).

[Figure 1: Cloud service success ratio. Y-axis: normalized cloud service success ratio; X-axis: offered number of vehicles that passed over a RSU (the rate of the Poisson process).]

[Figure 2: Normalized dissemination throughput. Y-axis: normalized dissemination throughput; X-axis: offered number of vehicles that passed over a RSU (the rate of the Poisson process).]

[Figure 3: Crowdsensing success probability. Y-axis: normalized crowdsensing success probability; X-axis: offered number of vehicles that passed over a RSU (the rate of the Poisson process). All three figures compare the proposed scheme with the MVIG, PVDD, and CRVC schemes.]

References

[1] Y. Wang, Y. Liu, J. Zhang, H. Ye, and Z. Tan, "Cooperative store–carry–forward scheme for intermittently connected vehicular networks," IEEE Transactions on Vehicular Technology, vol. 66, no. 1, pp. 777–784, 2017.
[2] M. M. C. Morales, R. Haw, E.-J. Cho, C.-S. Hong, and S.-W. Lee, "An adaptable destination-based dissemination algorithm using a publish/subscribe model in vehicular networks," Journal of Computing Science and Engineering, vol. 6, no. 3, pp. 227–242, 2012.
[3] Y. Kim, S. Atchley, G. R. Vallee, S. Lee, and G. M. Shipman, "Optimizing end-to-end big data transfers over terabits network infrastructure," IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 1, pp. 188–201, 2017.
[4] J. Chen, G. Mao, C. Li, A. Zafar, and A. Y. Zomaya, "Throughput of infrastructure-based cooperative vehicular networks," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 11, pp. 2964–2979, 2017.
[5] Z. Su, Y. Hui, and Q. Yang, "The next generation vehicular networks: a content-centric framework," IEEE Wireless Communications, vol. 24, no. 1, pp. 60–66, 2017.
[6] S. Kim, "Effective crowdsensing and routing algorithms for next generation vehicular networks," Wireless Networks, 2017.
[7] C. Wang, Z. Zhang, S. Lu, and M. C. Zhou, "Estimating travel speed via sparse vehicular crowdsensing data," in Proceedings of IEEE World Forum on Internet of Things (WF-IoT), pp. 643–648, Reston, VA, USA, December 2016.
[8] L. Xiao, T. Chen, C. Xie, H. Dai, and P. Vincent, "Mobile crowdsensing games in vehicular networks," IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1535–1545, 2018.
[9] B. Oh, N. Yongchan, J. Yang, S. Park, J. Nang, and J. Kim, "Genetic algorithm-based dynamic vehicle route search using car-to-car communication," Advances in Electrical and Computer Engineering, vol. 10, no. 4, pp. 81–86, 2011.
[10] M. Chaqfeh, N. Mohamed, I. Jawhar, and J. Wu, "Vehicular cloud data collection for intelligent transportation systems," in Proceedings of 2016 3rd Smart Cloud Networks and Systems (SCNS), pp. 1–6, Dubai, UAE, December 2016.
[11] A. Ashok, S. Peter, and B. Fan, "Adaptive cloud offloading for vehicular applications," in Proceedings of 2016 IEEE Vehicular Networking Conference (VNC), pp. 1–8, Columbus, OH, USA, December 2016.
[12] J. Ahn, D. Shin, K. Kim, and J. Yang, "Indoor air quality analysis using deep learning with sensor data," Sensors, vol. 17, no. 11, pp. 1–13, 2017.
[13] S. Kim, "Timed bargaining-based opportunistic routing model for dynamic vehicular ad hoc network," EURASIP Journal on Wireless Communications and Networking, vol. 2016, no. 14, pp. 1–10, 2016.
[14] J.-H. Kim, K.-J. Lee, T.-H. Kim, and S.-B. Yang, "Effective routing schemes for double-layered peer-to-peer systems in MANET," Journal of Computing Science and Engineering, vol. 5, no. 1, pp. 19–31, 2011.
[15] I. Jang, D. Pyeon, S. Kim, and H. Yoon, "A survey on communication protocols for wireless sensor networks," Journal of Computing Science and Engineering, vol. 7, no. 4, pp. 231–241, 2013.
[16] N. Cheng, N. Zhang, N. Lu, X. Shen, J. W. Mark, and F. Liu, "Opportunistic spectrum access for CR-VANETs: a game-theoretic approach," IEEE Transactions on Vehicular Technology, vol. 63, no. 1, pp. 237–251, 2014.
[17] T. Chen, L. Wu, F. Wu, and S. Zhong, "Stimulating cooperation in vehicular ad hoc networks: a coalitional game theoretic approach," IEEE Transactions on Vehicular Technology, vol. 60, no. 2, pp. 566–579, 2011.
[18] D. B. Rawat, B. B. Bista, and G. Yan, "CoR-VANETs: game theoretic approach for channel and rate selection in cognitive radio VANETs," in Proceedings of International Conference on Broadband, Wireless Computing, Communication and Applications, pp. 94–99, Victoria, Canada, November 2012.
[19] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro clouds as virtual edge servers for efficient data collection," in Proceedings of ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services, pp. 31–35, Snowbird, UT, USA, October 2017.
[20] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro cloud in action: on gateway selection and gateway handovers," Ad Hoc Networks, vol. 78, pp. 73–83, 2018.
[21] M. Aloqaily, B. Kantarci, and H. T. Mouftah, "Multiagent/multiobjective interaction game system for service provisioning in vehicular cloud," IEEE Access, vol. 4, pp. 3153–3168, 2016.
[22] R. Kim, H. Lim, and B. Krishnamachari, "Prefetching-based data dissemination in vehicular cloud systems," IEEE Transactions on Vehicular Technology, vol. 65, no. 1, pp. 292–306, 2016.
[23] M. F. Feteiha and H. S. Hassanein, "Enabling cooperative relaying VANET clouds over LTE-A networks," IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1468–1479, 2015.
[24] K. Zhang, Y. Mao, S. Leng, Y. He, and Y. Zhang, "Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading," IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36–44, 2017.
[25] C. T. Hieu and C.-S. Hong, "A connection entropy-based multi-rate routing protocol for mobile ad hoc networks," Journal of Computing Science and Engineering, vol. 4, no. 3, pp. 225–239, 2010.
[26] S. Kim, "Adaptive ad-hoc network routing scheme by using incentive-based model," Ad Hoc and Sensor Wireless Networks, vol. 15, pp. 1–19, 2012.
[27] K. Han, C. Chen, Q. Zhao, and X. Guan, "Trajectory-based node selection scheme in vehicular crowdsensing," in Proceedings of IEEE/CIC International Conference on Communications in China, pp. 1–6, Shenzhen, China, November 2015.

Mobile Information Systems 11


(VRS). Vehicular cloud service is a new paradigm that exploits cloud computing to serve VANETs with several computational services [10]. Currently, the computing requirements for vehicular applications are increasing rapidly, particularly with the growing interest in embedding a new class of interactive applications and services. Within the communication range of each RSU, the RSU controls the VCS through interactions with the vehicles and provides computation capacity, storage, and data resources. Vehicles requiring resources can obtain the available resources via interconnected RSUs. Therefore, depending on the roadside infrastructure, the VCS offers a significant market opportunity and overcomes challenges that make the technology more cost-effective [10, 11].

With the rapid development of VANETs, VSSs have broad potential for enhancing the public sensing domain. Vehicles with OBUs can be considered mobile sensors that collect the local information requested by the RSUs. As a server, the RSU pays the VSS's participating vehicles according to their sensing qualities. As a type of vehicle-to-infrastructure communication, examples of VSSs are location-based road monitoring, real-time traffic condition reporting, pollution level measurements, and public information sharing. Owing to the mobility of the vehicles, the sensing coverage of location-based services can be expanded, and VSSs will benefit a wide scope of consumers [7, 8, 12].

In VANETs, vehicles are equipped with OBUs that can communicate with each other; this capability is also known as the VRS. The VRS requires a suitable V2V communication technology, especially a routing method that addresses the various challenges of VANETs. Typically, VANET is considered a subclass of MANET, and MANET-based routing protocols have been extensively studied. However, the VANET has several characteristics that distinguish it from MANET. To obtain adaptive vehicular routing protocols, high mobility, rapid network topology changes, and an unlimited power supply must be considered when designing an effective VRS algorithm [13-15].

In this study, we propose a new VANET control scheme by considering the issues of VCS, VRS, and VSS. Based on the joint design of different vehicular operations, our integrated approach can obtain a synergy effect while attaining an appropriate performance balance. However, it is an extremely challenging task to combine the VCS, VRS, and VSS algorithms into a holistic scheme; therefore, a new control paradigm is required. Currently, game theory is widely applied to analyze the performance of VANET systems. More specifically, VANET system agents are assumed to be game players, and the interaction among players can be formulated as a game model [6].

1.1. Motivation and Contribution. The aim of this study is to propose novel VCS, VRS, and VSS algorithms based on different game models. For VANET operations, game theory serves as an effective model of interactive situations among rational VANET agents. In the VCS algorithm, vehicles share the cloud resource in the RSU to increase the quality of service; the interactions among vehicles and RSUs are modeled as a non-cooperative game. In the VRS algorithm, vehicles attempt to increase the quality and reliability of routing services. In the VSS algorithm, vehicles act as mobile sensors to collect the required information. As system servers, RSUs operate to share and process several types of collected information. To collect information, RSUs use a learning-based incentive mechanism, and individual vehicles adopt a cooperative bargaining approach among their VCSs, VSSs, and VRSs to balance their performance.

The proposed VCS, VRS, and VSS algorithms may need to coexist and complement each other to meet the diverse requirements of VANETs. To investigate the strategic interactions among the different control algorithms, we develop a new triple-plane game model across the VCS, VRS, and VSS algorithms. For effective VANET management, vehicles in the VSS algorithm select their strategies from the viewpoint of a bargaining solution. In a distributed manner, the idea of the classical Nash bargaining solution is used to implement a triple-plane bargaining process in each vehicle. The proposed triple-plane bargaining approach, which imitates an interactive sequential game process, is suitable for negotiating conflicts of interest while ensuring system practicality.

The main contributions of this study can be summarized as follows: (i) we developed novel VCS, VRS, and VSS algorithms for VANETs; (ii) we employed a new game model to provide satisfactory services; (iii) we adopted a distributed triple-plane bargaining approach to balance contradictory requirements; (iv) self-adaptability is effectively implemented in each control decision process; and (v) we captured the dynamic interactions of game players depending on their different viewpoints. According to the reciprocal combination of optimality and practicality, our approach can provide a desirable solution for real-world VANET applications. To the best of our knowledge, little research has been conducted on integrated-bargaining-based algorithms for VANET control problems.

1.2. Organization. The remainder of this article is organized as follows. In the next section, we review some related VANET schemes and their problems. In Section 3, we provide a detailed description of the VCS, VRS, and VSS algorithms before defining the triple-plane bargaining game model. In particular, this section provides fresh insights into the design benefits of our game-based approach. For convenience, the primary steps of the proposed scheme are listed. In Section 4, we validate the performance of the proposed scheme by means of a comparison with some existing methods. Finally, we present our conclusions and discuss some ideas for future work.

2. Related Work

There has been considerable research into the design of VANET control schemes. Cheng et al. [16] investigate opportunistic spectrum access for cognitive radio vehicular ad hoc networks. In particular, the spectrum access process is modeled as a non-cooperative congestion game. Therefore, the proposed scheme in [16] opportunistically accesses licensed channels in a distributed manner and is designed to achieve a pure Nash equilibrium with high efficiency and fairness. By using the statistics of channel availability, this approach can exploit the spatial and temporal access opportunities of the licensed spectrum. Through simulation results, the authors confirm that the proposed approach achieves higher utility and fairness compared to a random access scheme [16].

The paper [17] focuses on how to stimulate message forwarding in VANETs. To implement this mechanism, it is crucial to make sure that vehicles have incentives to forward messages for others. In particular, this study adopts coalitional game theory to solve the forwarding cooperation problem and rigorously analyze it in VANETs. The main goal is to ensure that, whenever a message needs to be forwarded in a VANET, all involved vehicles get incentives to form a grand coalition. In addition, this approach is extended to take into account the limited storage space of each node. Experiments on testbed trace data verify that the proposed method is effective in stimulating cooperation for message forwarding in VANETs [17].

In [18], the main focus is to achieve a higher throughput by selecting the best data rate and the best network or channel based on the cognitive radio VANET mechanism. By changing wireless channels and data rates in heterogeneous wireless networks, the scheme in [18] is designed as a game-theoretic model to achieve a higher throughput for vehicular users. In addition, a new idea is adopted to find the optimal number of APs for given VANET scenarios [18]. Even though the papers [16-18] have considered game theory-based VANET control algorithms, they are designed as one-sided protocols that focus on a specific control issue. Therefore, the system interaction process is not fully investigated.

Recently, the vehicular microcloud technique has been considered one of the solutions to address the challenges and issues of VANETs. It is a new hybrid technology that has a remarkable impact on traffic management and road safety. By instantly clustering, cars help aggregate collected data that are transferred to some backend [19, 20]. The paper [19] introduces the concept of the vehicular microcloud as a virtual edge server for efficient connections between cars and backend infrastructure. To realize such virtual edge servers, a map-based clustering technique is needed to cope with the dynamicity of vehicular networks. This approach can optimize data processing and aggregation before sending the data to a backend [19].

F. Hagenauer et al. propose an idea in which vehicular microcloud clusters of parked cars act as RSUs, investigating a virtual vehicular network infrastructure [20]. In particular, they focus on two control questions: (i) the selection of gateway nodes to connect moving cars to the cluster and (ii) a mechanism for seamless handover between gateways. For the first question, they select only a subset of all parked cars as gateways; this gateway selection helps significantly to reduce the channel load. For the second question, they enable driving cars to find better suited gateways while allowing the driving car to maintain connections [20]. The ideas in [19, 20] are very interesting; however, they can be used only in special situations. Therefore, it is difficult to apply these ideas to general VANET operation scenarios.

During VANET operations, there are two fundamental techniques to disseminate data for vehicular applications: vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communications. Recently, Kim proposed novel V2I and V2V algorithms for effective VANET management [6]. For V2I communications, a novel vertical game model is developed as an incentive-based crowdsensing algorithm. For V2V communications, a new horizontal game model is designed to implement a learning-based intervehicle routing protocol. Under dynamically changing VANET environments, the scheme developed in [6] is formulated as a dual-plane vertical-horizontal game model. By considering real-world VANET environments, this approach imitates an interactive sequential game process by using self-monitoring and distributed learning techniques. It is suitable for ensuring system practicality while effectively maximizing the expected benefits [6].

In this paper, we develop a novel control scheme for vehicular network services. Our proposed scheme also includes sensing and routing algorithms for the operations of VANETs; therefore, our proposed scheme may look similar to the method in [6]. The resemblance can be summarized as follows: (i) in the vehicular routing algorithm, the link status and path cost estimation methods and each vehicle's utility function are designed in the same manner; (ii) in the crowdsensing algorithm, the utility functions of vehicle and RSU are defined similarly; and (iii) in the learning process, strategy selection probabilities are estimated based on the Boltzmann distribution equation. While both schemes have some similarities, there are several key differences. First, the scheme in [6] is designed as a dual-plane game model, whereas our proposed scheme is extended to a triple-plane game model. Second, a new cloud computing algorithm is developed to share the cloud resource in the RSU while increasing the quality of service. Third, to reduce computation complexity, the entropy-based route decision mechanism is replaced by an online route decision mechanism. Fourth, the scheme in [6] is modeled as a non-cooperative game approach, whereas our proposed scheme adopts a dynamic bargaining solution; therefore, in our proposed sensing algorithm, each individual vehicle decides its strategy in a cooperative game manner. Fifth, our proposed scheme fully considers cloud computing, sensing, and routing payoffs and calculates the bargaining powers to respond to current VANET situations. Most of all, the main difference between the scheme in [6] and our proposed scheme is the control paradigm. Simply put, in the paper [6], the V2I and V2V algorithms are combined merely based on a competitive approach. However, in this paper, the sensing, cloud computing, and routing algorithms work together in a collaborative fashion and act cooperatively with each other based on an interactive feedback process.

The Multiobjective Vehicular Interaction Game (MVIG) scheme [21] proposes a new game-theoretic scheme to control on-demand service provision in a vehicular cloud. Based on a game-theoretic approach, this scheme can balance the overall game while enhancing vehicles' service costs. The game system in the MVIG scheme differs from other conventional models as it allows vehicles to prioritize their preferences. In addition, a quality-of-experience framework is also developed to provision various types of services in a vehicular cloud; it is a simple but effective model to determine vehicle preferences while ensuring fair game treatment. The simulation results show that the MVIG scheme outperforms other conventional models [21].

The Prefetching-Based Vehicular Data Dissemination (PVDD) scheme [22] devises a vehicle route-based data prefetching approach that improves data transmission accessibility under dynamic wireless connectivity and limited data storage environments. More concretely, the PVDD scheme develops two control algorithms to determine how to prefetch a data set from a data center to roadside wireless access points. Based on a greedy approach, the first control algorithm is developed to solve the dissemination problem. Based on an online learning manner, the second control algorithm gradually learns the success rate of unknown network connectivity and determines an optimal binary decision matrix at each iteration. Finally, this study proves that the first control algorithm can find a suboptimal solution in polynomial time and that the second control algorithm converges to a globally optimal solution in a certain number of iterations, using regret analysis [22].

The Cooperative Relaying Vehicular Cloud (CRVC) scheme [23] proposes a novel cooperative vehicular relaying algorithm over a long-term evolution-advanced (LTE-A) network. To maximize the vehicular network capacity, this scheme uses vehicles as cooperative relaying nodes in broadband cellular networks. With its new functionalities, the CRVC scheme can (i) reduce power consumption, (ii) provide a higher throughput, (iii) lower operational expenditures, (iv) ensure more flexibility, and (v) increase the coverage area. In a heavily populated urban area, the CRVC scheme is useful owing to the large number of relaying vehicles. Finally, the performance improvement is demonstrated through simulation analysis [23]. In this paper, we compare the performance of our proposed scheme with the existing MVIG [21], PVDD [22], and CRVC [23] schemes.

3. The Proposed VCS, VRS, and VSS Algorithms

In this section, the proposed VCS, VRS, and VSS algorithms are presented in detail. Based on a learning algorithm and a game-based approach, these algorithms form a new triple-plane bargaining game model to adapt to the fast-changing VANET environment.

3.1. Game Models for the VCS, VRS, and VSS Algorithms. For the operation of a VANET system, we develop three different game models for the VCS, VRS, and VSS algorithms. As game players, vehicles and RSUs select their strategies based on the interactions of the other players. In our proposed scheme, the VCS and VRS algorithms are formulated as non-cooperative game models, and the VSS algorithm is formulated as a triple-plane bargaining model driven by the cooperation, coordination, and collaboration of the VCS, VRS, and VSS processes. First, for the VCS algorithm, we formally define the game model $\mathbb{G}_{VCS} = \{\mathbb{N}, \mathbb{C}, \mathbb{S}_R, \mathbb{S}^V_{V_i \in \mathbb{V}}, U_R, U^V_{V_i \in \mathbb{V}}, \mathcal{L}_{P^R_j \in \mathbb{S}_R}, T\}$ at each time period of gameplay:

(i) $\mathbb{N}$ is the finite set of VCS game players, $\mathbb{N} = R \cup \mathbb{V}$, where $R$ represents an RSU and $\mathbb{V} = \{V_1, \ldots, V_i, \ldots, V_k\}$ is the set of vehicles in $R$'s coverage area.

(ii) $\mathbb{C}$ is the cloud computation capacity, i.e., the number of CPU cycles per second, in $R$.

(iii) $\mathbb{S}_R = \{P^R_{\min}, \ldots, P^R_j, \ldots, P^R_{\max}\}$ is the set of $R$'s strategies, where $P^R_j$ is the price level to execute one basic cloud service unit (BCSU). $\mathbb{S}^V_{V_i \in \mathbb{V}}$ is $V_i$'s set of available strategies, $\mathbb{S}^V_{V_i} = [r^{V_i}_{\min}, \ldots, r^{V_i}_k, \ldots, r^{V_i}_{\max}]$, where $r^{V_i}$ represents the amount of cloud services of $V_i$ and is specified in terms of BCSUs.

(iv) $U_R$ is the payoff received by the RSU, and $U^V_{V_i \in \mathbb{V}}$ is the payoff received by $V_i$ during the VCS process.

(v) $\mathcal{L}_{P^R_j \in \mathbb{S}_R}$ is $R$'s learning value for the strategy $P^R_j$; $\mathcal{L}$ is used to estimate the probability distribution ($\mathcal{P}^R$) for $R$'s next strategy selection.

(vi) $T = \{H_1, \ldots, H_t, H_{t+1}, \ldots\}$ denotes time, which is represented by a sequence of time steps with imperfect information for the VCS game process.

For the VRS algorithm, we define the game model $\mathbb{G}_{VRS} = \{\mathbb{v}, N_{V_i}, A_{V_i \in \mathbb{v}}, U_{V_i \in \mathbb{v}}, T\}$ at each time period of gameplay. $\mathbb{G}_{VRS}$ formulates the interactions of vehicles for VANET routing operations:

(i) $\mathbb{v}$ is the finite set of game players, $\mathbb{v} = \{V_1, \ldots, V_n\}$, where $n$ is the number of vehicles in the $\mathbb{G}_{VRS}$ game.

(ii) $N_{V_i}$ is the set of $V_i$'s one-hop neighboring vehicles.

(iii) $A_{V_i \in \mathbb{v}} = \{a^{V_m}_{V_i} \mid V_m \in N_{V_i}\}$ is the finite set of $V_i$'s available strategies, where $a^{V_m}_{V_i}$ represents the selection of $V_m$ to relay a routing packet.

(iv) $U_{V_i \in \mathbb{v}}$ is the payoff received by $V_i$ during the VRS process.

(v) $T = \{H_1, \ldots, H_t, H_{t+1}, \ldots\}$ is the same as the $T$ in $\mathbb{G}_{VCS}$.

For the VSS algorithm, we formally define the game model $\mathbb{G}_{VSS} = \{\mathbb{N}, \mathbb{I}, \mathbb{S}^R, \mathbb{S}^V_{V_i \in \mathbb{V}}, u_R, u^V_{V_i \in \mathbb{V}}, \mathcal{Z}_{P^{I_j}_i \in \eta^{I_j}_R}, T\}$ at each time period of gameplay:

(i) $\mathbb{N} = R \cup \mathbb{V}$ is the finite set of game players for the $\mathbb{G}_{VSS}$ game; $\mathbb{N}$ is the same as $\mathbb{N}$ in $\mathbb{G}_{VCS}$.

(ii) $\mathbb{I}$ is the finite set of sensing tasks, $\mathbb{I} = \{I_1, \ldots, I_s\}$, in $R$, where $s$ is the total number of sensing tasks.

(iii) $\mathbb{S}^R = [\eta^{I_1}_R, \ldots, \eta^{I_j}_R, \ldots, \eta^{I_s}_R]$ is a vector of $R$'s strategies, where $\eta^{I_j}_R$ represents the strategy set for the task $I_j$. In the $\mathbb{G}_{VSS}$ game, $\eta^{I_j}_R$ is the set of price levels for the crowdsensing work for the task $I_j$ during a time period ($H \in T$). As in the $\mathbb{G}_{VCS}$ game, $\eta^{I_j}_R$ is defined as $\eta^{I_j}_R = \{P^{I_j}_i \mid P^{I_j}_i \in [P^{I_j}_{\min}, \ldots, P^{I_j}_l, \ldots, P^{I_j}_{\max}]\}$.

(iv) $\mathbb{S}^V_{V_i} = [\mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i}]$ is the vector of $V_i$'s strategies, where $\mu^{I_j}_{V_i}$ represents the strategy for $I_j$. In the $\mathbb{G}_{VSS}$ game, $\mu^{I_j}_{V_i}$ indicates $V_i$'s active VSS participation (i.e., $\mu^{I_j}_{V_i} = 1$) or not (i.e., $\mu^{I_j}_{V_i} = 0$) for the task $I_j$ during $H$.

(v) $u_R$ is the payoff received by the RSU, and $u^V_{V_i \in \mathbb{V}}$ is the payoff received by $V_i$ during the VSS process.

(vi) $\mathcal{Z}_{P^{I_j}_i \in \eta^{I_j}_R}$ is the learning value for the strategy $P^{I_j}_i$; $\mathcal{Z}$ is used to estimate the probability distribution ($\mathcal{P}^R$) for $R$'s next strategy selection.

(vii) $T = \{H_1, \ldots, H_t, H_{t+1}, \ldots\}$ is the same as $T$ in $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VRS}$.

3.2. The VCS Algorithm Based on a Non-Cooperative Game Model. In the next-generation VANET paradigm, diverse and miscellaneous applications require significant computation resources. However, in general, the computational capabilities of the vehicles are limited. To address this resource restriction problem, vehicular cloud technology is widely considered a new paradigm to improve the VANET service performance. In the VCS process, applications can be offloaded to the remote cloud server to improve resource utilization and computation performance [24].

Vehicular cloud servers are located in RSUs and execute the computation tasks received from vehicles. However, the service area of each RSU may be limited by its radio coverage. Owing to their high mobility, vehicles may pass through several RSUs during the task offloading process; therefore, service results must be migrated to another RSU that can reach the corresponding vehicle. From the viewpoints of RSUs and vehicles, payoff maximization is their main concern, and to reach a win-win situation, they rationally select their strategies. During the VCS process, each RSU is a proposer and the individual vehicles are responders; they interact with each other for their objectives during the VCS process.

To design our VCS algorithm, we consider a VCS platform consisting of one RSU ($R$) and a set of mobile vehicles ($\mathbb{V}$); they formulate the game model $\mathbb{G}_{VCS}$. As a proposer, $R$'s strategy ($P^R_j \in \mathbb{S}_R$) indicates the offered price for one BCSU process. $R$ has its own utility function, which represents its net gain from providing the VCS process. $R$'s utility function with the strategy $P^R_j$ at the $t$th time step $H_t$, $U_R(\mathbb{V}, P^R_j(H_t))$, is given by

$$U_R\big(\mathbb{V}, P^R_j(H_t)\big) = \sum_{V_i \in \mathbb{V}} \Big( \xi_{V_i}(H_t) \times \big( \big\{P^R_j(H_t) \times r^{V_i}_k(H_t)\big\} - \mathcal{C}_R\big(r^{V_i}_k(H_t)\big) \big) \Big),$$
$$\text{s.t.}\quad P^R_j(H_t) \in \mathbb{S}_R,\qquad \mathcal{C}_R\big(r^{V_i}_k(H_t)\big) = \Theta^R_{BCSU} \times r^{V_i}_k(H_t),\qquad \Theta^R_{BCSU} = \frac{\theta \times M}{\mathbb{C}},$$
$$\xi_{V_i}(H_t) = \begin{cases} 1, & \text{if } V_i \text{ selects } r^{V_i}_k \in \mathbb{S}^V_{V_i} \text{ at } H_t,\\ 0, & \text{otherwise (i.e., no VCS of } V_i\text{)}, \end{cases} \tag{1}$$

where $\mathcal{C}_R(r^{V_i}_k(H_t))$ is the cost function to execute $V_i$'s cloud service ($r^{V_i}_k(H_t)$) at time $H_t$, $\Theta^R_{BCSU}$ is the processing cost to execute one BCSU, $M$ is the currently used capacity of $\mathbb{C}$, and $\theta$ is the coefficient factor of the cost calculation. In practice, the actual cost $\mathcal{C}_R(\cdot)$ is usually unknown to the vehicles.

As a responder, the vehicle $V_i$'s strategy ($\mathbb{S}^V_{V_i}$) represents the amount of cloud service. The payoff of $V_i$ can be defined as a function of the task offload level ($r^{V_i}$) and the service price ($P^R$). Therefore, $V_i$'s utility function with the $P^R_j$ and $r^{V_i}_k$ strategies at time $H_t$, $U^V_{V_i \in \mathbb{V}}(P^R_j(H_t), r^{V_i}_k(H_t))$, is computed as follows:

$$U^V_{V_i \in \mathbb{V}}\big(P^R_j(H_t), r^{V_i}_k(H_t)\big) = \vartheta_{V_i}(H_t) \times \Big( \big(\mathcal{H}^{V_i} \times r^{V_i}_k(H_t)\big) - \big(P^R_j(H_t) \times r^{V_i}_k(H_t)\big) \Big),$$
$$\text{s.t.}\quad \vartheta_{V_i}(H_t) = \begin{cases} 1, \text{ and } \Gamma^{V_i}_{H_{t+1}} = \Gamma^{V_i}_{H_t} - \big(P^R_j(H_t) \times r^{V_i}_k(H_t)\big), & \text{if } \Gamma^{V_i}_{H_t} \ge \big(P^R_j(H_t) \times r^{V_i}_k(H_t)\big) \text{ and } r^{V_i}_k \in \mathbb{S}^V_{V_i} \text{ is selected},\\ 0, & \text{otherwise, i.e., } \Gamma^{V_i}_{H_t} < \big(P^R_j(H_t) \times r^{V_i}_k(H_t)\big), \end{cases} \tag{2}$$

where $\mathcal{H}^{V_i}$ is $V_i$'s profit factor if one BCSU is processed, and $\Gamma^{V_i}_{H_t}$ is the amount of $V_i$'s virtual money at time $H_t$; if $V_i$ has enough money to pay the VCS price, $V_i$ can request its cloud service ($r^{V_i}_k$). As $\mathbb{G}_{VCS}$ game players, the RSU and the vehicles attempt to maximize their utility functions. Interactions among game players continue repeatedly during the VCS process over time. In particular, the RSU should consider the reactions from vehicles to determine the price strategy ($P^R$). In this study, we develop a new learning method to decide an effective price policy for cloud services. If the strategy $P^R_j$ is selected at time $H_{t-1}$, the RSU updates the strategy $P^R_j$'s learning value for the next time step, $\mathcal{L}^{P^R_j}_{H_t}$, according to the following method:

$$\mathcal{L}^{P^R_j}_{H_t} = \max\left\{ \left[ \Big((1-\chi) \times \mathcal{L}^{P^R_j}_{H_{t-1}}\Big) + \left(\chi \times \log_2\left(\frac{U_R\big(\mathbb{V}, P^R_j(H_{t-1})\big)}{\sum_{P^R_k \in \mathbb{S}_R}\big(\mathcal{L}^{P^R_k}_{H_{t-1}} / \|\mathbb{S}_R\|\big)}\right)\right) \right],\; 0 \right\}, \tag{3}$$

where $\|\mathbb{S}_R\|$ is the cardinality of $\mathbb{S}_R$ and $\chi$ is a learning rate that models how the $\mathcal{L}$ values are updated. To implement the price learning mechanism in the RSU, a strategy selection distribution ($\mathcal{P}_R \in \mathcal{P}^R$) for the VCS is defined based on the $\mathcal{L}(\cdot)$ values. During the $\mathbb{G}_{VCS}$ game process, we sequentially determine $\mathcal{P}^R(H_t) = \{\mathcal{P}^R_{P^R_{\min}}(H_t), \ldots, \mathcal{P}^R_{P^R_j}(H_t), \ldots, \mathcal{P}^R_{P^R_{\max}}(H_t)\}$ as the probability distribution of $R$'s strategy selection at time $H_t$. The $P^R_j$ strategy selection probability at time $H_t$, $\mathcal{P}^R_{P^R_j}(H_t)$, is defined as follows:

$$\mathcal{P}^R_{P^R_j}(H_t) = \frac{\exp\big(\mathcal{L}^{P^R_j}_{H_{t-1}}\big)}{\sum_{k=\min}^{\max} \exp\big(\mathcal{L}^{P^R_k}_{H_{t-1}}\big)}. \tag{4}$$
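The following minimal Python sketch illustrates equations (3) and (4): the learning value of the last-used price level is updated toward the log-ratio of its realized utility over the average learning value, and the next price is drawn from a Boltzmann (softmax) distribution. The names and numbers are illustrative assumptions, not part of the scheme's specification.

```python
# Sketch of the price-learning update (3) and Boltzmann selection (4).
import math
import random

def update_learning_value(L, chosen_price, realized_utility, chi=0.2):
    avg_L = sum(L.values()) / len(L)                      # (1/|S_R|) * sum_k L_k
    ratio = realized_utility / avg_L if avg_L > 0 else 1.0
    L[chosen_price] = max((1 - chi) * L[chosen_price] + chi * math.log2(max(ratio, 1e-9)), 0.0)

def select_price(L):
    prices = list(L.keys())
    weights = [math.exp(L[p]) for p in prices]            # equation (4)
    return random.choices(prices, weights=weights)[0]

L = {p: 1.0 for p in [0.2, 0.4, 0.6, 0.8, 1.0]}           # equally initialized (Step 2)
update_learning_value(L, chosen_price=0.6, realized_utility=1.8)
print(select_price(L))
```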

3.3. The VRS Algorithm Based on a Non-Cooperative Game Model. The main goal of the VRS is to transmit data from a source vehicle to a destination vehicle via wireless multihop transmission techniques, in which the intermediate vehicles in a routing path should relay data as soon as possible from the source to the destination [6, 13, 25]. In this section, we develop a non-cooperative game model ($\mathbb{G}_{VRS}$) for vehicular routing services. As game players in $\mathbb{G}_{VRS}$, vehicles dynamically decide their routing routes. To configure the routing topology, a link cost ($\mathcal{LC}$) is defined to relatively handle dynamic VANET conditions. In this study, we define the wireless link status $\mathcal{LC}^{V_j}_{V_i}(H_t)$ between $V_i$ and $V_j$ at time $H_t$ as follows:

$$\mathcal{LC}^{V_j}_{V_i}(H_t) = \left( (1-\alpha) \times \left( \frac{d^{V_j}_{V_i}(H_t)}{d^M_{V_i}} \right) \right) + \left( \alpha \times \frac{\big\{\vec{v}_j(H_t)\big\}}{\max_{V_h \in M^{H_t}_{V_i}}\big\{\vec{v}_h(H_t)\big\}} \right), \tag{5}$$

where $d^M_{V_i}$ is the maximum coverage range of $V_i$, and $d^{V_j}_{V_i}(H_t)$ is the relative distance between $V_i$ and $V_j$ at time $H_t$. Let $M^{H_t}_{V_i}$ be the set of neighboring vehicles of $V_i$ at time $H_t$, and let $\vec{v}_j(H_t)$ be the relative velocity of $V_j$ at time $H_t$. For the adaptive $\mathcal{LC}^{V_j}_{V_i}(H_t)$ estimation, the parameter $\alpha$ controls the relative weight between the distance and the velocity of the corresponding vehicle.

For the VRS, a source vehicle configures a multihop routing path by using the Bellman-Ford algorithm. As a routing game player, each source vehicle attempts to minimize the total path cost ($\mathcal{PC}$). From the source vehicle $V_s$ to the destination vehicle $V_d$, the total path cost $\mathcal{PC}^{V_d}_{V_s}(H_t)$ at time $H_t$ is computed as the sum of all relay link costs as follows:

$$\mathcal{PC}^{V_d}_{V_s}(H_t) = \min\left\{ \sum_{\mathcal{LC}_{Veh}(H_t) \in Path^{V_d}_{V_s}} \mathcal{LC}_{Veh}(H_t) \right\}, \tag{6}$$

where $Path^{V_d}_{V_s}$ is the routing path from $V_s$ to $V_d$. Usually, a vehicle acting as a relay node has to sacrifice its energy and bandwidth; therefore, an incentive payment algorithm should be developed to guide selfish vehicles toward the cooperative intervehicle routing paradigm [6, 26]. By paying the incentive cost to relaying vehicles, the developed routing algorithm stimulates cooperative actions among selfish relay vehicles. As a $\mathbb{G}_{VRS}$ game player, a source vehicle pays its virtual money ($\Gamma$) to disseminate the routing packet. If the source vehicle $V_i$ selects the $Path^{V_d}_{V_s}$ according to (6), its utility function at time $H_t$, $U_{V_i}(Path^{V_d}_{V_s}, H_t)$, can be defined as

$$U_{V_i}\big(Path^{V_d}_{V_s}, H_t\big) = \begin{cases} \Big(Q^{V_i}_{H_t} - \big(J \times \mathcal{PC}^{V_d}_{V_s}(H_t)\big)\Big), \text{ and } \Gamma^{V_i}_{H_{t+1}} = \Gamma^{V_i}_{H_t} - \big(J \times \mathcal{PC}^{V_d}_{V_s}(H_t)\big), & \text{if } \Gamma^{V_i}_{H_t} \ge \big(J \times \mathcal{PC}^{V_d}_{V_s}(H_t)\big),\\ 0, & \text{otherwise, i.e., } \Gamma^{V_i}_{H_t} < \big(J \times \mathcal{PC}^{V_d}_{V_s}(H_t)\big), \end{cases} \tag{7}$$

where $Q^{V_i}_{H_t}$ and $\Gamma^{V_i}_{H_t}$ are the outcome of $V_i$'s routing and the amount of $V_i$'s virtual money at time $H_t$, respectively, and $J$ is the coefficient factor to estimate the incentive payment for relaying vehicles.

3.4. The VSS Algorithm Based on the Triple-Plane Bargaining Game. Recently, the VSS has attracted great interest and has become one of the most valuable features of the VANET system. Some VANET applications involve the generation of huge amounts of sensed data. With the advance of vehicular sensor technology, vehicles equipped with OBUs are expected to effectively monitor the physical world. In VANET infrastructures, RSUs request a number of sensing tasks while collecting the local information sensed by vehicles. According to the requested tasks, the OBUs in vehicles sense the requested local information and transmit their sensing results to the RSU. Although some excellent research has been done on the VSS process, there are still significant challenges that need to be addressed. Most of all, conducting a sensing task and reporting the sensed data usually consume vehicle resources; therefore, selfish vehicles should be paid by the RSU as compensation for their VSSs. To recruit an optimal set of vehicles to cover the monitoring area while ensuring that vehicles provide their full sensing efforts, the RSU stimulates sufficient vehicles with proper incentives to fulfill the VSS tasks [6, 8, 27].

In this sense, we design our VSS algorithm to determine the incentive for vehicles to complete their VSS tasks. Since vehicles may make different contributions of sensing work, the RSU may issue appropriate incentives in return for the collected sensing data [6]. The major goal of our algorithm is that the interactive trading between the RSU and the vehicles should benefit both of them; this game situation can be modeled effectively through $\mathbb{G}_{VSS}$. In the VSS algorithm, we consider a VSS platform consisting of one RSU ($R$), a set of mobile vehicles ($\mathbb{V}$), and sensing tasks ($\mathbb{I}$); they formulate the sensing game model $\mathbb{G}_{VSS}$. To capture its own heterogeneity, a vehicle's strategy ($\mu \in \mathbb{S}^V_{V_i}$) indicates the contribution for a specific task ($I \in \mathbb{I}$). For example, $V_i$ can actually contribute to the task $I_s$ during $H_t$ (i.e., $\mu^{I_s}_{V_i} = 1$) or not (i.e., $\mu^{I_s}_{V_i} = 0$), where $\mu^{I_s}_{V_i} \in \mathbb{S}^V_{V_i}$. Each vehicle has its own utility function, which represents its net gain. $V_i$'s utility function at time $H_t$, $u^V_{V_i}(\mathbb{S}^V_{V_i}, H_t)$, is given by

$$u^V_{V_i}\Big(\mathbb{S}^V_{V_i} = \big[\mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i}\big], H_t\Big) = \sum_{I_j = I_1}^{I_s} \Big( \mu^{I_j}_{V_i}(H_t) \times \big( \eta^{I_j}_R(H_t) - C^{I_j}_{V_i}(H_t) \big) \Big),$$
$$\text{s.t.}\quad \eta^{I_j}_R(H_t) \in \mathbb{S}^R,\qquad \mu^{I_j}_{V_i}(H_t) = \begin{cases} 1, & \text{if } V_i \text{ contributes to } I_j \text{ at } H_t,\\ 0, & \text{otherwise}, \end{cases} \tag{8}$$

where $C^{I_j}_{V_i}(H_t)$ and $\eta^{I_j}_R(H_t)$ are $V_i$'s cost and the RSU's incentive payment for the $I_j$ sensing at time $H_t$, respectively. Usually, game players simply attempt to maximize their own utility functions. However, vehicles in the $\mathbb{G}_{VSS}$ game should consider not only the VSS but also the VCS and the VRS. From the point of view of the vehicles, this situation can be modeled effectively through a cooperative bargaining process. In this study, we adopt a well-known bargaining solution concept, the Nash bargaining solution (NBS), to effectively design the vehicle's VSS strategy decision mechanism; the NBS is an effective tool to achieve a mutually desirable solution among conflicting requirements.

In our proposed scheme, vehicles can earn virtual money ($\Gamma$) from the VSS incentive payments and spend their $\Gamma$ on their VCS and VRS. For this reason, the strategy decision in the VSS algorithm might directly affect the payoffs of $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VRS}$. During VANET operations, the individual vehicle $V_i$ decides its VSS strategy ($\mathbb{S}^V_{V_i}$) to maximize the combined payoff $CP_{V_i}(\mathbb{S}^V_{V_i}, H_t)$ at time $H_t$:

$$\max CP_{V_i}\big(\mathbb{S}^V_{V_i}, H_t\big) = \max_{\mathbb{S}^V_{V_i} = \big[\mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i}\big]} \prod_{1 \le j \le 3} \big(X_j - d_j\big)^{\psi_j},$$
$$\text{s.t.}\quad \sum_{j=1}^{3} \psi_j = 1,\qquad X_1 = U^V_{V_i \in \mathbb{V}}\big(P^R_j(H_t), r^{V_i}_k(H_t)\big),\qquad X_2 = U_{V_i}\big(Path^{V_d}_{V_s}, H_t\big),\qquad X_3 = u^V_{V_i}\Big(\mathbb{S}^V_{V_i} = \big[\mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i}\big], H_t\Big), \tag{9}$$

where $d_j$ is a disagreement point; a disagreement point $d_{i, 1 \le i \le n}$ represents the minimum payoff of each game model. Therefore, $d$ is the least guaranteed payoff for the game players (i.e., zero in our model). $\psi_{j, 1 \le j \le 3}$ is a bargaining power, which is the relative ability to exert influence over the bargaining process. Usually, the bargaining solution is strongly dependent on the bargaining powers. In the $\mathbb{G}_{VSS}$ game, $\psi_{j, 1 \le j \le 3}$ can be estimated as follows:

$$\psi_{j, 1 \le j \le 3} = \frac{Y_j}{\big(Y_1 + Y_2 + Y_3\big)},$$
$$\text{s.t.}\quad Y_1 = \max\left\{ \Big(\mathcal{R}^{\Gamma_{VCS}(V_i)}_{H_t} - \mathcal{U}^{\Gamma_{VCS}(V_i)}_{H_t}\Big),\; \Gamma^{V_i}_{H_t} \right\},\qquad Y_2 = \max\left\{ \Big(\mathcal{R}^{\Gamma_{VRS}(V_i)}_{H_t} - \mathcal{U}^{\Gamma_{VRS}(V_i)}_{H_t}\Big),\; \Gamma^{V_i}_{H_t} \right\},\qquad Y_3 = \Gamma^{V_i}_{H_t}, \tag{10}$$

where $\mathcal{R}^{\Gamma_{VCS}(V_i)}_{H_t}$ and $\mathcal{R}^{\Gamma_{VRS}(V_i)}_{H_t}$ are the requested virtual money to maximize $V_i$'s payoffs in $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VRS}$ at time $H_t$, respectively, and $\mathcal{U}^{\Gamma_{VCS}(V_i)}_{H_t}$ and $\mathcal{U}^{\Gamma_{VRS}(V_i)}_{H_t}$ are the virtual money already used by $V_i$ in $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VRS}$ at time $H_t$, respectively. Finally, individual vehicles select their strategies $\mathbb{S}^V$ according to (9) and (10). In a distributed online fashion, each vehicle ensures a well-balanced performance among the VCS, VRS, and VSS processes.
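For illustration of the bargaining step in (9) and (10), the following minimal Python sketch enumerates a vehicle's 0/1 participation vectors, evaluates the weighted Nash product of its VCS, VRS, and VSS payoffs (with disagreement points $d_j = 0$), and keeps the best vector. The payoff stubs and numbers are illustrative assumptions, not the exact utilities of the scheme.

```python
# Sketch of the NBS-based VSS strategy selection, equations (9)-(10).
from itertools import product

task_prices = {"I1": 0.6, "I2": 0.4, "I3": 0.8, "I4": 0.2}   # eta_R per task (assumed)
task_costs  = {"I1": 0.1, "I2": 0.2, "I3": 0.3, "I4": 0.4}   # C_Vi per task (Table 1)
X1, X2 = 1.2, 0.9                  # current VCS and VRS payoffs of this vehicle (stubs)
Y = [1.5, 1.0, 0.5]                # Y_1, Y_2, Y_3 from equation (10) (stubs)
psi = [y / sum(Y) for y in Y]      # bargaining powers, summing to 1

best_vec, best_cp = None, -1.0
for mu in product([0, 1], repeat=len(task_prices)):
    X3 = sum(m * (task_prices[t] - task_costs[t]) for m, t in zip(mu, task_prices))
    cp = 1.0
    for x, p in zip([X1, X2, X3], psi):       # prod_j (X_j - d_j)^psi_j with d_j = 0
        cp *= max(x, 0.0) ** p
    if cp > best_cp:
        best_vec, best_cp = mu, cp
print(best_vec, round(best_cp, 3))
```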

In the VSS algorithm, the vehicles' actual sensing costs $C^{\mathbb{I}}_V(\cdot)$ and each individual vehicle's situation are usually unknown to the RSU. Under this asymmetric information situation, the RSU should learn the vehicles' circumstances during the VSS process. As in the VCS process, each RSU is a proposer and the individual vehicles are responders, and they also interact with each other for their objectives based on the collaborative feedback procedure. For the RSU, the payoff can be defined as a function of the price levels for tasks and the vehicles' responses. Therefore, the RSU's utility function $u_R(\mathbb{S}^R, H_t)$ at time $H_t$ is computed as the sum of each task's outcome:

$$u_R\Big(\mathbb{S}^R = \big[\eta^{I_1}_R, \ldots, \eta^{I_j}_R, \ldots, \eta^{I_s}_R\big], H_t\Big) = \sum_{I_h = I_1}^{I_s} \left( \Big(T^{I_h}(H_t) \times \Phi^{I_h}\Big) - \sum_{V_i \in \mathbb{V}} \Big(\mu^{I_h}_{V_i}(H_t) \times \eta^{I_h}_R(H_t)\Big) \right),$$
$$\text{s.t.}\quad T^{I_h}(H_t) = \begin{cases} 1, & \text{if } \sum_{V_i \in \mathbb{V}}\big(\mu^{I_h}_{V_i}(H_t)\big) \ge \mathcal{L}^{I_h}_R,\\ 0, & \text{otherwise}, \end{cases} \tag{11}$$

where $\Phi^{I_h}$ is the profit of $I_h$ when the VSS for $I_h$ is successfully completed, i.e., $\Phi^{I_h}$ is obtained if $\sum_{V_i \in \mathbb{V}}(\mu^{I_h}_{V_i}(H_t)) \ge \mathcal{L}^{I_h}_R$, and $\mathcal{L}^{I_h}_R$ is the predefined VSS requirement to complete the task $I_h$.

As a $\mathbb{G}_{VSS}$ game player, the RSU considers the interrelationship among tasks and the vehicles' reactions to determine its incentive payment strategy. To select the best strategy, a novel learning method is needed. In this study, we develop a new learning method to determine an effective payment policy for each task. For the RSU, the strategy vector $\mathbb{S}^R(H_t) = [\eta^{I_1}_R(H_t), \ldots, \eta^{I_s}_R(H_t)]$ represents each price strategy for each task at time $H_t$. Let $\mathcal{Z}^{I_s}_{H_t}(P^{I_s}_i, \eta^{-I_s}_R)$ be the learning value of taking the strategy $P^{I_s}_i \in \eta^{I_s}_R$ together with the vector of payment strategies excluding $P^{I_s}_i$, denoted $\eta^{-I_s}_R$. The RSU updates its $\mathcal{Z}(\cdot)$ value over time according to the following method:

$$\mathcal{Z}^{P^{I_s}_i}_{H_{t+1}}\big(P^{I_s}_i \in \eta^{I_s}_R, \eta^{-I_s}_R\big) = \Big((1-\beta) \times \mathcal{Z}^{I_s}_{H_t}\big(P^{I_s}_i, \eta^{-I_s}_R\big)\Big) + \left(\beta \times \left( \Big(\big(1-c^R\big) \times z^{I_s}_{H_t}\Big) + \left(c^R \times \sum_{I_h \in \mathbb{I} - I_s} \left\{ \frac{z^{I_h}_{H_t}}{(s-1)} \right\} \right) \right) \right),$$
$$\text{s.t.}\quad z^{I_s}_{H_t} = \Big(T^{I_s}(H_t) \times \Phi^{I_s}\Big) - \sum_{V_i \in \mathbb{V}} \Big(\mu^{I_s}_{V_i}(H_t) \times \eta^{I_s}_R(H_t)\Big). \tag{12}$$

During the $\mathbb{G}_{VSS}$ game process, the RSU adaptively learns the current VSS state and sequentially adjusts the $\mathcal{Z}(\cdot)$ values for each VSS task. Based on the $\mathcal{Z}(\cdot)$ values, we can determine $\mathcal{P}^R_{I_s \in \mathbb{I}}(H_t) = \{\mathcal{P}^{I_s}_{P^{I_s}_{\min}}(H_t), \ldots, \mathcal{P}^{I_s}_{P^{I_s}_i}(H_t), \ldots, \mathcal{P}^{I_s}_{P^{I_s}_{\max}}(H_t)\}$ as the probability distribution of $R$'s strategy selection for the task $I_s$ at time $H_t$. The $P^{I_s}_i$ strategy selection probability for $I_s$ at time $H_{t+1}$, $\mathcal{P}^{I_s}_{P^{I_s}_i}(H_{t+1})$, is defined as

$$\mathcal{P}^{I_s}_{P^{I_s}_i}\big(H_{t+1}\big) = \frac{\exp\Big(\mathcal{Z}^{P^{I_s}_i}_{H_t}\big(P^{I_s}_i \in \eta^{I_s}_R, \eta^{-I_s}_R\big)\Big)}{\sum_{j=\min}^{\max} \exp\Big(\mathcal{Z}^{P^{I_s}_j}_{H_t}\big(P^{I_s}_j \in \eta^{I_s}_R, \eta^{-I_s}_R\big)\Big)}. \tag{13}$$
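The per-task incentive learning in equations (12) and (13) can be sketched as follows: the learning value of the price used for a task blends that task's immediate outcome with the average outcome of the other tasks (weighted by $c^R$), and the next price for the task is drawn with Boltzmann probabilities. The names and numbers below are illustrative assumptions only.

```python
# Sketch of the Z-value update (12) and the task-price selection (13).
import math
import random

def update_Z(Z, task, used_price, outcomes, beta=0.3, c_R=0.3):
    others = [o for t, o in outcomes.items() if t != task]
    spill = sum(others) / len(others) if others else 0.0          # sum_h z_Ih / (s - 1)
    target = (1 - c_R) * outcomes[task] + c_R * spill
    Z[task][used_price] = (1 - beta) * Z[task][used_price] + beta * target

def select_task_price(Z, task):
    prices = list(Z[task].keys())
    weights = [math.exp(Z[task][p]) for p in prices]              # equation (13)
    return random.choices(prices, weights=weights)[0]

prices = [0.2, 0.4, 0.6, 0.8, 1.0]
Z = {t: {p: 0.0 for p in prices} for t in ["I1", "I2", "I3", "I4"]}
outcomes = {"I1": 7.0, "I2": -0.8, "I3": 4.0, "I4": 0.0}          # per-task z values (stubs)
update_Z(Z, "I1", used_price=0.6, outcomes=outcomes)
print(select_task_price(Z, "I1"))
```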

3.5. Main Steps of the Proposed Triple-Plane Bargaining Algorithm. In this study, we design a novel triple-plane bargaining game model through a systematic interactive game process. In the proposed VCS, VRS, and VSS algorithms, the RSUs and vehicles are game players that aim to maximize their payoffs. In particular, the RSUs learn the current VANET situation based on the learning methods, and the vehicles determine their best strategies while balancing their VCS, VRS, and VSS requirements. The primary steps of the proposed scheme are described as follows.

Step 1. Control parameters are determined by the simulation scenario (Table 1).

Step 2. At the initial time, the $\mathcal{Z}$ and $\mathcal{L}$ learning values in the RSUs are equally distributed. This starting estimation guarantees that each RSU's strategy benefits similarly at the beginning of the $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VSS}$ games.

Step 3. During the $\mathbb{G}_{VCS}$ game, the proposer RSU selects its strategy $P^R \in \mathbb{S}_R$ to maximize its payoff ($U_R$) according to (1), (3), and (4). As responders, vehicles select their strategy $r \in \mathbb{S}^V$ to maximize their payoffs ($U^V$) according to (2) while considering their current virtual money ($\Gamma$).

Step 4. At every time step ($H$), the RSU adjusts the learning values ($\mathcal{L}(\cdot)$) and the probability distribution ($\mathcal{P}^R$) based on equations (3) and (4).

Step 5. During the $\mathbb{G}_{VRS}$ game, individual vehicles estimate the wireless link states ($\mathcal{LC}$) according to equation (5). At each time period, the $\mathcal{LC}$ values are estimated online based on the vehicle's relative distance and speed.

Step 6. During the $\mathbb{G}_{VRS}$ game, the source vehicle configures a multihop routing path using the Bellman-Ford algorithm based on equation (6). The source vehicle's payoff ($U$) is decided according to (7) while considering its current virtual money ($\Gamma$).

Step 7. During the $\mathbb{G}_{VSS}$ game, the proposer RSU selects its strategy $\mathbb{S}^R$ to maximize its payoff ($u_R$) according to (11). As responders, vehicles select their strategy $\mathbb{S}^V$ to maximize their combined payoff ($CP$) according to (9) while adjusting each bargaining power ($\psi$) based on equation (10).

Step 8. At every time step ($H$), the RSU adjusts the learning values ($\mathcal{Z}(\cdot)$) and the probability distribution ($\mathcal{P}^R$) according to equations (12) and (13).

Step 9. Based on the interactive feedback process, the dynamics of our $\mathbb{G}_{VCS}$, $\mathbb{G}_{VRS}$, and $\mathbb{G}_{VSS}$ games cause a cascade of interactions among the game players, who choose their best strategies in an online distributed manner.

Step 10. Under the dynamic VANET environment, the individual game players constantly self-monitor for the next game process; go to Step 3.
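As a self-contained illustration of how Steps 3-10 interleave, the short Python sketch below shows only the coupling that motivates the bargaining design: the virtual money $\Gamma$ earned from VSS incentives is what vehicles later spend on VCS prices and VRS relay payments. Every constant is an illustrative stand-in and the loop is not the actual simulator.

```python
# Illustrative interleaving of the three planes over a few time periods H_t.
vehicles = {f"V{i}": {"money": 100.0} for i in range(4)}
J, PC, price, incentive = 0.1, 4.2, 0.6, 0.6      # relay factor, path cost, BCSU price, task pay

for H_t in range(3):
    for name, v in vehicles.items():
        # VCS plane (Step 3): request one BCSU only if the budget allows it
        if v["money"] >= price:
            v["money"] -= price
        # VRS plane (Step 6): pay the relay incentive J * PC if affordable
        if v["money"] >= J * PC:
            v["money"] -= J * PC
        # VSS plane (Step 7): participating in a sensing task replenishes Gamma
        v["money"] += incentive
    print(H_t, {n: round(v["money"], 2) for n, v in vehicles.items()})
```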


4. Performance Evaluation

4.1. Simulation Setup. In this section, we evaluate the performance of our proposed protocol and compare it with that of the MVIG, PVDD, and CRVC schemes [21-23]. To ensure a fair comparison, the following assumptions and system scenario were used:

(i) The simulated system was assumed to be a TDMA packet system for VANETs.

(ii) The number of vehicles that passed over a RSU was the rate of the Poisson process (ρ). The offered range was varied from 0 to 3.0.

(iii) Fifty RSUs were distributed randomly over the 100 km road area, and the velocity of each mobile vehicle was randomly selected to be 36 km/h, 72 km/h, or 108 km/h.

(iv) The maximum wireless coverage range of each vehicle was set to 500 m.

(v) The cloud computation capacity ($\mathbb{C}$) is 5 GHz, and one BCSU is the minimum amount (e.g., 20 MHz/s in our system) of the cloud service unit.

(vi) The number ($s$) of sensing tasks in each $R$ is 4, i.e., $\mathbb{I} = \{I_1, I_2, I_3, I_4\}$.

(vii) The source and destination vehicles were randomly selected. Initially, the virtual money ($\Gamma$) in each vehicle was set to 100.

(viii) At the source node, data dissemination was generated at a rate of λ (packets/s). According to this assumption, the time duration $H$ in our simulation model is one second.

(ix) Network performance measures obtained on the basis of 100 simulation runs are plotted as functions of the vehicle distribution (ρ).

To demonstrate the validity of our proposed method, we measured the cloud service success ratio, normalized dissemination throughput, and crowdsensing success probability. Table 1 shows the control factors and coefficients used in the simulation. Each parameter has its own characteristics [6].

4.2. Results and Discussion. Figure 1 compares the cloud service success ratio of each scheme. In this study, the cloud service success ratio represents the rate of cloud services that were completed successfully; this is a key performance evaluation factor in the VCS operation. As shown in Figure 1, the cloud service success ratios of all schemes are similar to each other; however, the proposed scheme adopts an interactive environmental feedback mechanism, and the RSUs in our scheme adaptively adjust their VCS costs. This approach improves the VCS performance beyond the existing MVIG, PVDD, and CRVC schemes. Therefore, we outperformed the existing methods from low to high vehicle distribution intensities.

Figure 2 compares the normalized dissemination throughput in VANETs. Typically, the network throughput is measured as bits per second of network access. In this study, the dissemination throughput is defined as the ratio of the data amount successfully received by destination vehicles to the total data amount generated at the source vehicles. The throughput improvement achieved by the proposed scheme is a result of our $\mathbb{G}_{VRS}$ game paradigm. During the VRS operations, each vehicle in the proposed scheme can select the most efficient routing path with real-time adaptability and self-flexibility. Hence, we attained a higher dissemination throughput compared to the other existing approaches, which are designed as lopsided, one-way methods and do not effectively adapt to the dynamic and diversified VANET conditions.

The crowdsensing success probability, which is shown in Figure 3, represents the efficiency of the VANET system. In the proposed scheme, we employed the learning-based triple-plane game model to perform control decisions in a distributed online manner. According to the interactive operations of the VCS, VSS, and VRS, our game-based approach can improve the crowdsensing success probability more effectively than the other schemes. The simulation results shown in Figures 1 to 3 demonstrate that the proposed scheme can attain an appropriate performance balance; in contrast, the MVIG [21], PVDD [22], and CRVC [23] schemes cannot offer this outcome under widely different and diversified VANET situations.

Table 1: System parameters used in the simulation experiments.

Parameter                                   | Value                 | Description
[P^R_min, ..., P^R_j, ..., P^R_max]         | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for the cloud service (min 1, max 5)
[r_min, ..., r_k, ..., r_max]               | 1, 2, 3, 4, 5         | The amount of cloud services in terms of BCSUs
[P^I_min, ..., P^I_l, ..., P^I_max]         | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for the sensing service (min 1, max 5)
θ                                           | 0.2                   | Coefficient factor of the cost calculation
H                                           | 0.8                   | Profit factor of a vehicle if one BCSU is processed
α                                           | 0.5                   | Weight control factor for the distance and velocity
χ                                           | 0.2                   | Learning rate to update the L values
Q                                           | 5                     | Routing outcome at each time period
J                                           | 0.1                   | Coefficient factor to estimate the incentive payment
C^{I1}, C^{I2}, C^{I3}, C^{I4}              | 0.1, 0.2, 0.3, 0.4    | Predefined cost for each task sensing
Φ^{I1}, Φ^{I2}, Φ^{I3}, Φ^{I4}              | 10, 10, 10, 10        | Predefined profit for each sensing task
L^1, L^2, L^3, L^4                          | 5, 5, 5, 5            | Predefined requirement for each sensing task
β                                           | 0.3                   | Control parameter to estimate the learning value Z
c^R                                         | 0.3                   | Discount factor to estimate the learning value Z
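For convenience, the Table 1 settings can be collected into a single configuration dictionary, as in the illustrative layout below (the key names are our own, not part of the paper).

```python
# Table 1 parameters gathered into one Python configuration dictionary.
SIM_PARAMS = {
    "cloud_price_levels": [0.2, 0.4, 0.6, 0.8, 1.0],    # P^R_min ... P^R_max
    "bcsu_amounts": [1, 2, 3, 4, 5],                     # r_min ... r_max
    "sensing_price_levels": [0.2, 0.4, 0.6, 0.8, 1.0],  # P^I_min ... P^I_max
    "theta": 0.2,        # cost-calculation coefficient
    "H_profit": 0.8,     # vehicle profit per processed BCSU
    "alpha": 0.5,        # distance/velocity weight in the link cost
    "chi": 0.2,          # learning rate for the L values
    "Q": 5,              # routing outcome per time period
    "J": 0.1,            # incentive-payment coefficient
    "task_costs": {"I1": 0.1, "I2": 0.2, "I3": 0.3, "I4": 0.4},
    "task_profits": {"I1": 10, "I2": 10, "I3": 10, "I4": 10},
    "task_requirements": {"I1": 5, "I2": 5, "I3": 5, "I4": 5},
    "beta": 0.3,         # control parameter for the Z values
    "c_R": 0.3,          # discount factor for the Z values
}
```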


5. Summary and Conclusions

The VANET using vehicle-based sensory technology is becoming more popular. It can provide vehicular sensing, routing, and cloud services for 5G network applications; therefore, the design of next-generation VANET management schemes is important to satisfy the new demands. Herein, we focused on the paradigm of a learning algorithm and game theory to design the VCS, VRS, and VSS algorithms. By combining the VCS, VRS, and VSS algorithms, a new triple-plane bargaining game model was developed to provide an appropriate performance balance. During the VANET operations, the RSUs learned their strategies better under dynamic VANET environments, and the vehicles considered the mutual-interaction relationships of their strategies. As game players, they used the obtained information to adapt to the dynamics of the VANET environment and performed control decisions intelligently by self-adaptation. According to the unique features of VANETs, our joint design approach is suitable for providing satisfactory services under incomplete-information environments. In the future, we would like to consider privacy issues, such as differential privacy, during VANET operation. Furthermore, we will investigate probabilistic algorithms to estimate the service quality of sensing, routing, and cloud services. In addition, we plan to investigate smart city applications where the different sensory information of a given area can be combined to provide a complete view of smart city development.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-2018-0-01799) supervised by the IITP (Institute for Information and Communications Technology Promotion) grant funded by the Korea government (MSIT) (no. 2017-0-00498, A Development of Deidentification Technique Based on Differential Privacy).

Figure 1: Cloud service success ratio. Y-axis: normalized cloud service success ratio; X-axis: offered number of vehicles that passed over a RSU (the rate of the Poisson process).

Figure 2: Normalized dissemination throughput. Y-axis: normalized dissemination throughput; X-axis: offered number of vehicles that passed over a RSU (the rate of the Poisson process).

Figure 3: Crowdsensing success probability. Y-axis: crowdsensing success probability; X-axis: offered number of vehicles that passed over a RSU (the rate of the Poisson process).


References

[1] Y. Wang, Y. Liu, J. Zhang, H. Ye, and Z. Tan, "Cooperative store-carry-forward scheme for intermittently connected vehicular networks," IEEE Transactions on Vehicular Technology, vol. 66, no. 1, pp. 777-784, 2017.

[2] M. M. C. Morales, R. Haw, E.-J. Cho, C.-S. Hong, and S.-W. Lee, "An adaptable destination-based dissemination algorithm using a publish/subscribe model in vehicular networks," Journal of Computing Science and Engineering, vol. 6, no. 3, pp. 227-242, 2012.

[3] Y. Kim, S. Atchley, G. R. Vallee, S. Lee, and G. M. Shipman, "Optimizing end-to-end big data transfers over terabits network infrastructure," IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 1, pp. 188-201, 2017.

[4] J. Chen, G. Mao, C. Li, A. Zafar, and A. Y. Zomaya, "Throughput of infrastructure-based cooperative vehicular networks," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 11, pp. 2964-2979, 2017.

[5] Z. Su, Y. Hui, and Q. Yang, "The next generation vehicular networks: a content-centric framework," IEEE Wireless Communications, vol. 24, no. 1, pp. 60-66, 2017.

[6] S. Kim, "Effective crowdsensing and routing algorithms for next generation vehicular networks," Wireless Networks, 2017.

[7] C. Wang, Z. Zhang, S. Lu, and M. C. Zhou, "Estimating travel speed via sparse vehicular crowdsensing data," in Proceedings of IEEE World Forum on Internet of Things (WF-IoT), pp. 643-648, Reston, VA, USA, December 2016.

[8] L. Xiao, T. Chen, C. Xie, H. Dai, and P. Vincent, "Mobile crowdsensing games in vehicular networks," IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1535-1545, 2018.

[9] B. Oh, N. Yongchan, J. Yang, S. Park, J. Nang, and J. Kim, "Genetic algorithm-based dynamic vehicle route search using car-to-car communication," Advances in Electrical and Computer Engineering, vol. 10, no. 4, pp. 81-86, 2011.

[10] M. Chaqfeh, N. Mohamed, I. Jawhar, and J. Wu, "Vehicular cloud data collection for intelligent transportation systems," in Proceedings of 2016 3rd Smart Cloud Networks and Systems (SCNS), pp. 1-6, Dubai, UAE, December 2016.

[11] A. Ashok, S. Peter, and B. Fan, "Adaptive cloud offloading for vehicular applications," in Proceedings of 2016 IEEE Vehicular Networking Conference (VNC), pp. 1-8, Columbus, OH, USA, December 2016.

[12] J. Ahn, D. Shin, K. Kim, and J. Yang, "Indoor air quality analysis using deep learning with sensor data," Sensors, vol. 17, no. 11, pp. 1-13, 2017.

[13] S. Kim, "Timed bargaining-based opportunistic routing model for dynamic vehicular ad hoc network," EURASIP Journal on Wireless Communications and Networking, vol. 2016, no. 14, pp. 1-10, 2016.

[14] J.-H. Kim, K.-J. Lee, T.-H. Kim, and S.-B. Yang, "Effective routing schemes for double-layered peer-to-peer systems in MANET," Journal of Computing Science and Engineering, vol. 5, no. 1, pp. 19-31, 2011.

[15] I. Jang, D. Pyeon, S. Kim, and H. Yoon, "A survey on communication protocols for wireless sensor networks," Journal of Computing Science and Engineering, vol. 7, no. 4, pp. 231-241, 2013.

[16] N. Cheng, N. Zhang, N. Lu, X. Shen, J. W. Mark, and F. Liu, "Opportunistic spectrum access for CR-VANETs: a game-theoretic approach," IEEE Transactions on Vehicular Technology, vol. 63, no. 1, pp. 237-251, 2014.

[17] T. Chen, L. Wu, F. Wu, and S. Zhong, "Stimulating cooperation in vehicular ad hoc networks: a coalitional game theoretic approach," IEEE Transactions on Vehicular Technology, vol. 60, no. 2, pp. 566-579, 2011.

[18] D. B. Rawat, B. B. Bista, and G. Yan, "CoR-VANETs: game theoretic approach for channel and rate selection in cognitive radio VANETs," in Proceedings of International Conference on Broadband, Wireless Computing, Communication and Applications, pp. 94-99, Victoria, Canada, November 2012.

[19] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro clouds as virtual edge servers for efficient data collection," in Proceedings of ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services, pp. 31-35, Snowbird, UT, USA, October 2017.

[20] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro cloud in action: on gateway selection and gateway handovers," Ad Hoc Networks, vol. 78, pp. 73-83, 2018.

[21] M. Aloqaily, B. Kantarci, and H. T. Mouftah, "Multiagent/multiobjective interaction game system for service provisioning in vehicular cloud," IEEE Access, vol. 4, pp. 3153-3168, 2016.

[22] R. Kim, H. Lim, and B. Krishnamachari, "Prefetching-based data dissemination in vehicular cloud systems," IEEE Transactions on Vehicular Technology, vol. 65, no. 1, pp. 292-306, 2016.

[23] M. F. Feteiha and H. S. Hassanein, "Enabling cooperative relaying VANET clouds over LTE-A networks," IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1468-1479, 2015.

[24] K. Zhang, Y. Mao, S. Leng, Y. He, and Y. Zhang, "Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading," IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36-44, 2017.

[25] C. T. Hieu and C.-S. Hong, "A connection entropy-based multi-rate routing protocol for mobile ad hoc networks," Journal of Computing Science and Engineering, vol. 4, no. 3, pp. 225-239, 2010.

[26] S. Kim, "Adaptive ad-hoc network routing scheme by using incentive-based model," Ad Hoc and Sensor Wireless Networks, vol. 15, pp. 1-19, 2012.

[27] K. Han, C. Chen, Q. Zhao, and X. Guan, "Trajectory-based node selection scheme in vehicular crowdsensing," in Proceedings of IEEE/CIC International Conference on Communications in China, pp. 1-6, Shenzhen, China, November 2015.



Page 3: NewBargainingGameModelforCollaborativeVehicular ...downloads.hindawi.com/journals/misy/2019/6269475.pdf · (VRS).Vehicularcloudserviceisanewparadigmthatexploits cloud computing to

licensed channels in a distributed manner and is designed to achieve a pure Nash equilibrium with high efficiency and fairness. By using the statistics of channel availability, this approach can exploit the spatial and temporal access opportunities of the licensed spectrum. Through simulation results, the authors confirm that the proposed approach achieves higher utility and fairness than a random access scheme [16].

The paper [17] focuses on how to stimulate message forwarding in VANETs. To implement this mechanism, it is crucial to ensure that vehicles have incentives to forward messages for others. In particular, this study adopts coalitional game theory to solve the forwarding cooperation problem and rigorously analyzes it in VANETs. The main goal is to ensure that, whenever a message needs to be forwarded in a VANET, all involved vehicles get their incentives to form a grand coalition. In addition, the approach is extended to take the limited storage space of each node into account. Experiments on testbed trace data verify that the proposed method is effective in stimulating cooperative message forwarding in VANETs [17].

In [18], the main focus is to achieve a higher throughput by selecting the best data rate and the best network or channel based on the cognitive radio VANET mechanism. By switching wireless channels and data rates in heterogeneous wireless networks, the scheme in [18] is designed as a game-theoretic model that achieves a higher throughput for vehicular users. In addition, a new idea is adopted to find the optimal number of APs for given VANET scenarios [18]. Even though the papers [16–18] have considered game theory-based VANET control algorithms, they are designed as one-sided protocols that focus on a specific control issue. Therefore, the system interaction process is not fully investigated.

Recently, the vehicular microcloud technique has been considered one of the solutions to address the challenges and issues of VANETs. It is a new hybrid technology that has a remarkable impact on traffic management and road safety. By instantly clustering, cars help aggregate collected data that are transferred to some backend [19, 20]. The paper [19] introduces the concept of a vehicular microcloud as a virtual edge server for efficient connections between cars and backend infrastructure. To realize such virtual edge servers, a map-based clustering technique is needed to cope with the dynamicity of vehicular networks. This approach can optimize data processing and aggregation before sending the data to a backend [19].

F. Hagenauer et al. propose an idea in which vehicular microcloud clusters of parked cars act as RSUs, thereby investigating a virtual vehicular network infrastructure [20]. In particular, they focus on two control questions: (i) the selection of gateway nodes to connect moving cars to the cluster, and (ii) a mechanism for seamless handover between gateways. For the first question, they select only a subset of all parked cars as gateways; this gateway selection helps to significantly reduce the channel load. For the second question, they enable driving cars to find better-suited gateways while allowing the driving car to maintain its connections [20]. The ideas in [19, 20] are very interesting. However, they can be used only in special situations. Therefore, it is difficult to apply these ideas in general VANET operation scenarios.

During VANET operations, there are two fundamental techniques to disseminate data for vehicular applications: vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communications. Recently, Kim proposed novel V2I and V2V algorithms for effective VANET management [6]. For V2I communications, a novel vertical game model is developed as an incentive-based crowdsensing algorithm. For V2V communications, a new horizontal game model is designed to implement a learning-based intervehicle routing protocol. Under dynamically changing VANET environments, the scheme developed in [6] is formulated as a dual-plane vertical-horizontal game model. By considering real-world VANET environments, this approach imitates an interactive sequential game process by using self-monitoring and distributed learning techniques. It is suitable for ensuring system practicality while effectively maximizing the expected benefits [6].

In this paper, we develop a novel control scheme for vehicular network services. Our proposed scheme also includes sensing and routing algorithms for the operations of VANETs; therefore, it may look similar to the method in [6]. The resemblance can be summarized as follows: (i) in the vehicular routing algorithm, the link status and path cost estimation methods and each vehicle's utility function are designed in the same manner; (ii) in the crowdsensing algorithm, the utility functions of vehicles and RSUs are defined similarly; and (iii) in the learning process, strategy selection probabilities are estimated based on the Boltzmann distribution equation. While both schemes have some similarities, there are several key differences. First, the scheme in [6] is designed as a dual-plane game model, but our proposed scheme is extended to a triple-plane game model. Second, a new cloud computing algorithm is developed to share the cloud resource in the RSU while increasing the quality of service. Third, to reduce computation complexity, the entropy-based route decision mechanism is replaced by an online route decision mechanism. Fourth, the scheme in [6] is modeled as a non-cooperative game approach, whereas our proposed scheme adopts a dynamic bargaining solution; therefore, in our proposed sensing algorithm, each individual vehicle decides its strategy in a cooperative game manner. Fifth, our proposed scheme fully considers the cloud computing, sensing, and routing payoffs and calculates the bargaining powers to respond to current VANET situations. Most of all, the main difference between the scheme in [6] and our proposed scheme is the control paradigm. Simply put, in the paper [6] the V2I and V2V algorithms are combined merely on a competitive basis, whereas in this paper the sensing, cloud computing, and routing algorithms work together in a collaborative fashion and act cooperatively with each other based on an interactive feedback process.

The Multiobjective Vehicular Interaction Game (MVIG) scheme [21] proposes a new game-theoretic scheme to control on-demand service provisioning in a vehicular cloud. Based on a game-theoretic approach, this scheme can balance the overall game while enhancing vehicles' service costs. The game system in the MVIG scheme differs from other conventional models, as it allows vehicles to prioritize their preferences. In addition, a quality-of-experience framework is developed to provision various types of services in a vehicular cloud; it is a simple but effective model to determine vehicle preferences while ensuring fair game treatment. The simulation results show that the MVIG scheme outperforms other conventional models [21].

The Prefetching-Based Vehicular Data Dissemination (PVDD) scheme [22] devises a vehicle route-based data prefetching approach that improves data transmission accessibility under dynamic wireless connectivity and limited data storage environments. More concretely, the PVDD scheme develops two control algorithms to determine how to prefetch a data set from a data center to roadside wireless access points. Based on a greedy approach, the first control algorithm is developed to solve the dissemination problem. Based on an online learning manner, the second control algorithm gradually learns the success rate of unknown network connectivity and determines an optimal binary decision matrix at each iteration. Finally, this study proves that the first control algorithm can find a suboptimal solution in polynomial time and, using regret analysis, that the solution of the second control algorithm converges to a globally optimal solution in a certain number of iterations [22].

The Cooperative Relaying Vehicular Cloud (CRVC) scheme [23] proposes a novel cooperative vehicular relaying algorithm over a long-term evolution-advanced (LTE-A) network. To maximize the vehicular network capacity, this scheme uses vehicles as cooperative relaying nodes in broadband cellular networks. With its new functionalities, the CRVC scheme can (i) reduce power consumption, (ii) provide a higher throughput, (iii) lower operational expenditures, (iv) ensure more flexibility, and (v) increase the coverage area. In a heavily populated urban area, the CRVC scheme is useful owing to the large number of relaying vehicles. Finally, the performance improvement is demonstrated through simulation analysis [23]. In this paper, we compare the performance of our proposed scheme with the existing MVIG [21], PVDD [22], and CRVC [23] schemes.

3. The Proposed VCS, VRS, and VSS Algorithms

In this section, the proposed VCS, VRS, and VSS algorithms are presented in detail. Based on a learning algorithm and a game-based approach, these algorithms form a new triple-plane bargaining game model that adapts to fast-changing VANET environments.

3.1. Game Models for the VCS, VRS, and VSS Algorithms. For the operation of a VANET system, we develop three different game models for the VCS, VRS, and VSS algorithms. As game players, vehicles and RSUs select their strategies based on the interactions of other players. In our proposed scheme, the VCS and VRS algorithms are formulated as non-cooperative game models, and the VSS algorithm is formulated as a triple-plane bargaining model by the cooperation, coordination, and collaboration of the VCS, VRS, and VSS processes. First, for the VCS algorithm, we formally define the game model $\mathbb{G}_{VCS} = \{\mathbf{N}, C, S^R, \{S^V_{V_i}\}_{V_i\in\mathbf{V}}, U^R, \{U^V_{V_i}\}_{V_i\in\mathbf{V}}, \{L_{P^R_j}\}_{P^R_j\in S^R}, T\}$ at each time period of gameplay:

(i) $\mathbf{N}$ is the finite set of VCS game players, $\mathbf{N} = \{R, \mathbf{V}\}$, where $R$ represents an RSU and $\mathbf{V} = \{V_1, \ldots, V_i, \ldots, V_k\}$ is the set of vehicles in $R$'s coverage area.

(ii) $C$ is the cloud computation capacity, i.e., the number of CPU cycles per second in $R$.

(iii) $S^R = \{P^R_{\min}, \ldots, P^R_j, \ldots, P^R_{\max}\}$ is the set of $R$'s strategies, where $P^R_j$ is the price level to execute one basic cloud service unit (BCSU). $S^V_{V_i}$ is the set of $V_i$'s available strategies, $S^V_{V_i} = [r^{V_i}_{\min}, \ldots, r^{V_i}_k, \ldots, r^{V_i}_{\max}]$, where $r^{V_i}$ represents the amount of cloud services of $V_i$ and is specified in terms of BCSUs.

(iv) $U^R$ is the payoff received by the RSU, and $U^V_{V_i\in\mathbf{V}}$ is the payoff received by $V_i$ during the VCS process.

(v) $L_{P^R_j\in S^R}$ is $R$'s learning value for the strategy $P^R_j$; $L$ is used to estimate the probability distribution ($\mathbf{P}^R$) for $R$'s next strategy selection.

(vi) $T = \{H_1, \ldots, H_t, H_{t+1}, \ldots\}$ denotes time, which is represented by a sequence of time steps with imperfect information for the VCS game process.

For the VRS algorithm, we define the game model $\mathbb{G}_{VRS} = \{\mathbf{v}, \{N_{V_i}, A_{V_i\in\mathbf{v}}, U_{V_i\in\mathbf{v}}\}, T\}$ at each time period of $\mathbb{G}_{VRS}$ gameplay. $\mathbb{G}_{VRS}$ can formulate the interactions of vehicles for VANET routing operations:

(i) $\mathbf{v}$ is the finite set of game players, $\mathbf{v} = \{V_1, \ldots, V_n\}$, where $n$ is the number of vehicles in the $\mathbb{G}_{VRS}$ game.

(ii) $N_{V_i}$ is the set of $V_i$'s one-hop neighboring vehicles.

(iii) $A_{V_i\in\mathbf{v}} = \{a^{V_m}_{V_i} \mid V_m \in N_{V_i}\}$ is the finite set of $V_i$'s available strategies, where $a^{V_m}_{V_i}$ represents the selection of $V_m$ to relay a routing packet.

(iv) $U_{V_i\in\mathbf{v}}$ is the payoff received by $V_i$ during the VRS process.

(v) $T = \{H_1, \ldots, H_t, H_{t+1}, \ldots\}$ is the same as the $T$ in $\mathbb{G}_{VCS}$.

For the VSS algorithm, we formally define the game model $\mathbb{G}_{VSS} = \{\mathbf{N}, \mathbf{I}, S^R, \{S^V_{V_i}\}_{V_i\in\mathbf{V}}, u^R, \{u^V_{V_i}\}_{V_i\in\mathbf{V}}, \{Z_{P^{I_s}_i}\}_{I_s\in\mathbf{I}}, T\}$ at each time period of gameplay:

(i) $\mathbf{N} = \{R, \mathbf{V}\}$ is the finite set of game players for the $\mathbb{G}_{VSS}$ game; $\mathbf{N}$ is the same as $\mathbf{N}$ in $\mathbb{G}_{VCS}$.

(ii) $\mathbf{I}$ is the finite set of sensing tasks, $\mathbf{I} = \{I_1, \ldots, I_s\}$, in $R$, where $s$ is the total number of sensing tasks.

(iii) $S^R = [\eta^{I_1}_R, \ldots, \eta^{I_j}_R, \ldots, \eta^{I_s}_R]$ is a vector of $R$'s strategies, where $\eta^{I_j}_R$ represents the strategy set for task $I_j$. In the $\mathbb{G}_{VSS}$ game, $\eta^{I_j}_R$ is the set of price levels for the crowdsensing work of task $I_j$ during a time period ($H \in T$). As in the $\mathbb{G}_{VCS}$ game, $\eta^{I_j}_R$ is defined as $\eta^{I_j}_R = \{P^{I_j}_i \mid P^{I_j}_i \in [P^{I_j}_{\min}, \ldots, P^{I_j}_l, \ldots, P^{I_j}_{\max}]\}$.

(iv) $S^V_{V_i} = [\mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i}]$ is the vector of $V_i$'s strategies, where $\mu^{I_j}_{V_i}$ represents the strategy for $I_j$. In the $\mathbb{G}_{VSS}$ game, $\mu^{I_j}_{V_i}$ indicates $V_i$'s active VSS participation ($\mu^{I_j}_{V_i} = 1$) or not ($\mu^{I_j}_{V_i} = 0$) for the task $I_j$ during $H$.

(v) $u^R$ is the payoff received by the RSU, and $u^V_{V_i\in\mathbf{V}}$ is the payoff received by $V_i$ during the VSS process.

(vi) $Z_{P^{I_j}_i\in\eta^{I_j}_R}$ is the learning value for the strategy $P^{I_j}_i$; $Z$ is used to estimate the probability distribution for $R$'s next strategy selection.

(vii) $T = \{H_1, \ldots, H_t, H_{t+1}, \ldots\}$ is the same as $T$ in $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VRS}$.
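For readers who prefer code to set notation, the following minimal Python sketch (illustrative only; the class and field names are ours, not the paper's) collects the components of the three game tuples as plain data structures.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VCSGame:                        # G_VCS = {N, C, S^R, S^V, U^R, U^V, L, T}
    vehicles: List[str]               # V = {V_1, ..., V_k} in the RSU's coverage area
    cloud_capacity: float             # C: CPU cycles per second at the RSU
    rsu_prices: List[float]           # S^R: price levels per BCSU
    vehicle_demands: List[int]        # S^V_{V_i}: requested cloud service amounts (in BCSUs)
    learning_values: Dict[float, float] = field(default_factory=dict)  # L_{P^R_j}

@dataclass
class VRSGame:                        # G_VRS = {v, N_{V_i}, A_{V_i}, U_{V_i}, T}
    vehicles: List[str]
    neighbors: Dict[str, List[str]]   # N_{V_i}: one-hop neighbors; A_{V_i} picks one of them as relay

@dataclass
class VSSGame:                        # G_VSS = {N, I, S^R, S^V, u^R, u^V, Z, T}
    vehicles: List[str]
    tasks: List[str]                  # I = {I_1, ..., I_s}
    task_prices: Dict[str, List[float]]        # eta^{I_j}_R: price levels per task
    participation: Dict[str, Dict[str, int]]   # mu^{I_j}_{V_i} in {0, 1}
```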

3.2. The VCS Algorithm Based on a Non-Cooperative Game Model. In the next-generation VANET paradigm, diverse and miscellaneous applications require significant computation resources. However, the computational capabilities of vehicles are generally limited. To address this resource restriction problem, vehicular cloud technology is widely considered a new paradigm for improving VANET service performance. In the VCS process, applications can be offloaded to a remote cloud server to improve resource utilization and computation performance [24].

Vehicular cloud servers are located in RSUs and execute the computation tasks received from vehicles. However, the service area of each RSU may be limited by its radio coverage. Due to their high mobility, vehicles may pass through several RSUs during the task offloading process. Therefore, service results must be migrated to another RSU that can reach the corresponding vehicle. From the viewpoints of RSUs and vehicles, payoff maximization is their main concern. To reach a win-win situation, they rationally select their strategies. During the VCS process, each RSU is a proposer and individual vehicles are responders; they interact with each other to pursue their objectives during the VCS process.

To design our VCS algorithm, we consider a VCS platform consisting of one RSU ($R$) and a set of mobile vehicles ($\mathbf{V}$); they formulate the game model $\mathbb{G}_{VCS}$. As a proposer, $R$'s strategy ($P^R_j \in S^R$) indicates the offered price for one BCSU process. $R$ has its own utility function, which represents its net gain while providing the VCS process. $R$'s utility function with the $P^R_j$ strategy at the $t$'th time step $H_t$, $U^R(\mathbf{V}, P^R_j(H_t))$, is given by

$$U^R\big(\mathbf{V}, P^R_j(H_t)\big) = \sum_{V_i\in\mathbf{V}} \xi^{V_i}(H_t) \times \Big[\big(P^R_j(H_t) \times r^{V_i}_k(H_t)\big) - C^R\big(r^{V_i}_k(H_t)\big)\Big],$$

$$\text{s.t.}\quad P^R_j(H_t)\in S^R,\quad C^R\big(r^{V_i}_k(H_t)\big) = \Theta^R_{BCSU}\times r^{V_i}_k(H_t),\quad \Theta^R_{BCSU} = \frac{(\theta\times M)}{C},$$

$$\xi^{V_i}(H_t) = \begin{cases} 1, & \text{if } V_i \text{ selects } r^{V_i}_k \in S^V_{V_i} \text{ at } H_t,\\ 0, & \text{otherwise, i.e., no VCS of } V_i, \end{cases} \tag{1}$$

where $C^R(r^{V_i}_k(H_t))$ is the cost function to execute $V_i$'s cloud service ($r^{V_i}_k(H_t)$) at time $H_t$, $\Theta^R_{BCSU}$ is the processing cost to execute one BCSU, and $M$ is the currently used capacity of $C$. $\theta$ is the coefficient factor of the cost calculation. In practice, the actual cost $C^R(\cdot)$ is usually unknown to the vehicles.

As a responder, the vehicle $V_i$'s strategy ($S^V_{V_i}$) represents the amount of cloud service. The payoff of $V_i$ can be defined as a function of the task offload level ($r^{V_i}$) and the service price ($P^R$). Therefore, $V_i$'s utility function with the $P^R_j$ and $r^{V_i}_k$ strategies at time $H_t$, $U^V_{V_i\in\mathbf{V}}(P^R_j(H_t), r^{V_i}_k(H_t))$, is computed as follows:

$$U^V_{V_i\in\mathbf{V}}\big(P^R_j(H_t), r^{V_i}_k(H_t)\big) = \vartheta^{V_i}(H_t) \times \Big[\big(\mathcal{H}^{V_i}\times r^{V_i}_k(H_t)\big) - \big(P^R_j(H_t)\times r^{V_i}_k(H_t)\big)\Big],$$

$$\text{s.t.}\quad \vartheta^{V_i}(H_t) = \begin{cases} 1 \text{ and } \Gamma^{V_i}_{H_{t+1}} = \Gamma^{V_i}_{H_t} - \big(P^R_j(H_t)\times r^{V_i}_k(H_t)\big), & \text{if } \Gamma^{V_i}_{H_t} \ge \big(P^R_j(H_t)\times r^{V_i}_k(H_t)\big) \text{ and } r^{V_i}_k\in S^V_{V_i} \text{ is selected},\\ 0, & \text{otherwise, i.e., } \Gamma^{V_i}_{H_t} < \big(P^R_j(H_t)\times r^{V_i}_k(H_t)\big), \end{cases} \tag{2}$$

where $\mathcal{H}^{V_i}$ is $V_i$'s profit factor if one BCSU is processed, and $\Gamma^{V_i}_{H_t}$ is the amount of $V_i$'s virtual money at time $H_t$; if $V_i$ has enough money to pay the VCS price, $V_i$ can request its cloud service ($r^{V_i}_k$). As the $\mathbb{G}_{VCS}$ game players, the RSU and vehicles attempt to maximize their utility functions. Interactions among the game players are repeated during the VCS process over time. In particular, the RSU should consider the reactions of vehicles when determining the price strategy ($\mathbf{P}^R$). In this study, we develop a new learning method to decide an effective price policy for cloud services. If the strategy $P^R_j$ is selected at time $H_{t-1}$, the RSU updates the strategy $P^R_j$'s learning value for the next time step ($L^{P^R_j}_{H_t}$) according to the following method:

$$L^{P^R_j}_{H_t} = \max\left\{\left[\big((1-\chi)\times L^{P^R_j}_{H_{t-1}}\big) + \left(\chi\times\log_2\left(\frac{U^R\big(\mathbf{V}, P^R_j(H_{t-1})\big)}{\sum_{P^R_k\in S^R}\big(L^{P^R_k}_{H_{t-1}}\,/\,|S^R|\big)}\right)\right)\right],\ 0\right\}, \tag{3}$$

where $|S^R|$ is the cardinality of $S^R$ and $\chi$ is a learning rate that models how the $L$ values are updated. To implement the price-learning mechanism in the RSU, a strategy selection distribution ($P^R\in\mathbf{P}^R$) for the VCS is defined based on the $L(\cdot)$ values. During the $\mathbb{G}_{VCS}$ game process, we sequentially determine $\mathbf{P}^R(H_t) = \{P^R_{P^R_{\min}}(H_t), \ldots, P^R_{P^R_j}(H_t), \ldots, P^R_{P^R_{\max}}(H_t)\}$ as the probability distribution of $R$'s strategy selection at time $H_t$. The $P^R_j$ strategy selection probability at time $H_t$, $P^R_{P^R_j}(H_t)$, is defined as follows:

$$P^R_{P^R_j}(H_t) = \frac{\exp\big(L^{P^R_j}_{H_{t-1}}\big)}{\sum_{k=\min}^{\max}\exp\big(L^{P^R_k}_{H_{t-1}}\big)}. \tag{4}$$
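As a rough illustration of the price-learning rule in (3) and the selection rule in (4), the sketch below (simplified Python under our own assumptions; the helper names are hypothetical, and the RSU utility follows (1)) updates the learning value of the price chosen in the previous step and then draws the next price from the exponential, Boltzmann-like distribution over the learning values.

```python
import math, random

def rsu_utility(price, demands, theta, used_capacity, capacity):
    """U^R in (1): income minus processing cost, summed over the requesting vehicles."""
    cost_per_bcsu = (theta * used_capacity) / capacity          # Theta^R_BCSU
    return sum((price * r) - (cost_per_bcsu * r) for r in demands)

def update_learning_value(L, chosen_price, utility, chi):
    """Eq. (3): blend the old value with the log-ratio of the obtained utility
    to the average learning value, and clip the result at zero."""
    avg = sum(L.values()) / len(L)
    L[chosen_price] = max((1 - chi) * L[chosen_price]
                          + chi * math.log2(max(utility, 1e-9) / max(avg, 1e-9)), 0.0)

def select_price(L):
    """Eq. (4): Boltzmann-like selection over the learning values."""
    weights = [math.exp(v) for v in L.values()]
    return random.choices(list(L.keys()), weights=weights, k=1)[0]

# toy usage (price levels and chi follow Table 1; the demand vector is arbitrary)
prices = [0.2, 0.4, 0.6, 0.8, 1.0]
L = {p: 1.0 for p in prices}                 # equally distributed initial values (Step 2)
p = select_price(L)
u = rsu_utility(p, demands=[2, 3, 1], theta=0.2, used_capacity=120.0, capacity=5000.0)
update_learning_value(L, p, u, chi=0.2)
```

The small guard constants in the logarithm only prevent a zero argument and are not part of the original formulation.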

3.3. The VRS Algorithm Based on a Non-Cooperative Game Model. The main goal of VRS is to transmit data from a source vehicle to a destination vehicle via wireless multihop transmission techniques. In the wireless multihop transmission technique, the intermediate vehicles on a routing path should relay data from the source to the destination as soon as possible [6, 13, 25]. In this section, we develop a non-cooperative game model ($\mathbb{G}_{VRS}$) for vehicular routing services. As game players in $\mathbb{G}_{VRS}$, vehicles dynamically decide their routing paths. To configure the routing topology, a link cost ($LC$) is defined to handle dynamic VANET conditions in a relative manner. In this study, we define the wireless link status $LC^{V_j}_{V_i}(H_t)$ between $V_i$ and $V_j$ at time $H_t$ as follows:

$$LC^{V_j}_{V_i}(H_t) = \left[(1-\alpha)\times\left(\frac{d^{V_j}_{V_i}(H_t)}{d^{M}_{V_i}}\right)\right] + \left[\alpha\times\left(\frac{\vec{v}_j(H_t)}{\max_{V_h\in M^{H_t}_{V_i}}\big\{\vec{v}_h(H_t)\big\}}\right)\right], \tag{5}$$

where $d^{M}_{V_i}$ is the maximum coverage range of $V_i$, and $d^{V_j}_{V_i}(H_t)$ is the relative distance between $V_i$ and $V_j$ at time $H_t$. Let $M^{H_t}_{V_i}$ be the set of neighboring vehicles of $V_i$ at time $H_t$, and let $\vec{v}_j(H_t)$ be the relative velocity of $V_j$ at time $H_t$. For the adaptive $LC^{V_j}_{V_i}(H_t)$ estimation, the parameter $\alpha$ controls the relative weight between the distance and the velocity of the corresponding vehicle.

For the VRS, a source vehicle configures a multihop routing path by using the Bellman-Ford algorithm. As a routing game player, each source vehicle attempts to minimize the total path cost ($PC$). From the source vehicle $V_s$ to the destination vehicle $V_d$, the total path cost at time $H_t$, $PC^{V_d}_{V_s}(H_t)$, is computed as the sum of all relay link costs as follows:

$$PC^{V_d}_{V_s}(H_t) = \min\left\{\sum_{LC_{Veh}(H_t)\in Path^{V_d}_{V_s}} LC_{Veh}(H_t)\right\}, \tag{6}$$

where $Path^{V_d}_{V_s}$ is the routing path from $V_s$ to $V_d$. Usually, a vehicle acting as a relay node has to sacrifice its energy and bandwidth. Therefore, an incentive payment algorithm should be developed to guide selfish vehicles toward the cooperative intervehicle routing paradigm [6, 26]. By paying the incentive cost to relaying vehicles, the developed routing algorithm stimulates cooperative actions among selfish relay vehicles. As a $\mathbb{G}_{VRS}$ game player, a source vehicle pays its virtual money ($\Gamma$) to disseminate the routing packet. If the source vehicle $V_i$ selects $Path^{V_d}_{V_s}$ according to (6), its utility function at time $H_t$, $U_{V_i}(Path^{V_d}_{V_s}, H_t)$, can be defined as

$$U_{V_i}\big(Path^{V_d}_{V_s}, H_t\big) = \begin{cases} \big(Q^{V_i}_{H_t} - (J\times PC^{V_d}_{V_s}(H_t))\big) \text{ and } \Gamma^{V_i}_{H_{t+1}} = \Gamma^{V_i}_{H_t} - \big(J\times PC^{V_d}_{V_s}(H_t)\big), & \text{if } \Gamma^{V_i}_{H_t} \ge \big(J\times PC^{V_d}_{V_s}(H_t)\big),\\ 0, & \text{otherwise, i.e., } \Gamma^{V_i}_{H_t} < \big(J\times PC^{V_d}_{V_s}(H_t)\big), \end{cases} \tag{7}$$

where $Q^{V_i}_{H_t}$ and $\Gamma^{V_i}_{H_t}$ are the outcome of $V_i$'s routing and the amount of $V_i$'s virtual money at time $H_t$, respectively, and $J$ is the coefficient factor to estimate the incentive payment for relaying vehicles.
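The routing side can be illustrated with the following sketch (again our own simplified Python, not the authors' implementation): the link cost follows (5), the minimum-cost multihop path is obtained with the Bellman-Ford relaxation as in (6), and the source vehicle's payoff and virtual money update follow (7).

```python
def link_cost(dist, max_range, speed_j, neighbor_speeds, alpha=0.5):
    """Eq. (5): weighted mix of normalized distance and normalized relative speed."""
    return ((1 - alpha) * (dist / max_range)
            + alpha * (speed_j / max(neighbor_speeds)))

def bellman_ford_path(nodes, links, src, dst):
    """Eq. (6): path with the minimum total link cost from src to dst (Bellman-Ford)."""
    INF = float("inf")
    cost = {n: INF for n in nodes}
    prev = {n: None for n in nodes}
    cost[src] = 0.0
    for _ in range(len(nodes) - 1):
        for (u, v), lc in links.items():          # links: {(u, v): LC value}
            if cost[u] + lc < cost[v]:
                cost[v], prev[v] = cost[u] + lc, u
    path, n = [], dst
    while n is not None:
        path.append(n)
        n = prev[n]
    return list(reversed(path)), cost[dst]

def routing_payoff(Q, path_cost, money, J=0.1):
    """Eq. (7): payoff and remaining virtual money of the source vehicle."""
    charge = J * path_cost
    if money >= charge:
        return Q - charge, money - charge
    return 0.0, money

# toy usage with hypothetical vehicles; Q and J follow Table 1
nodes = ["Vs", "V1", "V2", "Vd"]
links = {("Vs", "V1"): 0.4, ("V1", "V2"): 0.3, ("V2", "Vd"): 0.5, ("Vs", "V2"): 0.9}
path, pc = bellman_ford_path(nodes, links, "Vs", "Vd")
payoff, money = routing_payoff(Q=5.0, path_cost=pc, money=100.0)
```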

3.4. The VSS Algorithm Based on the Triple-Plane Bargaining Game. Recently, the VSS has attracted great interest and has become one of the most valuable features of the VANET system. Some VANET applications involve the generation of huge amounts of sensed data. With the advance of vehicular sensor technology, vehicles equipped with OBUs are expected to effectively monitor the physical world. In VANET infrastructures, RSUs request a number of sensing tasks while collecting the local information sensed by vehicles. According to the requested tasks, the OBUs in vehicles sense the requested local information and transmit their sensing results to the RSU. Although some excellent research has been done on the VSS process, there are still significant challenges to be addressed. Most of all, conducting the sensing task and reporting the sensing data usually consume vehicle resources. Therefore, selfish vehicles should be paid by the RSU as compensation for their VSSs. To recruit an optimal set of vehicles to cover the monitoring area while ensuring that vehicles provide their full sensing efforts, the RSU stimulates sufficient vehicles with proper incentives to fulfill the VSS tasks [6, 8, 27].

In this sense, we design our VSS algorithm to determine the incentive for vehicles to complete their VSS tasks. Since vehicles may make different contributions to the sensing work, the RSU may issue appropriate incentives in return for the collected sensing data [6]. The major goal of our algorithm is that the interactive trading between the RSU and vehicles should benefit both of them. This game situation can be modeled effectively through $\mathbb{G}_{VSS}$. In the VSS algorithm, we consider a VSS platform consisting of one RSU ($R$), a set of mobile vehicles ($\mathbf{V}$), and sensing tasks ($\mathbf{I}$); they formulate the sensing game model $\mathbb{G}_{VSS}$. To capture its own heterogeneity, a vehicle's strategy ($\mu\in S^V_{V_i}$) indicates the contribution to a specific task ($I\in\mathbf{I}$). For example, $V_i$ can actually contribute to the task $I_s$ during $H_t$ ($\mu^{I_s}_{V_i} = 1$) or not ($\mu^{I_s}_{V_i} = 0$), where $\mu^{I_s}_{V_i}\in S^V_{V_i}$. Each vehicle has its own utility function, which represents its net gain. $V_i$'s utility function at time $H_t$, $u^V_{V_i}(S^V_{V_i}, H_t)$, is given by

$$u^V_{V_i}\Big(S^V_{V_i} = \big[\mu^{I_1}_{V_i}, \ldots, \mu^{I_j}_{V_i}, \ldots, \mu^{I_s}_{V_i}\big], H_t\Big) = \sum_{I_j=I_1}^{I_s}\Big(\big(\mu^{I_j}_{V_i}(H_t)\times\eta^{I_j}_R(H_t)\big) - C^{I_j}_{V_i}(H_t)\Big),$$

$$\text{s.t.}\quad \eta^{I_j}_R(H_t)\in S^R,\quad \mu^{I_j}_{V_i}(H_t) = \begin{cases} 1, & \text{if } V_i \text{ contributes to } I_j \text{ at } H_t,\\ 0, & \text{otherwise}, \end{cases} \tag{8}$$

where $C^{I_j}_{V_i}(H_t)$ and $\eta^{I_j}_R(H_t)$ are $V_i$'s cost and the RSU's incentive payment for the $I_j$ sensing at time $H_t$, respectively. Usually, game players simply attempt to maximize their utility functions. However, vehicles in the $\mathbb{G}_{VSS}$ game should consider not only the VSS but also the VCS and VRS. From the point of view of vehicles, this situation can be modeled effectively through a cooperative bargaining process. In this study, we adopt a well-known bargaining solution concept, called the Nash bargaining solution (NBS), to effectively design the vehicle's VSS strategy decision mechanism; the NBS is an effective tool to achieve a mutually desirable solution among conflicting requirements.

In our proposed scheme, vehicles can earn virtual money ($\Gamma$) from the VSS incentive payment and spend their $\Gamma$ on their VCS and VRS. For this reason, the strategy decision in the VSS algorithm might directly affect the payoffs of $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VRS}$. During VANET operations, the individual vehicle $V_i$ decides its VSS strategy ($S^V_{V_i}$) to maximize the combined payoffs $CP_{V_i}(S^V_{V_i}, H_t)$ at time $H_t$:

$$\max_{S^V_{V_i}=[\mu^{I_1}_{V_i},\ldots,\mu^{I_j}_{V_i},\ldots,\mu^{I_s}_{V_i}]} CP_{V_i}\big(S^V_{V_i}, H_t\big) = \max_{S^V_{V_i}} \prod_{1\le j\le 3}\big(X_j - d_j\big)^{\psi_j},$$

$$\text{s.t.}\quad \sum_{j=1}^{3}\psi_j = 1,\quad X_1 = U^V_{V_i\in\mathbf{V}}\big(P^R_j(H_t), r^{V_i}_k(H_t)\big),\quad X_2 = U_{V_i}\big(Path^{V_d}_{V_s}, H_t\big),\quad X_3 = u^V_{V_i}\Big(S^V_{V_i}=\big[\mu^{I_1}_{V_i},\ldots,\mu^{I_j}_{V_i},\ldots,\mu^{I_s}_{V_i}\big], H_t\Big), \tag{9}$$

where $d_j$ is a disagreement point; a disagreement point $d_{i,\,1\le i\le n}$ represents the minimum payoff of each game model. Therefore, $d$ is the least guaranteed payoff for the game players (i.e., zero in our model). $\psi_{j,\,1\le j\le 3}$ is a bargaining power, which is the relative ability to exert influence over the bargaining process. Usually, the bargaining solution is strongly dependent on the bargaining powers. In the $\mathbb{G}_{VSS}$ game, $\psi_{j,\,1\le j\le 3}$ can be estimated as follows:

$$\psi_{j,\,1\le j\le 3} = \frac{Y_j}{(Y_1 + Y_2 + Y_3)},\qquad \text{s.t.}\quad \begin{cases} Y_1 = \max\Big\{\big(R\Gamma^{VCS(V_i)}_{H_t} - U\Gamma^{VCS(V_i)}_{H_t}\big),\ \Gamma^{V_i}_{H_t}\Big\},\\[4pt] Y_2 = \max\Big\{\big(R\Gamma^{VRS(V_i)}_{H_t} - U\Gamma^{VRS(V_i)}_{H_t}\big),\ \Gamma^{V_i}_{H_t}\Big\},\\[4pt] Y_3 = \Gamma^{V_i}_{H_t}, \end{cases} \tag{10}$$

where $R\Gamma^{VCS(V_i)}_{H_t}$ and $R\Gamma^{VRS(V_i)}_{H_t}$ are the amounts of virtual money requested to maximize $V_i$'s payoffs in $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VRS}$ at time $H_t$, respectively, and $U\Gamma^{VCS(V_i)}_{H_t}$ and $U\Gamma^{VRS(V_i)}_{H_t}$ are the amounts of virtual money used by $V_i$ in $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VRS}$ at time $H_t$, respectively. Finally, individual vehicles select their strategies $S^V$ according to (9) and (10). In a distributed online fashion, each vehicle ensures a well-balanced performance among the VCS, VRS, and VSS processes.
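The NBS-based decision in (9), with the bargaining powers of (10) as reconstructed above, can be illustrated as follows. This sketch (hypothetical Python; the payoff functions are passed in as callables and all names are ours) enumerates every 0/1 participation vector, which is feasible only for a small task set such as the s = 4 tasks used in the simulation, and keeps the vector with the largest weighted Nash product.

```python
from itertools import product

def bargaining_powers(req_vcs, used_vcs, req_vrs, used_vrs, money):
    """Eq. (10): powers proportional to Y_1, Y_2, Y_3."""
    Y1 = max(req_vcs - used_vcs, money)
    Y2 = max(req_vrs - used_vrs, money)
    Y3 = money
    total = Y1 + Y2 + Y3
    return [Y1 / total, Y2 / total, Y3 / total]

def best_sensing_vector(num_tasks, vcs_payoff, vrs_payoff, vss_payoff, psi, d=(0.0, 0.0, 0.0)):
    """Eq. (9): maximize prod_j (X_j - d_j)^psi_j over all participation vectors mu."""
    best_mu, best_val = None, float("-inf")
    for mu in product([0, 1], repeat=num_tasks):
        X = [vcs_payoff(mu), vrs_payoff(mu), vss_payoff(mu)]
        if any(x - dj < 0 for x, dj in zip(X, d)):
            continue                                  # keep payoffs above the disagreement point
        val = 1.0
        for x, dj, p in zip(X, d, psi):
            val *= (x - dj) ** p
        if val > best_val:
            best_mu, best_val = mu, val
    return best_mu, best_val

# toy usage: the VCS and VRS payoffs are held fixed, the VSS payoff follows the shape of (8)
psi = bargaining_powers(req_vcs=3.0, used_vcs=1.0, req_vrs=2.0, used_vrs=0.5, money=4.0)
mu, value = best_sensing_vector(
    num_tasks=4,
    vcs_payoff=lambda m: 1.0,                        # U^V from (2), fixed in this toy example
    vrs_payoff=lambda m: 0.8,                        # U_{V_i} from (7), fixed in this toy example
    vss_payoff=lambda m: 0.6 * sum(m) - 0.2 * sum(m),  # toy incentives minus sensing costs
    psi=psi)
```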

In the VSS algorithm, the vehicles' actual sensing costs $C^{I}_{V}(\cdot)$ and each individual vehicle's situation are usually unknown to the RSU. Under this asymmetric information situation, the RSU should learn the vehicles' circumstances during the VSS process. As in the VCS process, each RSU is a proposer and individual vehicles are responders, and they also interact with each other for their objectives based on the collaborative feedback procedure. For the RSU, the payoff can be defined as a function of the price levels for tasks and the vehicles' responses. Therefore, the RSU's utility function at time $H_t$, $u^R(S^R, H_t)$, is computed as the sum of each task's outcome:

$$u^R\Big(S^R = \big[\eta^{I_1}_R, \ldots, \eta^{I_j}_R, \ldots, \eta^{I_s}_R\big], H_t\Big) = \sum_{I_h=I_1}^{I_s}\Bigg(\big(T^{I_h}(H_t)\times\Phi^{I_h}\big) - \sum_{V_i\in\mathbf{V}}\big(\mu^{I_h}_{V_i}(H_t)\times\eta^{I_h}_R(H_t)\big)\Bigg),$$

$$\text{s.t.}\quad T^{I_h}(H_t) = \begin{cases} 1, & \text{if } \sum_{V_i\in\mathbf{V}}\big(\mu^{I_h}_{V_i}(H_t)\big) \ge L^{I_h}_R,\\ 0, & \text{otherwise}, \end{cases} \tag{11}$$

where $\Phi^{I_h}$ is the profit of $I_h$; when the VSS for $I_h$ is successfully completed, i.e., $\sum_{V_i\in\mathbf{V}}(\mu^{I_h}_{V_i}(H_t)) \ge L^{I_h}_R$, $\Phi^{I_h}$ is obtained. $L^{I_h}_R$ is the predefined VSS requirement to complete the task $I_h$.

As a $\mathbb{G}_{VSS}$ game player, the RSU considers the interrelationship among tasks and the vehicles' reactions to determine its incentive payment strategy. To select the best strategy, a novel learning method is needed. In this study, we develop a new learning method to determine an effective payment policy for each task. For the RSU, the strategy vector $S^R(H_t) = [\eta^{I_1}_R(H_t), \ldots, \eta^{I_s}_R(H_t)]$ represents each price strategy for each task at time $H_t$. Let $Z^{I_s}_{H_t}(P^{I_s}_i, \eta^{-I_s}_R)$ be the learning value of taking the strategy $P^{I_s}_i \in \eta^{I_s}_R$ with a vector of payment strategies except the strategy $P^{I_s}_i$ ($\eta^{-I_s}_R$). The RSU updates its $Z(\cdot)$ value over time according to the following method:

$$Z^{P^{I_s}_i}_{H_{t+1}}\big(P^{I_s}_i\in\eta^{I_s}_R,\ \eta^{-I_s}_R\big) = \big((1-\beta)\times Z^{I_s}_{H_t}(P^{I_s}_i, \eta^{-I_s}_R)\big) + \beta\times\Bigg(\big((1-c^R)\times z^{I_s}_{H_t}\big) + \bigg(c^R\times\sum_{I_h\in\mathbf{I}-\{I_s\}}\frac{z^{I_h}_{H_t}}{(s-1)}\bigg)\Bigg),$$

$$\text{s.t.}\quad z^{I_s}_{H_t} = \big(T^{I_s}(H_t)\times\Phi^{I_s}\big) - \sum_{V_i\in\mathbf{V}}\big(\mu^{I_s}_{V_i}(H_t)\times\eta^{I_s}_R(H_t)\big). \tag{12}$$

During the $\mathbb{G}_{VSS}$ game process, the RSU adaptively learns the current VSS state and sequentially adjusts the $Z(\cdot)$ values for each VSS task. Based on the $Z(\cdot)$ values, we can determine $\mathbf{P}^R_{I_s\in\mathbf{I}}(H_t) = \{P^{P^{I_s}_{\min}}_{I_s}(H_t), \ldots, P^{P^{I_s}_i}_{I_s}(H_t), \ldots, P^{P^{I_s}_{\max}}_{I_s}(H_t)\}$ as the probability distribution of $R$'s strategy selection for the task $I_s$ at time $H_t$. The $P^{I_s}_i$ strategy selection probability for $I_s$ at time $H_{t+1}$, $P^{P^{I_s}_i}_{I_s}(H_{t+1})$, is defined as

$$P^{P^{I_s}_i}_{I_s}(H_{t+1}) = \frac{\exp\Big(Z^{P^{I_s}_i}_{H_t}\big(P^{I_s}_i\in\eta^{I_s}_R,\ \eta^{-I_s}_R\big)\Big)}{\sum_{j=\min}^{\max}\exp\Big(Z^{P^{I_s}_j}_{H_t}\big(P^{I_s}_j\in\eta^{I_s}_R,\ \eta^{-I_s}_R\big)\Big)}. \tag{13}$$
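A compact sketch of the task-price learning in (12) and the selection probability in (13) is given below (illustrative Python with our own variable names): the immediate outcome of a task is mixed with the average outcome of the other tasks through the discount factor c^R, and the next price level is drawn from the exponential distribution over the Z values.

```python
import math, random

def task_outcome(completed, profit, payments):
    """z^{I_s}_{H_t} inside (12): task profit (if completed) minus the incentives paid."""
    return (profit if completed else 0.0) - sum(payments)

def update_Z(Z, task, price, own_outcome, other_outcomes, beta=0.3, c_R=0.3):
    """Eq. (12): blend the old Z value with the own-task and cross-task outcomes."""
    cross = sum(other_outcomes) / max(len(other_outcomes), 1)
    Z[task][price] = ((1 - beta) * Z[task][price]
                      + beta * ((1 - c_R) * own_outcome + c_R * cross))

def select_task_price(Z, task):
    """Eq. (13): Boltzmann-like selection over a task's price levels."""
    prices = list(Z[task].keys())
    weights = [math.exp(Z[task][p]) for p in prices]
    return random.choices(prices, weights=weights, k=1)[0]

# toy usage with the four sensing tasks and price levels of the simulation setup
price_levels = [0.2, 0.4, 0.6, 0.8, 1.0]
Z = {t: {p: 0.0 for p in price_levels} for t in ["I1", "I2", "I3", "I4"]}
p = select_task_price(Z, "I1")
z1 = task_outcome(completed=True, profit=10.0, payments=[p] * 5)   # 5 participating vehicles
update_Z(Z, "I1", p, own_outcome=z1, other_outcomes=[0.0, 0.0, 0.0])
```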

3.5. Main Steps of the Proposed Triple-Plane Bargaining Algorithm. In this study, we design a novel triple-plane bargaining game model through a systematic interactive game process. In the proposed VCS, VRS, and VSS algorithms, the RSUs and vehicles are game players who maximize their payoffs. In particular, the RSUs learn the current VANET situation based on the learning methods, and the vehicles determine their best strategies while balancing their VCS, VRS, and VSS requirements. The primary steps of the proposed scheme are described as follows.

Step 1. Control parameters are determined by the simulation scenario (Table 1).

Step 2. At the initial time, the $Z$ and $L$ learning values in the RSUs are equally distributed. This starting estimation guarantees that each RSU's strategy benefits similarly at the beginning of the $\mathbb{G}_{VCS}$ and $\mathbb{G}_{VSS}$ games.

Step 3. During the $\mathbb{G}_{VCS}$ game, the proposer RSU selects its strategy $P^R\in S^R$ to maximize its payoff ($U^R$) according to (1), (3), and (4). As responders, vehicles select their strategies $r\in S^V$ to maximize their payoffs ($U^V$) according to (2) while considering their current virtual money ($\Gamma$).

Step 4. At every time step ($H$), the RSU adjusts the learning values ($L(\cdot)$) and the probability distribution ($\mathbf{P}^R$) based on equations (3) and (4).

Step 5. During the $\mathbb{G}_{VRS}$ game, individual vehicles estimate the wireless link states ($LC$) according to equation (5). At each time period, the $LC$ values are estimated online based on the vehicle's relative distance and speed.

Step 6. During the $\mathbb{G}_{VRS}$ game, the source vehicle configures a multihop routing path using the Bellman-Ford algorithm based on equation (6). The source vehicle's payoff ($U$) is decided according to (7) while considering its current virtual money ($\Gamma$).

Step 7. During the $\mathbb{G}_{VSS}$ game, the proposer RSU selects its strategy $S^R$ to maximize its payoff ($u^R$) according to (11). As responders, vehicles select their strategies $S^V$ to maximize their combined payoffs ($CP$) according to (9) while adjusting each bargaining power ($\psi$) based on equation (10).

Step 8. At every time step ($H$), the RSU adjusts the learning values ($Z(\cdot)$) and the probability distribution ($\mathbf{P}^R$) according to equations (12) and (13).

Step 9. Based on the interactive feedback process, the dynamics of our $\mathbb{G}_{VCS}$, $\mathbb{G}_{VRS}$, and $\mathbb{G}_{VSS}$ games cause a cascade of interactions among the game players, who choose their best strategies in an online distributed manner.

Step 10. Under the dynamic VANET environment, the individual game players constantly self-monitor for the next game process; go to Step 3.
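Read together, Steps 3-10 amount to the interaction loop sketched below (pseudocode-style Python; the rsu and vehicle objects and their method names are our own placeholders, not an API defined in this paper).

```python
def triple_plane_round(rsu, vehicles, time_step):
    """One gameplay period H_t of the combined VCS/VRS/VSS process (Steps 3-8).
    The rsu and vehicle objects are assumed to expose the methods used below."""
    price = rsu.select_cloud_price()                 # Step 3: G_VCS proposer move (eqs. 1, 3, 4)
    for v in vehicles:
        v.respond_to_cloud_offer(price)              # Step 3: responder move (eq. 2)
    rsu.update_price_learning()                      # Step 4: refresh L values and P^R

    for v in vehicles:
        v.estimate_link_costs(time_step)             # Step 5: eq. (5)
        v.route_with_bellman_ford()                  # Step 6: eqs. (6), (7)

    task_prices = rsu.select_task_prices()           # Step 7: G_VSS proposer move (eq. 11)
    for v in vehicles:
        v.choose_sensing_vector(task_prices)         # Step 7: NBS-based response (eqs. 9, 10)
    rsu.update_task_learning()                       # Step 8: refresh Z values (eqs. 12, 13)
```

Each call corresponds to one time period H_t; repeating the function realizes the self-monitoring loop of Steps 9 and 10.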


4. Performance Evaluation

4.1. Simulation Setup. In this section, we evaluate the performance of our proposed protocol and compare it with that of the MVIG, PVDD, and CRVC schemes [21–23]. To ensure a fair comparison, the following assumptions and system scenario were used:

(i) The simulated system was assumed to be a TDMA packet system for VANETs.

(ii) The number of vehicles that passed over an RSU was the rate of the Poisson process ($\rho$). The offered range was varied from 0 to 3.0.

(iii) Fifty RSUs were distributed randomly over the 100 km road area, and the velocity of each mobile vehicle was randomly selected to be 36 km/h, 72 km/h, or 108 km/h.

(iv) The maximum wireless coverage range of each vehicle was set to 500 m.

(v) The cloud computation capacity ($C$) is 5 GHz, and one BCSU is the minimum amount (e.g., 20 MHz/s in our system) of the cloud service unit.

(vi) The number ($s$) of sensing tasks in each $R$ is 4, i.e., $\mathbf{I} = \{I_1, I_2, I_3, I_4\}$.

(vii) The source and destination vehicles were randomly selected. Initially, the virtual money ($\Gamma$) of each vehicle was set to 100.

(viii) At the source node, data dissemination was generated at a rate of $\lambda$ (packets/s). According to this assumption, the time duration $H$ in our simulation model is one second.

(ix) Network performance measures obtained on the basis of 100 simulation runs are plotted as functions of the vehicle distribution ($\rho$).

To demonstrate the validity of our proposed method, we measured the cloud service success ratio, normalized dissemination throughput, and crowdsensing success probability. Table 1 shows the control factors and coefficients used in the simulation. Each parameter has its own characteristics [6].

4.2. Results and Discussion. Figure 1 compares the cloud service success ratio of each scheme. In this study, the cloud service success ratio represents the rate of cloud services that were completed successfully; this is a key performance evaluation factor in the VCS operation. As shown in Figure 1, the cloud service success ratios of all schemes are similar to each other; however, the proposed scheme adopts an interactive environmental feedback mechanism, and the RSUs in our scheme adaptively adjust their VCS costs. This approach improves the VCS performance relative to the existing MVIG, PVDD, and CRVC schemes. Therefore, we outperform the existing methods from low to high vehicle distribution intensities.

(Figure 1: Cloud service success ratio. Y-axis: normalized cloud service success ratio; X-axis: offered load, i.e., the number of vehicles passing over an RSU, given as the rate of the Poisson process. Curves: the proposed, MVIG, PVDD, and CRVC schemes.)

Figure 2 compares the normalized dissemination throughput in VANETs. Typically, the network throughput is measured as bits per second of network access. In this study, the dissemination throughput is defined as the ratio of the amount of data successfully received at the destination vehicles to the total amount of data generated at the source vehicles. The throughput improvement achieved by the proposed scheme is a result of our $\mathbb{G}_{VRS}$ game paradigm. During the VRS operations, each vehicle in the proposed scheme can select the most efficient routing path with real-time adaptability and self-flexibility. Hence, we attain a higher dissemination throughput than the other existing approaches, which are designed as lopsided, one-way methods and do not effectively adapt to the dynamic and diversified VANET conditions.

(Figure 2: Normalized dissemination throughput. Y-axis: normalized dissemination throughput; X-axis: offered load, i.e., the number of vehicles passing over an RSU, given as the rate of the Poisson process. Curves: the proposed, MVIG, PVDD, and CRVC schemes.)

The crowdsensing success probability, shown in Figure 3, represents the efficiency of the VANET system. In the proposed scheme, we employ the learning-based triple-plane game model to perform control decisions in a distributed online manner. Owing to the interactive operations of the VCS, VSS, and VRS, our game-based approach can improve the crowdsensing success probability more effectively than the other schemes. The simulation results shown in Figures 1 to 3 demonstrate that the proposed scheme can attain an appropriate performance balance; in contrast, the MVIG [21], PVDD [22], and CRVC [23] schemes cannot offer this outcome under widely different and diversified VANET situations.

(Figure 3: Crowdsensing success probability. Y-axis: normalized crowdsensing success probability; X-axis: offered load, i.e., the number of vehicles passing over an RSU, given as the rate of the Poisson process. Curves: the proposed, MVIG, PVDD, and CRVC schemes.)

Table 1: System parameters used in the simulation experiments.

Parameter | Value | Description
$[P^R_{\min}, \ldots, P^R_j, \ldots, P^R_{\max}]$ | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for the cloud service (min 1, max 5)
$[r_{\min}, \ldots, r_k, \ldots, r_{\max}]$ | 1, 2, 3, 4, 5 | The amount of cloud services in terms of BCSUs
$[P^I_{\min}, \ldots, P^I_l, \ldots, P^I_{\max}]$ | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for the sensing service (min 1, max 5)
$\theta$ | 0.2 | A coefficient factor of the cost calculation
$\mathcal{H}$ | 0.8 | A profit factor of a vehicle if one BCSU is processed
$\alpha$ | 0.5 | The weight control factor for the distance and velocity
$\chi$ | 0.2 | A learning rate to update the $L$ values
$Q$ | 5 | The routing outcome at each time period
$J$ | 0.1 | A coefficient factor to estimate the incentive payment
$C^{I_1}, C^{I_2}, C^{I_3}, C^{I_4}$ | 0.1, 0.2, 0.3, 0.4 | Predefined cost for each task's sensing
$\Phi^{I_1}, \Phi^{I_2}, \Phi^{I_3}, \Phi^{I_4}$ | 10, 10, 10, 10 | Predefined profit for each sensing task
$L_1, L_2, L_3, L_4$ | 5, 5, 5, 5 | Predefined requirement for each sensing task
$\beta$ | 0.3 | A control parameter to estimate the learning value $Z$
$c^R$ | 0.3 | A discount factor to estimate the learning value $Z$


5. Summary and Conclusions

The VANET using vehicle-based sensory technology is becoming more popular. It can provide vehicular sensing, routing, and clouding services for 5G network applications. Therefore, the design of next-generation VANET management schemes is important to satisfy the new demands. Herein, we focused on the paradigm of a learning algorithm and game theory to design the VCS, VRS, and VSS algorithms. By combining the VCS, VRS, and VSS algorithms, a new triple-plane bargaining game model was developed to provide an appropriate performance balance. During the VANET operations, the RSUs learned their strategies better under dynamic VANET environments, and the vehicles considered the mutual-interaction relationships of their strategies. As game players, they considered the obtained information to adapt to the dynamics of the VANET environment and performed control decisions intelligently by self-adaptation. According to the unique features of VANETs, our joint design approach is suitable for providing satisfactory services under incomplete information environments. In the future, we would like to consider privacy issues, such as differential privacy, during the VANET operation. Furthermore, we will investigate probabilistic algorithms to estimate the service quality of the sensing, routing, and clouding services. In addition, we plan to investigate smart city applications, where the different sensory information of a given area can be combined to provide a complete view of smart city development.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-2018-0-01799) supervised by the IITP (Institute for Information and Communications Technology Promotion), and by a grant funded by the Korea government (MSIT) (no. 2017-0-00498, A Development of De-identification Technique Based on Differential Privacy).


References

[1] Y. Wang, Y. Liu, J. Zhang, H. Ye, and Z. Tan, "Cooperative store–carry–forward scheme for intermittently connected vehicular networks," IEEE Transactions on Vehicular Technology, vol. 66, no. 1, pp. 777–784, 2017.

[2] M. M. C. Morales, R. Haw, E.-J. Cho, C.-S. Hong, and S.-W. Lee, "An adaptable destination-based dissemination algorithm using a publish/subscribe model in vehicular networks," Journal of Computing Science and Engineering, vol. 6, no. 3, pp. 227–242, 2012.

[3] Y. Kim, S. Atchley, G. R. Vallee, S. Lee, and G. M. Shipman, "Optimizing end-to-end big data transfers over terabits network infrastructure," IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 1, pp. 188–201, 2017.

[4] J. Chen, G. Mao, C. Li, A. Zafar, and A. Y. Zomaya, "Throughput of infrastructure-based cooperative vehicular networks," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 11, pp. 2964–2979, 2017.

[5] Z. Su, Y. Hui, and Q. Yang, "The next generation vehicular networks: a content-centric framework," IEEE Wireless Communications, vol. 24, no. 1, pp. 60–66, 2017.

[6] S. Kim, "Effective crowdsensing and routing algorithms for next generation vehicular networks," Wireless Networks, 2017.

[7] C. Wang, Z. Zhang, S. Lu, and M. C. Zhou, "Estimating travel speed via sparse vehicular crowdsensing data," in Proceedings of IEEE World Forum on Internet of Things (WF-IoT), pp. 643–648, Reston, VA, USA, December 2016.

[8] L. Xiao, T. Chen, C. Xie, H. Dai, and P. Vincent, "Mobile crowdsensing games in vehicular networks," IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1535–1545, 2018.

[9] B. Oh, N. Yongchan, J. Yang, S. Park, J. Nang, and J. Kim, "Genetic algorithm-based dynamic vehicle route search using car-to-car communication," Advances in Electrical and Computer Engineering, vol. 10, no. 4, pp. 81–86, 2011.

[10] M. Chaqfeh, N. Mohamed, I. Jawhar, and J. Wu, "Vehicular cloud data collection for intelligent transportation systems," in Proceedings of 2016 3rd Smart Cloud Networks and Systems (SCNS), pp. 1–6, Dubai, UAE, December 2016.

[11] A. Ashok, S. Peter, and B. Fan, "Adaptive cloud offloading for vehicular applications," in Proceedings of 2016 IEEE Vehicular Networking Conference (VNC), pp. 1–8, Columbus, OH, USA, December 2016.

[12] J. Ahn, D. Shin, K. Kim, and J. Yang, "Indoor air quality analysis using deep learning with sensor data," Sensors, vol. 17, no. 11, pp. 1–13, 2017.

[13] S. Kim, "Timed bargaining-based opportunistic routing model for dynamic vehicular ad hoc network," EURASIP Journal on Wireless Communications and Networking, vol. 2016, no. 14, pp. 1–10, 2016.

[14] J.-H. Kim, K.-J. Lee, T.-H. Kim, and S.-B. Yang, "Effective routing schemes for double-layered peer-to-peer systems in MANET," Journal of Computing Science and Engineering, vol. 5, no. 1, pp. 19–31, 2011.

[15] I. Jang, D. Pyeon, S. Kim, and H. Yoon, "A survey on communication protocols for wireless sensor networks," Journal of Computing Science and Engineering, vol. 7, no. 4, pp. 231–241, 2013.

[16] N. Cheng, N. Zhang, N. Lu, X. Shen, J. W. Mark, and F. Liu, "Opportunistic spectrum access for CR-VANETs: a game-theoretic approach," IEEE Transactions on Vehicular Technology, vol. 63, no. 1, pp. 237–251, 2014.

[17] T. Chen, L. Wu, F. Wu, and S. Zhong, "Stimulating cooperation in vehicular ad hoc networks: a coalitional game theoretic approach," IEEE Transactions on Vehicular Technology, vol. 60, no. 2, pp. 566–579, 2011.

[18] D. B. Rawat, B. B. Bista, and G. Yan, "CoR-VANETs: game theoretic approach for channel and rate selection in cognitive radio VANETs," in Proceedings of International Conference on Broadband, Wireless Computing, Communication and Applications, pp. 94–99, Victoria, Canada, November 2012.

[19] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro clouds as virtual edge servers for efficient data collection," in Proceedings of ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services, pp. 31–35, Snowbird, UT, USA, October 2017.

[20] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro cloud in action: on gateway selection and gateway handovers," Ad Hoc Networks, vol. 78, pp. 73–83, 2018.

[21] M. Aloqaily, B. Kantarci, and H. T. Mouftah, "Multiagent/multiobjective interaction game system for service provisioning in vehicular cloud," IEEE Access, vol. 4, pp. 3153–3168, 2016.

[22] R. Kim, H. Lim, and B. Krishnamachari, "Prefetching-based data dissemination in vehicular cloud systems," IEEE Transactions on Vehicular Technology, vol. 65, no. 1, pp. 292–306, 2016.

[23] M. F. Feteiha and H. S. Hassanein, "Enabling cooperative relaying VANET clouds over LTE-A networks," IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1468–1479, 2015.

[24] K. Zhang, Y. Mao, S. Leng, Y. He, and Y. Zhang, "Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading," IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36–44, 2017.

[25] C. T. Hieu and C.-S. Hong, "A connection entropy-based multi-rate routing protocol for mobile ad hoc networks," Journal of Computing Science and Engineering, vol. 4, no. 3, pp. 225–239, 2010.

[26] S. Kim, "Adaptive ad-hoc network routing scheme by using incentive-based model," Ad Hoc and Sensor Wireless Networks, vol. 15, pp. 1–19, 2012.

[27] K. Han, C. Chen, Q. Zhao, and X. Guan, "Trajectory-based node selection scheme in vehicular crowdsensing," in Proceedings of IEEE/CIC International Conference on Communications in China, pp. 1–6, Shenzhen, China, November 2015.


Page 4: NewBargainingGameModelforCollaborativeVehicular ...downloads.hindawi.com/journals/misy/2019/6269475.pdf · (VRS).Vehicularcloudserviceisanewparadigmthatexploits cloud computing to

costs e game system in the MVIG scheme differs fromother conventional models as it allows vehicles to prioritizetheir preferences In addition a quality-of-experienceframework is also developed to provision various types ofservices in a vehicular cloud it is a simple but an effectivemodel to determine vehicle preferences while ensuring fairgame treatment e simulation results show that theMVIGscheme outperforms other conventional models [21]

e Prefetching-Based Vehicular Data Dissemination(PVDD) scheme [22] devises a vehicle route-based dataprefetching approach while improving the data transmissionaccessibility under dynamic wireless connectivity and lim-ited data storage environments To put it more concretelythe PVDD scheme develops two control algorithms to de-termine how to prefetch a data set from a data center toroadside wireless access points Based on the greedy ap-proach the first control algorithm is developed to solve thedissemination problem Based on the online learningmanner the second control algorithm gradually learns thesuccess rate of unknown network connectivity and de-termines an optimal binary decision matrix at each iterationFinally this study proves that the first control algorithmcould find a suboptimal solution in polynomial time and theoptimal solution of the second control algorithm convergesto a globally optimal solution in a certain number of iter-ations using regret analysis [22]

e Cooperative Relaying Vehicular Cloud (CRVC)scheme [23] proposes a novel cooperative vehicular relayingalgorithm over a long-term evolution-advanced (LTE-A)network To maximize the vehicular network capacity thisscheme uses vehicles for cooperative relaying nodes inbroadband cellular networks With new functionalities theCRVC scheme can (i) reduce power consumption (ii)provide a higher throughput (iii) lower operational ex-penditures (iv) ensure a more flexibility and (v) increase acovering area In a heavily populated urban area the CRVCscheme is useful owning to a large number of relaying ve-hicles Finally the performance improvement is demon-strated through the simulation analysis [23] In this paperwe compare the performance of our proposed scheme withthe existing MVIG [21] PVDD [22] CRVC [23] schemes

3 The Proposed VCS VRS and VSS Algorithms

In this section the proposed VCS VRS and VSS algorithmsare presented in detail Based on the learning algorithm andgame-based approach these algorithms form a new triple-plane bargaining game model to adapt the fast changingVANET environments

31 Game Models for the VCS VRS and VSS AlgorithmsFor the operation of a VANET system we develop threedifferent game models for the VCS VRS and VSS algo-rithms As game players vehicles and RSUs select theirstrategies based on the interactions of other players In ourproposed scheme the VCS and VRS algorithms are for-mulated as non-cooperative game models and the VSSalgorithm is formulated as a triple-plane bargaining model

by cooperation coordination and collaboration of VCSVRS and VSS processes First for the VCS algorithm weformally define the game model GVCS N C SR SV

Vi isinV

URUVVi isinV LPR

j isinSR

T at each time period of gameplay

(i) N is the finite set of VCS game players N RV where R represents a RSU and V V1 1113864

Vi Vk is a set of multiple vehicles in the Rrsquoscoverage area

(ii) C is the cloud computation capacity ie number ofCPU cycles per second in the R

(iii) SR PRmin PR

j PRmax is a set of Rrsquos

strategies where PRj means the price level to ex-

ecute one basic cloud service unit (BCSU) SVViisinV

is the Virsquos available strategies SVVi

[rVi

min

rVi

k rVimax] where rVi represents the amount of

cloud services of the Vi and is specified in terms ofBCSUs

(iv) UR is the payoff received by the RSU and UVVi isinV is

the payoff received by Vi during the VCS process(v) LPR

j isinSR

is the Rrsquos learning value for the strategyPR

j L is used to estimate the probability distri-bution (PR) for the next Rrsquos strategy selection

(vi) T H1 HtHt+1 1113864 1113865 denotes time which isrepresented by a sequence of time steps with im-perfect information for the VCS game process

For the VRS algorithm we define the game modelGVRS vNVi

AViisinv UViisinv T at each time period ofGVRS

gameplayGVRS can formulate the interactions of vehicles forVANET routing operations

(i) v is the finite set of game players v V1 Vn1113864 1113865where n is the number of vehicles for theGVRS game

(ii) NViis the set of Virsquos one-hope neighboring

vehicles(iii) AViisinv a

Vm

Vi∣Vm isinNVi

is the finite set of Virsquosavailable strategies where a

Vm

Virepresents the se-

lection of Vm to relay a routing packet(iv) UViisinv is the payoff received by the Vi during the

VRS process(v) T H1 HtHt+1 1113864 1113865 is the same as the T in

GVCS

For the VSS algorithm we formally define the game modelGVSS N I SR SVVi isinV uR uV

Vi isinV ZP

IsisinIi T at

each time period of gameplay

(i) N RV is the finite set of game players for theGVSS game N is the same as N in GVCS

(ii) I is the finite set of sensing tasks I I1 Is1113864 1113865

in R where s is the number of total sensing tasks(iii) SR [ηI1

R ηIj

R ηIs

R ] is a vector of Rrsquosstrategies where ηIj

R represents the strategy set forthe Ij In the GVSS game ηIj

R means the set ofprice levels for the crowdsensing work for the taskIj during a time period (H isin T) Like as the GVCS

4 Mobile Information Systems

game ηIj

R is defined as ηIj

R 1113937IjisinNPIj

i |PIj

i isin[P

Ij

min PIj

l PIj

max](iv) SVVi

[μI1Vi

μIj

Vi μIs

Vi] is the vector of Virsquos

strategies where μIj

Virepresents the strategy for

Ij In the GVSS game μIj

Vimeans the Virsquos active

VSS participation ie μIj

Vi 1 or not ie μIj

Vi 0

the Ij during H(v) uR is the payoff received by the RSU and uV

ViisinVis

the payoff received by Vi during the VSS process(vi) ZP

Ij

iisinη

Ij

R is the learning value for the strategyPIj

i Zis used to estimate the probability distribution (PR)for the next Rrsquos strategy selection

(vii) T H1 HtHt+1 1113864 1113865 is the same as T inGVCS and GVRS

32 4e VCS Algorithm Based on a Non-Cooperative GameModel In the next-generation VANET paradigm diverseand miscellaneous applications require significant compu-tation resources However in general the computationalcapabilities of the vehicles are limited To address this re-source restriction problem vehicular cloud technology iswidely considered as a new paradigm to improve theVANET service performance In the VCS process appli-cations can be offloaded to the remote cloud server toimprove the resource utilization and computation perfor-mance [24]

Vehicular cloud servers are located in RSUs and executethe computation tasks received from vehicles However theservice area of each RSU may be limited by the radiocoverage Due to the high mobility vehicles may passthrough several RSUs during the task offloading processerefore service results must be migrated to another RSUwhich can reach the corresponding vehicle From theviewpoints of RSUs and vehicles payoff maximization istheir main concern To reach a win-win situation theyrationally select their strategies During the VCS processeach RSU is a proposer and individual vehicles are

responders they interact with each other for their objectivesduring the VCS process

To design our VCS algorithm we consider the VCSplatform consisting of one RSU (R) and a set of mobilevehicles (V) they formulate the game model GVCS As aproposer the Rrsquos strategy (PR

j isin SR) indicates the offered

price for one BCSU process R has its own utility functionwhich represents its net gain while providing the VCSprocess eRrsquos utility function withPR

j strategy at the trsquothtime step Ht (UR(VPR

j (Ht))) is given by

UR

VPRj Ht( 11138571113872 1113873 1113944

ViisinV1113874ξVi

Ht( 1113857 times 11138741113882PRj Ht( 1113857

times rVi

k Ht( 11138571113883minusCRrVi

k Ht( 11138571113872 111387311138751113875

st PRj Ht( 1113857 isin SR

CR

rVi

k Ht( 11138571113872 1113873 ΘRBCSU times rVi

k Ht( 1113857

ΘRBCSU (θ times M)

C

ξViHt( 1113857

1 Vi selectsrVi

k isin SVVi

atHt

0 otherwise ie noVCS of Vi( 1113857

⎧⎪⎨

⎪⎩

(1)

whereCR(rVi

k (Ht)) is the cost function to execute theVirsquoscloud service (r

Vi

k (Ht)) at timeHtΘRBCSU is the processingcost to execute one BCSU and M is the currently usingcapacity ofC θ is the coefficient factor of cost calculation Inpractice the actual sensing cost CR(middot) is usually unknownby vehicles

As a responder the vehicleVirsquos strategy (SVVi

) representsthe amount of cloud service e payoff ofVi can be definedas a function of the task offload level (rVi) and service price(PR) erefore the Virsquos utility function with PR

j and rVi

k

strategies at time Ht (UVViisinV(PR

j (Ht)rVi

k (Ht))) iscomputed as follows

UVViisinV P

Rj Ht( 1113857r

Vi

k Ht( 11138571113872 1113873 9Vi Ht( 1113857 times H

Vi times rVi

k Ht( 11138571113872 1113873minus PRj Ht( 1113857 times r

Vi

k Ht( 11138571113872 11138731113872 1113873

st 9Vi Ht( 1113857

1 and ΓVi

Ht+1 ΓVi

Htminus PR

j Ht( 1113857 times rVi

k Ht( 11138571113872 1113873 if ΓVi

Htge PR

j Ht( 1113857 times rVi

k Ht( 11138571113872 11138731113872 1113873

and rVi

k isin SVVi

is selected1113872 1113873

0 otherwise ie ΓVi

Htlt PR

j Ht( 1113857 times rVi

k Ht( 11138571113872 11138731113872 1113873

⎧⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎩

(2)

where HVi is the Virsquos profit factor if one BCSU is pro-cessed and ΓVi

Htis the amount ofVirsquos virtual money at time

Ht if Vi has enough money to pay the VCS price Vi canrequest its cloud service (r

Vi

k ) As the GVCS game playersthe RSU and vehicles attempt to maximize their utilityfunctions Interactions among game players continue

Interactions among game players continue repeatedly during the VCS process over time. In particular, the RSU should consider the reactions from vehicles to determine the price strategy (P^R). In this study, we develop a new learning method to decide an effective price policy for cloud services. If the strategy P_j^R is selected at time H_{t-1}, the RSU updates the strategy P_j^R's learning value for the next time step, L_{H_t}^{P_j^R}, according to the following method:

L_{H_t}^{P_j^R} = \max\Bigg\{ \bigg[ \Big((1-\chi) \times L_{H_{t-1}}^{P_j^R}\Big) + \chi \times \Bigg( \frac{\log_2\big(U_R\big(V, P_j^R(H_{t-1})\big)\big)}{\sum_{P_k^R \in S_R} \big(L_{H_{t-1}}^{P_k^R} / |S_R|\big)} \Bigg) \bigg], \; 0 \Bigg\}, \qquad (3)

where |S_R| is the cardinality of S_R, and \chi is a learning rate that models how the L values are updated. To implement the price learning mechanism in the RSU, a strategy selection distribution (P_R \in \mathbf{P}_R) for the VCS is defined based on the L(\cdot) values. During the G_VCS game process, we sequentially determine \mathbf{P}_R(H_t) = \{P_R^{P_{\min}^R}(H_t), \ldots, P_R^{P_j^R}(H_t), \ldots, P_R^{P_{\max}^R}(H_t)\} as the probability distribution of R's strategy selection at time H_t. The P_j^R strategy selection probability at time H_t, P_R^{P_j^R}(H_t), is defined as follows:

P_R^{P_j^R}(H_t) = \frac{\exp\big(L_{H_{t-1}}^{P_j^R}\big)}{\sum_{k=\min}^{\max} \exp\big(L_{H_{t-1}}^{P_k^R}\big)}. \qquad (4)
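The following sketch shows one way the price learning of (3) and the selection distribution of (4) could be implemented. It is a schematic reading of (3); the helper names and the guard against a zero payoff are our own assumptions, not code from the paper.

```python
import math

def update_l_value(l_values, chosen, payoff, chi=0.2):
    """One update of (3): blend the chosen price's old L value with the
    log2 payoff normalized by the average L value, clipped at zero."""
    avg_l = sum(l_values) / len(l_values)
    blended = ((1.0 - chi) * l_values[chosen]
               + chi * (math.log2(max(payoff, 1e-9)) / max(avg_l, 1e-9)))
    l_values[chosen] = max(blended, 0.0)
    return l_values

def selection_probabilities(l_values):
    """Distribution of (4): exponential weighting (softmax) of the L values."""
    weights = [math.exp(v) for v in l_values]
    total = sum(weights)
    return [w / total for w in weights]

# Five predefined price levels, initially indistinguishable (cf. Step 2).
l_values = [1.0] * 5
l_values = update_l_value(l_values, chosen=2, payoff=3.5)
print(selection_probabilities(l_values))
```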

3.3. The VRS Algorithm Based on a Non-Cooperative Game Model. The main goal of the VRS is to transmit data from a source vehicle to a destination vehicle via wireless multihop transmission techniques. In the wireless multihop transmission technique, the intermediate vehicles in a routing path should relay data as soon as possible from the source to the destination [6, 13, 25]. In this section, we develop a non-cooperative game model (G_VRS) for vehicular routing services. As game players in G_VRS, vehicles dynamically decide their routing paths. To configure the routing topology, a link cost (LC) is defined to handle dynamic VANET conditions in a relative manner. In this study, we define the wireless link status between V_i and V_j at time H_t, LC_{V_i}^{V_j}(H_t), as follows:

LC_{V_i}^{V_j}(H_t) = \Bigg( (1-\alpha) \times \Bigg( \frac{d_{V_i}^{V_j}(H_t)}{d_{V_i}^{M}} \Bigg) \Bigg) + \Bigg( \alpha \times \frac{\big\{|\vec{v}_j(H_t)|\big\}}{\max_{V_h \in M_{V_i}^{H_t}} \big\{|\vec{v}_h(H_t)|\big\}} \Bigg), \qquad (5)

where d_{V_i}^{M} is the maximum coverage range of V_i, and d_{V_i}^{V_j}(H_t) is the relative distance between V_i and V_j at time H_t. Let M_{V_i}^{H_t} be the set of the neighboring vehicles of V_i at time H_t, and \vec{v}_j(H_t) is the relative velocity of V_j at time H_t. For the adaptive LC_{V_i}^{V_j}(H_t) estimation, the parameter \alpha controls the relative weight between the distance and the velocity of the corresponding vehicle.
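As a small illustration of (5), the sketch below computes the link status from the normalized relative distance and the relay's speed relative to the fastest neighbor; the parameter names and example values (alpha, max_range, the speed list) are assumptions made for demonstration only.

```python
def link_cost(distance, max_range, relay_speed, neighbor_speeds, alpha=0.5):
    """LC in (5): (1 - alpha) weighs the normalized distance; alpha weighs the
    relay's speed normalized by the fastest neighboring vehicle's speed."""
    distance_term = distance / max_range
    fastest = max(neighbor_speeds) if neighbor_speeds else relay_speed
    speed_term = relay_speed / fastest if fastest > 0 else 0.0
    return (1.0 - alpha) * distance_term + alpha * speed_term

# Relay 200 m away (coverage 500 m), moving at 20 m/s among neighbors up to 30 m/s.
print(link_cost(distance=200.0, max_range=500.0, relay_speed=20.0,
                neighbor_speeds=[10.0, 20.0, 30.0]))
```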

For the VRS, a source vehicle configures a multihop routing path by using the Bellman–Ford algorithm. As a routing game player, each source vehicle attempts to minimize the total path cost (PC). From the source vehicle V_s to the destination vehicle V_d, the total path cost at time H_t, PC_{V_s}^{V_d}(H_t), is computed as the sum of all relay link costs as follows:

PC_{V_s}^{V_d}(H_t) = \min\Bigg\{ \sum_{LC_{Veh}(H_t) \in Path_{V_s}^{V_d}} LC_{Veh}(H_t) \Bigg\}, \qquad (6)

where Path_{V_s}^{V_d} is the routing path from V_s to V_d. Usually, a vehicle acting as a relay node has to sacrifice its energy and bandwidth. Therefore, an incentive payment algorithm should be developed to guide selfish vehicles toward the cooperative intervehicle routing paradigm [6, 26]. By paying the incentive cost to relaying vehicles, the developed routing algorithm stimulates cooperative actions among selfish relay vehicles. As a G_VRS game player, a source vehicle pays its virtual money (\Gamma) to disseminate the routing packet. If the source vehicle V_i selects the Path_{V_s}^{V_d} according to (6), its utility function at time H_t, U_{V_i}(Path_{V_s}^{V_d}, H_t), can be defined as

U_{V_i}\big(Path_{V_s}^{V_d}, H_t\big) = \begin{cases} \Big(Q_{H_t}^{V_i} - \big(J \times PC_{V_s}^{V_d}(H_t)\big)\Big) \text{ and } \Gamma_{H_{t+1}}^{V_i} = \Gamma_{H_t}^{V_i} - \big(J \times PC_{V_s}^{V_d}(H_t)\big), & \text{if } \Gamma_{H_t}^{V_i} \ge J \times PC_{V_s}^{V_d}(H_t), \\ 0, & \text{otherwise (i.e., } \Gamma_{H_t}^{V_i} < J \times PC_{V_s}^{V_d}(H_t)), \end{cases} \qquad (7)

where Q_{H_t}^{V_i} and \Gamma_{H_t}^{V_i} are the outcome of V_i's routing and the amount of V_i's virtual money at time H_t, respectively, and J is the coefficient factor to estimate the incentive payment for relaying vehicles.
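To illustrate (6) and (7) together, the sketch below runs a textbook Bellman–Ford relaxation over per-link LC values and then charges the source's virtual money for the selected path; the edge-list graph representation, the function names, and the toy topology are our own assumptions, not the scheme's actual implementation.

```python
def bellman_ford(num_nodes, edges, source):
    """Minimum total path cost PC of (6): edges are (u, v, link_cost) tuples."""
    inf = float("inf")
    cost = [inf] * num_nodes
    cost[source] = 0.0
    for _ in range(num_nodes - 1):          # standard Bellman-Ford relaxation
        for u, v, lc in edges:
            if cost[u] + lc < cost[v]:
                cost[v] = cost[u] + lc
    return cost

def routing_utility(path_cost, routing_outcome, budget, incentive_factor=0.1):
    """U_Vi in (7): routing outcome minus the incentive payment J * PC if the
    source can afford it (virtual money is reduced), otherwise zero."""
    payment = incentive_factor * path_cost
    if budget >= payment:
        return routing_outcome - payment, budget - payment
    return 0.0, budget

edges = [(0, 1, 0.4), (1, 2, 0.3), (0, 2, 0.9)]     # toy three-vehicle topology
costs = bellman_ford(3, edges, source=0)
print(routing_utility(costs[2], routing_outcome=5.0, budget=100.0))
```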

3.4. The VSS Algorithm Based on the Triple-Plane Bargaining Game. Recently, the VSS has attracted great interest and has become one of the most valuable features in the VANET system. Some VANET applications involve the generation of huge amounts of sensed data. With the advance of vehicular sensor technology, vehicles that are equipped with OBUs are expected to effectively monitor the physical world. In VANET infrastructures, RSUs request a number of sensing tasks while collecting the local information sensed by vehicles. According to the requested tasks, OBUs in vehicles sense the requested local information and transmit their sensing results to the RSU. Although some excellent research has been done on the VSS process, there are still significant challenges that need to be addressed. Most of all, conducting the sensing task and reporting the sensing data usually consume resources of vehicles. Therefore, selfish vehicles should be paid by the RSU as compensation for their VSSs. To recruit an optimal set of vehicles to cover the monitoring area while ensuring that vehicles provide their full sensing efforts, the RSU stimulates sufficient vehicles with proper incentives to fulfill VSS tasks [6, 8, 27].

In this sense, we design our VSS algorithm to determine the incentive for vehicles to complete their VSS tasks. Since vehicles may make different contributions of sensing work, the RSU may issue appropriate incentives in return for the collected sensing data [6]. The major goal of our algorithm is that the interactive trading between the RSU and vehicles should benefit both of them. This game situation can be modeled effectively through G_VSS. In the VSS algorithm, we consider the VSS platform consisting of one RSU (R), a set of mobile vehicles (V), and sensing tasks (\mathbf{I}); they formulate the sensing game model G_VSS. To capture its own heterogeneity, a vehicle's strategy (\mu \in S_{V_i}^V) indicates the contribution for a specific task (I \in \mathbf{I}). For example, V_i can actually contribute for the task I_s during H_t, i.e., \mu_{V_i}^{I_s} = 1, or not, i.e., \mu_{V_i}^{I_s} = 0, where \mu_{V_i}^{I_s} \in S_{V_i}^V. Each vehicle has its own utility function, which represents its net gain. The V_i's utility function at time H_t, u_{V_i}^V(S_{V_i}^V, H_t), is given by

u_{V_i}^V\Big(S_{V_i}^V = \big[\mu_{V_i}^{I_1}, \ldots, \mu_{V_i}^{I_j}, \ldots, \mu_{V_i}^{I_s}\big], H_t\Big) = \sum_{I_j = I_1}^{I_s} \Big( \mu_{V_i}^{I_j}(H_t) \times \big( \eta_R^{I_j}(H_t) - C_{V_i}^{I_j}(H_t) \big) \Big),

s.t. \quad \eta_R^{I_j}(H_t) \in S_R, \quad \mu_{V_i}^{I_j}(H_t) = \begin{cases} 1, & \text{if } V_i \text{ contributes for } I_j \text{ at } H_t, \\ 0, & \text{otherwise}, \end{cases} \qquad (8)

where C_{V_i}^{I_j}(H_t) and \eta_R^{I_j}(H_t) are the V_i's cost and the RSU's incentive payment for the I_j sensing at time H_t, respectively. Usually, game players simply attempt to maximize their utility functions. However, vehicles in the G_VSS game should consider not only the VSS but also the VCS and VRS. From the point of view of vehicles, this situation can be modeled effectively through a cooperative bargaining process. In this study, we adopt a well-known bargaining solution concept called the Nash bargaining solution (NBS) to effectively design the vehicle's VSS strategy decision mechanism; the NBS is an effective tool to achieve a mutually desirable solution among conflicting requirements.

In our proposed scheme, vehicles can earn virtual money (\Gamma) from the VSS incentive payment and spend their \Gamma for their VCS and VRS. For this reason, the strategy decision in the VSS algorithm might directly affect the payoffs of G_VCS and G_VRS. During VANET operations, the individual vehicle V_i decides its VSS strategy (S_{V_i}^V) to maximize the combined payoff CP_{V_i}(S_{V_i}^V, H_t) at time H_t:

\max CP_{V_i}\big(S_{V_i}^V, H_t\big) = \max_{S_{V_i}^V = [\mu_{V_i}^{I_1}, \ldots, \mu_{V_i}^{I_j}, \ldots, \mu_{V_i}^{I_s}]} \prod_{1 \le j \le 3} \big(X_j - d_j\big)^{\psi_j},

s.t. \quad \sum_{j=1}^{3} \psi_j = 1, \quad X_1 = U_{V_i \in V}^V\big(P_j^R(H_t), r_k^{V_i}(H_t)\big), \quad X_2 = U_{V_i}\big(Path_{V_s}^{V_d}, H_t\big), \quad X_3 = u_{V_i}^V\Big(S_{V_i}^V = \big[\mu_{V_i}^{I_1}, \ldots, \mu_{V_i}^{I_j}, \ldots, \mu_{V_i}^{I_s}\big], H_t\Big), \qquad (9)

where d_j is a disagreement point; a disagreement point d_{i, 1 \le i \le n} represents the minimum payoff of each game model. Therefore, d is the least guaranteed payoff for game players (i.e., zero in our model). \psi_{j, 1 \le j \le 3} is a bargaining power, which is the relative ability to exert influence over the bargaining process. Usually, the bargaining solution is strongly dependent on the bargaining powers. In the G_VSS game, \psi_{j, 1 \le j \le 3} can be estimated as follows:

\psi_{j, 1 \le j \le 3} = \frac{Y_j}{\big(Y_1 + Y_2 + Y_3\big)},

s.t. \quad Y_1 = \max\Big\{ \Big(R_{H_t}^{\Gamma_{VCS}(V_i)} - U_{H_t}^{\Gamma_{VCS}(V_i)}\Big), \; \Gamma_{H_t}^{V_i} \Big\}, \quad Y_2 = \max\Big\{ \Big(R_{H_t}^{\Gamma_{VRS}(V_i)} - U_{H_t}^{\Gamma_{VRS}(V_i)}\Big), \; \Gamma_{H_t}^{V_i} \Big\}, \quad Y_3 = \Gamma_{H_t}^{V_i}, \qquad (10)

where R_{H_t}^{\Gamma_{VCS}(V_i)} and R_{H_t}^{\Gamma_{VRS}(V_i)} are the requested virtual money to maximize the V_i's payoffs in G_VCS and G_VRS at time H_t, respectively, and U_{H_t}^{\Gamma_{VCS}(V_i)} and U_{H_t}^{\Gamma_{VRS}(V_i)} are the virtual money used by V_i in G_VCS and G_VRS at time H_t, respectively. Finally, individual vehicles select their strategies S_V according to (9) and (10). In a distributed online fashion, each vehicle ensures a well-balanced performance among the VCS, VRS, and VSS processes.
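The bargaining step of (9) and (10) can be read as: compute the three bargaining powers from virtual-money quantities, then pick the sensing vector that maximizes the weighted Nash product. The sketch below enumerates all 0/1 sensing vectors for a small task set; the payoff callables, the candidate generation, and the toy numbers are illustrative assumptions rather than the actual decision code, and the Y terms follow our reconstruction of (10).

```python
from itertools import product

def bargaining_powers(req_vcs, used_vcs, req_vrs, used_vrs, money):
    """psi in (10): normalize Y1, Y2, Y3 built from requested/used virtual
    money in G_VCS and G_VRS and the vehicle's current balance Gamma."""
    y1 = max(req_vcs - used_vcs, money)
    y2 = max(req_vrs - used_vrs, money)
    y3 = money
    total = y1 + y2 + y3
    return [y / total for y in (y1, y2, y3)]

def best_sensing_vector(num_tasks, payoff_fns, powers, disagreement=(0.0, 0.0, 0.0)):
    """(9): maximize the weighted Nash product over all 0/1 sensing vectors."""
    best, best_value = None, -1.0
    for mu in product([0, 1], repeat=num_tasks):
        payoffs = [fn(mu) for fn in payoff_fns]          # X1 (VCS), X2 (VRS), X3 (VSS)
        value = 1.0
        for x, d, psi in zip(payoffs, disagreement, powers):
            value *= max(x - d, 0.0) ** psi
        if value > best_value:
            best, best_value = mu, value
    return best, best_value

powers = bargaining_powers(req_vcs=3.0, used_vcs=1.0, req_vrs=2.0, used_vrs=0.5, money=1.5)
# Toy payoffs: X1 and X2 are fixed here; X3 rewards sensing with diminishing returns.
payoff_fns = [lambda mu: 2.0, lambda mu: 1.0,
              lambda mu: 0.8 * sum(mu) - 0.2 * sum(mu) ** 2]
print(best_sensing_vector(4, payoff_fns, powers))
```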

In the VSS algorithm, the vehicles' actual sensing costs C_V^I(\cdot) and each individual vehicle's situation are usually unknown by the RSU. Under this asymmetric information situation, the RSU should learn the vehicles' circumstances during the VSS process. As in the VCS process, each RSU is a proposer and individual vehicles are responders, and they also interact with each other for their objectives based on the collaborative feedback procedure. For the RSU, the payoff can be defined as a function of the price levels for tasks and the vehicles' responses. Therefore, the RSU's utility function at time H_t, u_R(S_R, H_t), is computed as the sum of each task's outcome:

u_R\Big(S_R = \big[\eta_R^{I_1}, \ldots, \eta_R^{I_j}, \ldots, \eta_R^{I_s}\big], H_t\Big) = \sum_{I_h = I_1}^{I_s} \Bigg( \big(T^{I_h}(H_t) \times \Phi^{I_h}\big) - \sum_{V_i \in V} \big( \mu_{V_i}^{I_h}(H_t) \times \eta_R^{I_h}(H_t) \big) \Bigg),

s.t. \quad T^{I_h}(H_t) = \begin{cases} 1, & \text{if } \sum_{V_i \in V} \big(\mu_{V_i}^{I_h}(H_t)\big) \ge L_R^{I_h}, \\ 0, & \text{otherwise}, \end{cases} \qquad (11)

where \Phi^{I_h} is the profit of I_h when the VSS for I_h is successfully completed, i.e., \Phi^{I_h} is obtained if \sum_{V_i \in V}\big(\mu_{V_i}^{I_h}(H_t)\big) \ge L_R^{I_h}, and L_R^{I_h} is the predefined VSS requirement to complete the task I_h.

As G_VSS game players, the RSU considers the interrelationship among tasks and the vehicles' reactions to determine its incentive payment strategy. To select the best strategy, a novel learning method is needed. In this study, we develop a new learning method to determine an effective payment policy for each task. For the RSU, the strategy vector S_R(H_t) = [\eta_R^{I_1}(H_t), \ldots, \eta_R^{I_s}(H_t)] represents each price strategy for each task at time H_t. Let Z_{H_t}^{I_s}(P_i^{I_s}, \eta_R^{-I_s}) be the learning value of taking strategy P_i^{I_s} \in \eta_R^{I_s} with a vector of payment strategies except the strategy P_i^{I_s}, denoted \eta_R^{-I_s}. The RSU updates its Z(\cdot) value over time according to the following method:

Z_{H_{t+1}}^{P_i^{I_s}}\big(P_i^{I_s} \in \eta_R^{I_s}, \eta_R^{-I_s}\big) = \Big( (1-\beta) \times Z_{H_t}^{I_s}\big(P_i^{I_s}, \eta_R^{-I_s}\big) \Big) + \Bigg( \beta \times \Bigg( \Big( \big(1-\gamma^R\big) \times z_{H_t}^{I_s} \Big) + \Bigg( \gamma^R \times \sum_{I_h \in \mathbf{I} - I_s} \frac{z_{H_t}^{I_h}}{(s-1)} \Bigg) \Bigg) \Bigg),

s.t. \quad z_{H_t}^{I_s} = \Big( T^{I_s}(H_t) \times \Phi^{I_s} \Big) - \sum_{V_i \in V} \Big( \mu_{V_i}^{I_s}(H_t) \times \eta_R^{I_s}(H_t) \Big), \qquad (12)

During the G_VSS game process, the RSU adaptively learns the current VSS state and sequentially adjusts the Z(\cdot) values for each VSS task. Based on the Z(\cdot) values, we can determine \mathbf{P}_{I_s \in \mathbf{I}}^R(H_t) = \{P_{I_s}^{P_{\min}^{I_s}}(H_t), \ldots, P_{I_s}^{P_i^{I_s}}(H_t), \ldots, P_{I_s}^{P_{\max}^{I_s}}(H_t)\} as the probability distribution of R's strategy selection for the I_s at time H_t. The P_i^{I_s} strategy selection probability for I_s at time H_{t+1}, P_{I_s}^{P_i^{I_s}}(H_{t+1}), is defined as

P_{I_s}^{P_i^{I_s}}(H_{t+1}) = \frac{\exp\Big( Z_{H_t}^{P_i^{I_s}}\big(P_i^{I_s} \in \eta_R^{I_s}, \eta_R^{-I_s}\big) \Big)}{\sum_{j=\min}^{\max} \exp\Big( Z_{H_t}^{P_j^{I_s}}\big(P_j^{I_s} \in \eta_R^{I_s}, \eta_R^{-I_s}\big) \Big)}. \qquad (13)
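A compact sketch of the task-level learning in (12) and the selection rule in (13) is given below; the dictionary bookkeeping, default parameters, and toy numbers are assumptions made only to show the structure of the update.

```python
import math

def task_outcome(completed, profit, num_contributors, incentive):
    """z in (12): the task's profit (if the requirement L was met) minus the
    total incentive paid to the contributing vehicles."""
    return (profit if completed else 0.0) - num_contributors * incentive

def update_z(z_values, task, price_idx, own_outcome, other_outcomes, beta=0.3, gamma=0.3):
    """One update of (12): blend the old Z value with the task's own outcome
    and the average outcome of the other tasks, weighted by gamma."""
    cross = sum(other_outcomes) / max(len(other_outcomes), 1)
    target = (1.0 - gamma) * own_outcome + gamma * cross
    z_values[task][price_idx] = (1.0 - beta) * z_values[task][price_idx] + beta * target
    return z_values

def price_probabilities(z_values, task):
    """(13): softmax over the Z values of one task's predefined price levels."""
    weights = [math.exp(z) for z in z_values[task]]
    total = sum(weights)
    return [w / total for w in weights]

z_values = {t: [0.0] * 5 for t in range(4)}          # four tasks, five price levels
z0 = task_outcome(True, profit=10.0, num_contributors=5, incentive=0.6)
z_values = update_z(z_values, task=0, price_idx=2, own_outcome=z0,
                    other_outcomes=[1.0, 2.0, 0.5])
print(price_probabilities(z_values, task=0))
```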

3.5. Main Steps of the Proposed Triple-Plane Bargaining Algorithm. In this study, we design a novel triple-plane bargaining game model through a systematic interactive game process. In the proposed VCS, VRS, and VSS algorithms, the RSUs and vehicles are game players that aim to maximize their payoffs. In particular, the RSUs learn the current VANET situation based on the learning methods, and the vehicles determine their best strategies while balancing their VCS, VRS, and VSS requirements. The primary steps of the proposed scheme are described as follows; a schematic sketch of the resulting control loop is given after the step list.

Step 1. Control parameters are determined by the simulation scenario (Table 1).

Step 2. At the initial time, the Z and L learning values in the RSUs are equally distributed. This starting estimation guarantees that each RSU's strategy benefits similarly at the beginning of the G_VCS and G_VSS games.

Step 3. During the G_VCS game, the proposer RSU selects its strategy P^R \in S_R to maximize its payoff (U_R) according to (1), (3), and (4). As responders, vehicles select their strategy r \in S_V to maximize their payoffs (U_V) according to (2) while considering their current virtual money (\Gamma).

Step 4. At every time step (H), the RSU adjusts the learning values L(\cdot) and the probability distribution \mathbf{P}_R based on equations (3) and (4).

Step 5. During the G_VRS game, individual vehicles estimate the wireless link states (LC) according to equation (5). At each time period, the LC values are estimated online based on the vehicle's relative distance and speed.

Step 6. During the G_VRS game, the source vehicle configures a multihop routing path using the Bellman–Ford algorithm based on equation (6). The source vehicle's payoff (U) is decided according to (7) while considering its current virtual money (\Gamma).

Step 7. During the G_VSS game, the proposer RSU selects its strategy S_R to maximize its payoff (u_R) according to (11). As responders, vehicles select their strategy S_V to maximize their combined payoff (CP) according to (9) while adjusting each bargaining power (\psi) based on equation (10).

Step 8. At every time step (H), the RSU adjusts the learning values Z(\cdot) and the probability distribution \mathbf{P}_R according to equations (12) and (13).

Step 9. Based on the interactive feedback process, the dynamics of our G_VCS, G_VRS, and G_VSS games cause a cascade of interactions among the game players, who choose their best strategies in an online distributed manner.

Step 10. Under the dynamic VANET environment, the individual game players constantly self-monitor for the next game process; go to Step 3.
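Read as pseudocode, Steps 1-10 amount to the control loop sketched below. The Stub objects and the method names are placeholders introduced only so that the interleaving of the three games can be executed as a skeleton; they are not part of the proposed scheme.

```python
class Stub:
    """No-op placeholder standing in for a real RSU or vehicle object."""
    def __getattr__(self, name):
        return lambda *args, **kwargs: None

def triple_plane_loop(rsu, vehicles, horizon):
    """Schematic of Steps 2-10: each time step runs the VCS, VRS, and VSS
    games and feeds the observed payoffs back into the RSU's learning."""
    rsu.init_learning_values()                      # Step 2: L and Z start uniform
    for _ in range(horizon):
        price = rsu.select_price()                  # Step 3: proposer draws P^R from (4)
        for v in vehicles:
            v.request_cloud_service(price)          # Step 3: responders maximize (2)
        rsu.update_price_learning()                 # Step 4: update L via (3) and (4)

        for v in vehicles:
            v.estimate_link_costs()                 # Step 5: LC values from (5)
            v.route_with_bellman_ford()             # Step 6: path and payoff via (6), (7)

        incentives = rsu.select_task_incentives()   # Step 7: proposer strategy for (11)
        for v in vehicles:
            v.choose_sensing_vector(incentives)     # Step 7: maximize (9) with powers (10)
        rsu.update_incentive_learning()             # Step 8: update Z via (12) and (13)
        # Steps 9-10: the interactive feedback simply repeats with the new state.

triple_plane_loop(Stub(), [Stub(), Stub()], horizon=3)
```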


4. Performance Evaluation

4.1. Simulation Setup. In this section, we evaluate the performance of our proposed protocol and compare it with that of the MVIG, PVDD, and CRVC schemes [21-23]. To ensure a fair comparison, the following assumptions and system scenario were used:

(i) The simulated system was assumed to be a TDMA packet system for VANETs.

(ii) The number of vehicles that passed over an RSU was the rate of the Poisson process (ρ). The offered range was varied from 0 to 3.0.

(iii) Fifty RSUs were distributed randomly over the 100 km road area, and the velocity of each mobile vehicle was randomly selected to be 36 km/h, 72 km/h, or 108 km/h.

(iv) The maximum wireless coverage range of each vehicle was set to 500 m.

(v) The cloud computation capacity (C) is 5 GHz, and one BCSU is the minimum amount (e.g., 20 MHz in our system) of the cloud service unit.

(vi) The number (s) of sensing tasks in each R is 4, i.e., I = {I_1, I_2, I_3, I_4}.

(vii) The source and destination vehicles were randomly selected. Initially, the virtual money (Γ) in each vehicle was set to 100.

(viii) At the source node, data dissemination was generated at a rate of λ (packets/s). According to this assumption, the time duration H in our simulation model is one second.

(ix) Network performance measures obtained on the basis of 100 simulation runs are plotted as functions of the vehicle distribution (ρ).

To demonstrate the validity of our proposed method, we measured the cloud service success ratio, normalized dissemination throughput, and crowdsensing success probability. Table 1 shows the control factors and coefficients used in the simulation. Each parameter has its own characteristics [6].

4.2. Results and Discussion. Figure 1 compares the cloud service success ratio of each scheme. In this study, the cloud service success ratio represents the rate of cloud services that were completed successfully. This is a key performance evaluation factor in the VCS operation. As shown in Figure 1, the cloud service success ratios of all schemes are similar to each other; however, the proposed scheme adopts an interactive environmental feedback mechanism, and the RSUs in our scheme adaptively adjust their VCS costs. This approach can improve the VCS performance beyond that of the existing MVIG, PVDD, and CRVC schemes. Therefore, we outperformed the existing methods from low to high vehicle distribution intensities.

Figure 2 compares the normalized dissemination throughput in VANETs. Typically, the network throughput is measured as bits per second of network access. In this study, the dissemination throughput is defined as the ratio of the data amount successfully received at destination vehicles to the total data amount generated at the source vehicles. The throughput improvement achieved by the proposed scheme is a result of our G_VRS game paradigm. During the VRS operations, each vehicle in the proposed scheme can select the most efficient routing path with real-time adaptability and self-flexibility. Hence, we attained a higher dissemination throughput compared to other existing approaches, which are designed as lopsided, one-way methods and do not effectively adapt to the dynamic and diversified VANET conditions.

The crowdsensing success probability, which is shown in Figure 3, represents the efficiency of the VANET system. In the proposed scheme, we employed the learning-based triple-plane game model to perform control decisions in a distributed online manner. According to the interactive operations of the VCS, VSS, and VRS, our game-based approach can improve the crowdsensing success probability more effectively than the other schemes. The simulation results shown in Figures 1 to 3 demonstrate that the proposed scheme can attain an appropriate performance balance; in contrast, the MVIG [21], PVDD [22], and CRVC [23] schemes cannot offer this outcome under widely different and diversified VANET situations.

Table 1: System parameters used in the simulation experiments.

Parameter | Value | Description
[P^R_min, ..., P^R_j, ..., P^R_max] | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for cloud service (min 1, max 5)
[r_min, ..., r_k, ..., r_max] | 1, 2, 3, 4, 5 | The amount of cloud services in terms of BCSUs
[P^I_min, ..., P^I_l, ..., P^I_max] | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for sensing service (min 1, max 5)
θ | 0.2 | A coefficient factor of cost calculation
H | 0.8 | A profit factor of a vehicle if one BCSU is processed
α | 0.5 | The weight control factor for the distance and velocity
χ | 0.2 | A learning rate to update the L values
Q | 5 | The routing outcome at each time period
J | 0.1 | A coefficient factor to estimate the incentive payment
C_{I1}, C_{I2}, C_{I3}, C_{I4} | 0.1, 0.2, 0.3, 0.4 | Predefined cost for each task sensing
Φ_{I1}, Φ_{I2}, Φ_{I3}, Φ_{I4} | 10, 10, 10, 10 | Predefined profit for each sensing task
L_1, L_2, L_3, L_4 | 5, 5, 5, 5 | Predefined requirement for each sensing task
β | 0.3 | A control parameter to estimate the learning value Z
γ^R | 0.3 | A discount factor to estimate the learning value Z


5. Summary and Conclusions

The VANET using vehicle-based sensory technology is becoming more popular. It can provide vehicular sensing, routing, and cloud services for 5G network applications. Therefore, the design of next-generation VANET management schemes is important to satisfy the new demands. Herein, we focused on the paradigm of a learning algorithm and game theory to design the VCS, VRS, and VSS algorithms. By combining the VCS, VRS, and VSS algorithms, a new triple-plane bargaining game model was developed to provide an appropriate performance balance. During the VANET operations, the RSUs learned their strategies better under dynamic VANET environments, and the vehicles considered the mutual-interaction relationships of their strategies. As game players, they considered the obtained information to adapt to the dynamics of the VANET environment and performed control decisions intelligently by self-adaptation. According to the unique features of VANETs, our joint design approach is suitable to provide satisfactory services under incomplete-information environments. In the future, we would like to consider privacy issues, such as differential privacy, during the VANET operation. Furthermore, we will investigate probabilistic algorithms to estimate the service quality of sensing, routing, and cloud services. In addition, we plan to investigate smart city applications, where the different sensory information of a given area can be combined to provide a complete view of the smart city development.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-2018-0-01799) supervised by the IITP (Institute for Information and Communications Technology Promotion) grant funded by the Korea government (MSIT) (no. 2017-0-00498, A Development of De-identification Technique Based on Differential Privacy).

Figure 1: Cloud service success ratio (Y-axis: normalized cloud service success ratio; X-axis: offered numbers of vehicles that passed over an RSU, i.e., the rate of the Poisson process; curves: the proposed, MVIG, PVDD, and CRVC schemes).

Figure 2: Normalized dissemination throughput (Y-axis: normalized dissemination throughput; X-axis: offered numbers of vehicles that passed over an RSU, i.e., the rate of the Poisson process; curves: the proposed, MVIG, PVDD, and CRVC schemes).

Figure 3: Crowdsensing success probability (Y-axis: normalized crowdsensing success probability; X-axis: offered numbers of vehicles that passed over an RSU, i.e., the rate of the Poisson process; curves: the proposed, MVIG, PVDD, and CRVC schemes).

References

[1] Y. Wang, Y. Liu, J. Zhang, H. Ye, and Z. Tan, "Cooperative store-carry-forward scheme for intermittently connected vehicular networks," IEEE Transactions on Vehicular Technology, vol. 66, no. 1, pp. 777-784, 2017.

[2] M. M. C. Morales, R. Haw, E.-J. Cho, C.-S. Hong, and S.-W. Lee, "An adaptable destination-based dissemination algorithm using a publish/subscribe model in vehicular networks," Journal of Computing Science and Engineering, vol. 6, no. 3, pp. 227-242, 2012.

[3] Y. Kim, S. Atchley, G. R. Vallee, S. Lee, and G. M. Shipman, "Optimizing end-to-end big data transfers over terabits network infrastructure," IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 1, pp. 188-201, 2017.

[4] J. Chen, G. Mao, C. Li, A. Zafar, and A. Y. Zomaya, "Throughput of infrastructure-based cooperative vehicular networks," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 11, pp. 2964-2979, 2017.

[5] Z. Su, Y. Hui, and Q. Yang, "The next generation vehicular networks: a content-centric framework," IEEE Wireless Communications, vol. 24, no. 1, pp. 60-66, 2017.

[6] S. Kim, "Effective crowdsensing and routing algorithms for next generation vehicular networks," Wireless Networks, 2017.

[7] C. Wang, Z. Zhang, S. Lu, and M. C. Zhou, "Estimating travel speed via sparse vehicular crowdsensing data," in Proceedings of IEEE World Forum on Internet of Things (WF-IoT), pp. 643-648, Reston, VA, USA, December 2016.

[8] L. Xiao, T. Chen, C. Xie, H. Dai, and P. Vincent, "Mobile crowdsensing games in vehicular networks," IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1535-1545, 2018.

[9] B. Oh, N. Yongchan, J. Yang, S. Park, J. Nang, and J. Kim, "Genetic algorithm-based dynamic vehicle route search using car-to-car communication," Advances in Electrical and Computer Engineering, vol. 10, no. 4, pp. 81-86, 2011.

[10] M. Chaqfeh, N. Mohamed, I. Jawhar, and J. Wu, "Vehicular cloud data collection for intelligent transportation systems," in Proceedings of 2016 3rd Smart Cloud Networks and Systems (SCNS), pp. 1-6, Dubai, UAE, December 2016.

[11] A. Ashok, S. Peter, and B. Fan, "Adaptive cloud offloading for vehicular applications," in Proceedings of 2016 IEEE Vehicular Networking Conference (VNC), pp. 1-8, Columbus, OH, USA, December 2016.

[12] J. Ahn, D. Shin, K. Kim, and J. Yang, "Indoor air quality analysis using deep learning with sensor data," Sensors, vol. 17, no. 11, pp. 1-13, 2017.

[13] S. Kim, "Timed bargaining-based opportunistic routing model for dynamic vehicular ad hoc network," EURASIP Journal on Wireless Communications and Networking, vol. 2016, no. 14, pp. 1-10, 2016.

[14] J.-H. Kim, K.-J. Lee, T.-H. Kim, and S.-B. Yang, "Effective routing schemes for double-layered peer-to-peer systems in MANET," Journal of Computing Science and Engineering, vol. 5, no. 1, pp. 19-31, 2011.

[15] I. Jang, D. Pyeon, S. Kim, and H. Yoon, "A survey on communication protocols for wireless sensor networks," Journal of Computing Science and Engineering, vol. 7, no. 4, pp. 231-241, 2013.

[16] N. Cheng, N. Zhang, N. Lu, X. Shen, J. W. Mark, and F. Liu, "Opportunistic spectrum access for CR-VANETs: a game-theoretic approach," IEEE Transactions on Vehicular Technology, vol. 63, no. 1, pp. 237-251, 2014.

[17] T. Chen, L. Wu, F. Wu, and S. Zhong, "Stimulating cooperation in vehicular ad hoc networks: a coalitional game theoretic approach," IEEE Transactions on Vehicular Technology, vol. 60, no. 2, pp. 566-579, 2011.

[18] D. B. Rawat, B. B. Bista, and G. Yan, "CoR-VANETs: game theoretic approach for channel and rate selection in cognitive radio VANETs," in Proceedings of International Conference on Broadband Wireless Computing, Communication and Applications, pp. 94-99, Victoria, Canada, November 2012.

[19] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro clouds as virtual edge servers for efficient data collection," in Proceedings of ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services, pp. 31-35, Snowbird, UT, USA, October 2017.

[20] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro cloud in action: on gateway selection and gateway handovers," Ad Hoc Networks, vol. 78, pp. 73-83, 2018.

[21] M. Aloqaily, B. Kantarci, and H. T. Mouftah, "Multiagent/multiobjective interaction game system for service provisioning in vehicular cloud," IEEE Access, vol. 4, pp. 3153-3168, 2016.

[22] R. Kim, H. Lim, and B. Krishnamachari, "Prefetching-based data dissemination in vehicular cloud systems," IEEE Transactions on Vehicular Technology, vol. 65, no. 1, pp. 292-306, 2016.

[23] M. F. Feteiha and H. S. Hassanein, "Enabling cooperative relaying VANET clouds over LTE-A networks," IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1468-1479, 2015.

[24] K. Zhang, Y. Mao, S. Leng, Y. He, and Y. Zhang, "Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading," IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36-44, 2017.

[25] C. T. Hieu and C.-S. Hong, "A connection entropy-based multi-rate routing protocol for mobile ad hoc networks," Journal of Computing Science and Engineering, vol. 4, no. 3, pp. 225-239, 2010.

[26] S. Kim, "Adaptive ad-hoc network routing scheme by using incentive-based model," Ad Hoc and Sensor Wireless Networks, vol. 15, pp. 1-19, 2012.

[27] K. Han, C. Chen, Q. Zhao, and X. Guan, "Trajectory-based node selection scheme in vehicular crowdsensing," in Proceedings of IEEE/CIC International Conference on Communications in China, pp. 1-6, Shenzhen, China, November 2015.


[19] F Hagenauer C Sommer T Higuchi O Altintas andF Dressler ldquoVehicular micro clouds as virtual edge servers forefficient data collectionrdquo in Proceedings of ACM InternationalWorkshop on Smart Autonomous and Connected VehicularSystems and Services pp 31ndash35 Snowbird UT USA October2017

[20] F Hagenauer C Sommer T Higuchi O Altintas andF Dressler ldquoVehicular micro cloud in action on gatewayselection and gateway handoversrdquo Ad Hoc Networks vol 78pp 73ndash83 2018

[21] M Aloqaily B Kantarci and H T Mouftah ldquoMultiagentmultiobjective interaction game system for service pro-visioning in vehicular cloudrdquo IEEE Access vol 4 pp 3153ndash3168 2016

[22] R Kim H Lim and B Krishnamachari ldquoPrefetching-baseddata dissemination in vehicular cloud systemsrdquo IEEETransactions on Vehicular Technology vol 65 no 1pp 292ndash306 2016

[23] M F Feteiha and H S Hassanein ldquoEnabling cooperativerelaying VANET clouds over LTE-A networksrdquo IEEETransactions on Vehicular Technology vol 64 no 4pp 1468ndash1479 2015

[24] K Zhang Y Mao S Leng Y He and Y Zhang ldquoMobile-edgecomputing for vehicular networks a promising networkparadigm with predictive off-loadingrdquo IEEE VehicularTechnology Magazine vol 12 no 2 pp 36ndash44 2017

[25] C T Hieu and C-S Hong ldquoA connection entropy-basedmulti-rate routing protocol for mobile ad hoc networksrdquoJournal of Computing Science and Engineering vol 4 no 3pp 225ndash239 2010

[26] S Kim ldquoAdaptive ad-hoc network routing scheme by usingincentive-based modelrdquo Ad Hoc and Sensor Wireless Net-works vol 15 pp 1ndash19 2012

[27] K Han C Chen Q Zhao and X Guan ldquoTrajectory-basednode selection scheme in vehicular crowdsensingrdquo in Pro-ceedings of IEEECIC International Conference on Commu-nications in China pp 1ndash6 Shenzhen China November 2015

Mobile Information Systems 11

Computer Games Technology

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

Advances in

FuzzySystems

Hindawiwwwhindawicom

Volume 2018

International Journal of

ReconfigurableComputing

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 6: NewBargainingGameModelforCollaborativeVehicular ...downloads.hindawi.com/journals/misy/2019/6269475.pdf · (VRS).Vehicularcloudserviceisanewparadigmthatexploits cloud computing to

value for the next time step ($L^{PR_j}_{H_t}$) according to the following method:

$$L^{PR_j}_{H_t} = \max\left\{\left[\left((1-\chi)\times L^{PR_j}_{H_{t-1}}\right)+\left(\chi\times \log_2\!\left(\frac{U_R\!\left(PR_j,H_{t-1}\right)}{\left(\sum_{PR_k\in S_R} L^{PR_k}_{H_{t-1}}\right)\big/\,|S_R|}\right)\right)\right],\ 0\right\}, \qquad (3)$$

where $|S_R|$ is the cardinality of $S_R$ and $\chi$ is a learning rate that models how the $L$ values are updated. To implement the price-learning mechanism in the RSU, a strategy selection distribution ($P^R \in \mathbf{P}_R$) for the VCS is defined based on the $L(\cdot)$ values. During the $G_{VCS}$ game process, we sequentially determine $\mathbf{P}^R(H_t)=\{P^R_{PR_{\min}}(H_t),\ldots,P^R_{PR_j}(H_t),\ldots,P^R_{PR_{\max}}(H_t)\}$ as the probability distribution of $R$'s strategy selection at time $H_t$. The $PR_j$ strategy selection probability at time $H_t$ ($P^R_{PR_j}(H_t)$) is defined as follows:

$$P^R_{PR_j}(H_t)=\frac{\exp\!\left(L^{PR_j}_{H_{t-1}}\right)}{\sum_{k=\min}^{\max}\exp\!\left(L^{PR_k}_{H_{t-1}}\right)}. \qquad (4)$$
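To make the update in (3) and the softmax-style selection in (4) concrete, the following is a minimal Python sketch under simplifying assumptions (one RSU, a scalar payoff per price level); the function names and the example payoff value are illustrative and are not taken from the paper.

```python
import math
import random

def update_learning_value(L, j, payoff, chi=0.2, eps=1e-9):
    """Eq. (3)-style update: blend the old value of price level j with the
    log2 ratio of its payoff to the average learning value, floored at 0."""
    avg_L = sum(L) / len(L)
    ratio = max(payoff / max(avg_L, eps), eps)   # guard against log of a non-positive value
    L[j] = max((1.0 - chi) * L[j] + chi * math.log2(ratio), 0.0)
    return L

def selection_distribution(L):
    """Eq. (4)-style softmax over the learning values -> selection probabilities."""
    exps = [math.exp(v) for v in L]
    total = sum(exps)
    return [e / total for e in exps]

def choose_price_level(L):
    """Sample one price-level index according to the softmax distribution."""
    probs = selection_distribution(L)
    return random.choices(range(len(L)), weights=probs, k=1)[0]

# Example: five predefined price levels, initially indistinguishable
L = [1.0] * 5
j = choose_price_level(L)
L = update_learning_value(L, j, payoff=3.2)
```

Because the selection is probabilistic rather than greedy, price levels with lower learning values are still explored occasionally, which is what lets the RSU adapt when the vehicle population changes.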

3.3. The VRS Algorithm Based on a Non-Cooperative Game Model. The main goal of the VRS is to transmit data from a source vehicle to a destination vehicle via wireless multihop transmission techniques. In the wireless multihop transmission technique, the intermediate vehicles in a routing path should relay data as soon as possible from the source to the destination [6, 13, 25]. In this section, we develop a non-cooperative game model ($G_{VRS}$) for vehicular routing services. As game players in $G_{VRS}$, vehicles dynamically decide their routing routes. To configure the routing topology, a link cost ($LC$) is defined to relatively handle dynamic VANET conditions. In this study, we define a wireless link status ($LC^{V_j}_{V_i}(H_t)$) between $V_i$ and $V_j$ at time $H_t$ as follows:

$$LC^{V_j}_{V_i}(H_t)=\left((1-\alpha)\times\frac{d^{V_j}_{V_i}(H_t)}{d^{M}_{V_i}}\right)+\left(\alpha\times\frac{\left|\vec{v}_j(H_t)\right|}{\max_{V_h\in M^{H_t}_{V_i}}\left|\vec{v}_h(H_t)\right|}\right), \qquad (5)$$

where $d^{M}_{V_i}$ is the maximum coverage range of $V_i$ and $d^{V_j}_{V_i}(H_t)$ is the relative distance between $V_i$ and $V_j$ at time $H_t$. Let $M^{H_t}_{V_i}$ be the set of the neighboring vehicles of $V_i$ at time $H_t$, and $\vec{v}_j(H_t)$ is the relative velocity of $V_j$ at time $H_t$. For the adaptive $LC^{V_j}_{V_i}(H_t)$ estimation, the parameter $\alpha$ controls the relative weight between the distance and the velocity of the corresponding vehicle.

For the VRS, a source vehicle configures a multihop routing path by using the Bellman–Ford algorithm. As a routing game player, each source vehicle attempts to minimize the total path cost ($PC$). From the source vehicle $V_s$ to the destination vehicle $V_d$, the total path cost ($PC^{V_d}_{V_s}(H_t)$) at time $H_t$ is computed as the sum of all relay link costs as follows:

$$PC^{V_d}_{V_s}(H_t)=\min\left\{\sum_{LC_{Veh}(H_t)\,\in\, Path^{V_d}_{V_s}} LC_{Veh}(H_t)\right\}, \qquad (6)$$

where $Path^{V_d}_{V_s}$ is the routing path from $V_s$ to $V_d$. Usually, a vehicle acting as a relay node has to sacrifice its energy and bandwidth. Therefore, an incentive payment algorithm should be developed to guide selfish vehicles toward the cooperative intervehicle routing paradigm [6, 26]. By paying the incentive cost to relaying vehicles, the developed routing algorithm stimulates cooperative actions among selfish relay vehicles. As a $G_{VRS}$ game player, a source vehicle pays its virtual money ($\Gamma$) to disseminate the routing packet. If the source vehicle $V_i$ selects the $Path^{V_d}_{V_s}$ according to (6), its utility function at time $H_t$ ($U_{V_i}(Path^{V_d}_{V_s},H_t)$) can be defined as

$$U_{V_i}\!\left(Path^{V_d}_{V_s},H_t\right)=
\begin{cases}
Q^{V_i}_{H_t}-\left(J\times PC^{V_d}_{V_s}(H_t)\right)\ \text{and}\ \Gamma^{V_i}_{H_{t+1}}=\Gamma^{V_i}_{H_t}-\left(J\times PC^{V_d}_{V_s}(H_t)\right), & \text{if } \Gamma^{V_i}_{H_t}\geq \left(J\times PC^{V_d}_{V_s}(H_t)\right),\\
0, & \text{otherwise, i.e., } \Gamma^{V_i}_{H_t}< \left(J\times PC^{V_d}_{V_s}(H_t)\right),
\end{cases} \qquad (7)$$

where $Q^{V_i}_{H_t}$ and $\Gamma^{V_i}_{H_t}$ are the outcome of $V_i$'s routing and the amount of $V_i$'s virtual money at time $H_t$, respectively, and $J$ is the coefficient factor to estimate the incentive payment for relaying vehicles.
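A compact sketch of how (5)–(7) fit together is given below. It is an illustration under assumed data structures (a directed-link cost dictionary), not the paper's implementation; the helper names are mine.

```python
import math

def link_cost(dist_ij, max_range_i, speed_j, max_neighbor_speed, alpha=0.5):
    """Eq. (5)-style link status: weighted blend of relative distance and relative speed."""
    return ((1 - alpha) * (dist_ij / max_range_i)
            + alpha * (speed_j / max(max_neighbor_speed, 1e-9)))

def bellman_ford_path(nodes, links, src, dst):
    """links: dict {(u, v): cost}. Returns (path, total_cost), or (None, inf) if unreachable."""
    cost = {n: math.inf for n in nodes}
    prev = {n: None for n in nodes}
    cost[src] = 0.0
    for _ in range(len(nodes) - 1):           # relax every link |V| - 1 times
        for (u, v), c in links.items():
            if cost[u] + c < cost[v]:
                cost[v] = cost[u] + c
                prev[v] = u
    if math.isinf(cost[dst]):
        return None, math.inf
    path, n = [], dst
    while n is not None:                      # walk predecessors back to the source
        path.append(n)
        n = prev[n]
    return list(reversed(path)), cost[dst]

def routing_utility(Q, path_cost, money, J=0.1):
    """Eq. (7): pay J * path_cost out of the virtual money if affordable, else utility 0."""
    fee = J * path_cost
    return (Q - fee, money - fee) if money >= fee else (0.0, money)
```

Bellman–Ford is a natural fit here because the link costs of (5) are recomputed at every period $H_t$ and the relaxation can be rerun on the fresh costs without any precomputed structure.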

3.4. The VSS Algorithm Based on the Triple-Plane Bargaining Game. Recently, the VSS has attracted great interest and has become one of the most valuable features of the VANET system. Some VANET applications involve the generation of a huge amount of sensed data. With the advance of vehicular sensor technology, vehicles that are equipped with OBUs are expected to effectively monitor the physical world. In VANET infrastructures, RSUs request a number of sensing tasks while collecting the local information sensed by vehicles. According to the requested tasks, the OBUs in vehicles sense the requested local information and transmit their sensing results to the RSU. Although some excellent research has been done on the VSS process, there are still significant challenges that need to be addressed. Most of all, conducting the sensing task and reporting the sensing data usually consume vehicle resources. Therefore, selfish vehicles should be paid by the RSU as compensation for their VSSs. To recruit an optimal set of vehicles to cover the monitoring area while ensuring that vehicles provide their full sensing effort, the RSU stimulates sufficient vehicles with proper incentives to fulfill the VSS tasks [6, 8, 27].

In this sense, we design our VSS algorithm to determine the incentive for vehicles to complete their VSS tasks. Since vehicles may make different contributions of sensing work, the RSU may issue appropriate incentives in return for the collected sensing data [6]. The major goal of our algorithm is that the interactive trading between the RSU and vehicles should benefit both of them. This game situation can be modeled effectively through $G_{VSS}$. In the VSS algorithm, we consider the VSS platform consisting of one RSU ($R$), a set of mobile vehicles ($\mathbb{V}$), and sensing tasks ($\mathbb{I}$); they formulate the sensing game model $G_{VSS}$. To capture its own heterogeneity, a vehicle's strategy ($\mu \in S^V_{V_i}$) indicates the contribution for a specific task ($I \in \mathbb{I}$). For example, $V_i$ can actually contribute to the task $I_s$ during $H_t$, i.e., $\mu^{I_s}_{V_i}=1$, or not, i.e., $\mu^{I_s}_{V_i}=0$, where $\mu^{I_s}_{V_i}\in S^V_{V_i}$. Each vehicle has its own utility function, which represents its net gain. The $V_i$'s utility function at time $H_t$ ($u^V_{V_i}(S^V_{V_i},H_t)$) is given by

$$u^V_{V_i}\!\left(S^V_{V_i}=\left[\mu^{I_1}_{V_i},\ldots,\mu^{I_j}_{V_i},\ldots,\mu^{I_s}_{V_i}\right],H_t\right)=\sum_{I_j=I_1}^{I_s}\left(\mu^{I_j}_{V_i}(H_t)\times\left(\eta^{I_j}_{R}(H_t)-C^{I_j}_{V_i}(H_t)\right)\right),$$
$$\text{s.t.}\quad \eta^{I_j}_{R}(H_t)\in S^R,\qquad \mu^{I_j}_{V_i}(H_t)=\begin{cases}1, & \text{if } V_i \text{ contributes to } I_j \text{ at } H_t,\\ 0, & \text{otherwise},\end{cases} \qquad (8)$$

where $C^{I_j}_{V_i}(H_t)$ and $\eta^{I_j}_{R}(H_t)$ are the $V_i$'s cost and the RSU's incentive payment for the $I_j$ sensing at time $H_t$, respectively. Usually, game players simply attempt to maximize their utility functions. However, vehicles in the $G_{VSS}$ game should consider not only the VSS but also the VCS and VRS. From the point of view of vehicles, this situation can be modeled effectively through a cooperative bargaining process. In this study, we adopt a well-known bargaining solution concept, called the Nash bargaining solution (NBS), to effectively design the vehicle's VSS strategy decision mechanism; the NBS is an effective tool for achieving a mutually desirable solution among conflicting requirements.

In our proposed scheme, vehicles can earn the virtual money ($\Gamma$) from the VSS incentive payment and spend their $\Gamma$ on their VCS and VRS. For this reason, the strategy decision in the VSS algorithm might directly affect the payoffs of $G_{VCS}$ and $G_{VRS}$. During VANET operations, the individual vehicle $V_i$ decides its VSS strategy ($S^V_{V_i}$) to maximize the combined payoff ($CP_{V_i}(S^V_{V_i},H_t)$) at time $H_t$:

$$\max\, CP_{V_i}\!\left(S^V_{V_i},H_t\right)=\max_{S^V_{V_i}=\left[\mu^{I_1}_{V_i},\ldots,\mu^{I_j}_{V_i},\ldots,\mu^{I_s}_{V_i}\right]}\ \prod_{1\leq j\leq 3}\left(X_j-d_j\right)^{\psi_j},$$
$$\text{s.t.}\quad \sum_{j=1}^{3}\psi_j=1,\qquad X_1=U_{V,\,V_i\in\mathbb{V}}\!\left(PR_j(H_t), r^{V_i}_k(H_t)\right),\qquad X_2=U_{V_i}\!\left(Path^{V_d}_{V_s},H_t\right),\qquad X_3=u^V_{V_i}\!\left(S^V_{V_i}=\left[\mu^{I_1}_{V_i},\ldots,\mu^{I_j}_{V_i},\ldots,\mu^{I_s}_{V_i}\right],H_t\right), \qquad (9)$$

where $d_j$ is a disagreement point, and a disagreement point $d_{i,1\leq i\leq n}$ represents a minimum payoff of each game model. Therefore, $d$ is the least guaranteed payoff for the game players (i.e., zero in our model). $\psi_{j,1\leq j\leq 3}$ is a bargaining power, which is the relative ability to exert influence over the bargaining process. Usually, the bargaining solution is strongly dependent on the bargaining powers. In the $G_{VSS}$ game, $\psi_{j,1\leq j\leq 3}$ can be estimated as follows:

$$\psi_{j,1\leq j\leq 3}=\frac{Y_j}{\left(Y_1+Y_2+Y_3\right)},$$
$$\text{s.t.}\quad Y_1=\max\!\left\{\left(R^{\Gamma_{VCS}(V_i)}_{H_t}-U^{\Gamma_{VCS}(V_i)}_{H_t}\right),\ \Gamma^{V_i}_{H_t}\right\},\qquad Y_2=\max\!\left\{\left(R^{\Gamma_{VRS}(V_i)}_{H_t}-U^{\Gamma_{VRS}(V_i)}_{H_t}\right),\ \Gamma^{V_i}_{H_t}\right\},\qquad Y_3=\Gamma^{V_i}_{H_t}, \qquad (10)$$

where $R^{\Gamma_{VCS}(V_i)}_{H_t}$ and $R^{\Gamma_{VRS}(V_i)}_{H_t}$ are the requested virtual money to maximize the $V_i$'s payoffs in $G_{VCS}$ and $G_{VRS}$ at time $H_t$, respectively, and $U^{\Gamma_{VCS}(V_i)}_{H_t}$ and $U^{\Gamma_{VRS}(V_i)}_{H_t}$ are the virtual money used by $V_i$ in $G_{VCS}$ and $G_{VRS}$ at time $H_t$, respectively. Finally, individual vehicles select their strategies $S^V$ according to (9) and (10). In a distributed online fashion, each vehicle ensures a well-balanced performance among the VCS, VRS, and VSS processes.
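The following Python sketch illustrates one way to evaluate (9) and (10) for a single vehicle by brute-force enumeration of the binary contribution vector. The three payoff callables and the max-based power estimation are simplified stand-ins for the quantities defined above, not code from the paper.

```python
from itertools import product

def bargaining_powers(req_vcs, used_vcs, req_vrs, used_vrs, money):
    """Eq. (10)-style weights: normalize the three terms so they sum to 1."""
    y1 = max(req_vcs - used_vcs, money)
    y2 = max(req_vrs - used_vrs, money)
    y3 = money
    total = y1 + y2 + y3
    return [y / total for y in (y1, y2, y3)]

def best_vss_strategy(num_tasks, vcs_payoff, vrs_payoff, vss_payoff,
                      psi, disagreement=(0.0, 0.0, 0.0)):
    """Eq. (9): scan all binary contribution vectors mu and keep the one that
    maximizes the weighted Nash product prod_j (X_j - d_j)^psi_j."""
    best_mu, best_val = None, float("-inf")
    for mu in product((0, 1), repeat=num_tasks):
        X = (vcs_payoff(mu), vrs_payoff(mu), vss_payoff(mu))
        if any(x <= d for x, d in zip(X, disagreement)):
            continue                      # infeasible: at or below the disagreement point
        val = 1.0
        for x, d, p in zip(X, disagreement, psi):
            val *= (x - d) ** p
        if val > best_val:
            best_mu, best_val = mu, val
    return best_mu, best_val
```

With $s=4$ sensing tasks, as in the simulation scenario of Table 1, the search space contains only $2^4=16$ vectors, so this exhaustive scan is inexpensive enough to run at every time step.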

In the VSS algorithm, the vehicles' actual sensing cost $C^{I}_{V}(\cdot)$ and each individual vehicle's situation are usually unknown by the RSU. Under this asymmetric information situation, the RSU should learn the vehicle's circumstances during the VSS process. As in the VCS process, each RSU is a proposer and the individual vehicles are responders, and they also interact with each other for their objectives based on the collaborative feedback procedure. For the RSU, the payoff can be defined as a function of the price levels for tasks and the vehicles' responses. Therefore, the RSU's utility function ($u_R(S^R,H_t)$) at time $H_t$ is computed as the sum of each task's outcome:

$$u_R\!\left(S^R=\left[\eta^{I_1}_R,\ldots,\eta^{I_j}_R,\ldots,\eta^{I_s}_R\right],H_t\right)=\sum_{I_h=I_1}^{I_s}\left(\left(T^{I_h}(H_t)\times\Phi^{I_h}\right)-\sum_{V_i\in\mathbb{V}}\left(\mu^{I_h}_{V_i}(H_t)\times\eta^{I_h}_R(H_t)\right)\right),$$
$$\text{s.t.}\quad T^{I_h}(H_t)=\begin{cases}1, & \text{if } \sum_{V_i\in\mathbb{V}}\left(\mu^{I_h}_{V_i}(H_t)\right)\geq L^{I_h}_R,\\ 0, & \text{otherwise},\end{cases} \qquad (11)$$

where $\Phi^{I_h}$ is the profit of $I_h$ when the VSS for $I_h$ is successfully completed, i.e., if $\sum_{V_i\in\mathbb{V}}(\mu^{I_h}_{V_i}(H_t))\geq L^{I_h}_R$, then $\Phi^{I_h}$ is obtained; $L^{I_h}_R$ is the predefined VSS requirement to complete the task $I_h$.

As $G_{VSS}$ game players, the RSU considers the interrelationship among tasks and the vehicles' reactions to determine its incentive payment strategy. To select the best strategy, a novel learning method is needed. In this study, we develop a new learning method to determine an effective payment policy for each task. For the RSU, the strategy vector $S^R(H_t)=[\eta^{I_1}_R(H_t),\ldots,\eta^{I_s}_R(H_t)]$ represents each price strategy for each task at time $H_t$. Let $Z^{I_s}_{H_t}(P^{I_s}_i,\eta^{-I_s}_R)$ be the learning value of taking strategy $P^{I_s}_i\in\eta^{I_s}_R$ with a vector of payment strategies excluding the strategy $P^{I_s}_i$ ($\eta^{-I_s}_R$). The RSU updates its $Z(\cdot)$ value over time according to the following method:

$$Z^{P^{I_s}_i}_{H_{t+1}}\!\left(P^{I_s}_i\in\eta^{I_s}_R,\eta^{-I_s}_R\right)=\left((1-\beta)\times Z^{I_s}_{H_t}\!\left(P^{I_s}_i,\eta^{-I_s}_R\right)\right)+\left(\beta\times\left(\left(\left(1-\gamma^{R}\right)\times z^{I_s}_{H_t}\right)+\left(\gamma^{R}\times\sum_{I_h\in\mathbb{I}-\{I_s\}}\frac{z^{I_h}_{H_t}}{(s-1)}\right)\right)\right),$$
$$\text{s.t.}\quad z^{I_s}_{H_t}=\left(\left(T^{I_s}(H_t)\times\Phi^{I_s}\right)-\sum_{V_i\in\mathbb{V}}\left(\mu^{I_s}_{V_i}(H_t)\times\eta^{I_s}_R(H_t)\right)\right). \qquad (12)$$

During the $G_{VSS}$ game process, the RSU adaptively learns the current VSS state and sequentially adjusts the $Z(\cdot)$ values for each VSS task. Based on the $Z(\cdot)$ values, we can determine $\mathbf{P}^R_{I_s\in\mathbb{I}}(H_t)=\{P^{P^{I_s}_{\min}}_{I_s}(H_t),\ldots,P^{P^{I_s}_i}_{I_s}(H_t),\ldots,P^{P^{I_s}_{\max}}_{I_s}(H_t)\}$ as the probability distribution of $R$'s strategy selection for the $I_s$ at time $H_t$. The $P^{I_s}_i$ strategy selection probability for $I_s$ at time $H_{t+1}$ ($P^{P^{I_s}_i}_{I_s}(H_{t+1})$) is defined as

$$P^{P^{I_s}_i}_{I_s}(H_{t+1})=\frac{\exp\!\left(Z^{P^{I_s}_i}_{H_t}\!\left(P^{I_s}_i\in\eta^{I_s}_R,\eta^{-I_s}_R\right)\right)}{\sum_{j=\min}^{\max}\exp\!\left(Z^{P^{I_s}_j}_{H_t}\!\left(P^{I_s}_j\in\eta^{I_s}_R,\eta^{-I_s}_R\right)\right)}. \qquad (13)$$
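Equations (12) and (13) follow the same learn-then-softmax pattern as (3) and (4), now kept per task. The sketch below is a simplified illustration under assumed structures (one value table per task, with the per-task rewards $z$ passed in rather than recomputed from (11)); the variable names are mine.

```python
import math
import random

def update_task_value(Z, task, price_idx, z_rewards, beta=0.3, gamma_r=0.3):
    """Eq. (12)-style update for one (task, price-level) entry: blend the old value
    with the task's own reward and the average reward of the other tasks
    (the spillover term weighted by gamma_r)."""
    others = [z for t, z in z_rewards.items() if t != task]
    spill = sum(others) / max(len(others), 1)
    target = (1 - gamma_r) * z_rewards[task] + gamma_r * spill
    Z[task][price_idx] = (1 - beta) * Z[task][price_idx] + beta * target
    return Z

def choose_price_for_task(Z, task):
    """Eq. (13)-style softmax over the learning values of a single task."""
    exps = [math.exp(v) for v in Z[task]]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(Z[task])), weights=probs, k=1)[0]

# Example: 4 tasks x 5 price levels, all values equal at the start (see Step 2 below)
Z = {t: [0.0] * 5 for t in range(4)}
idx = choose_price_for_task(Z, task=0)
Z = update_task_value(Z, task=0, price_idx=idx,
                      z_rewards={0: 2.0, 1: 1.5, 2: 0.0, 3: 3.0})
```

The spillover term shares reward information across tasks, so a price level that works well for one sensing task mildly raises the RSU's expectation for the remaining tasks as well.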

3.5. Main Steps of the Proposed Triple-Plane Bargaining Algorithm. In this study, we design a novel triple-plane bargaining game model through a systematic interactive game process. In the proposed VCS, VRS, and VSS algorithms, the RSUs and vehicles are game players that maximize their payoffs. In particular, the RSUs learn the current VANET situation based on the learning methods, and the vehicles determine their best strategies while balancing their VCS, VRS, and VSS requirements. The primary steps of the proposed scheme are described as follows; a skeleton of one game round is sketched after the step list.

Step 1. Control parameters are determined by the simulation scenario (Table 1).

Step 2. At the initial time, the Z and L learning values in the RSUs are equally distributed. This starting estimation guarantees that each RSU's strategy benefits similarly at the beginning of the G_VCS and G_VSS games.

Step 3. During the G_VCS game, the proposer RSU selects its strategy P^R ∈ S^R to maximize its payoff (U_R) according to (1), (3), and (4). As responders, vehicles select their strategy r ∈ S^V to maximize their payoffs (U_V) according to (2) while considering their current virtual money (Γ).

Step 4. At every time step (H), the RSU adjusts the learning values (L(·)) and the probability distribution (P^R) based on equations (3) and (4).

Step 5. During the G_VRS game, individual vehicles estimate the wireless link states (LC) according to equation (5). At each time period, the LC values are estimated online based on the vehicle's relative distance and speed.

Step 6. During the G_VRS game, the source vehicle configures a multihop routing path using the Bellman–Ford algorithm based on equation (6). The source vehicle's payoff (U) is decided according to (7) while considering its current virtual money (Γ).

Step 7. During the G_VSS game, the proposer RSU selects its strategy S^R to maximize its payoff (u_R) according to (11). As responders, vehicles select their strategy S^V to maximize their combined payoff (CP) according to (9) while adjusting each bargaining power (ψ) based on equation (10).

Step 8. At every time step (H), the RSU adjusts the learning values (Z(·)) and the probability distribution (P^R) according to equations (12) and (13).

Step 9. Based on the interactive feedback process, the dynamics of our G_VCS, G_VRS, and G_VSS games cause a cascade of interactions among the game players, who choose their best strategies in an online distributed manner.

Step 10. Under the dynamic VANET environment, the individual game players constantly self-monitor for the next game process; go to Step 3.
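The skeleton below summarizes how Steps 3–10 interleave the three games within one time step. It is high-level illustrative code only: the helper methods merely name the operations defined in the preceding subsections and are not part of the paper.

```python
def run_vanet_round(rsu, vehicles, sources):
    """One time step H_t of the triple-plane game (illustrative skeleton)."""
    # G_VCS: the RSU proposes a cloud-service price, vehicles respond (Steps 3-4)
    price = rsu.choose_cloud_price()              # softmax over L values, Eq. (4)
    for v in vehicles:
        v.request_cloud_service(price)            # maximize U_V subject to Gamma
    rsu.update_price_learning()                   # Eq. (3)

    # G_VRS: link costs, Bellman-Ford paths, incentive payments (Steps 5-6)
    for src in sources:
        path, cost = src.build_route()            # Eqs. (5)-(6)
        src.pay_relays(cost)                      # Eq. (7)

    # G_VSS: the RSU posts task prices, vehicles bargain over contributions (Steps 7-8)
    task_prices = rsu.choose_task_prices()        # Eq. (13)
    for v in vehicles:
        v.select_sensing_strategy(task_prices)    # NBS of Eqs. (9)-(10)
    rsu.update_task_learning()                    # Eq. (12)
```

Calling this round function repeatedly (Steps 9–10) reproduces the interactive feedback loop: each plane's outcome changes the virtual-money balances that the other two planes depend on.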


4. Performance Evaluation

4.1. Simulation Setup. In this section, we evaluate the performance of our proposed protocol and compare it with that of the MVIG, PVDD, and CRVC schemes [21–23]. To ensure a fair comparison, the following assumptions and system scenario were used:

(i) The simulated system was assumed to be a TDMA packet system for VANETs.

(ii) The number of vehicles that passed over an RSU was the rate of the Poisson process (ρ). The offered range was varied from 0 to 3.0.

(iii) Fifty RSUs were distributed randomly over the 100 km road area, and the velocity of each mobile vehicle was randomly selected to be 36 km/h, 72 km/h, or 108 km/h.

(iv) The maximum wireless coverage range of each vehicle was set to 500 m.

(v) The cloud computation capacity (C) is 5 GHz, and one BCSU is the minimum amount (e.g., 20 MHz/s in our system) of the cloud service unit.

(vi) The number (s) of sensing tasks in each R is 4, i.e., I = {I1, I2, I3, I4}.

(vii) The source and destination vehicles were randomly selected. Initially, the virtual money (Γ) in each vehicle was set to 100.

(viii) At the source node, data dissemination was generated at a rate of λ (packets/s). According to this assumption, the time duration H in our simulation model is one second.

(ix) Network performance measures obtained on the basis of 100 simulation runs are plotted as functions of the vehicle distribution (ρ).

To demonstrate the validity of our proposed method, we measured the cloud service success ratio, the normalized dissemination throughput, and the crowdsensing success probability. Table 1 shows the control factors and coefficients used in the simulation. Each parameter has its own characteristics [6].
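As a small illustration of the assumed traffic model, the per-slot vehicle arrivals used as the offered load (ρ) could be generated as follows; this is a sketch of the scenario assumptions, not simulation code from the paper.

```python
import numpy as np

def vehicle_arrivals(rho, seconds, seed=0):
    """Number of vehicles passing an RSU in each one-second slot H_t, drawn from a
    Poisson process with rate rho (the offered load on the x-axes of Figures 1-3),
    plus a speed of 36, 72, or 108 km/h sampled uniformly for each arriving vehicle."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(rho, size=seconds)
    speeds = [rng.choice([36, 72, 108], size=int(n)) for n in counts]
    return counts, speeds

counts, speeds = vehicle_arrivals(rho=1.5, seconds=100)
```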

4.2. Results and Discussion. Figure 1 compares the cloud service success ratio of each scheme. In this study, the cloud service success ratio represents the rate of cloud services that were completed successfully. This is a key performance evaluation factor in the VCS operation. As shown in Figure 1, the cloud service success ratios of all schemes are similar to each other; however, the proposed scheme adopts an interactive environmental feedback mechanism, and the RSUs in our scheme adaptively adjust their VCS costs. This approach improves the VCS performance beyond that of the existing MVIG, PVDD, and CRVC schemes. Therefore, we outperformed the existing methods from low to high vehicle distribution intensities.

Figure 2 compares the normalized dissemination throughput in VANETs. Typically, the network throughput is measured as bits per second of network access. In this study, the dissemination throughput is defined as the ratio of the data amount successfully received by the destination vehicles to the total data amount generated by the source vehicles. The throughput improvement achieved by the proposed scheme is a result of our G_VRS game paradigm. During the VRS operations, each vehicle in the proposed scheme can select the most efficient routing path with real-time adaptability and self-flexibility. Hence, we attained a higher dissemination throughput compared to the other existing approaches, which are designed as lopsided, one-way methods and do not effectively adapt to the dynamic and diversified VANET conditions.

The crowdsensing success probability, which is shown in Figure 3, represents the efficiency of the VANET system. In the proposed scheme, we employed the learning-based triple-plane game model to perform control decisions in a distributed online manner. According to the interactive operations of the VCS, VSS, and VRS, our game-based approach can improve the crowdsensing success probability more effectively than the other schemes. The simulation results shown in Figures 1 to 3 demonstrate that the proposed scheme can attain an appropriate performance balance; in contrast, the MVIG [21], PVDD [22], and CRVC [23] schemes cannot offer this outcome under widely different and diversified VANET situations.

Table 1: System parameters used in the simulation experiments.

Parameter | Value | Description
[P^R_min, ..., P^R_j, ..., P^R_max] | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for the cloud service (min = 1, max = 5)
[r_min, ..., r_k, ..., r_max] | 1, 2, 3, 4, 5 | The amount of cloud services in terms of BCSUs
[P^I_min, ..., P^I_l, ..., P^I_max] | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for the sensing service (min = 1, max = 5)
θ | 0.2 | A coefficient factor of the cost calculation
H | 0.8 | A profit factor of a vehicle if one BCSU is processed
α | 0.5 | The weight control factor for the distance and velocity
χ | 0.2 | A learning rate to update the L values
Q | 5 | The routing outcome at each time period
J | 0.1 | A coefficient factor to estimate the incentive payment
C^{I1}, C^{I2}, C^{I3}, C^{I4} | 0.1, 0.2, 0.3, 0.4 | Predefined cost for each task sensing
Φ^{I1}, Φ^{I2}, Φ^{I3}, Φ^{I4} | 10, 10, 10, 10 | Predefined profit for each sensing task
L1, L2, L3, L4 | 5, 5, 5, 5 | Predefined requirement for each sensing task
β | 0.3 | A control parameter to estimate the learning value Z
γ^R | 0.3 | A discount factor to estimate the learning value Z


5. Summary and Conclusions

The VANET using vehicle-based sensory technology is becoming more popular. It can provide vehicular sensing, routing, and clouding services for 5G network applications. Therefore, the design of next-generation VANET management schemes is important to satisfy the new demands. Herein, we focused on the paradigm of a learning algorithm and game theory to design the VCS, VRS, and VSS algorithms. By combining the VCS, VRS, and VSS algorithms, a new triple-plane bargaining game model was developed to provide an appropriate performance balance. During the VANET operations, the RSUs learned their strategies better under dynamic VANET environments, and the vehicles considered the mutual-interaction relationships of their strategies. As game players, they used the obtained information to adapt to the dynamics of the VANET environment and performed control decisions intelligently by self-adaptation. According to the unique features of VANETs, our joint design approach is suitable for providing satisfactory services under incomplete-information environments. In the future, we would like to consider privacy issues, such as differential privacy, during the VANET operation. Furthermore, we will investigate probabilistic algorithms to estimate the service quality of the sensing, routing, and clouding services. In addition, we plan to investigate smart city applications, where the different sensory information of a given area can be combined to provide a complete view of the smart city development.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-2018-0-01799) supervised by the IITP (Institute for Information and Communications Technology Promotion), and by the grant funded by the Korea government (MSIT) (no. 2017-0-00498, A Development of Deidentification Technique Based on Differential Privacy).

[Figure 3: Crowdsensing success probability. Y-axis: normalized crowdsensing success probability; X-axis: offered number of vehicles that passed over an RSU (the rate of the Poisson process). Plotted schemes: the proposed scheme, the MVIG scheme, the PVDD scheme, and the CRVC scheme.]

[Figure 1: Cloud service success ratio. Y-axis: normalized cloud service success ratio; X-axis: offered number of vehicles that passed over an RSU (the rate of the Poisson process). Plotted schemes: the proposed scheme, the MVIG scheme, the PVDD scheme, and the CRVC scheme.]

[Figure 2: Normalized dissemination throughput. Y-axis: normalized dissemination throughput; X-axis: offered number of vehicles that passed over an RSU (the rate of the Poisson process). Plotted schemes: the proposed scheme, the MVIG scheme, the PVDD scheme, and the CRVC scheme.]


References

[1] Y. Wang, Y. Liu, J. Zhang, H. Ye, and Z. Tan, "Cooperative store–carry–forward scheme for intermittently connected vehicular networks," IEEE Transactions on Vehicular Technology, vol. 66, no. 1, pp. 777–784, 2017.

[2] M. M. C. Morales, R. Haw, E.-J. Cho, C.-S. Hong, and S.-W. Lee, "An adaptable destination-based dissemination algorithm using a publish/subscribe model in vehicular networks," Journal of Computing Science and Engineering, vol. 6, no. 3, pp. 227–242, 2012.

[3] Y. Kim, S. Atchley, G. R. Vallee, S. Lee, and G. M. Shipman, "Optimizing end-to-end big data transfers over terabits network infrastructure," IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 1, pp. 188–201, 2017.

[4] J. Chen, G. Mao, C. Li, A. Zafar, and A. Y. Zomaya, "Throughput of infrastructure-based cooperative vehicular networks," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 11, pp. 2964–2979, 2017.

[5] Z. Su, Y. Hui, and Q. Yang, "The next generation vehicular networks: a content-centric framework," IEEE Wireless Communications, vol. 24, no. 1, pp. 60–66, 2017.

[6] S. Kim, "Effective crowdsensing and routing algorithms for next generation vehicular networks," Wireless Networks, 2017.

[7] C. Wang, Z. Zhang, S. Lu, and M. C. Zhou, "Estimating travel speed via sparse vehicular crowdsensing data," in Proceedings of IEEE World Forum on Internet of Things (WF-IoT), pp. 643–648, Reston, VA, USA, December 2016.

[8] L. Xiao, T. Chen, C. Xie, H. Dai, and P. Vincent, "Mobile crowdsensing games in vehicular networks," IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1535–1545, 2018.

[9] B. Oh, N. Yongchan, J. Yang, S. Park, J. Nang, and J. Kim, "Genetic algorithm-based dynamic vehicle route search using car-to-car communication," Advances in Electrical and Computer Engineering, vol. 10, no. 4, pp. 81–86, 2011.

[10] M. Chaqfeh, N. Mohamed, I. Jawhar, and J. Wu, "Vehicular cloud data collection for intelligent transportation systems," in Proceedings of 2016 3rd Smart Cloud Networks and Systems (SCNS), pp. 1–6, Dubai, UAE, December 2016.

[11] A. Ashok, S. Peter, and B. Fan, "Adaptive cloud offloading for vehicular applications," in Proceedings of 2016 IEEE Vehicular Networking Conference (VNC), pp. 1–8, Columbus, OH, USA, December 2016.

[12] J. Ahn, D. Shin, K. Kim, and J. Yang, "Indoor air quality analysis using deep learning with sensor data," Sensors, vol. 17, no. 11, pp. 1–13, 2017.

[13] S. Kim, "Timed bargaining-based opportunistic routing model for dynamic vehicular ad hoc network," EURASIP Journal on Wireless Communications and Networking, vol. 2016, no. 14, pp. 1–10, 2016.

[14] J.-H. Kim, K.-J. Lee, T.-H. Kim, and S.-B. Yang, "Effective routing schemes for double-layered peer-to-peer systems in MANET," Journal of Computing Science and Engineering, vol. 5, no. 1, pp. 19–31, 2011.

[15] I. Jang, D. Pyeon, S. Kim, and H. Yoon, "A survey on communication protocols for wireless sensor networks," Journal of Computing Science and Engineering, vol. 7, no. 4, pp. 231–241, 2013.

[16] N. Cheng, N. Zhang, N. Lu, X. Shen, J. W. Mark, and F. Liu, "Opportunistic spectrum access for CR-VANETs: a game-theoretic approach," IEEE Transactions on Vehicular Technology, vol. 63, no. 1, pp. 237–251, 2014.

[17] T. Chen, L. Wu, F. Wu, and S. Zhong, "Stimulating cooperation in vehicular ad hoc networks: a coalitional game theoretic approach," IEEE Transactions on Vehicular Technology, vol. 60, no. 2, pp. 566–579, 2011.

[18] D. B. Rawat, B. B. Bista, and G. Yan, "CoR-VANETs: game theoretic approach for channel and rate selection in cognitive radio VANETs," in Proceedings of International Conference on Broadband, Wireless Computing, Communication and Applications, pp. 94–99, Victoria, Canada, November 2012.

[19] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro clouds as virtual edge servers for efficient data collection," in Proceedings of ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services, pp. 31–35, Snowbird, UT, USA, October 2017.

[20] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro cloud in action: on gateway selection and gateway handovers," Ad Hoc Networks, vol. 78, pp. 73–83, 2018.

[21] M. Aloqaily, B. Kantarci, and H. T. Mouftah, "Multiagent/multiobjective interaction game system for service provisioning in vehicular cloud," IEEE Access, vol. 4, pp. 3153–3168, 2016.

[22] R. Kim, H. Lim, and B. Krishnamachari, "Prefetching-based data dissemination in vehicular cloud systems," IEEE Transactions on Vehicular Technology, vol. 65, no. 1, pp. 292–306, 2016.

[23] M. F. Feteiha and H. S. Hassanein, "Enabling cooperative relaying VANET clouds over LTE-A networks," IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1468–1479, 2015.

[24] K. Zhang, Y. Mao, S. Leng, Y. He, and Y. Zhang, "Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading," IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36–44, 2017.

[25] C. T. Hieu and C.-S. Hong, "A connection entropy-based multi-rate routing protocol for mobile ad hoc networks," Journal of Computing Science and Engineering, vol. 4, no. 3, pp. 225–239, 2010.

[26] S. Kim, "Adaptive ad-hoc network routing scheme by using incentive-based model," Ad Hoc and Sensor Wireless Networks, vol. 15, pp. 1–19, 2012.

[27] K. Han, C. Chen, Q. Zhao, and X. Guan, "Trajectory-based node selection scheme in vehicular crowdsensing," in Proceedings of IEEE/CIC International Conference on Communications in China, pp. 1–6, Shenzhen, China, November 2015.


Hindawiwwwhindawicom Volume 2018

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

thinspArtificial Intelligence

Hindawiwwwhindawicom Volumethinsp2018

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawiwwwhindawicom Volume 2018

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Computational Intelligence and Neuroscience

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018

Human-ComputerInteraction

Advances in

Hindawiwwwhindawicom Volume 2018

Scientic Programming

Submit your manuscripts atwwwhindawicom

Page 8: NewBargainingGameModelforCollaborativeVehicular ...downloads.hindawi.com/journals/misy/2019/6269475.pdf · (VRS).Vehicularcloudserviceisanewparadigmthatexploits cloud computing to

\[
u_R\Bigl(S_R = \bigl[\eta_R^{I_1}, \ldots, \eta_R^{I_j}, \ldots, \eta_R^{I_s}\bigr], H_t\Bigr)
= \sum_{I_h = I_1}^{I_s} \Bigl( \bigl(T^{I_h}(H_t) \times \Phi_{I_h}\bigr)
- \sum_{V_i \in V} \bigl(\mu_{V_i}^{I_h}(H_t) \times \eta_R^{I_h}(H_t)\bigr) \Bigr),
\quad \text{s.t.}\;
T^{I_h}(H_t) =
\begin{cases}
1, & \text{if } \sum_{V_i \in V} \mu_{V_i}^{I_h}(H_t) \ge L_R^{I_h},\\
0, & \text{otherwise},
\end{cases}
\tag{11}
\]

where Φ_{I_h} is the profit of I_h when the VSS for I_h is successfully completed; that is, if Σ_{V_i∈V} μ_{V_i}^{I_h}(H_t) ≥ L_R^{I_h}, then Φ_{I_h} is obtained. L_R^{I_h} is the predefined VSS requirement to complete the task I_h.
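To make the structure of (11) easier to follow, a minimal Python sketch of the RSU payoff computation is given below; the dictionary-based data layout and the function name are illustrative assumptions rather than part of the original model.

```python
def rsu_vss_payoff(tasks, contributions, prices, profits, requirements):
    """Sketch of equation (11): the RSU payoff in the GVSS game at one time step H_t.

    tasks:         list of task identifiers I_1 ... I_s
    contributions: contributions[task][vehicle] = mu^{I_h}_{V_i}(H_t)
    prices:        prices[task] = eta^{I_h}_R(H_t), incentive paid per unit of contribution
    profits:       profits[task] = Phi_{I_h}, profit obtained when the task is completed
    requirements:  requirements[task] = L^{I_h}_R, required total contribution
    """
    total = 0.0
    for task in tasks:
        collected = sum(contributions[task].values())
        completed = 1 if collected >= requirements[task] else 0      # T^{I_h}(H_t)
        revenue = completed * profits[task]
        payments = sum(mu * prices[task] for mu in contributions[task].values())
        total += revenue - payments
    return total
```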

As GVSS game players, the RSU considers the interrelationship among tasks and the vehicles' reactions to determine its incentive payment strategy. To select the best strategy, a novel learning method is needed. In this study, we develop a new learning method to determine an effective payment policy for each task. For the RSU, the strategy vector S_R(H_t) = [η_R^{I_1}(H_t), …, η_R^{I_s}(H_t)] represents the price strategy for each task at time H_t. Let Z_{H_t}^{I_s}(P_i^{I_s}, η_R^{-I_s}) be the learning value of taking strategy P_i^{I_s} ∈ η_R^{I_s} together with the vector of payment strategies other than P_i^{I_s}, i.e., η_R^{-I_s}. The RSU updates its Z(·) value over time according to the following method:

\[
Z_{H_{t+1}}^{P_i^{I_s}}\bigl(P_i^{I_s} \in \eta_R^{I_s}, \eta_R^{-I_s}\bigr)
= (1-\beta) \times Z_{H_t}^{I_s}\bigl(P_i^{I_s}, \eta_R^{-I_s}\bigr)
+ \beta \times \Bigl( (1 - c_R) \times z_{H_t}^{I_s}
+ c_R \times \sum_{I_h \in \mathbb{I} - I_s} \frac{z_{H_t}^{I_h}}{s-1} \Bigr),
\quad \text{s.t.}\;
z_{H_t}^{I_s} = \bigl(T^{I_s}(H_t) \times \Phi_{I_s}\bigr)
- \sum_{V_i \in V} \bigl(\mu_{V_i}^{I_s}(H_t) \times \eta_R^{I_s}(H_t)\bigr)
\tag{12}
\]

During the GVSS game process, the RSU adaptively learns the current VSS state and sequentially adjusts the Z(·) values for each VSS task. Based on the Z(·) values, we can determine P_{I_s∈I}^{R}(H_t) = [P_{I_s}^{P_{min}^{I_s}}(H_t), …, P_{I_s}^{P_i^{I_s}}(H_t), …, P_{I_s}^{P_{max}^{I_s}}(H_t)] as the probability distribution of R's strategy selection for I_s at time H_t. The P_i^{I_s} strategy selection probability for I_s at time H_{t+1}, P_{I_s}^{P_i^{I_s}}(H_{t+1}), is defined as

\[
P_{I_s}^{P_i^{I_s}}(H_{t+1})
= \frac{\exp\Bigl( Z_{H_t}^{P_i^{I_s}}\bigl(P_i^{I_s} \in \eta_R^{I_s}, \eta_R^{-I_s}\bigr) \Bigr)}
       {\sum_{j = j_{\min}}^{j_{\max}} \exp\Bigl( Z_{H_t}^{P_j^{I_s}}\bigl(P_j^{I_s} \in \eta_R^{I_s}, \eta_R^{-I_s}\bigr) \Bigr)}
\tag{13}
\]
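The interaction of (12) and (13) can be summarized by the following minimal sketch, assuming the Z values of each task are stored in a list indexed by price level; all names are illustrative.

```python
import math

def update_z(z_table, task, price_idx, own_outcome, other_outcomes, beta, c_r):
    """Equation (12): blend the old learning value with the task's own outcome z^{I_s}_{H_t}
    and the discounted average outcome of the other s-1 tasks."""
    cross = sum(other_outcomes) / len(other_outcomes) if other_outcomes else 0.0
    z_old = z_table[task][price_idx]
    z_table[task][price_idx] = (1 - beta) * z_old + beta * ((1 - c_r) * own_outcome + c_r * cross)

def selection_probabilities(z_values):
    """Equation (13): Boltzmann (softmax) distribution over the RSU's price strategies."""
    exps = [math.exp(z) for z in z_values]
    norm = sum(exps)
    return [e / norm for e in exps]
```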

3.5. Main Steps of the Proposed Triple-Plane Bargaining Algorithm. In this study, we design a novel triple-plane bargaining game model through a systematic interactive game process. In the proposed VCS, VRS, and VSS algorithms, the RSUs and vehicles are game players that aim to maximize their payoffs. In particular, the RSUs learn the current VANET situation based on the learning methods, and the vehicles determine their best strategies while balancing their VCS, VRS, and VSS requirements. The primary steps of the proposed scheme are described as follows.

Step 1. Control parameters are determined by the simulation scenario (Table 1).

Step 2. At the initial time, the Z and L learning values in the RSUs are equally distributed. This starting estimation guarantees that each RSU's strategy benefits similarly at the beginning of the GVCS and GVSS games.

Step 3. During the GVCS game, the proposer RSU selects its strategy P_R ∈ S_R to maximize its payoff (U_R) according to (1), (3), and (4). As responders, vehicles select their strategy r ∈ S_V to maximize their payoffs (U_V) according to (2) while considering their current virtual money (Γ).

Step 4. At every time step (H), the RSU adjusts the learning values (L(·)) and the probability distribution (P_R) based on equations (3) and (4).

Step 5. During the GVRS game, individual vehicles estimate the wireless link states (L_C) according to equation (5). At each time period, the L_C values are estimated online based on the vehicle's relative distance and speed.

Step 6. During the GVRS game, the source vehicle configures a multihop routing path using the Bellman–Ford algorithm based on equation (6). The source vehicle's payoff (U) is decided according to (7) while considering its current virtual money (Γ).

Step 7. During the GVSS game, the proposer RSU selects its strategy S_R to maximize its payoff (u_R) according to (11). As responders, vehicles select their strategy S_V to maximize their combined payoff (CP) according to (9) while adjusting each bargaining power (ψ) based on equation (10).

Step 8. At every time step (H), the RSU adjusts the learning values (Z(·)) and the probability distribution (P_R) according to equations (12) and (13).

Step 9. Based on the interactive feedback process, the dynamics of our GVCS, GVRS, and GVSS games cause a cascade of interactions among the game players, who choose their best strategies in an online distributed manner.

Step 10. Under the dynamic VANET environment, the individual game players constantly self-monitor for the next game process; go to Step 3.
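The overall control flow of Steps 3–10 is sketched below as a schematic driver loop; the three callables stand in for the GVCS, GVRS, and GVSS operations defined by equations (1)–(13) and are assumptions made only for illustration.

```python
def run_triple_plane_game(gvcs_round, gvrs_round, gvss_round, max_rounds=100):
    """Schematic driver for Steps 3-10: the three game planes are executed in turn and
    their feedback is carried into the next round. The three arguments are callables
    implementing the GVCS, GVRS, and GVSS operations of the proposed scheme."""
    state = {}
    for _ in range(max_rounds):        # Step 10: players keep self-monitoring (go to Step 3)
        state = gvcs_round(state)      # Steps 3-4: cloud-service bargaining and L-value update
        state = gvrs_round(state)      # Steps 5-6: link-state estimation and Bellman-Ford routing
        state = gvss_round(state)      # Steps 7-8: sensing-service bargaining and Z-value update
    return state                       # Step 9: cascade of interactions among the players
```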


4. Performance Evaluation

4.1. Simulation Setup. In this section, we evaluate the performance of our proposed protocol and compare it with that of the MVIG, PVDD, and CRVC schemes [21–23]. To ensure a fair comparison, the following assumptions and system scenario were used:

(i) The simulated system was assumed to be a TDMA packet system for VANETs.

(ii) The number of vehicles that passed over an RSU was the rate of the Poisson process (ρ); the offered load was varied from 0 to 3.0 (a sketch of this arrival model is given after the list).

(iii) Fifty RSUs were distributed randomly over the 100 km road area, and the velocity of each mobile vehicle was randomly selected as 36 km/h, 72 km/h, or 108 km/h.

(iv) The maximum wireless coverage range of each vehicle was set to 500 m.

(v) The cloud computation capacity (C) is 5 GHz, and one BCSU is the minimum amount (e.g., 20 MHz/s in our system) of the cloud service unit.

(vi) The number (s) of sensing tasks in each R is 4; i.e., I = {I_1, I_2, I_3, I_4}.

(vii) The source and destination vehicles were randomly selected. Initially, the virtual money (Γ) in each vehicle was set to 100.

(viii) At the source node, data dissemination was generated at a rate of λ (packets/s). Under this assumption, the time duration H in our simulation model is one second.

(ix) Network performance measures obtained on the basis of 100 simulation runs are plotted as functions of the vehicle distribution (ρ).
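As a small illustration of assumption (ii), the following sketch draws the number of vehicles passing an RSU in one period from a Poisson distribution with rate ρ and sweeps the offered load from 0.5 to 3.0; the sampling routine is an implementation choice, not part of the original simulator.

```python
import math
import random

def vehicles_passing_rsu(rho):
    """Draw the number of vehicles that pass an RSU in one period
    from a Poisson distribution with rate rho (Knuth's method)."""
    threshold = math.exp(-rho)
    count, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return count
        count += 1

# Example: sweep the offered load, as in the simulation setup (100 runs per point).
for rho in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    arrivals = [vehicles_passing_rsu(rho) for _ in range(100)]
    print(rho, sum(arrivals) / len(arrivals))
```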

To demonstrate the validity of our proposed method, we measured the cloud service success ratio, the normalized dissemination throughput, and the crowdsensing success probability. Table 1 shows the control factors and coefficients used in the simulation. Each parameter has its own characteristics [6].

4.2. Results and Discussion. Figure 1 compares the cloud service success ratio of each scheme. In this study, the cloud service success ratio represents the rate of cloud services that were completed successfully; this is a key performance evaluation factor in the VCS operation. As shown in Figure 1, the cloud service success ratios of all schemes are similar to each other; however, the proposed scheme adopts an interactive environmental feedback mechanism, and the RSUs in our scheme adaptively adjust their VCS costs. This approach improves the VCS performance relative to the existing MVIG, PVDD, and CRVC schemes. Therefore, our scheme outperforms the existing methods from low to high vehicle distribution intensities.

Figure 2 compares the normalized dissemination throughput in VANETs. Typically, the network throughput is measured as bits per second of network access. In this study, the dissemination throughput is defined as the ratio of the data amount successfully received at the destination vehicles to the total data amount generated at the source vehicles. The throughput improvement achieved by the proposed scheme is a result of our GVRS game paradigm. During the VRS operations, each vehicle in the proposed scheme can select the most efficient routing path with real-time adaptability and self-flexibility. Hence, we attained a higher dissemination throughput compared with other existing approaches, which are designed as lopsided, one-way methods and do not effectively adapt to the dynamic and diversified VANET conditions.

The crowdsensing success probability, shown in Figure 3, represents the efficiency of the VANET system. In the proposed scheme, we employed the learning-based triple-plane game model to perform control decisions in a distributed online manner. According to the interactive operations of the VCS, VSS, and VRS, our game-based approach improves the crowdsensing success probability more effectively than the other schemes. The simulation results shown in Figures 1 to 3 demonstrate that the proposed scheme can attain an appropriate performance balance; in contrast, the MVIG [21], PVDD [22], and CRVC [23] schemes cannot offer this outcome under widely different and diversified VANET situations.
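For reference, the three reported metrics reduce to simple ratios of run counters, as in the minimal sketch below; the counter names are illustrative, and the crowdsensing ratio shown is an assumed completed-to-issued task ratio, which the text does not define in detail.

```python
def evaluation_metrics(cloud_done, cloud_requested, bits_received, bits_generated,
                       tasks_completed, tasks_issued):
    """Normalized metrics plotted in Figures 1-3, expressed as plain ratios of run counters."""
    return {
        "cloud_service_success_ratio": cloud_done / cloud_requested,
        "normalized_dissemination_throughput": bits_received / bits_generated,
        "crowdsensing_success_probability": tasks_completed / tasks_issued,
    }
```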

Table 1: System parameters used in the simulation experiments.

Parameter | Value | Description
[P^R_min, …, P^R_j, …, P^R_max] | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for the cloud service (min 1, max 5)
[r_min, …, r_k, …, r_max] | 1, 2, 3, 4, 5 | The amount of cloud services in terms of BCSUs
[P^I_min, …, P^I_l, …, P^I_max] | 0.2, 0.4, 0.6, 0.8, 1 | Predefined price levels for the sensing service (min 1, max 5)
θ | 0.2 | A coefficient factor of the cost calculation
H | 0.8 | A profit factor of a vehicle if one BCSU is processed
α | 0.5 | The weight control factor for the distance and velocity
χ | 0.2 | A learning rate to update the L values
Q | 5 | The routing outcome at each time period
J | 0.1 | A coefficient factor to estimate the incentive payment
C_I1, C_I2, C_I3, C_I4 | 0.1, 0.2, 0.3, 0.4 | Predefined cost for each sensing task
Φ_I1, Φ_I2, Φ_I3, Φ_I4 | 10, 10, 10, 10 | Predefined profit for each sensing task
L_1, L_2, L_3, L_4 | 5, 5, 5, 5 | Predefined requirement for each sensing task
β | 0.3 | A control parameter to estimate the learning value Z
c_R | 0.3 | A discount factor to estimate the learning value Z
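The parameters of Table 1 can also be collected into a single configuration object, as in the following sketch; the dictionary layout and key names are assumptions that simply mirror the table.

```python
SIMULATION_PARAMS = {
    "cloud_price_levels": [0.2, 0.4, 0.6, 0.8, 1.0],     # [P^R_min, ..., P^R_max]
    "cloud_service_amounts_bcsu": [1, 2, 3, 4, 5],       # [r_min, ..., r_max]
    "sensing_price_levels": [0.2, 0.4, 0.6, 0.8, 1.0],   # [P^I_min, ..., P^I_max]
    "theta": 0.2,                       # coefficient factor of the cost calculation
    "H": 0.8,                           # profit factor of a vehicle per processed BCSU
    "alpha": 0.5,                       # weight control factor for distance and velocity
    "chi": 0.2,                         # learning rate for the L values
    "Q": 5,                             # routing outcome at each time period
    "J": 0.1,                           # coefficient factor for the incentive payment
    "task_costs": [0.1, 0.2, 0.3, 0.4], # C_I1 ... C_I4
    "task_profits": [10, 10, 10, 10],   # Phi_I1 ... Phi_I4
    "task_requirements": [5, 5, 5, 5],  # L_1 ... L_4
    "beta": 0.3,                        # control parameter for the learning value Z
    "c_R": 0.3,                         # discount factor for the learning value Z
}
```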


5. Summary and Conclusions

The VANET using vehicle-based sensory technology is becoming more popular. It can provide vehicular sensing, routing, and clouding services for 5G network applications. Therefore, the design of next-generation VANET management schemes is important to satisfy the new demands. Herein, we focused on the paradigm of a learning algorithm and game theory to design the VCS, VRS, and VSS algorithms. By combining the VCS, VRS, and VSS algorithms, a new triple-plane bargaining game model was developed to provide an appropriate performance balance. During the VANET operations, the RSUs learned their strategies better under dynamic VANET environments, and the vehicles considered the mutual-interaction relationships of their strategies. As game players, they used the obtained information to adapt to the dynamics of the VANET environment and performed control decisions intelligently by self-adaptation. According to the unique features of VANETs, our joint design approach is suitable for providing satisfactory services under incomplete-information environments. In the future, we would like to consider privacy issues, such as differential privacy, during the VANET operation. Furthermore, we will investigate probabilistic algorithms to estimate the service quality of sensing, routing, and clouding services. In addition, we plan to investigate smart city applications, where the different sensory information of a given area can be combined to provide a complete view of smart city development.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-2018-0-01799) supervised by the IITP (Institute for Information and Communications Technology Promotion) grant funded by the Korea government (MSIT) (no. 2017-0-00498, A Development of Deidentification Technique Based on Differential Privacy).

[Figure 1: Cloud service success ratio. Y-axis: normalized cloud service success ratio; X-axis: offered number of vehicles that passed over an RSU (the rate of the Poisson process). Curves: the proposed scheme, the MVIG scheme, the PVDD scheme, and the CRVC scheme.]

[Figure 2: Normalized dissemination throughput. Y-axis: normalized dissemination throughput; X-axis: offered number of vehicles that passed over an RSU (the rate of the Poisson process). Curves: the proposed scheme, the MVIG scheme, the PVDD scheme, and the CRVC scheme.]

[Figure 3: Crowdsensing success probability. Y-axis: normalized crowdsensing success probability; X-axis: offered number of vehicles that passed over an RSU (the rate of the Poisson process). Curves: the proposed scheme, the MVIG scheme, the PVDD scheme, and the CRVC scheme.]


References

[1] Y. Wang, Y. Liu, J. Zhang, H. Ye, and Z. Tan, "Cooperative store–carry–forward scheme for intermittently connected vehicular networks," IEEE Transactions on Vehicular Technology, vol. 66, no. 1, pp. 777–784, 2017.

[2] M. M. C. Morales, R. Haw, E.-J. Cho, C.-S. Hong, and S.-W. Lee, "An adaptable destination-based dissemination algorithm using a publish/subscribe model in vehicular networks," Journal of Computing Science and Engineering, vol. 6, no. 3, pp. 227–242, 2012.

[3] Y. Kim, S. Atchley, G. R. Vallee, S. Lee, and G. M. Shipman, "Optimizing end-to-end big data transfers over terabits network infrastructure," IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 1, pp. 188–201, 2017.

[4] J. Chen, G. Mao, C. Li, A. Zafar, and A. Y. Zomaya, "Throughput of infrastructure-based cooperative vehicular networks," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 11, pp. 2964–2979, 2017.

[5] Z. Su, Y. Hui, and Q. Yang, "The next generation vehicular networks: a content-centric framework," IEEE Wireless Communications, vol. 24, no. 1, pp. 60–66, 2017.

[6] S. Kim, "Effective crowdsensing and routing algorithms for next generation vehicular networks," Wireless Networks, 2017.

[7] C. Wang, Z. Zhang, S. Lu, and M. C. Zhou, "Estimating travel speed via sparse vehicular crowdsensing data," in Proceedings of IEEE World Forum on Internet of Things (WF-IoT), pp. 643–648, Reston, VA, USA, December 2016.

[8] L. Xiao, T. Chen, C. Xie, H. Dai, and P. Vincent, "Mobile crowdsensing games in vehicular networks," IEEE Transactions on Vehicular Technology, vol. 67, no. 2, pp. 1535–1545, 2018.

[9] B. Oh, N. Yongchan, J. Yang, S. Park, J. Nang, and J. Kim, "Genetic algorithm-based dynamic vehicle route search using car-to-car communication," Advances in Electrical and Computer Engineering, vol. 10, no. 4, pp. 81–86, 2011.

[10] M. Chaqfeh, N. Mohamed, I. Jawhar, and J. Wu, "Vehicular cloud data collection for intelligent transportation systems," in Proceedings of 2016 3rd Smart Cloud Networks and Systems (SCNS), pp. 1–6, Dubai, UAE, December 2016.

[11] A. Ashok, S. Peter, and B. Fan, "Adaptive cloud offloading for vehicular applications," in Proceedings of 2016 IEEE Vehicular Networking Conference (VNC), pp. 1–8, Columbus, OH, USA, December 2016.

[12] J. Ahn, D. Shin, K. Kim, and J. Yang, "Indoor air quality analysis using deep learning with sensor data," Sensors, vol. 17, no. 11, pp. 1–13, 2017.

[13] S. Kim, "Timed bargaining-based opportunistic routing model for dynamic vehicular ad hoc network," EURASIP Journal on Wireless Communications and Networking, vol. 2016, no. 14, pp. 1–10, 2016.

[14] J.-H. Kim, K.-J. Lee, T.-H. Kim, and S.-B. Yang, "Effective routing schemes for double-layered peer-to-peer systems in MANET," Journal of Computing Science and Engineering, vol. 5, no. 1, pp. 19–31, 2011.

[15] I. Jang, D. Pyeon, S. Kim, and H. Yoon, "A survey on communication protocols for wireless sensor networks," Journal of Computing Science and Engineering, vol. 7, no. 4, pp. 231–241, 2013.

[16] N. Cheng, N. Zhang, N. Lu, X. Shen, J. W. Mark, and F. Liu, "Opportunistic spectrum access for CR-VANETs: a game-theoretic approach," IEEE Transactions on Vehicular Technology, vol. 63, no. 1, pp. 237–251, 2014.

[17] T. Chen, L. Wu, F. Wu, and S. Zhong, "Stimulating cooperation in vehicular ad hoc networks: a coalitional game theoretic approach," IEEE Transactions on Vehicular Technology, vol. 60, no. 2, pp. 566–579, 2011.

[18] D. B. Rawat, B. B. Bista, and G. Yan, "CoR-VANETs: game theoretic approach for channel and rate selection in cognitive radio VANETs," in Proceedings of International Conference on Broadband, Wireless Computing, Communication and Applications, pp. 94–99, Victoria, Canada, November 2012.

[19] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro clouds as virtual edge servers for efficient data collection," in Proceedings of ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services, pp. 31–35, Snowbird, UT, USA, October 2017.

[20] F. Hagenauer, C. Sommer, T. Higuchi, O. Altintas, and F. Dressler, "Vehicular micro cloud in action: on gateway selection and gateway handovers," Ad Hoc Networks, vol. 78, pp. 73–83, 2018.

[21] M. Aloqaily, B. Kantarci, and H. T. Mouftah, "Multiagent/multiobjective interaction game system for service provisioning in vehicular cloud," IEEE Access, vol. 4, pp. 3153–3168, 2016.

[22] R. Kim, H. Lim, and B. Krishnamachari, "Prefetching-based data dissemination in vehicular cloud systems," IEEE Transactions on Vehicular Technology, vol. 65, no. 1, pp. 292–306, 2016.

[23] M. F. Feteiha and H. S. Hassanein, "Enabling cooperative relaying VANET clouds over LTE-A networks," IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1468–1479, 2015.

[24] K. Zhang, Y. Mao, S. Leng, Y. He, and Y. Zhang, "Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading," IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36–44, 2017.

[25] C. T. Hieu and C.-S. Hong, "A connection entropy-based multi-rate routing protocol for mobile ad hoc networks," Journal of Computing Science and Engineering, vol. 4, no. 3, pp. 225–239, 2010.

[26] S. Kim, "Adaptive ad-hoc network routing scheme by using incentive-based model," Ad Hoc and Sensor Wireless Networks, vol. 15, pp. 1–19, 2012.

[27] K. Han, C. Chen, Q. Zhao, and X. Guan, "Trajectory-based node selection scheme in vehicular crowdsensing," in Proceedings of IEEE/CIC International Conference on Communications in China, pp. 1–6, Shenzhen, China, November 2015.
