Randomized Ensemble SVM based Deep learning with Verifiable dynamic access control using user revocation in IoT architecture
RAVULA ARUN KUMAR* and KAMBALAPALLY VINUTHNA
Department of CSE, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram,
Andhra Pradesh 522502, India
e-mail: [email protected]; [email protected]
MS received 8 June 2020; revised 14 June 2021; accepted 30 July 2021
Abstract. As applications based on the Internet of Things (IoT) grow rapidly, the deployment of sensor
nodes has also increased, so a large amount of sensed data has to be processed before storage. Since IoT-based
applications rely on context-aware computing, an attacker can easily inject false data. Although existing
mechanisms can secure the data, it remains subject to theft because they rely on symmetric
encryption. To provide a comprehensive security system, an access control strategy called Verifiable
Dynamic Access Control using User Revocation is employed, which combines CPASBE (Cipher Text
Attribute Set Based Encryption) with VOMAACS (Verifiable Outsourced Multi-Authority Access Control Scheme).
CPASBE protects sensitive data from third parties by keeping the data with the data owner:
the owner is solely responsible for securing his data, so replication is avoided. The CPASBE
strategy ensures that the encrypted information is kept private regardless of the reliability of the provider,
which secures the information against collusion attacks.
Keywords. CPASBE – Cipher Text Attribute Set Based Encryption; VOMAACS – Verifiable Outsourced Multi-
Authority Access Control Scheme.
1. Introduction
Generally, numerous database clusters and additional resources are required to store big data.
Storage and retrieval, however, are not the only issues; extracting important patterns from big
data, for example patient diagnostic data, is also a fundamental problem [1].
Nowadays, emerging applications are being produced for different scenarios, such as sensor
networks. Sensors are regularly used in critical applications today and will certainly be in the
near future. Various body-sensor devices have been created for continuous monitoring of
healthcare, personal fitness, and physical activity [2, 3]. Recently, many researchers have been
attempting to build various wearable clinical devices into remote health-monitoring frameworks
for continuous observation of individual health conditions [4]. For example, wearable devices
are used for recommending physiological activities and nutrition habits based on a multi-day
period of continuous physiological monitoring of patients. During this period, wearable sensors
continuously observe the patients' health and store the data in a repository [5]. This enables
specialists to analyse a patient's health state and improve outcomes using not only laboratory
tests but also the well-being data gathered from the wearable body sensors. In this way, the
sensor data are frequently used for taking suitable action on a patient's health: treatment
suggestions, lifestyle choices, and early diagnosis, all of which are essential to improving the
quality of a patient's health [6]. Conventional storage platforms, however, are not designed to
support the previously stated growing sensing deployments, where the volume, velocity, and
variety of data keep growing. This issue requires the development of an effective storage
framework for storing and processing such large information [7]. Hence, a verified architecture
and implementation of a scalable IoT design for processing and protecting real-time sensor
information is needed when adopting big-data technologies [8].
Fog computing is a distributed computing framework in which particular application services
are handled at the network edge in a smart device, while others are handled in a remote data
centre [9, 10]. The data generated by sensors embedded in various things/objects produce huge
amounts of unstructured (big) data on a continuous basis, and these data hold the promise of
knowledge and insights for significantly improved decision processes.

*For correspondence
Sådhanå (2021) 46:229 © Indian Academy of Sciences
https://doi.org/10.1007/s12046-021-01705-1
Although the integration of cloud computing and fog computing has brought numerous
advantages to IT organisations, the most critical drawback of the cloud is the security and
protection of the outsourced data, so cloud data safety is an important concern for data owners
using cloud services [11]. While using cloud services, clients have to hand over their data to
cloud service providers, and the cloud service provider is a profit-driven entity that cannot be
fully trusted. Data are a fundamental asset for data owners, so data security is the critical
concern [12]. Data owners will first want assurance that their data are kept concealed from
unauthorised persons. Beyond security, data privacy, flexibility, and fine-grained access control
in distributed computing environments are also critical issues.
Access control is likewise a crucial issue across various models. To achieve fine-grained access
control, a number of schemes [13–16] have been presented. However, these schemes are only
appropriate for frameworks in which data owners and service providers are in the same trusted
domain. Since data owners and service providers are ordinarily in different trusted domains,
such schemes cannot be applied, so another scheme called attribute-based encryption [17–19]
was introduced. Attribute-Based Encryption (ABE) remains a promising method to guarantee
the end-to-end security of big data in the cloud. However, policy updating has always remained
a difficult issue when ABE is used to build access-control schemes. A trivial implementation is
to let data owners retrieve the data, re-encrypt it under the new access policy, and then send it
back to the cloud, but this brings a high communication overhead and a heavy computation
load on data owners [20].
Moreover, IoT devices are accessed from anywhere through dependent systems such as the
Internet, so IoT systems are unprotected against a wide range of malicious attacks; if security
issues are not addressed, secret records may be leaked. An Intrusion Detection System (IDS)
[21] is used to monitor malicious traffic in particular nodes and networks. It can act as a second
line of defence that safeguards the system from intruders. An intrusion is an undesirable or
malicious activity that is harmful to sensor nodes. An IDS can be a software or hardware
device; it can assess and explore machine and user activities, recognise signatures of known
attacks, and distinguish malicious network activity [22–25]. The objective of an IDS is to watch
the networks and nodes, recognise various intrusions in the system, and alert the clients after
intrusions have been detected. Recently, IoT devices have been constantly creating voluminous
data, regularly called big data (structured and unstructured). Generally, it is hard to process
and analyse big data to find important information. In addition, security is the key prerequisite
for healthcare in the big-data framework. To overcome this issue, an architecture for the
Internet of Things (IoT) that stores and processes scalable sensor data is required. This design
comprises two fundamental sub-models, namely the Meta Fog-Redirection (MF-R) and
Grouping and Choosing (GC) architectures. The MF-R architecture uses big-data technologies
such as Apache Pig and Apache HBase for the collection and storage of the sensor data
generated by the diverse sensor devices. With distributed computing, the GC design is used to
protect integration on cloud systems.
2. Literature survey
Some of the existing studies are as follows.
Bhavani Thuraisingham et al [26] presented an approach, namely the Proactive Dynamic Secure
Data Scheme (P2DS), which intends to ensure that unauthorised parties cannot reach the
protected information. There are two principal algorithms behind the proposed scheme: the
Attribute-based Semantic Access Control (A-SAC) procedure and the Proactive Determinative
Access (PDA) procedure. The primary contributions of this paper cover three aspects. Firstly,
it presented a semantic methodology for controlling data access. Secondly, it offered a
client-driven methodology that proactively keeps clients' information from unexpected actions
on the cloud side. Thirdly, the scheme has a higher level of secure manageability, so it can
manage dynamic dangers, including rising and future perils.
Rahman et al [27] exploited the strategic location of gateways at the edge of the network to
offer several higher-level services, for example local storage, real-time local data processing,
and embedded data mining, thereby presenting a Smart e-Health Gateway. They exploited the
idea of fog computing in healthcare IoT frameworks by forming a geo-distributed intermediary
layer of intelligence between sensor nodes and the cloud. By taking responsibility for handling
certain burdens of the sensor network and a remote healthcare centre, this fog-assisted
framework design can address numerous difficulties in ubiquitous healthcare structures, with
respect to mobility, energy efficiency, scalability, and reliability issues. The effective operation
of Smart e-Health Gateways can enable large deployments of ubiquitous health-monitoring
frameworks, particularly in clinical situations. Additionally, they presented a prototype of a
Smart e-Health Gateway called UT-GATE, in which a portion
of the discussed higher-level features have been implemented. Furthermore, they implemented
an IoT-based Early Warning Score (EWS) health-monitoring application to determine the
efficiency and applicability of the framework in addressing a medical case study. Their
proof-of-concept design shows an IoT-based health-monitoring framework with improved
overall system intelligence, energy efficiency, mobility, performance, interoperability, security,
and reliability.
Thirumalai et al [28] discussed the Memory Efficient Multi Key (MEMK) generation scheme.
For sensitive information, their scheme helps in exchanging data between the cloud and IoT,
and between IoT devices. Since the cryptography belongs to the asymmetric type, it has public
and private keys. For memory efficiency, their scheme reworks the RSA scheme with a
Diophantine form of the nonlinear equation; the scheme also performs well, chiefly because of
the use of the RSA public key alone. Due to this, MEMK does not require a multiplicative-inverse
function or Extended Euclid's algorithm.
Yinghui Zhang et al [29] introduced an attribute-based data-sharing arrangement suitable for
resource-constrained mobile users in cloud computing. This arrangement removes a heavy
computation task by using system public parameters rather than an incomplete encryption
step. A public ciphertext test phase is performed before the decryption phase, which eliminates
a large portion of the computation overhead caused by illegitimate ciphertexts. For information
security, a Chameleon hash function is used to make an immediate ciphertext, which is blinded
by the offline ciphertext to obtain the final online ciphertext. This arrangement is shown to be
secure against adaptively chosen ciphertext attacks, which is widely recognised as a standard
security notion, and extensive analysis shows that the proposed arrangement is secure and
effective. However, this system does not consider direct attribute revocation in information
sharing for resource-constrained users in distributed computing.
Xiong et al [30] explained that the sharing of personal data with multiple users from different
domains has benefitted considerably from the rapid advances of cloud computing, and it is
highly desirable to ensure that the shared file is not exposed to unauthorized users or cloud
providers. Unfortunately, issues such as achieving the
flexible access control of the sharing file, preserving the
privacy of the receivers, forming the receiver groups
dynamically, and high efficiency in encryption/decryption
remain challenging. To deal with these challenges, a novel anonymous attribute-based
broadcast encryption (A2B2E) scheme was provided; it features the property of a hidden access
policy and enables the data owner to share his/her data with multiple participants who are
inside a predefined receiver set and fulfil the access policy. Firstly, it suggested a concrete
A2B2E scheme together with a rigorous and formal
security proof without the support of the random oracle
model. Then, it designed an efficient and secure data
sharing system by incorporating the A2 B2 E scheme, ver-
ifiable outsourcing decryption technique for attribute-based
encryption, and the idea of online/offline attribute-based
encryption. Extensive security analysis and performance
evaluation demonstrate that the data-sharing system is
secure and practical.
The proposed technique is especially significant to data owners for protecting information from
attackers. It then extends this strategy for checking integrity in the cloud structure at the time
of record updating. Besides, it proposes an Attribute-Based Encryption (ABE) scheme which
adds fitting checks to uncover as many violations as is feasible. This is accomplished by using
a User Operation Table (UOT): each user keeps a UOT for recording local operations, and every
entry in the UOT is described by three components: an operation, a logical vector, and a
physical vector. Although users can see distinct sub-servers in the CSP, the scheme is designed
to provide a common update framework so that blocks can be verified and the refreshed data
supplied to customers.
One of the key obstacles facing the adoption of the Internet of Things is the security challenge,
particularly in the areas of privacy and confidentiality. The reliability, economy, efficiency, and
feasibility of the security and protection of the Internet of Things are essential for ensuring
confidentiality, integrity, authentication, and access control. For example, customers may be
willing to share certain data about their preferences in the public spaces of the web, but only
with the fundamental safeguards that prevent the disclosure of information to other
individuals. In this way, the framework must ensure the protection and secrecy of the client.
Our main aim is to monitor client actions to recognise legitimate access, to prevent any
unauthorised access to data, and to reduce the computational cost and burden of the IoT
devices.
In secure fog communication, an existing ciphertext technique is used for encryption and
decryption in a key-exchange protocol to establish secure communication among a group of
fog nodes and the cloud. Since the encryption and decryption operations require a large
number of modular exponentiations and pairings, the computation cost is high. The traditional
ciphertext-policy attribute scheme creates new private keys for the user's original set of
attributes; the new keys can mislead the users, and hence it is difficult to trace a malicious
user. While using the face identification and resolution technique in the fog computing
framework, the challenges of confidentiality, integrity, and availability in the process of face
identification have to be met, so high privacy-preserving security is required. For that, a
flexible, scalable, fine-grained access mechanism must be maintained.
policy-based schemes have been utilized. Generally,
Attribute-Based Encryption (ABE) has ascended as a
promising procedure to ensure data security. It empowers
data owners to portray and get the chance to courses of
action scrambles the information under the methodology,
with the genuine target that single clients whose properties
fulfilling these section procedures can interpret the infor-
mation. Whenever more and more affiliation is pursued, the
path of action that energizes the cloud becomes a crucial
problem, because if the information holder redistributes the
information to the cloud, a duplicate in close systems would
be held. Precisely when the information proprietor needs to
change the section strategy, it needs to move the informa-
tion back to the zone site from the cloud, re-encode the
information under the new access approach, and a for brief
span later move it back to the cloud server. Along these
lines, it accomplishes a high correspondence overhead and
overwhelming check burden on information proprietors.
This rouses us to build up another method to redistribute the
undertaking of methodology invigorating to the cloud
server.
3. Superintend framework based on regression approach using encryption
In this research, to rectify the issues given above, we propose Randomized Ensemble SVM-based
deep learning with verifiable dynamic access control using user revocation in IoT architecture,
which aims at improving data security. Initially, client validation is a basic procedure because
of the ever-growing security and privacy concerns. In this research, the client's face is given
along with the client's signature for client verification. At first, the signature of the client is
encrypted with the SHA algorithm to make the digital signature, which gives the initial
confirmation. To give greater security to the validation, Deep Neural Network (DNN) learning
is used. Utilizing a DNN, feature similarity can be computed to locate an unauthorised client.
The ability to create new features from the restricted set of features in the input dataset makes
deep learning valuable over a range of AI techniques, because deep learning can derive
features without human intervention. Even so, its computation is extremely expensive, and
because it operates on raw features, deep-learning results are not easy to interpret. The
randomized ensemble SVM is therefore consolidated into the DNN to enhance interpretability.
Randomized Ensemble SVM is the integration of SVM with Random Forests: the SVM classifier
has limitations in speed and size, both in training and testing, and it has high algorithmic
complexity and wide memory requirements, so the Random Forest is fused with the SVM to
overcome these drawbacks. Random Forests are also called ensemble learning because, rather
than training a solitary tree, a variety of trees is trained. Each tree is trained on a subset of the
data, chosen with replacement, with an independent subset of the input variables. Random
Forests have good accuracy, are fast and adaptable to large datasets, and require a limited
amount of memory. Because of that randomized nature, the DNN performs better deep
learning, making the system increasingly validated for preserving security.
To give greater security and protection, the access control system called Verifiable Dynamic
Access Control using User Revocation is used, in which the combined CPASBE (Cipher Text
Attribute Set Based Encryption) – VOMAACS (Verifiable Outsourced Multi-Authority Access
Control Scheme) is used. CPASBE protects sensitive data from third parties by keeping the data
with the data owner: the data owner is solely responsible for securing his data, and replication
is avoided. The CPASBE procedure ensures that encrypted information is kept secret regardless
of whether the supplier is reliable, and it protects the information against collusion attacks.
Moreover, CP-ASBE supports multiple value assignments for an attribute with a single key,
organising user attributes in the key and thus allowing users to dynamically impose constraints
on combining attributes to satisfy policies. In addition, VOMAACS is used, which is secure
against collusion attacks.
Here, the vast majority of the encryption and decryption computation is outsourced to fog
devices, which immensely decreases the computation on the customer side; furthermore, it
gives a verification strategy for the outsourced encryption and decryption. If a fog device
returns inaccurate outcomes, clients can detect it promptly by running the corresponding
verification computation. Moreover, this VOMAACS scheme has an efficient client and attribute
revocation method. During the process of attribute revocation, a large portion of the update
and re-encryption tasks are outsourced to the cloud server, which greatly lessens the overhead
of the data owner. In the meantime, not all ciphertexts and proxy keys need to be refreshed;
only the components involved in the revoked attribute are modified. In this way, the combined
access policy control of CPASBE and VOMAACS can greatly improve the efficiency of security
and privacy preservation with reduced cost.
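The verification step described above can be illustrated with a toy sketch. The names and the XOR "cipher" below are assumptions for illustration, not the paper's actual CPASBE/VOMAACS construction: the owner stores a SHA-512 commitment to the plaintext, and after the fog node's outsourced decryption the client recomputes the hash before trusting the result.

```python
import hashlib

# Toy sketch; the XOR "cipher" and all names are illustrative assumptions,
# not the paper's CPASBE/VOMAACS construction.
def encrypt(plaintext: bytes, key: int):
    ct = bytes(b ^ key for b in plaintext)          # stand-in for ABE encryption
    return ct, hashlib.sha512(plaintext).digest()   # ciphertext + commitment

def fog_decrypt(ct: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in ct)               # outsourced heavy step

def client_check(result: bytes, commitment: bytes) -> bool:
    return hashlib.sha512(result).digest() == commitment

ct, tag = encrypt(b"patient record", 0x5A)
assert client_check(fog_decrypt(ct, 0x5A), tag)     # honest result accepted
assert not client_check(b"forged output", tag)      # tampered result rejected
```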
For improving network security, an optimized intrusion detection system is used, in which PSO
(Particle Swarm Optimization) is integrated for intrusion detection. PSO tends to fall into local
optima in high-dimensional spaces and has a low convergence rate in the iterative procedure.
MBO (Marriage in honey Bees Optimization) has random-assignment behaviour, thus providing
fast convergence, and the search of the PSO algorithm at the population phase is optimized by
this random behaviour. Accordingly, this optimized algorithm can optimally detect intrusion
attacks in the audit data. If any anomaly is present, the authentication and access control
approach is again adopted to make the network secure. Throughout all the techniques, this
paper provides more privacy and security with less cost.
3.1 Secure Hash Algorithm
An application or a computer that demands authentication from access owners will implement
a login process to secure the data. The login operation is performed in conjunction with the
application or device by submitting data in the form of a username and password; if the data
are right, the client is allowed to enter the network. A one-way hash function, also known as
a message digest or compression function, is a cryptographic function which takes a
variable-length input and produces a binary output of constant length. The one-way hash
function is built so that the mapping is hard to reverse, which ensures that the output
corresponds to a particular value. The main concept of using the secure hash algorithm is to
secure the data in the process: the data are transmitted in cryptographic form using a secure
hash algorithm, which increases the reliability of privacy and protection at minimal cost.
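As a minimal sketch of the login flow just described (the function names are illustrative, and a production system would also add salting and a slow key-derivation function), the server stores only the fixed-length digest and compares digests at login:

```python
import hashlib

def digest(password: str) -> str:
    """One-way hash of the credential; the stored value is fixed-length
    (512 bits) and does not reveal the password itself."""
    return hashlib.sha512(password.encode()).hexdigest()

# At registration, the server stores only the digest.
stored = digest("s3cret-passw0rd")

# At login, the submitted password is hashed and compared.
assert digest("s3cret-passw0rd") == stored
assert digest("wrong-guess") != stored
assert len(stored) * 4 == 512   # constant-length 512-bit output
```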
The SHA 512 algorithm is based on a one-way hash function and is a development of the
previous algorithms SHA 0, SHA 1, SHA 256, and SHA 384. The cryptographic algorithm of
SHA 512 receives input in the form of messages of any size and generates a message digest
that has a 512-bit length.
Its predecessors include SHA 1 and MD5 (itself a renewal of MD4); the lineage and development
of these hash algorithms indicate that collision vulnerabilities in the earlier algorithms have
been observed. Currently, the National Institute of Standards and Technology (NIST) has made
SHA 224, SHA 256, SHA 384, and SHA 512 the new standard hash functions. Table 2
summarises the parameters of some hash functions.
The SHA 512 hash function performs the same hash operation as the SHA 2 family in general
[10]. It generates a 512-bit message digest and operates on 1024-bit blocks: the SHA 512
cryptographic algorithm accepts input in the form of a message of any length or size and
generates a message digest with a fixed length of 512 bits.
The workings of making message digest with SHA 512
algorithm are as follows:
3.1a The addition of bits: The primary procedure is to pad the message with bits such that the
message length (in bits) is congruent to 896 mod 1024. The number 1024 appears because the
SHA 512 algorithm processes messages in blocks of 1024 bits. The padding consists of a single
1 bit followed by as many 0 bits as needed, and the number of padding bits is between 1 and
1024. For example, if a 24-bit message is given, it will be padded with a 1 bit followed by
896 − (24 + 1) = 871 zero bits.
3.1b Adding the message-length value: In the following procedure, the message is extended
with 128 bits expressing the length of the original message. If the message length is greater
than 2^128, then the length is taken modulo 2^128. In other words, if initially the message
length equals K bits, the appended 128 bits encode K modulo 2^128; after this second
procedure, the padded message length is a multiple of 1024 bits.
3.1c Initialize hash value: In the SHA 512 algorithm, the hash value H(0) consists of 8 words of
64 bits each, given in hexadecimal notation as in the table below.

Buffer   Initial value
A        6a09e667f3bcc908
B        bb67ae8584caa73b
C        3c6ef372fe94f82b
D        a54ff53a5f1d36f1
E        510e527fade682d1
F        9b05688c2b3e6c1f
G        1f83d9abfb41bd6b
H        5be0cd19137e2179
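The padding of steps 3.1a–3.1b and the fixed 512-bit output can be checked with a short sketch (`sha512_pad` is an illustrative helper written for this example, not part of the paper):

```python
import hashlib

def sha512_pad(message: bytes) -> bytes:
    """Apply the padding of steps 3.1a-3.1b: a 1 bit, then zeros until the
    length is congruent to 896 mod 1024 bits, then the original length as a
    128-bit big-endian integer."""
    bit_len = len(message) * 8
    padded = message + b"\x80"           # the single 1 bit plus seven 0 bits
    while (len(padded) * 8) % 1024 != 896:
        padded += b"\x00"
    return padded + bit_len.to_bytes(16, "big")

msg = b"abc"                              # a 24-bit example message
padded = sha512_pad(msg)
assert len(padded) * 8 == 1024            # pads up to one 1024-bit block
digest = hashlib.sha512(msg).hexdigest()
assert len(digest) * 4 == 512             # message digest is always 512 bits
```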
3.2 Ensemble machine learning algorithm based on sequential AdaBoost
In ensemble techniques, various learning algorithms are used to obtain better performance
than any of the component learning algorithms alone can achieve. Boosting involves creating
a group by training each new model to emphasise the examples misclassified by the previous
models; it occurs from time to time that boosting is superior to bagging, although the training
data can often be over-fitted. By a wide margin, the most well-known boosting method is
AdaBoost. The AdaBoost training process selects only those features that can enhance the
model's predictive power.
a) Training phase: AdaBoost refers to a specific strategy for training a boosted classifier. A
boosted classifier is a classifier of the form

C_I(p) = \sum_{i=1}^{I} c_i(p)    (1)

where each c_i is a weak learner that takes an object p as input and returns a value indicating
the class of the object. For example, in the two-class problem, the sign of the weak learner's
output identifies the predicted object class and its absolute value gives the confidence in that
classification. Similarly, the i-th classifier is positive if the example is in the positive class and
negative otherwise. Each weak learner produces an output hypothesis h(p_m) for each example
in the training set. At every iteration i, a weak learner is chosen and assigned a coefficient e_i
such that the total training error M_i of the resulting i-stage boosted classifier is minimised:

M_i = \sum_{m} M[ C_{i-1}(p_m) + e_i h(p_m) ]    (2)

Here C_{i-1}(p_m) is the boosted classifier that has been built up to the previous stage of
training, M(C) is some error function, and c_i(p) = e_i h(p_m) is the weak learner that is being
considered for addition to the final classifier.
b) Weighting: At every iteration of the training procedure, a weight x_{m,i} is assigned to each
example in the training set, equal to the current error M(C_{i-1}(p_m)) on that example. These
weights can be used in the training of the weak learner; for instance, decision trees can be
grown that favour splitting sets of samples with high weights.
c) Derivation: Suppose we have a data set {(p_1, q_1), ..., (p_B, q_B)} where each item p_m has
an associated class q_m \in {-1, 1}, and a set of weak classifiers {s_1, ..., s_L}, each of which
outputs a classification s_n(p_m) \in {-1, 1} for each item. After the (a-1)-th iteration our
boosted classifier is a linear combination of the weak classifiers, of the form:

F_{a-1}(p_m) = e_1 s_1(p_m) + ... + e_{a-1} s_{a-1}(p_m)    (3)

At the a-th iteration, we want to extend this to a better boosted classifier by adding another
weak classifier s_a with an additional weight e_a:

F_a(p_m) = F_{a-1}(p_m) + e_a s_a(p_m)    (4)

So it remains to determine which weak classifier is the best choice for s_a and what its weight
e_a should be. We define the total error M of F_a as the sum of its exponential loss on each
data point, given as follows:

M = \sum_{m=1}^{B} e^{-q_m F_a(p_m)}    (5)

Letting x_m^{(1)} = 1 and x_m^{(a)} = e^{-q_m F_{a-1}(p_m)} for a > 1, we have:

M = \sum_{m=1}^{B} x_m^{(a)} e^{-q_m e_a s_a(p_m)}    (6)

We can split this summation between those data points that are correctly classified by s_a
(so q_m s_a(p_m) = 1) and those that are misclassified (so q_m s_a(p_m) = -1):

M = \sum_{q_m = s_a(p_m)} x_m^{(a)} e^{-e_a} + \sum_{q_m \neq s_a(p_m)} x_m^{(a)} e^{e_a}    (7)

  = \sum_{m=1}^{B} x_m^{(a)} e^{-e_a} + \sum_{q_m \neq s_a(p_m)} x_m^{(a)} (e^{e_a} - e^{-e_a})    (8)

Since the only part of the right-hand side of this equation that depends on s_a is
\sum_{q_m \neq s_a(p_m)} x_m^{(a)}, we see that the s_a minimising M is the one that minimises
\sum_{q_m \neq s_a(p_m)} x_m^{(a)}, i.e. the weak classifier with the lowest weighted error
(with weights x_m^{(a)} = e^{-q_m F_{a-1}(p_m)}).

To determine the desired weight e_a that minimises M for the s_a just determined, we
differentiate:

dM/de_a = d[ \sum_{q_m = s_a(p_m)} x_m^{(a)} e^{-e_a} + \sum_{q_m \neq s_a(p_m)} x_m^{(a)} e^{e_a} ] / de_a    (9)

Setting this to zero and solving for e_a yields:

e_a = (1/2) ln( \sum_{q_m = s_a(p_m)} x_m^{(a)} / \sum_{q_m \neq s_a(p_m)} x_m^{(a)} )    (10)

Letting the weighted error rate of the weak classifier be
E_a = \sum_{q_m \neq s_a(p_m)} x_m^{(a)} / \sum_{m=1}^{B} x_m^{(a)}, it follows that:

e_a = (1/2) ln( (1 - E_a) / E_a )    (11)

which is half the negative logit of the error rate. Thus we obtain the AdaBoost algorithm: at
every iteration, choose the classifier s_a which minimises the total weighted error
\sum_{q_m \neq s_a(p_m)} x_m^{(a)}; use this to calculate the error rate E_a; use this to
calculate the weight e_a = (1/2) ln((1 - E_a)/E_a); and finally use this to improve the boosted
classifier F_{a-1} to F_a = F_{a-1} + e_a s_a.
Thus the AdaBoost classifier predicts which network has the better QoS and allows the user to connect to that ideal network.
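The boosting loop derived above can be sketched as follows. This is a minimal illustration of the update rules, not the authors' implementation; the candidate weak learners (e.g. decision stumps) are an assumption for the example:

```python
import numpy as np

def adaboost(X, q, weak_learners, rounds):
    """Minimal AdaBoost following the derivation above.
    X: (B, d) data; q: (B,) labels in {-1, +1};
    weak_learners: candidate classifiers, callables h(X) -> (B,) in {-1, +1}.
    Returns a list of (weight e_a, classifier s_a) pairs."""
    B = len(q)
    x = np.ones(B)  # per-point weights x_m^(a); x_m^(1) = 1
    ensemble = []
    for _ in range(rounds):
        # choose s_a minimising the weighted error over misclassified points
        weighted_errs = [(x[np.asarray(s(X)) != q].sum(), j)
                         for j, s in enumerate(weak_learners)]
        werr, j = min(weighted_errs, key=lambda t: t[0])
        s_a = weak_learners[j]
        E_a = werr / x.sum()              # weighted error rate E_a
        if E_a >= 0.5:                    # no learner better than chance: stop
            break
        e_a = 0.5 * np.log((1 - E_a) / max(E_a, 1e-12))  # classifier weight
        ensemble.append((e_a, s_a))
        if E_a == 0:                      # perfect classifier found: stop
            break
        # re-weight: x_m^(a+1) = x_m^(a) * exp(-q_m e_a s_a(p_m))
        x = x * np.exp(-q * e_a * np.asarray(s_a(X)))
    return ensemble

def predict(ensemble, X):
    """F(p) = sum_a e_a s_a(p); the prediction is its sign."""
    return np.sign(sum(e * np.asarray(s(X)) for e, s in ensemble))
```

A small epsilon guards the logarithm when a weak learner classifies every weighted point correctly.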
3.3 MPO
The Particle Swarm Optimization algorithm consists of a group of particles that move around the search space, influenced by their own best previous position and the best previous position of the whole swarm or of a local neighbor. Each iteration, a particle's velocity is updated using:

$$v_i(t+1) = v_i(t) + c_1 \cdot \mathrm{rand}() \cdot \big(pbest_i - p_i(t)\big) + c_2 \cdot \mathrm{rand}() \cdot \big(pgbest - p_i(t)\big) \qquad (12)$$

where $v_i(t+1)$ is the new velocity of the $i$th particle, $c_1$ and $c_2$ are the weighting coefficients for the personal best and global best positions respectively, $p_i(t)$ is the $i$th particle's position at time $t$, $pbest_i$ is the $i$th particle's best-known position, and $pgbest$ is the best position known to the swarm. The $\mathrm{rand}()$ function produces a uniformly distributed random variable in $[0, 1]$. Variations on this update equation consider best positions within a particle's local neighborhood at time $t$. A particle's position is then updated by:

$$p_i(t+1) = p_i(t) + v_i(t+1)$$

The algorithm (below) gives a pseudocode listing of the Particle Swarm Optimization procedure for minimizing a cost function.
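The two update rules can be sketched as follows. This is a minimal global-best PSO; the inertia weight w (a common variant of Eq. (12)), the coefficient values, and the search bounds are illustrative assumptions, not the paper's settings:

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Sketch of global-best PSO following Eq. (12), with an inertia term."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=pbest_cost.__getitem__)
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # velocity update, Eq. (12), damped by inertia weight w
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]               # position update
            c = cost(pos[i])
            if c < pbest_cost[i]:                    # track personal best
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:                   # track global best
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

On a smooth test function such as the sphere function, this sketch converges to near the optimum within the default budget.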
4. Experimental results and discussion
The proposed system for the privacy cheating discourage-
ment and secure computation auditing protocol is imple-
mented in the working platform of MATLAB 2013a with
the following system configuration.
Processor: Intel Core i5
CPU Speed: 3 GHz
RAM: 8 GB
Operating system: Windows 8
4.1 Security Analysis
In this method, the file data are encrypted using a symmetric file key $k_f$, because of the lower efficiency of ABE, and $k_f$ is in turn protected using the SHA algorithm. The confidentiality of data in our method rests on the security of Verifiable Dynamic Access Control using User Revocation.
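The hybrid idea, a symmetric file key for the bulk data plus a SHA-512 digest of that key for verification, can be sketched as follows. This is a didactic stand-in, not the CPASBE/VOMAACS construction: the SHA-512 counter-mode keystream below is a toy cipher used only to keep the example self-contained.

```python
import hashlib
import os

def _keystream(k_f, n):
    """Expand the file key k_f into n keystream bytes with SHA-512 in a
    simple counter construction (toy stand-in for a real cipher)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha512(k_f + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_file(data, k_f=None):
    """Hybrid sketch: the file is encrypted under a random symmetric key k_f,
    and a SHA-512 digest of k_f is published so a verifier can check it."""
    if k_f is None:
        k_f = os.urandom(32)
    ks = _keystream(k_f, len(data))
    ciphertext = bytes(a ^ b for a, b in zip(data, ks))
    return ciphertext, k_f, hashlib.sha512(k_f).hexdigest()

def decrypt_file(ciphertext, k_f):
    """XOR with the same keystream restores the plaintext."""
    ks = _keystream(k_f, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))
```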
Initialization: The adversary X selects the access structure $T^*$, attribute sets $\gamma^{(1)}, \gamma^{(2)}, \ldots, \gamma^{(ver^*-1)}$, and version number $ver^*$, and then submits them to the challenger Y.

Setup: Y first creates version 1 of the public key of X as follows: $e(b,b)^\beta$ is expressed as $e(b,b)^\beta = e(A,B) \cdot e(b,b)^{y'} = e(b,b)^{ab+y'}$, with $y' \in_R Z_p$ and $\beta = ab + y'$. For each $n \in V$, Y selects a random $r_n \in Z_p$. For each attribute set $\gamma^{(k)}$, $1 \le k \le ver^*-1$, a PK for that version is created as follows: for $n \in \gamma^{(k)}$, $1 \le n \le i$, select a random $rk_n^{(k)} \in Z_p$ and set $T_n^{(k+1)} = \big(T_n^{(k)}\big)^{rk_n^{(k)}}$; for $n \notin \gamma^{(k)}$, $rk_n^{(k)} = 1$ and $T_n^{(k+1)} = T_n^{(k)}$.

Stage 1: The adversary X issues a secret key query on the set $S = \{n \mid n \in V \text{ and } n \notin T^*\}$ for version $k$, $1 \le k \le ver^*$. Y first selects a random $r' \in Z_p$; $D$ is expressed as $D = b^{(y'-r')/g}$, and $r = ag + r'$. Note that for any $n \in V$, $T_n^{(k)} = \big(T_n^{(1)}\big)^{rk_n^{(2)} \cdot rk_n^{(3)} \cdots rk_n^{(k)}} = \big(T_n^{(1)}\big)^{\prod_{m=2}^{k} rk_n^{(m)}}$; denote $R_n^{(k)} = \prod_{m=2}^{k} rk_n^{(m)}$. For $n \in S$, $D_n = b^{r \cdot R_n^{(k)}/t_n} = b^{(ag+r') \cdot R_n^{(k)}/(g/r_n)} = A^{r_n \cdot R_n^{(k)}} \cdot b^{r' \cdot r_n \cdot R_n^{(k)}}$, where $t_n = g/r_n$. Y sends the secret key $SK_S = \big(k, D, \forall n \in S : D_n\big)$ to X.

Challenge: X gives two messages $j_0$ and $j_1$. Y flips a random coin $\eta$ and sets $C = b^c$, $\tilde{C} = j_\eta \cdot e(b,b)^{\beta c} = j_\eta \cdot e(b,b)^{abc} \cdot e(b,b)^{cy'}$. According to the technique for producing $q_x(0)$ in encryption, $C_x = b^{q_x(0) \cdot r_n \cdot R_n^{(k)}}$ for each leaf node $n \in T^*$.

Stage 2: Stage 1 is repeated.
In this method, the cloud server acquires a partial set of the secret key elements of the client and the proxy re-encryption keys, so it must be shown that this leakage does not affect the confidentiality of data. In attribute addition, the data owner selects a random symmetric key $k'_f$ and $s' \in Z_p$ for the root node of the access control tree. It then produces the proxy re-encryption keys, i.e., $rk_{s \leftrightarrow s'}$, $rk_{q_x(0) \leftrightarrow q'_x(0)}$, $rk_{n \leftrightarrow n'}$. In the proxy re-encryption procedure, a cloud server cannot decrypt the cipher text, since it cannot recover the secret number $s'$; it acquires only random numbers when obtaining the proxy re-encryption keys. When a client accesses the encrypted file, the cloud servers first update the secret key of the client. Cloud servers cannot acquire all the secret key elements of a user, because SA is not updated at all times; hence, they cannot decrypt the encrypted file data.
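The attribute version update used in the setup above, $T_n^{(k+1)} = (T_n^{(k)})^{rk_n^{(k)}}$, reversible by the authority via the inverse exponent, can be illustrated in a toy prime-order group. This is a didactic sketch with arbitrary numbers, not the paper's pairing-based construction:

```python
# Toy illustration of the versioned attribute key update T^(k+1) = (T^(k))^rk
# in the multiplicative group mod a Mersenne prime (the real scheme works in
# a bilinear pairing group; all concrete values here are arbitrary).
p = 2**127 - 1                   # prime modulus; the group order is p - 1
T1 = pow(3, 123456789, p)        # some version-1 attribute public key T^(1)

rk = 65537                       # version-update exponent, coprime to p - 1
T2 = pow(T1, rk, p)              # version-2 key: T^(2) = (T^(1))^rk

rk_inv = pow(rk, -1, p - 1)      # inverse exponent, modulo the group order
assert pow(T2, rk_inv, p) == T1  # the authority can reverse the update
```

Revoked users holding only version-$k$ key material cannot compute $rk_n^{(k)}$ or its inverse, which is what makes the update act as revocation.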
Figures 4–9 show the computation overhead incurred in the set-up, key generation, encryption and decryption stages under different conditions (figure 1). Figures 2 and 3 show the system-wide set-up time with varying numbers of attribute authorities. The proposed method is also compared with the existing attribute-based encryption scheme [28] on parameters such as set-up time, key generation time, encryption time and decryption time.

Figure 4 shows the total key generation time with varying numbers of authorities, the number of attributes being fixed at 20.
Figures 5 and 6 show the encryption and decryption times with varying numbers of attributes; only one privilege, file access, is set, so as to measure the most frequent operation, file retrieval, with the file size fixed at 100 KB.

Figures 7 and 8 show the encryption and decryption times for various file sizes, with the number of attributes fixed at 20.
4.2 Utilization Rate
The average evolutionary curves of SLPSO, GA and our proposed method are given. The horizontal axis denotes the number of iterations and the vertical axis denotes the output value. It can be observed that in the initial iterations SLPSO and GA converge more quickly than the proposed method, but SLPSO and GA cannot continuously evolve the swarm to find a better solution. The utilization rate of VM or memory at the $i$th time slot for the private cloud is computed as

$$C(i) = \sum_{n=1}^{S} \frac{RT_n\, a_{ni}}{TR}, \quad i \in \{1, 2, \ldots, I\}$$

where $RT_n$ is the number of VMs (or the amount of memory) used in the private cloud and $TR$ is the total number of VMs (or total memory) in the private cloud. If task $t_n$ runs in the $i$th slot, then $a_{ni} = 1$; otherwise $a_{ni} = 0$. The average utilization rate of VM or memory is then computed as

$$Av = \sum_{i=1}^{I} \frac{C(i)}{I}$$

The average virtual machine utilization rate and average memory utilization rate of the proposed method are given in table 1; the proposed strategy has also been contrasted with existing methods, the SLPSO- and GA-based methods, and the comparison chart is shown in figure 9.
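The two formulas above can be computed directly; a minimal sketch, with the task-to-slot assignment given as a 0/1 matrix:

```python
def utilization(RT, a, TR):
    """Per-slot utilization C(i) and its average Av, following the two
    formulas above.
    RT[n]: VMs (or memory) used by task t_n; a[n][i]: 1 if task t_n runs
    in slot i, else 0; TR: total VMs (or memory) in the private cloud."""
    S, I = len(RT), len(a[0])
    # C(i) = sum_n RT_n * a_ni / TR for each slot i
    C = [sum(RT[n] * a[n][i] for n in range(S)) / TR for i in range(I)]
    Av = sum(C) / I          # Av = sum_i C(i) / I
    return C, Av
```

For example, two tasks using 2 and 3 VMs out of 10 total, with the first task running only in slot 1, give per-slot utilizations of 0.5 and 0.3.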
Figure 1. Flow diagram of the proposed system.
Figure 2. Working illustration/creation of message-digest SHA-512.
From figure 9 and table 1, it can be seen that the proposed method achieves a higher resource utilization ratio for the average virtual machine and average memory than the SLPSO algorithm and GA.
4.3 Execution Cost

Figure 10 shows the comparison between the execution cost of the SLPSO algorithm and GA and that of our proposed system. The graph shows that our proposed system executes at lower cost as user demands increase; thus the cost of SLPSO and GA is higher than that of our proposed system.
4.4 Comparison of the proposed system with existing techniques

In this section, the proposed system is compared with existing approaches, namely dual steganography combined with AES, steganography with DES, and steganography with AES.
Figure 3. Set-up Time.
Figure 4. KeyGen time with different authorities’ number.
Figure 5. Encryption time with different attributes number.
Figure 6. Decryption time with different attributes number.
Figure 7. Encryption time with the different file size.
Figure 8. Decryption time with the different file size.
To evaluate the performance of the proposed system, the following parameters are considered: encryption time, encryption memory, mean square error, peak signal-to-noise ratio, and embedding ratio.
4.4a Encryption time: The amount of time required to encrypt data using the selected algorithm is known as the encryption time (table 2). The comparative encryption time of the proposed cryptographic algorithms is given in figure 11; the X-axis shows the file size (in kilobytes) of the images used for the experiments and the Y-axis shows the time consumed for encryption in seconds. In the proposed system, the SHA algorithm is combined with Randomized Ensemble Deep Learning to reduce the number of substitution and permutation steps involved in traditional AES-based encryption. The proposed system therefore takes less time: when the size of an image is 100 KB, it takes 118 sec., whereas the existing approaches, dual steganography combined with AES, steganography with DES and steganography with AES, take 315.48, 287.95 and 125.98 sec., respectively.
4.4b Encryption memory: The amount of main memory required to execute the implemented encryption algorithms is known as the encryption memory. Figure 12 shows the relative performance of the proposed procedures for the space complexity of encryption; the X-axis shows the different experiments conducted by the framework, whereas the Y-axis shows the memory consumed during encryption in kilobytes. To compute the memory consumption, the following formula is used (table 3; figures 12, 13, 14).

The proposed system takes less memory, since it reduces the number of repetitions by adding features derived from signature data to achieve a lightweight encryption process. The results show that when the image size is 100 KB, the proposed system takes 15.633 KB of memory, whereas the existing approaches, dual steganography combined with AES, steganography with DES and steganography with AES, take 31.25895, 21.25987 and 25.12365 KB, respectively.
4.4c MSE: The mean square error is defined as the square of the difference between the pixel values of the original image and the stego image, divided by the size of the image (table 4).
In the proposed system, least significant bit encoding is combined with Randomized Ensemble Deep Learning to embed the encrypted data into the cover image, so this approach embeds the data accurately in less time than the existing techniques: when the size is 100 KB, the proposed system has a lower mean square error of 0.24, whereas the existing approaches, dual steganography combined with AES, steganography with DES and steganography with AES, give 0.551367, 0.497305 and 0.65925, respectively.
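The MSE definition above can be computed directly; a minimal sketch over images given as 2-D lists of pixel values:

```python
def mse(original, stego):
    """Mean square error between two equal-sized images, per the definition
    above: the squared pixel-wise difference divided by the image size."""
    rows, cols = len(original), len(original[0])
    total = sum((original[r][c] - stego[r][c]) ** 2
                for r in range(rows) for c in range(cols))
    return total / (rows * cols)
```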
4.4d PSNR: The PSNR measures the peak signal-to-noise ratio between two images. This ratio is often used as a quality measure between the original and a compressed image; the higher the PSNR, the better the quality of the compressed or reconstructed image (table 5).
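The standard PSNR formula in decibels, $\mathrm{PSNR} = 20 \log_{10}(\mathrm{MAX}/\sqrt{\mathrm{MSE}})$, can be sketched as follows; the peak value of 255 assumes 8-bit pixels:

```python
import math

def psnr(original, stego, max_pixel=255.0):
    """Peak signal-to-noise ratio in dB between two images (2-D lists of
    pixel values); higher PSNR means better fidelity to the original."""
    rows, cols = len(original), len(original[0])
    m = sum((original[r][c] - stego[r][c]) ** 2
            for r in range(rows) for c in range(cols)) / (rows * cols)
    if m == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_pixel / math.sqrt(m))
```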
In the proposed system, Randomized Ensemble SVM reduces the number of repetitions of basic operations such as substitution and permutation involved in traditional AES-based encryption, and the pixel vertex differentiating technique combined with least significant bit encoding for the embedding process increases the quality of the embedded image; the system therefore achieves a high PSNR value when
Table 1. Comparison of VM and memory utilization rates of SLPSO, GA and the proposed method.

Utilization rate                   Proposed method   SLPSO   GA
Average VM utilization rate        0.911             0.892   0.709
Average memory utilization rate    0.822             0.774   0.682
Figure 9. Average VM and memory operation rate for SLPSO, GA and the proposed method.
Figure 10. Comparison between execution cost of SLPSO and GA with the proposed method.
compared with other existing techniques: when the size is 100 KB, the proposed system has a high PSNR value of 96.45, whereas the existing approaches, dual steganography combined with AES, steganography with DES and steganography with AES, give 52.673192, 54.164578 and 53.94774, respectively.
4.5 Comparison of the proposed method and the existing methods on various parameters

In this section, the proposed system is compared with existing approaches: the Attribute Based Access Control Scheme (ABACS), Novel Attribute Based Access Control (NABAC), Multi-Authority Attribute Based Encryption (MAABE) and Blockchain based Multi-authority Access Control (BMAC).

Figure 15 depicts the comparison of computation overhead. The graph reveals that the proposed method attains a lower computation overhead than the previous techniques.
Figure 16 depicts the comparison of delay. The graph reveals that the proposed method attains a smaller delay than the previous techniques: ABACS has 500 ms, the delay gradually decreases for NABAC, attains its maximum for MAABE, and then decreases again.
Table 2. Comparison table for encryption time.

Data size (KB)   Dual steganography + AES (sec.)   Steganography + DES (sec.)   Steganography + AES (sec.)   Proposed (sec.)
10               561.10                            261.94                       114.6                        72
20               466.94                            308.66                       777                          79
40               758.12                            314.81                       116.3                        89
60               364.64                            350.98                       116.25                       95
80               285.20                            230.65                       115.31                       119
100              315.48                            287.95                       125.98                       120
Figure 11. Comparison graph for encryption time (x-axis: file size in KB; y-axis: time in seconds) for dual steganography combined with AES, steganography combined with DES, steganography combined with AES and the proposed method.
Table 3. Comparison table for memory requirement.

Data size (KB)   Dual steganography + AES (KB)   Steganography + DES (KB)   Steganography + AES (KB)   Proposed (KB)
10               39.211914                       19.598633                  39.00098                   13.450
20               55.18457                        24.712891                  38.4541                    13.930
40               72.585938                       27.175781                  45.24316                   14.329
60               25.279297                       18.581055                  37.20215                   14.119
80               22.392578                       20.542969                  29.2666                    15.211
100              31.25895                        21.25987                   25.12365                   15.633
Figure 12. Comparison graph for memory usage in encryption (x-axis: file size in KB; y-axis: memory in KB) for dual steganography combined with AES, steganography combined with DES, steganography combined with AES and the proposed method.
The delay then drops to 0.375 ms for BMAC, and the proposed method has 0.15 ms.
Figure 17 depicts the comparison of throughput. The graph reveals that the proposed method achieves higher throughput than the previous techniques ABACS, NABAC, MAABE and BMAC.
Table 6 shows the comparison of the proposed method with prior techniques on various parameters. The computation overhead of the proposed method is 9 ms, compared with 727.6 ms for ABACS, 500 ms for NABAC, 300 ms for MAABE and 50 ms for BMAC. The delay of the proposed method is 0.15 ms, compared with 500 ms for ABACS.
Figure 13. Comparison graph for mean square error (x-axis: file size in KB; y-axis: error rate) for dual steganography combined with AES, steganography combined with DES, steganography combined with AES and the proposed method.
Figure 14. Comparison graph for PSNR value (x-axis: file size in KB; y-axis: ratio in %) for dual steganography combined with AES, steganography combined with DES, steganography combined with AES and the proposed method.
Table 4. Comparison table for MSE.

Data size (KB)   Dual steganography + AES   Steganography + DES   Steganography + AES   Proposed
10               0.3227109                  0.4995313             0.449846              0.29
20               0.4156016                  0.5329297             0.396492              0.30
40               0.6013047                  0.5052734             0.454565              0.27
60               0.4822031                  0.5118359             0.417235              0.26
80               0.5302891                  0.6456641             0.46752               0.25
100              0.551367                   0.497305              0.65925               0.24
Table 5. Comparison table for PSNR.

Data size (KB)   Dual steganography + AES   Steganography + DES   Steganography + AES   Proposed
10               53.042667                  51.145177             51.60016              97.23
20               57.501167                  56.894584             52.14845              97.12
40               55.092265                  51.095539             54.07282              96.23
60               52.307861                  51.039496             51.927                96.16
80               56.981724                  54.227387             52.478                96.03
100              52.673192                  54.164578             53.94774              96.45
Figure 15. Comparison for computation overhead (x-axis: methods ABACS, NABAC, MAABE, BMAC and proposed; y-axis: time in ms).
NABAC has a delay of 2.9 ms, MAABE 1500 ms and BMAC 0.375 ms. The throughput of the proposed method is 90%, compared with 62% for ABACS, 65% for NABAC, 75% for MAABE and 85% for BMAC. Thus, the proposed method efficiently enhances security.
5. Conclusion
IoT refers to the next era of the information revolution, in which billions of smart devices and sensors are interconnected to enable rapid data and information exchange under real-time constraints. The large volume of collected data motivates pre-processing, after which the pre-processed data are stored in a cloud-based storage system. Since the proposed system uses machine learning algorithms for pre-processing the data, it achieves better results than the existing methods. The data to be stored are then subjected to cryptography as well as learning-based steganography, so the integrity of the data is high compared with other existing methods. The proposed work attains a throughput 15% higher than MAABE; similarly, the delay is reduced by 1499.85 ms and the computation overhead by 291 ms relative to MAABE. Thus, the proposed method proficiently enhanced the data security.
References
[1] Manogaran Gunasekaran, Varatharajan Ramachandran, Lopez Daphne, Kumar Priyan Malarvizhi, Sundarasekar Revathi and Thota Chandu 2018 A new architecture of Internet of Things and big data ecosystem for secured smart healthcare monitoring and alerting system. Future Gener. Comput. Syst. 82: 375–387
[2] Huang Yangjian, Junkai Xu, Bo Yu and Peter Shull B 2016 Validity of FitBit, Jawbone UP, Nike+ and other wearable devices for level and stair walking. Gait & Posture 48: 36–41
[3] Santos Alexandre, Macedo Joaquim, Costa Antonio and Joao Nicolau M 2014 Internet of things and smart objects for M-health monitoring and control. Procedia Technol. 16: 1351–1360
[4] Paradiso Rita, Giannicola Loriga and Nicola Taccini 2005 A wearable health care system based on knitted integrated sensors. IEEE Trans. Inf. Technol. Biomed. 9(3): 337–344
[5] Lorincz Konrad, David Malan J, Thaddeus Fulford-Jones R F, Nawoj Alan, Clavel Antony, Shnayder Victor, Mainland Geoffrey, Welsh Matt and Moulton Steve 2004 Sensor networks for emergency response: challenges and opportunities. IEEE Pervasive Comput. 3(4): 16–23
[6] Ullah Sana, Henry Higgin, Arif Siddiqui M and Kyung Sup Kwak 2008 A study of implanted and wearable body sensor networks. In: KES International Symposium on Agent and Multi-Agent Systems: Technologies and Applications 464-473
[7] Snigdh Itu 2019 Anonymity in body area sensor networks - an insight. In: 2018 IEEE World Symposium on Communication Engineering (WSCE) 1-7
[8] Sankar S and Srinivasan P 2016 Internet of things (IoT): A survey on empowering technologies, research opportunities and applications. Int. J. Pharmacy Technol. 8(4): 26117–26141
[9] Hou Xueshi, Li Yong, Chen Min, Di Wu, Jin Depeng and Chen Sheng 2016 Vehicular fog computing: A viewpoint of vehicles as the infrastructures. IEEE Trans. Vehicular Technol. 65(6): 3860–3873
[10] Bonomi Flavio, Rodolfo Milito, Preethi Natarajan and Jiang Zhu 2014 Fog computing: A platform for internet of things and analytics. In: Big Data and Internet of Things: A Roadmap for Smart Environments 169-186
Figure 16. Comparison for delay (x-axis: methods ABACS, NABAC, MAABE, BMAC and proposed; y-axis: time in ms).
Figure 17. Comparison for throughput (x-axis: methods ABACS, NABAC, MAABE, BMAC and proposed; y-axis: throughput in %).
Table 6. Comparison of the proposed method and previous methods on various parameters.

Methodologies   Computation overhead (ms)   Delay (ms)   Throughput (%)
ABACS           727.6                       500          62
NABAC           500                         2.9          65
MAABE           300                         1500         75
BMAC            50                          0.375        85
Proposed        9                           0.15         90
[11] Zhu Jiang S, Douglas Chan, Mythili Suryanarayana Prabhu, Preethi Natarajan, Hao Hu and Flavio Bonomi 2013 Improving web sites performance using edge servers in fog computing architecture. In: Service Oriented System Engineering (SOSE), 2013 IEEE 7th International Symposium, 320-323
[12] Ghorbani Reza H and Hossein Ahmadzadegan M 2017 Security challenges in internet of things: survey. In: Wireless Sensors (ICWiSe), 2017 IEEE Conference 1-6
[13] Shree Iniya, Narmatha K and Vijesh Joe C 2016 A multi-authority attribute based encryption for personal health record in cloud computing. In: 2016 10th International Conference on Intelligent Systems and Control (ISCO), 1-5
[14] Wang Shulan, Zhou Junwei, Liu Joseph K, Jianping Yu, Chen Jianyong and Xie Weixin 2016 An efficient file hierarchy attribute-based encryption scheme in cloud computing. IEEE Trans. Inf. Forensics Secur. 11(6): 1265–1277
[15] Grandison Tyrone, Michael Maximilien E, Sean Thorpe and Alfredo Alba 2010 Towards a formal definition of a computing cloud. In: 2010 6th World Congress on Services, 191-192
[16] Liu Xuejiao, Yingjie Xia, Shasha Jiang, Fubiao Xia and Yanbo Wang 2013 Hierarchical attribute-based access control with authentication for outsourced data in cloud computing. In: 2013 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications 477-484
[17] Dastjerdi Amir Vahid and Buyya Rajkumar 2016 Fog computing: Helping the Internet of Things realize its potential. Computer 49(8): 112–116
[18] Fugkeaw Somchart and Hiroyuki Sato 2015 An extended CP-ABE based access control model for data outsourced in the cloud. In: Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, 3: 73-78
[19] Wan Zhiguo, Jun Liu E and Robert Deng H 2012 HASBE: a hierarchical attribute-based solution for flexible and scalable access control in cloud computing. IEEE Trans. Inf. Forensics Secur. 7(2): 743–754
[20] Yang Kan, Xiaohua Jia, Kui Ren, Ruitao Xie and Liusheng Huang 2014 Enabling efficient access control with dynamic policy updating for big data in the cloud. In: INFOCOM, 2014 Proceedings IEEE 2013-2021
[21] Gendreau Audrey A and Michael Moorman 2016 Survey of intrusion detection systems towards an end to end secure internet of things. In: 2016 IEEE 4th International Conference on Future Internet of Things and Cloud (FiCloud), 84-90
[22] Can Okan and Ozgur Koray Sahingoz 2015 A survey of intrusion detection systems in wireless sensor networks. In: 2015 6th International Conference on Modeling, Simulation, and Applied Optimization (ICMSAO), 1-6
[23] Yan Qiao, Huang Wenyao, Luo Xupeng, Gong Qingxiang and Richard Yu F 2018 A multi-level DDoS mitigation framework for the industrial internet of things. IEEE Commun. Mag. 56(2): 30–36
[24] Guo Qi, Li Xiaohong, Guangquan Xu and Feng Zhiyong 2017 MP-MID: Multi-Protocol Oriented Middleware-level Intrusion Detection method for wireless sensor networks. Future Gener. Comput. Syst. 70: 42–47
[25] Sandhya G and Anitha Julian 2014 Intrusion detection in wireless sensor network using genetic K-means algorithm. In: 2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies 1791-1794
[26] Qiu Meikang, Gai Keke, Thuraisingham Bhavani, Tao Lixin and Zhao Hui 2018 Proactive user-centric secure data scheme using attribute-based semantic access controls for mobile clouds in financial industry. Future Gener. Comput. Syst. 80: 421–429
[27] Rahmani Amir M, Gia Tuan Nguyen, Negash Behailu, Anzanpour Arman, Azimi Iman, Jiang Mingzhe and Liljeberg Pasi 2018 Exploiting smart e-health gateways at the edge of healthcare internet-of-things: a fog computing approach. Future Gener. Comput. Syst. 78: 641–658
[28] Thirumalai Chandrasegar and Himanshu Kar 2017 Memory Efficient Multi Key (MEMK) generation scheme for secure transportation of sensitive data over Cloud and IoT devices. In: Power and Advanced Computing Technologies (i-PACT), Innovations, 1-6
[29] Li Jin, Zhang Yinghui, Chen Xiaofeng and Xiang Yang 2018 Secure attribute-based data sharing for resource-limited users in cloud computing. Comput. Secur. 72: 1–12
[30] Xiong Hu, Zhang Hao and Sun Jianfei 2018 Attribute-based privacy-preserving data sharing for dynamic groups in cloud computing. IEEE Syst. J. 99: 1–22