Transcript
Page 1: Wireless User Positioning via Synthetic Data Augmentation

Dataset

Cyclical Learning Rate (CLR)

Ensembling

Visit signet.dei.unipd.it and lttm.dei.unipd.it to learn more about our research groups

Results

Wireless User Positioning via Synthetic Data Augmentation and Smart Ensembling

Umberto Michieli, Paolo Testolina, Mattia Lecci, Michele Zorzi
University of Padova – Padova, Italy – {umberto.michieli, paolo.testolina, mattia.lecci, zorzi}@dei.unipd.it

Mattia Lecci’s activity was financially supported by Fondazione Cassa di Risparmio di Padova e Rovigo under the grant “Dottorati di Ricerca 2018”

Autoencoder Pre-training

Models

Several models from the literature were tested:
• U-Net [1];
• VGG [2];
• ResNet50 [3];
• DenseNet [custom];
• Others.
The architectures were adjusted to the different nature of our problem, enhancing their performance.
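
As a hedged illustration of this kind of adaptation (the exact modifications are not detailed on the poster, and the shapes below are toy assumptions), a stock classification network such as torchvision's ResNet50 can be re-headed for 2-D position regression and its input stem changed to accept a two-channel (real/imaginary) channel-matrix representation:

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    # Toy adaptation sketch (not the poster's exact configuration).
    model = resnet50(weights=None)

    # Accept a 2-channel input (e.g., real and imaginary parts of the channel
    # matrix) instead of 3-channel RGB images.
    model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)

    # Replace the 1000-class classifier with a 2-D (x, y) regression head.
    model.fc = nn.Linear(model.fc.in_features, 2)

    criterion = nn.MSELoss()
    H = torch.randn(8, 2, 64, 64)      # batch of channel matrices (assumed shape)
    pos = torch.randn(8, 2)            # ground-truth positions
    loss = criterion(model(H), pos)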

• Cyclical Learning Rate (CLR) can train deep networks faster and better;

• CLR works especially well with convolutional layers;

• Regularization is necessary to avoid strong overfitting;

• CLR makes it possible to find better minima;

• The one-cycle policy performed best for us.
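
As a hedged sketch of these schedules (the framework and the hyper-parameter values are our assumptions, not the poster's actual settings), PyTorch ships both a triangular cyclical schedule and the one-cycle policy:

    import torch

    model = torch.nn.Linear(10, 2)                               # placeholder network
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

    # Triangular policy: the learning rate oscillates between base_lr and max_lr.
    triangular = torch.optim.lr_scheduler.CyclicLR(
        optimizer, base_lr=1e-4, max_lr=1e-2, step_size_up=2000, mode="triangular")

    # One-cycle policy: a single warm-up / annealing cycle over the whole run.
    one_cycle = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=1e-2, epochs=50, steps_per_epoch=200)

    # In the training loop, call scheduler.step() after each optimizer.step(),
    # using only one of the two schedulers.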

To exploit the correlation between the received signal power and the RX-TX distance, the following architecture is proposed:

• A deep network, chosen among the best-performing ones, extracts the information from the raw channel matrix;

• A shallower network conveys the power information to the output layers, giving it a higher relevance in determining the position.

The respective outputs are concatenated and fed to a final dense output layer.
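
A minimal sketch of this two-branch design (layer sizes, framework, and the choice of a single received-power scalar are our assumptions):

    import torch
    import torch.nn as nn

    class PositioningEnsemble(nn.Module):
        """Deep branch on the raw channel matrix + shallow branch on the RX power."""

        def __init__(self, deep_backbone: nn.Module, deep_features: int):
            super().__init__()
            self.deep = deep_backbone                        # e.g., a VGG/ResNet feature extractor
            self.power_mlp = nn.Sequential(                  # shallow branch for the power scalar
                nn.Linear(1, 16), nn.ReLU(),
                nn.Linear(16, 16), nn.ReLU())
            self.head = nn.Linear(deep_features + 16, 2)     # final dense layer -> (x, y)

        def forward(self, channel_matrix, rx_power):
            h_deep = self.deep(channel_matrix)
            h_power = self.power_mlp(rx_power)
            return self.head(torch.cat([h_deep, h_power], dim=1))

    # Toy usage with a trivial backbone producing 128 features.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(2 * 64 * 64, 128), nn.ReLU())
    model = PositioningEnsemble(backbone, deep_features=128)
    positions = model(torch.randn(4, 2, 64, 64), torch.randn(4, 1))   # -> shape (4, 2)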

In the novel framework we propose, an autoencoder first learns a compressed representation of the channel matrix from the synthetic data, in a completely unsupervised manner. Thus, positions (labels) are not needed during the first phase of training.

After this phase, only the encoder part of the network is reused and the acquired knowledge is transferred to the real data domain.

We used an implementation of the 3GPP TR 38.901 indoor channel model to obtain a large number of synthetic, unlabeled channel matrices, making it possible for the autoencoder to understand the underlying correlations within a channel matrix.
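
A minimal sketch of the two-phase framework with toy dense layers (the actual encoder/decoder architectures and the matrix shapes are not given on the poster and are assumed here):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Phase 1: unsupervised autoencoder training on synthetic channel matrices.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(2 * 64 * 64, 256), nn.ReLU(),
                            nn.Linear(256, 64))
    decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                            nn.Linear(256, 2 * 64 * 64))

    synthetic = torch.randn(32, 2, 64, 64)                # unlabeled synthetic matrices (toy data)
    recon = decoder(encoder(synthetic)).view_as(synthetic)
    ae_loss = F.mse_loss(recon, synthetic)                # no position labels needed here

    # Phase 2: reuse only the encoder and fine-tune on the labeled real data.
    regressor = nn.Sequential(encoder, nn.Linear(64, 2))  # 2-D position head on top of the encoder
    real_H = torch.randn(16, 2, 64, 64)
    real_pos = torch.randn(16, 2)                         # measured positions (labels)
    pos_loss = F.mse_loss(regressor(real_H), real_pos)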

• The given data are not normalized;
• Path loss yields important distance information;
• Frequency and spatial correlations have to be taken into account;
• Models from the literature tend to have lots of parameters;
• The low number of given samples might be problematic for deep-learning approaches;
• For this reason we keep a large test set.
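
As a hedged illustration of the first two points (the per-sample normalization and the matrix shape are our assumptions, not the poster's stated preprocessing), each channel matrix can be normalized while its average received power is kept as a separate scalar, so the distance information carried by the path loss is not discarded:

    import numpy as np

    def split_power_and_normalize(H: np.ndarray):
        """H: complex channel matrix. Returns (unit-power matrix, average power in dB)."""
        power = np.mean(np.abs(H) ** 2)                   # path loss carries distance information
        power_db = 10.0 * np.log10(power + 1e-12)
        H_norm = H / np.sqrt(power + 1e-12)               # normalized input for the deep branch
        return H_norm, power_db

    H = (np.random.randn(64, 64) + 1j * np.random.randn(64, 64)) * 1e-3
    H_norm, p_db = split_power_and_normalize(H)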

TOTAL   Train   Validation   Test
17486   11366   2623         3497

Fig.: (a) One-cycle policy; (b) Triangular policy; (c) Vanishing Triangular policy

Autoencoders: although the framework seems to be valid, further work is needed to achieve higher precision.

VGG: the most promising base architecture; future work should investigate why this specific convolutional model outperforms all the others.

Ensemble: the side information on the SNR can enrich the pretrained base network, boosting the performance.

CLR: besides speeding up the training, CLR can help reach better minima.

To fully exploit the information contained in the received power and deliver it to the output layer, a simple multilayer perceptron was used.

[1] Ronneberger O. et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation", MICCAI 2015.
[2] Simonyan K., Zisserman A., "Very Deep Convolutional Networks for Large-Scale Image Recognition", ICLR 2015.
[3] He K. et al., "Deep Residual Learning for Image Recognition", CVPR 2016.

Fig.: Smoothed training curves for VGG-based networks

Fig.: Performance of some of the tested networks

Fig.: Visualization of our best model's errors. Quivers are resized for aesthetic reasons.
