
University of Rochester Center for Visual Science

31st Symposium

Frontiers in Virtual Reality

June 1-3, 2018

Memorial Art Gallery, Rochester, New York

General Information
You may not bring food or drink into the auditorium. Snacks and beverages must be consumed in designated areas only!

Wifi
Public wifi is available. Look for the URGuest network.

Shuttle Information
Although the Memorial Art Gallery is just a 5-8 minute walk from the Strathallan hotel and East Avenue Inn & Suites, we did set up a shuttle schedule in case of inclement weather.

East Avenue Inn & Suites: Individuals staying at this hotel should make shuttle arrangements directly with the hotel front desk staff.

Holiday Inn Rochester Downtown:
7pm departure from the Holiday Inn on Thursday evening to Welcome Reception
9pm pickup from the Strathallan Hotel/Welcome Reception on Thursday evening
8:30am dropoff @ meeting venue on Friday
9pm pickup from the meeting venue on Friday
8:30am dropoff @ the meeting venue on Saturday
6pm & 9pm pickup from the meeting venue on Saturday
9am dropoff @ meeting venue on Sunday
Sunday to airport: You will be responsible for arranging your own transportation

Strathallan Shuttle Info:
8am dropoff @ meeting venue on Friday & Saturday
6pm & 9pm pickup from meeting venue on Friday & Saturday
8:50am dropoff @ meeting venue on Sunday
1pm pickup from meeting venue on Sunday (to airport)

Lunch Options (Friday and Saturday)
Go to http://www.cvs.rochester.edu/symposium/logistics.html and click on the interactive Google map. Many nearby restaurants have been marked on the map with access to directions and reviews.

Oculus Rift Demo Station
Oculus Rift has a demo station set up in the Bausch & Lomb Parlor on Friday, June 1 from 12-4 PM and 5:30-9 PM, and on Saturday, June 2 from 8:20 AM-4:30 PM.

Posters
Posters need to be taken down between the end of the Saturday afternoon break (4pm) and the start of the 6pm cocktail hour.

Program Committee
Duje Tadin, Gabriel Diaz, Ed Lalor


Schedule
31st CVS Symposium, Frontiers in Virtual Reality

June 1-3, 2018 at Memorial Art Gallery, Rochester NY

Thursday, May 31
7:00-9:00pm — Registration & Welcome Reception, Strathallan Hotel

Friday, June 1
*** 12pm–4pm & 5:30pm–9:00pm: Oculus Rift demo station, Bausch & Lomb Parlor
8:20-9:00am — Registration & Breakfast
9:00-9:05am — Welcome: David Williams, University of Rochester
9:05-10:05am — Keynote: Martin Banks, Univ. of California, Berkeley

Accommodation, vergence, and stereoscopic displays
10:05-11:05am — Keynote: Barry Silverstein, Facebook Reality Labs

The Opportunities and Challenges of Creating Artificial Perception Through Augmented and Virtual Reality
11:05am-2:00pm — Lunch (offsite)

Session 1: Multisensory Processing
Chair: Edmund Lalor, University of Rochester
2:00-2:30pm — Roberta Klatzky, Carnegie Mellon University

Haptic rendering of material properties
2:30-3:00pm — Amir Amedi, Hebrew University

Nature vs. Nurture factors in shaping up topographical maps and category selectivity in the human brain: insight from SSD and VR fMRI experiments

3:00-4:00pm — Coffee break & posters


Session 2: Applications
Chair: Krystel Huxlin, University of Rochester
4:00-4:30pm — Matthew Noyes, NASA Johnson Space Center

Hybrid Reality: One Giant Leap For Full Dive
4:30-5:00pm — Benjamin Backus, Vivid Vision Inc. and SUNY College of Optometry

Mobile VR for vision testing and treatment
5:00-5:30pm — Stephen Engel, University of Minnesota

Inducing visual neural plasticity using virtual reality
6:00-9:00pm — Grazing dinner & poster session

Saturday, June 2
*** 8:20am–4:30pm: Oculus Rift demo station, Bausch & Lomb Parlor
8:20-9:00am — Registration & Breakfast

Session 3: AR/VR Displays and Optics
Chair: Michael Murdoch, RIT
9:00-9:30am — Jannick Rolland, University of Rochester

Augmented reality and the freeform revolution
9:30-10:00am — Kaan Akşit, NVIDIA

Computational Near Eye Display Optics
10:00-10:30am — Marina Zannoli, Oculus VR

Perception-centered development of AR/VR displays
10:30-11:00am — Yon Visell, UC Santa Barbara

Haptics at Multiple Scales: Engineering and Science
11:00am-1:15pm — Lunch (offsite)

Session 4: Space and Navigation
Chair: Greg DeAngelis, University of Rochester
1:30-2:00pm — Aman Saleem, University College London

Vision to Navigation: Information processing between the Visual Cortex and Hippocampus
2:00-2:30pm — Arne Ekstrom, UC Davis Center for Neuroscience

How virtual and altered reality can help us to understand the neural basis of human spatial navigation


2:30-3:00pm — Mary Hayhoe, University of Texas Austin
Spatial Memory and Visual Search in Immersive Environments

3:00-4:00pm — Coffee break & posters

Session 5: Perception and Action
Chair: Martina Poletti, University of Rochester
4:00-4:30pm — Sarah Creem-Regehr, University of Utah

Through the Child's Looking Glass: A Comparison of Children and Adult Perception and Action in Virtual Environments

4:30-5:00pm — Gabriel Diaz, Rochester Institute of Technology

Unrestricted Movements of the Eyes and Head When Coordinated by Task
5:00-5:30pm — Jody Culham, Western University

Differences between reality and common proxies raise questions about which aspects of virtual environments matter in cognitive neuroscience

6:00-9:00pm — Banquet

Sunday, June 3
9:00-9:30am — Breakfast

Session 6: Visual Perception
Chair: Gabriel Diaz, Rochester Institute of Technology
9:30-10:00am — Flip Phillips, Skidmore College

Exploring the Uncanny Valley
10:00-10:30am — Wendy Adams, University of Southampton

Material rendering for perception: vision, touch and natural statistics
10:30-11:00am — Coffee break
11:00-11:30am — Laurie Wilcox, York University, Centre for Vision Research

Depth perception in virtual environments
11:30am-12:00pm — Bas Rokers, University of Wisconsin

Processing of sensory signals in VR
12:00-1:00pm — Box Lunch (onsite)
End of meeting

Talk Abstracts

June 1, 2018: Keynotes

31st CVS Symposium: Speaker Abstracts S1

9:05-10:05am

Accommodation, vergence, and stereoscopic displays
Martin Banks, University of California, Berkeley

Improving comfort in augmented- and virtual-reality displays (AR and VR) is a significant challenge. One known source of discomfort is the vergence-accommodation conflict. In AR and VR, the eyes accommodate to a fixed screen distance while converging to the simulated distance of the object of interest. This requires undoing the natural coupling between the two responses and thereby leads to discomfort. We investigated whether various display methods (depth-of-field rendering, focus-adjustable lenses, and monovision) can alleviate the vergence-accommodation conflict. We measured accommodation in a VR setup using those methods. The focus-adjustable-lens method drives accommodation effectively (thereby resolving the vergence-accommodation conflict); the other methods do not. We also show that the ability to drive accommodation correlates significantly with viewer comfort.

In computer graphics, the primary goal for realistic rendering has been to create images that are devoid of optical aberrations. But for displays that are meant to give human viewers realistic experiences (i.e., AR and VR), this goal should change. One should instead produce display images that, when viewed by a normal eye, produce the retinal images that are normally experienced. Creating such images reduces to a deconvolution problem that we have solved accurately for most cases. I will describe results that show that creating blur properly drives the human focusing response while creating blur in conventional fashion does not.
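
[Editor's note: the "deconvolution problem" above can be illustrated with a generic Wiener pre-filter: solve for the display image whose blur by an assumed eye point-spread function approximates a target retinal image. This sketch is not the speaker's actual solver; the psf, the snr regularizer, and the circular-convolution assumption are illustrative simplifications.]

import numpy as np

def wiener_predeconvolve(target, psf, snr=100.0):
    """Solve (display convolved with psf) ~= target for the display image.

    target: 2D array, the retinal image we want the eye to receive.
    psf:    2D array, assumed point-spread function of the viewer's eye.
    snr:    scalar regularizer; larger values trust the PSF inverse more.
    """
    H = np.fft.fft2(psf, s=target.shape)   # optical transfer function
    T = np.fft.fft2(target)
    # Wiener filter: damps frequencies the PSF suppresses instead of
    # dividing by near-zero values of H.
    D = T * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(D))

# Hypothetical usage: display = wiener_predeconvolve(retinal_target, eye_psf)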

June 1, 2018: Keynotes

31st CVS Symposium: Speaker Abstracts S2

10:05-11:05am

The Opportunities and Challenges of Creating Artificial Perception Through Augmented and Virtual Reality
Barry Silverstein, Facebook Reality Labs

We live in interesting times. Physics, chemistry, and biology are beginning to mesh with computer and social sciences. Advancing technologies driven by this merging of sciences are enabling new integrated solutions in many spaces. Today researchers are beginning to create and deliver realistic artificial human inputs. These inputs of sight, sound, motion, and touch are being woven together into virtual and augmented reality systems that can start to emulate convincing human perception. While movies have always tried to do this for groups with controlled storytelling, AR and VR attempt to take this further. Simultaneously simulating multifaceted content in reaction to an individual's actions, thoughts, needs, and wants is the next step in entertainment, information exchange, social interaction, and more. We are not there yet, but by conquering further technical challenges we can expect a revolutionary change to the human-computer interface, and along with it significant opportunities to enhance our lives.

June 2, 2018: AR/VR Displays and Optics

31st CVS Symposium: Speaker Abstracts S3

10:30-11:00am

Haptics at Multiple Scales: Engineering and Science
Yon Visell, UC Santa Barbara

A longstanding goal in engineering has been to design technologies that are able to reflect the amazing perceptual and motor capabilities of biological systems for touch, including the human hand. This turns out to be very challenging. One reason for this is that, fundamentally, our understanding of what is felt when we touch objects in the world, which is to say haptic stimuli, is fairly limited. This is due in part to the mechanical complexity of touch interactions, the multiple length scales and physical regimes involved, and the sensitive dependence of what we feel on how we touch and explore. I will describe research in my lab on a few related problems, and will explain how the results are informing the development of new technologies for wearable computing, virtual reality, and robotics.

Y. Shao, V. Hayward, Y. Visell, Spatial Patterns of Cutaneous Vibration During Whole-Hand Haptic Interactions. Proceedings of the National Academy of Sciences, 113(15), 2016.
M. Janko, M. Wiertlewski, Y. Visell, Contact geometry and mechanics predict friction forces during tactile surface exploration. Nature Scientific Reports, 8(4868), 2018.
Y. Shao, Y. Visell, Learning Constituent Parts of Touch Stimuli from Whole Hand Vibrations. Proc. IEEE Haptics Symposium, 2016.

Acknowledgements: NSF CNS-1446752, NSF CISE-1527709, NSF CISE-1751348

June 1, 2018: Multisensory Processing

31st CVS Symposium: Speaker Abstracts S4

2:00-2:30pm

Haptic rendering of material properties
Roberta Klatzky, Carnegie Mellon University

Humans haptically perceive the material properties of objects, such as roughness and compliance, through signals from sensory receptors in skin, muscles, tendons, and joints. Approaches to haptic rendering of material properties operate by stimulating, or attempting to stimulate, some or all of these receptor populations. My talk will describe research on haptic perception of roughness and softness in real objects and surfaces and by rendering with a variety of devices.

Acknowledgements: NSF grant CHS 1518630.

June 1, 2018: Multisensory Processing

31st CVS Symposium: Speaker Abstracts S5

2:30-3:00pm

Nature vs. Nurture factors in shaping up topographical maps and category selectivity in the human brain: insight from SSD and VR fMRI experiments
Amir Amedi, Hebrew University

"The best technologies make the invisible visible." - Beau Lotto. My lab studies the principles driving specializations in the human brain and their dependence on specific experiences during development (i.e., critical/sensitive periods) versus learning in the adult brain. Our www.BrainVisionRehab ERC project focuses on studying Nature vs. Nurture factors in shaping up category selectivity in the human brain. A key part of the project involves the use of algorithms which convert visual input into music and sound for blind users. From a basic science perspective, the most intriguing results came from studying blind individuals without any visual experience. We documented that essentially most if not all higher-order 'visual' cortices can maintain their anatomically consistent category-selectivity (e.g., for body shapes, letters, numbers and even faces; e.g. Amedi et al TICS 2017) even if the input is provided by an atypical sensory modality learned in adulthood. Our work strongly encourages a paradigm shift in the conceptualization of our sensory brain by suggesting that visual experience during critical periods is not necessary to develop anatomically consistent specializations in higher-order 'visual' or 'auditory' regions. This also has implications for rehabilitation by suggesting that converging multisensory training is more effective. In the second part of the lecture I will focus on the dorsal visual stream and on navigation in virtual environments. Humans rely on vision as their main sensory channel for spatial tasks and accordingly recruit visual regions during navigation. However, it is unclear if these regions' role is mainly as an input channel, or if they also play a modality-independent role in spatial processing. Sighted, blind, and sighted-blindfolded subjects navigated virtual environments while undergoing fMRI scanning before and after training with an auditory navigation interface. We found that retinotopic regions, including both dorsal stream regions (e.g. V6) and primary regions (e.g. peripheral V1), were recruited for non-visual navigation after training, again demonstrating a modality-independent, task-based role even in retinotopic regions. In the last part I will also discuss initial results from our new ERC ExperieSense project. In this project we focus on transmitting invisible topographical information to individuals with sensory deprivation, but also augmented topographical information to the normally sighted, testing whether novel topographical representations can emerge in the adult brain in response to input that was never experienced during development (or evolution).

June 1, 2018: Applications

31st CVS Symposium: Speaker Abstracts S6

4:00-4:30pm

Hybrid Reality: One Giant Leap For Full Dive
Matthew Noyes, Francisco Delgado, NASA Johnson Space Center

Virtual reality is ideal for generating photorealistic imagery and binaural audio at low cost, important for context-dependent memory recall in a training program. Physical reality is ideal for tactile interaction, a vital component for developing muscle memory. By combining elements of virtual and physical reality (called "Hybrid Reality"), for example by 3D printing objects of interest with accurate topology, tracking those objects in 3D space, and overlaying photorealistic virtual imagery in a VR headset, it becomes much easier to create immersive simulations with minimal cost and schedule impact, with applications in training, prototype design evaluation, scientific visualization, and human performance study. This talk will showcase projects leveraging Hybrid Reality concepts, including demonstrations of future astronaut training capability, digital lunar terrain field analogs, a space habitat evaluation tool, and a sensorimotor countermeasure against the effects of gravitational transition.

June 1, 2018: Applications

31st CVS Symposium: Speaker Abstracts S7

4:30-5:00pm

Mobile VR for vision testing and treatment
Benjamin T. Backus, PhD, Chief Science Officer, Vivid Vision, Inc.

Consumer-level HMDs are adequate for many medical applications. Vivid Vision (VV) takes advantage of their low cost, light weight, and large VR gaming code base to make vision tests and treatments. The company's software is built using the Unity engine, allowing it to run on many hardware platforms. New headsets are available every six months or less, which creates interesting challenges within the medical device space. VV's flagship product is the commercially available Vivid Vision System, used by more than 120 clinics to test and treat binocular dysfunctions such as convergence difficulties, amblyopia, strabismus, and stereoblindness. VV has recently developed a new, VR-based visual field analyzer.

June 1, 2018: Applications

31st CVS Symposium: Speaker Abstracts S8

5:00-5:30pm

Inducing visual neural plasticity using virtual reality
Stephen A. Engel, University of Minnesota

In the visual system, neural function changes dramatically as people adapt to changes in their visual world. Most past work, however, has altered visual input only over the short term, typically a few minutes. Our lab uses virtual reality displays to allow subjects to live in, for hours and days at a time, visual worlds manipulated in ways that target known neural populations. One experiment, for example, removed vertical energy from the visual environment, effectively depriving orientation-tuned neurons of input. Results suggest that visual adaptation is surprisingly sophisticated: it has a memory that allows us to readapt more quickly to familiar environments, it acts simultaneously on multiple timescales, and it is sensitive to not only the benefits of plasticity, but also its potential costs. Current research is applying these lessons to studies of amblyopia and macular degeneration.

Acknowledgements: Supported by NSF grant BCS 1558308
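
[Editor's note: a rough sketch of the kind of stimulus manipulation described above ("removing vertical energy"): attenuate a band of orientations in the Fourier domain of each frame. This is a generic orientation filter for illustration, assuming grayscale frames; it is not the lab's actual real-time VR pipeline.]

import numpy as np

def remove_orientation(frame, center_deg=0.0, bandwidth_deg=20.0):
    """Attenuate a band of orientations in the Fourier domain.

    Vertical edges in the image carry energy along the horizontal
    frequency axis (theta = 0 in this parameterization), so the
    default suppresses 'vertical energy' in the sense used above.
    """
    h, w = frame.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    theta = np.degrees(np.arctan2(fy, fx))        # angle of each frequency
    # Fold to orientation distance in [0, 90]; the mask stays symmetric
    # under frequency negation, so the filtered image remains real.
    d = np.abs(((theta - center_deg + 90.0) % 180.0) - 90.0)
    mask = np.where(d < bandwidth_deg, 0.0, 1.0)
    mask[0, 0] = 1.0                              # preserve mean luminance
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * mask))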

June 2, 2018: AR/VR Displays and Optics

31st CVS Symposium: Speaker Abstracts S9

9:00-9:30am

Augmented reality and the freeform revolution
Jannick Rolland, University of Rochester

The ultimate augmented reality (AR) display can be conceived as a transparent interface between the user and the environment: a personal and mobile window that fully integrates real and virtual information such that the virtual world is spatially superimposed on the real world. An AR display tailors light by optical means to present a user with visual information superimposed on spaces, buildings, objects, and people. These displays are powerful and promising because the augmentation of the real world by visual information can take on so many forms. In this talk, we will provide a short historical highlight of early work in optics for AR and engage the audience on the emerging technology of freeform optics that is poised to permeate various approaches to future display technology.

June 2, 2018: AR/VR Displays and Optics

31st CVS Symposium: Speaker Abstracts S10

9:30-10:00am

Computational Near Eye Display Optics
Kaan Akşit, NVIDIA

Almost 50 years ago, with the goal of registering dynamic synthetic imagery onto the real world, Ivan Sutherland envisioned a fundamental idea to combine digital displays with conventional optical components in a wearable fashion. Since then, various new advancements in the display engineering domain, and a broader understanding in the vision science domain, have led us to computational displays for virtual reality and augmented reality applications. Today, such displays promise a more realistic and comfortable experience through techniques such as additive light field displays, holographic displays, always-in-focus displays, discrete multiplane displays, and varifocal displays.

Kaan Akşit, Ward Lopes, Jonghyun Kim, Peter Shirley, and David Luebke. 2017. Near-eye varifocal augmented reality display using see-through screens. ACM Trans. Graph. 36, 6, Article 189 (November 2017), 13 pages.
David Dunn, Cary Tippets, Kent Torell, Petr Kellnhofer, Kaan Akşit, Piotr Didyk, Karol Myszkowski, David Luebke, and Henry Fuchs. "Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors." IEEE Transactions on Visualization and Computer Graphics 23, no. 4 (2017): 1322-1331.
K. Akşit, J. Kautz, and D. Luebke, "Slim near-eye display using pinhole aperture arrays," Applied Optics, vol. 54, no. 11, pp. 3422-3427, 2015.

Acknowledgements: David Luebke

June 2, 2018: AR/VR Displays and Optics

31st CVS Symposium: Speaker Abstracts S11

10:00-10:30am

Perception-centered development of AR/VR displays
Marina Zannoli, Oculus VR

Mixed reality technologies have transformed the way content creators build experiences for their users. Pictures and movies are created from the point of view of the artist, and the viewer is a passive observer. In contrast, creating compelling experiences in AR/VR requires us to better understand what it means to be an active observer in a complex environment. In this talk, I will present a theoretical framework that describes how AR/VR technologies interface with our sensorimotor system. I will then focus on how, at Oculus Research, we develop new immersive display technologies that support accommodation. I will present a few of our research prototypes and describe how we leverage them to help define requirements for future AR/VR displays.

June 2, 2018: Space and Navigation

31st CVS Symposium: Speaker Abstracts S12

1:30-2:00pm

Vision to Navigation: Information processing between the Visual Cortex and Hippocampus
Aman Saleem, University College London

We constantly move from one point to another, navigating the world: in a room, a building, or around a city. While navigating, we look around to understand the environment and our position within it. We use vision naturally and effortlessly to navigate in the world. How does the brain use visual images observed by the eyes for natural functions such as navigation? Research into this area has mostly focused on the two ends of this spectrum: either understanding how visual images are processed, or how navigation-related parameters are represented by the brain. However, little is known regarding how visual and navigational areas work together or interact. The focus of my research is to bridge the gap between these two fields of research using a combination of rodent virtual reality, electrophysiology, and optogenetic technologies. One of the first steps towards this question is to understand how the visual system functions during navigation. I will describe work on neural coding and brain oscillations in the primary visual cortex during locomotion: we discovered that running speed is represented in the primary visual cortex, and how it is integrated with visual information. I will next describe work on how the visual cortex and hippocampus work in cohesion during goal-directed navigation, based on simultaneous recordings from the two areas. We find that both these areas make correlated errors and display neural correlates of behaviour. I will finally show some preliminary work on information processing in areas intermediate to the primary visual cortex and the hippocampus.

June 2, 2018: Space and Navigation

31st CVS Symposium: Speaker Abstracts S13

2:00-2:30pm

How virtual and altered reality can help us to understand the neural basis of human spatial navigation
Arne Ekstrom, UC Davis Center for Neuroscience

Devices like head-mounted displays and omnidirectional treadmills offer enormous potential for gaming and networking-related applications. However, their use in experimental psychology and cognitive neuroscience, so far, has been relatively limited. One of the clearest applications of such novel devices is the study of human spatial navigation, historically an understudied area compared to more experimentally-constrainable studies in rodents. Here, we present several experiments the lab has conducted using VR/AR, and describe the novel insights they provide into how we navigate. We also discuss how such devices, when combined with functional magnetic resonance imaging (fMRI) and wireless scalp EEG, provide new insights into the neural basis of human spatial navigation.

June 2, 2018: Space and Navigation

31st CVS Symposium: Speaker Abstracts S14

2:30-3:00pm

Spatial Memory and Visual Search in Immersive Environments
Mary M. Hayhoe and Chia-Ling Li, University of Texas Austin

Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrated that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Therefore the costs of moving the body may need to be considered as a factor in the search process.

Acknowledgements: Supported by NIH EY05729

June 2, 2018: Perception and Action

31st CVS Symposium: Speaker Abstracts S15

4:00-4:30pm

Through the Child's Looking Glass: A Comparison of Children and Adult Perception and Action in Virtual Environments
Sarah Creem-Regehr & Jeanine Stefanucci, University of Utah; Bobby Bodenheimer, Vanderbilt

The utility of immersive virtual environments (VEs) for many applications increases when viewers perceive the scale of the environment as similar to the real world. Systematic study of human performance in VEs, especially in studies of perceived action capabilities and perceptual-motor adaptation, has increased our understanding of how adults perceive and act in VEs. Research with children has just begun, thanks to new commodity-level head-mounted displays suitable for children with smaller heads and bodies. Children's perception and action in VEs is particularly important to study, not only because children will be active consumers of VEs but also because children's rapidly changing bodies likely influence how they perceive and adapt their actions. I will present an overview of our approach to studying children and teens in a variety of tasks involving perceived affordances and recalibration in VEs, showing both similarities and differences across age groups.

June 2, 2018: Perception and Action

31st CVS Symposium: Speaker Abstracts S16

4:30-5:00pm

Unrestricted Movements of the Eyes and Head When Coordinated by Task
Gabriel Diaz, Rochester Institute of Technology

It is known that the head and eyes function synergistically to collect task-relevant visual information needed to guide action. Although advances in mobile eye tracking and wearable sensors have now made it possible to collect data about eye and head pose while subjects explore the three-dimensional environment, algorithms for data interpretation remain relatively underdeveloped. For example, almost all gaze event classifiers algorithmically define fixation as a period when the eye-in-head velocity signal is stable. However, when the head can move, fixations also arise from coordinated movements of the eyes and head, for example, through the vestibulo-ocular reflex. Thus to identify fixations when the head is free requires that one accounts for head rotation. Our approach was to instrument multiple subjects with a hat-mounted 2D RGB stereo camera, a 6-axis inertial measurement unit, and a 200 Hz Pupil Labs eye tracker to record angular velocity of the eyes and head as they performed a variety of tasks that involve coordinated movements of the eyes and head. These tasks include walking through a corridor, making tea, catching a ball, and performing a simple visual search task. Four trained labelers manually annotated a portion of the dataset as periods of gaze fixations (GF), gaze pursuits (GP), and gaze shifts (GS). In this presentation, I will report some of our initial findings from our efforts to understand the principles of coordination between the eyes and head outside of the laboratory. In addition, I will report current progress towards training a Forward-Backward Recurrent Window (FBRW) classifier for the automated classification of gaze events hidden within the eye+head velocity signals.

June 2, 2018: Perception and Action

31st CVS Symposium: Speaker Abstracts S17

5:00-5:30pm

Differences between reality and common proxies raise questions about which aspects of virtual environments matter in cognitive neuroscience
Jody C. Culham, Western University

Psychologists and neuroimagers commonly study perceptual and cognitive processes using images because of the convenience and ease of experimental control they provide. However, real objects differ from pictures in many ways, including the availability and consistency of depth cues and the potential for interaction. Across a series of neuroimaging and behavioral experiments, we have shown different responses to real objects than pictures, in terms of the level and pattern of brain activation as well as visual preferences. Now that these results have shown quantitative and qualitative differences in the processing of real objects and images, the next step is to determine which aspects of real objects drive these differences. Virtual and augmented reality environments provide an interesting approach to determine which aspects matter; moreover, knowing which aspects matter can inform the development of such environments.

Acknowledgements: This research was funded by the Natural Sciences and Engineering Research Council of Canada and the Canadian Institutes of Health Research. Research projects were led by Jacqueline Snow, with technical support from Kevin Stubbs and Derek Quinlan.

June 3, 2018: Visual Perception

Speaker Abstracts S18

9:30-10:00am

Exploring the Uncanny Valley
Flip Phillips, Skidmore College

As robots become more human-like, our appreciation of them increases, up to a crucial point where we find them realistic but not *perfectly* so. At this point, human preference plummets into the so-called *uncanny valley*. This phenomenon isn't limited to robotics and has been observed in many other areas. These include the fine arts, especially photorealistic painting, sculpture, computer graphics, and animation. The informal heuristic practices of the fine arts, *especially* those of traditional animation, have much to offer to our understanding of the appearance of phenomenological reality. One interesting example is the use of *exaggeration* to mitigate uncanny valley phenomena in animation. Raw rotoscoped imagery (e.g., action captured from live performance) is frequently exaggerated to give the motion 'more life' so as to appear less uncanny. We performed a series of experiments to test the effects of exaggeration on the phenomenological perception of simple animated objects: bouncing balls. A physically plausible model of a bouncing ball was augmented with a frequently used form of exaggeration known as *squash and stretch*. Subjects were shown a series of animated balls, depicted using systematic parameterizations of the model, and asked to rate their plausibility. A range of rendering styles provided varying levels of information as to the type of ball. In all cases, balls with no exaggeration (i.e., rendered veridically) were seen as significantly less plausible than those with it. Furthermore, when the type of ball was not specified, subjects tolerated a large amount of exaggeration before judging them as implausible. When the type of ball was indicated, subjects narrowed the range of acceptable exaggeration somewhat but still tolerated exaggeration well beyond that which would be physically possible. We contend that, in this case, exaggeration acts to bridge the uncanny valley for artificial depictions of physical reality.

June 3, 2018: Visual Perception

Speaker Abstracts S19

10:00-10:30am

Material rendering for perception: vision, touch and natural statistics
Wendy J. Adams, University of Southampton

Recovering shape or reflectance from an object's image is under-constrained: effects of shape, reflectance, and illumination are confounded in the image. We overcome this ambiguity by (i) exploiting prior knowledge about the statistical regularities of our environment (e.g., light tends to come from above) and (ii) combining sensory cues both within vision and across modalities. I will discuss a collection of studies that reveal the assumptions that we hold about natural illumination. When visual scenes are rendered in a way that violates these assumptions, our perception becomes distorted. For example, failing to preserve the high dynamic range of illumination reduces perceived gloss. In addition, I discuss two quite different ways in which touch cues interact with vision to modulate material perception. First, objects can 'feel' shiny; surfaces that are more slippery to the touch are perceived as more glossy. Second, touch disambiguates the perceived shape of a bistable shaded image. The haptically-induced change in shape is accompanied by a switch in material perception: a matte surface becomes glossy.

June 3, 2018: Visual Perception

Speaker Abstracts S20

11:00-11:30am

Depth perception in virtual environments
Laurie Wilcox, York University, Centre for Vision Research

Over the past 25 years, studies of stereoscopic depth perception have largely been dominated by measures of its precision. However, it is arguable that suprathreshold properties of stereopsis are just as relevant, if not more so, to natural tasks such as navigation and grasping. In this presentation, I will review several studies in which we have assessed depth magnitude percepts from stereopsis. I will highlight factors that impact perceived depth in 3D display systems, such as prior experience and the richness of additional depth cues.

June 3, 2018: Visual Perception

Speaker Abstracts S21

11:30am-12:00pm

Processing of Sensory Signals in Virtual Reality
Jacqueline M. Fulvio, Bas Rokers, University of Wisconsin-Madison

Virtual reality (VR) displays can be used to present visual stimuli in naturalistic 3D environments. Little is known, however, about our sensitivity to sensory cues in such environments. Traditional vision research has relied on head-fixed observers viewing stimuli on flat 2D displays. Under such conditions many sensory cues are either in conflict or entirely lacking. For example, an optic flow field will contain conflicting binocular cues and lack motion parallax cues. We therefore investigated sensory sensitivity to cues that signal 3D motion in VR. We found considerable variability in cue sensitivity both within and between observers. Next we investigated the possible relationship between cue sensitivity and motion sickness. Prior work has hypothesized that motion sickness stems from factors related to self-motion and that there are inherent gender differences in VR tolerance (e.g., Riccio and Stoffregen, 1991). We hypothesized that the discomfort is caused by sensory cue conflicts, which implies that a person's susceptibility to motion sickness can be predicted based on their cue sensitivity. We found that greater cue sensitivity predicted motion sickness, supporting the cue conflict hypothesis. Inconsistent with prior work, we did not find gender differences: females did not show evidence of greater motion sickness. We speculate that prior VR results may be related to the use of a fixed interpupillary distance (IPD) for all observers. Our results indicate much greater variability in sensory sensitivity to 3D motion in VR than might be expected based on prior research on 2D motion. Moreover, our findings suggest motion sickness can be attenuated by eliminating or reducing specific sensory cues in VR.

Acknowledgements: Supported by Google Daydream

Poster Abstracts
P1: Rainier Barrett — Social and tactile augmented reality system for chemical engineering lab activities
P2: Kamran Binaee — LSTM-RNN models can capture individual subject visual-motor strategies
P3: Jonathan K. Doyon — Surface texture discontinuities, surface luminance, and exploration patterns affect the perception of object reachability in virtual reality
P4: David Engel — Impact of oscillatory optic flow in virtual reality on quiet stance
P5: Janis Intoy — The impact of retinal image motion on extra-foveal sensitivity
P6: Nicholas Kochan — Design of a spatially multiplexed light field display on curved surfaces for VR HMD applications
P7: Rakshit Kothari — Head free gaze classification and statistics
P8: Sunwoo Kwon — Pre-saccadic motion integration drives "pursuit" for saccades to motion apertures
P9: Paul Linton — How do we see distance in VR?
P10: Juliette McGregor — Optogenetic vision restoration in the living macaque
P11: Elizabeth Saionz — Relative efficacy of global motion versus contrast training early after stroke for recovering contrast sensitivity in cortical blindness
P12: Sarah Walters — Adaptive optics ophthalmoscopy of macaque photoreceptors reveals the slowing of two-photon autofluorescence kinetics during systemic hypoxia
P13: Séamas Weech — Presence and cybersickness in virtual reality are modulated by gaming experience and narrative
P14: Zhizhuo Yang — Enhancing password recollection performance using augmented reality with the method of loci
P15: Mingjian Zhang — Optimizing sEMG control of wrist movement in the Microsoft HoloLens HEART* project

Poster Abstracts

Poster Abstracts P1

Social and Tactile Augmented Reality System for Chemical Engineering Lab Activities
Rainier Barrett, Heta Gandhi, Andrew White

Augmented reality (AR) has the potential to be a versatile tool for education enrichment. We have constructed an AR table to supplement traditional chemical engineering laboratory education. This AR table uses computer vision software to simulate a chemical reactor plant, building student intuition in a hands-on way that cannot be replicated with a "real-life" lab experience.

Barrett R, Gandhi H, Naganathan A, Daniels D, Zhang Y, Onwunaka C, Luehmann A, White AD (2018). Social and Tactile Augmented Reality in an Undergraduate Chemical Engineering Laboratory. Submitted.

Acknowledgements: Dr. Marc Porosoff and Dr. Wyatt Tenhaeff, Bob Marcotte, Jim Alkins, Hilary Mogul, Ziyue Yang


Poster Abstracts P2

LSTM-RNN models can capture individual subject visual-motor strategies
Kamran Binaee, Rakshit S. Kothari, Jeff B. Pelz, Gabriel J. Diaz

Studies have demonstrated that visually guided action can be modeled as an online coupling between sources of visual information and action. However, due to occlusion (unreliable visual information) and also motor delay, online control fails to explain the behavior. Evidence shows humans switch to some type of predictive mode of control to succeed. In this study, we propose a long short-term memory recurrent neural network (LSTM-RNN) model for the visual-motor processing without an internal representation of the physics of the world. The model's predictive characteristics are the result of a learned mapping between recently sensed visual state in a head-centered frame of reference and future action. Each LSTM-RNN model was trained to reproduce data from ten subjects in a virtual reality setup while they engaged in a ball catching task. The models successfully predict gaze behavior within 3° and hand movements within 8.5 cm as far as 500 ms into the future. To investigate the contribution of each input feature, we performed an ablation study on all models where input features were removed iteratively and model outputs were recorded. The results show not only that the models learned a mapping between visual information about the ball and motor output, but also that the longer the integration duration, the more robust the model is to loss of input features. Furthermore, the same network architecture was trained on expert and novice groups of subjects separately. Comparing the ablation study results between these two groups shows that the models trained on the successful population demonstrate more robustness in the face of sensory input perturbations. This suggests that the proposed LSTM-RNN model captures the different visual-motor strategies employed by different groups of subjects.

1) Zhao, H., & Warren, W. H. (2014, October). On-line and model-based approaches to the visual control of action. Vision Research, 110, 190-202.
2) Binaee, K., Diaz, G., Pelz, J., & Phillips, F. (2016). Binocular Eye tracking Calibration During a Virtual Ball Catching task using Head Mounted Display. In Proceedings of the ACM Symposium on Applied Perception (pp. 15-18).
3) Diaz, G., Cooper, J., & Hayhoe, M. (2013). Memory and prediction in natural gaze control. Philosophical Transactions of the Royal Society B: Biological Sciences, 368(1628).

Acknowledgements: Thanks to Dr. Messinger for his support
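
[Editor's note: for readers unfamiliar with the architecture class, here is a minimal PyTorch sketch of an LSTM mapping a window of recent visual-state features to a future motor state. The feature count, output layout, and single fixed lead time are illustrative assumptions, not the authors' published model.]

import torch
import torch.nn as nn

class VisualMotorLSTM(nn.Module):
    """Map a sequence of visual-state features to a future motor state.

    Input:  (batch, T, n_features), e.g. ball position/velocity in a
            head-centered frame, sampled over the recent past.
    Output: (batch, 5), e.g. 2D gaze direction + 3D hand position at a
            fixed lead time (the abstract probes up to 500 ms ahead).
    """
    def __init__(self, n_features=6, hidden=64, n_outputs=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, T, hidden)
        return self.head(out[:, -1])   # predict from the last time step

# Hypothetical training step on tensors `feats` and `future`:
# model = VisualMotorLSTM()
# loss = nn.functional.mse_loss(model(feats), future)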


Poster Abstracts P3

Surface texture discontinuities, surface luminance, and exploration patterns affect the perception of object reachability in virtual reality
Jonathan K. Doyon, Joseph D. Clark, Tyler Surber, Alen Hajnal

In 4 experiments, we used VR (Oculus Rift) to investigate the role of 2 surface texture variables in the perception of object reachability. Reachability judgments were given for objects placed on a table at distances of 60-140% of arm length. The tables had textures with both discontinuities and varying luminance (Exp 1), varying discontinuities (Exp 2), varying luminance (Exp 3), or both varying discontinuities and luminance (Exp 4). Head motion was quantified by recording the visual feed in the headset and differencing the videos. Magnitude and complexity of movement were extracted from these data using a multifractal detrended fluctuation analysis. These parameters, and discontinuity and luminance, were then used to predict participant judgments and response times using hierarchical mixed effects regression models. In all experiments, movement parameters were found to modulate the effects of luminance and discontinuity. Distance perception and exploration patterns will be discussed.


Poster Abstracts P4

Impact of Oscillatory Optic Flow in Virtual Reality on Quiet Stance
David Engel, Adrian Schütz, Milosz Krala & Frank Bremmer

In order to maintain balance, humans rely heavily on vision, amongst other senses. When subjects perceive their body as moving relative to the environment, they trigger countermovements, resulting in body sway. This sway has typically been investigated in a real, moving room or in front of large displays to simulate self-motion in space. Here, we aimed to (i) induce body sway by simulated self-motion in virtual reality (VR) with the use of a commercially available head-mounted display and (ii) provide a new method of analysis by investigating the phase coherence of subjects' bodily responses to the stimulus across trials. We simulated sinusoidal perturbations of the environment and quantified trajectories of the subjects' center of pressure. Phase coherence analysis revealed a coupling to the stimulus for each presented frequency, even when there were no noticeable effects in the frequency power spectrum. We conclude that subjects adjust the phase of their induced sway to the stimulus presented through VR.

Acknowledgements: Deutsche Forschungsgemeinschaft: IRTG-1901, CRC/TRR-135; EU: PLATYPUS
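
[Editor's note: the phase-coherence analysis can be sketched as follows: take each trial's response phase at the stimulus frequency and measure how tightly those phases cluster across trials (inter-trial phase coherence). This is a generic illustration assuming one evenly sampled center-of-pressure trace per trial; it is not the authors' code.]

import numpy as np

def phase_coherence(trials, fs, stim_hz):
    """Inter-trial phase coherence at the stimulus frequency.

    trials:  array (n_trials, n_samples), center-of-pressure traces.
    fs:      sampling rate in Hz.
    stim_hz: frequency of the sinusoidal perturbation.
    Returns a value in [0, 1]; 1 means identical phase on every trial.
    """
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - stim_hz))     # bin nearest the stimulus
    spectra = np.fft.rfft(trials, axis=1)[:, k]
    phasors = spectra / np.abs(spectra)        # unit-length phase per trial
    return np.abs(phasors.mean())              # length of the mean phasor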


Poster Abstracts P5

The impact of retinal image motion on extra-foveal sensitivity
Janis Intoy, Norick R. Bowers, Jonathan D. Victor, Martina Poletti, Michele Rucci

The human eye moves continually in periods of fixation. Previous work has shown that humans are sensitive to the modulations from these eye drifts and use them to enhance sensitivity to high spatial frequencies (Kuang et al, 2012; Boi et al, 2017), an effect that likely acts primarily in the foveola. Outside the foveola, drift is assumed to have little impact as it covers a smaller fraction of neuronal receptive fields. Here we show that ocular drift improves sensitivity to high spatial frequencies even without foveal stimulation. We measured contrast sensitivity at 16 cpd with controlled retinal image motion and a foveal scotoma. Sensitivity is (1) impaired under retinal stabilization, when the retinal effects of drift are eliminated, and (2) reduced when retinal image motion is artificially attenuated or amplified. These results are well accounted for by the distribution of temporal power in the retinal input conveyed by drift. They indicate that eye drift exerts its action throughout the visual field.

Boi et al., Consequences of the Oculomotor Cycle for the Dynamics of Perception, Current Biology (2017), http://dx.doi.org/10.1016/j.cub.2017.03.034
Kuang et al., Temporal Encoding of Spatial Information during Active Visual Fixation, Current Biology (2012), http://dx.doi.org/10.1016/j.cub.2012.01.050

Supported by NIH R01 EY018363 and NSF grants BCS-1457283 and BCS-1420212


Poster Abstracts P6

Design of a spatially multiplexed light field display on curved surfaces for VR HMD applications
Tianyi Yang, Nicholas S. Kochan, Samuel J. Steven, Greg Schmidt, Julie L. Bentley, Duncan T. Moore

A typical light field virtual reality head-mounted display (VR HMD) is comprised of a lenslet array and a display for each eye. An array of tiled sub-objects shown on the display reconstructs the light field through the lenslet array, and the light field is synthesized into one image on the retina. In this paper, we present a novel compact design of a binocular, spatially multiplexed light field display system for VR HMD. Contrary to the flat lenslet array and flat display used in current light field displays, the proposed design explores the viability of combining a concentric curved lenslet array and curved display with optimized lenslet shape, size, and spacing. The design of placing the lenslet array on a spherical surface is investigated, and the specification tradeoffs are shown. The system displays its highest resolution in whichever direction the eye gazes. The design form is thin and lightweight compared to most other VR optical technologies. Furthermore, the use of a curved display reduces the complexity of the optical design and wastes fewer pixels between sub-objects. The design simultaneously achieves a wide field of view, high spatial resolution, large eyebox, and relatively compact form factor.

[1] R. M. Tasso, "Efficient and uniform illumination with microlens-based band-limited diffusers," Photonics Spectra, April 2010.
[2] E. Fennig, G. Schmidt and D. T. Moore, "Design of Planar Light Guide Concentrators for Building Integrated Photovoltaics," in Optical Design and Fabrication 2017 (Freeform, IODC, OFT), OSA Technical Digest (online) (Optical Society of America, 2017), paper ITh1A.4, 2017.
[3] G. Lippman, "Epreuves reversibles donnant la sensation du relief," J. Phys. Theor. Appl., vol. 7, no. 1, pp. 821-825, 1908.
[4] R. Ng and P. Hanrahan, "Digital Correction of lens aberrations in light field photography," Proc. SPIE 6342, International Optical Design Conference 2006, 63421E (17 July 2006).
[5] R. W. Massof, L. G. Brown, M. D. Shapiro, G. D. Barnett, F. H. Baker and F. Kurosawa, "37.1: Invited Paper: Full-Field High-Resolution Binocular HMD," SID Symposium Digest of Technical Papers, vol. 34, no. 1, pp. 1145-1147, 2012.


Poster Abstracts P7

Head free gaze classification and statistics
Rakshit Kothari, Kamran Binaee, Reynold Bailey, Jeff Pelz, Gabriel Diaz

It is known that the head and eyes function synergistically to collect task-relevant visual information needed to guide action. However, investigation of eye/head coordination has been difficult because most gaze event classifiers algorithmically define fixation as a period when the eye-in-head velocity signal is stable. However, when the head can move, fixations also arise from coordinated movements of the eyes and head, for example, through the vestibulo-ocular reflex. To identify fixations when the head is free requires that one accounts for head rotation. Our approach was to instrument multiple subjects with a 6-axis inertial measurement unit and a 200 Hz Pupil Labs ETG to record angular velocity of the eyes and head as they performed different types of tasks (ball catching, indoor exploration, visual search, and tea making) for 5 min each. Five experts manually annotated a portion of the dataset as periods of gaze fixations (GF), gaze pursuits (GP), and gaze shifts (GS). Each data sample was labeled by the majority vote of the labelers. This dataset was then used to train a novel 2-stage Forward-Backward Recurrent Window (FBRW) classifier for automated event labeling. Inter-labeler reliability (Cohen's kappa) was used to compare the performance of trained classifiers and human labelers. We found that a 64 to 78 ms duration provides enough context for classification of samples with an accuracy above 98% on a subset of the labeled data that was not used during the training phase. In addition, analysis of Fleiss' kappa indicates that the algorithm classifies at a rate on par with human labelers. This algorithm provides new insight into the statistics of natural eye/head coordination. For example, preliminary statistics indicate that fixation occurs very rarely through stabilization of the eye-in-head vector alone, but rather through coordinated movements of the eyes and head with an average gain of 1.
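
[Editor's note: the key point that head-free fixations must be defined on gaze-in-world velocity (eye-in-head plus head) rather than eye-in-head velocity alone can be sketched with a simple threshold rule. The FBRW classifier described above is considerably more sophisticated; the thresholds and signal layout here are illustrative assumptions.]

import numpy as np

def label_gaze_samples(eye_vel, head_vel, fix_thresh=15.0, shift_thresh=80.0):
    """Coarse per-sample labels from eye-in-head and head angular velocity.

    eye_vel, head_vel: arrays (n_samples, 2) of azimuth/elevation
    velocities in deg/s, expressed in a common frame. Gaze-in-world
    velocity is approximately their sum, so a VOR fixation (eye
    counter-rotating against the head with gain ~1) still yields a
    low gaze velocity and is correctly labeled a fixation.
    """
    gaze_speed = np.linalg.norm(eye_vel + head_vel, axis=1)
    labels = np.full(len(gaze_speed), "GP", dtype=object)  # pursuit by default
    labels[gaze_speed < fix_thresh] = "GF"                 # gaze fixation
    labels[gaze_speed > shift_thresh] = "GS"               # gaze shift (saccade)
    return labels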


Poster Abstracts P8

Pre-saccadic motion integration drives "pursuit" for saccades to motion apertures
Sunwoo Kwon, Martin Rolfs, Jude Mitchell

When a saccade is directed towards a translating target, smooth pursuit movements track the target from the moment of saccade landing, indicating that motion integration occurred prior to the saccade (Gardner and Lisberger, 2001). Since, prior to saccades, perceptual performance improves at the saccade target (Kowler et al, 1995; Deubel and Schneider, 1996; White et al., 2013), we hypothesized that saccades to a motion stimulus in a stationary aperture would drive post-saccadic pursuit movements due to the pre-saccadic selection of its motion. Participants performed a saccade to one of four motion apertures, cued by a central line. Apertures consisted of random dot fields (5 deg eccentricity and diameter, 100% coherent motion) moving in one of two randomly assigned radial directions tangential to the center-out saccade. Saccades exhibited a low-gain (~10%) pursuit along the target's motion direction at saccade landing. These effects were driven by motion integration prior to the saccade, as we found consistent results when the motion stimulus offset occurred during the saccade. These effects grew as we reduced the spatial certainty of the aperture location, from a well-defined ring aperture, to no ring, to a smoothed Gaussian envelope. Pursuit velocity increased with increasing stimulus speed, with gain saturating at speeds higher than 10 deg/s. To examine what period prior to the saccade contributed to motion integration, we presented stimuli with random motion (0% coherence) that transitioned to coherent motion (either permanently or for fixed 100 ms epochs) around the time before saccade onset. We found that a minimum of 100 ms of motion integration was necessary to observe an effect, with the window 150-50 ms before the saccade providing the strongest input. These results suggest that presaccadic attention engages motion integration for the saccade target, which can be observed as an involuntary low-gain pursuit upon saccade landing.

1) Deubel, H., & Schneider, W. X. (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36(12), 1827-1837.
2) Gardner, J. L., & Lisberger, S. G. (2001). Linked target selection for saccadic and smooth pursuit eye movements. Journal of Neuroscience, 21(6), 2075-2084.
3) Kowler, E., Anderson, E., Dosher, B., & Blaser, E. (1995). The role of attention in the programming of saccades. Vision Research, 35(13), 1897-1916.
4) White, A. L., Rolfs, M., & Carrasco, M. (2013). Adaptive deployment of spatial and feature-based attention before saccades. Vision Research, 85, 26-35.

Martin Rolfs is supported by the Deutsche Forschungsgemeinschaft (DFG grants RO 3579/8-1 and RO 3579/9-1). Jude Mitchell is supported by funding from NIH grant U01-NS094330.


Poster Abstracts P9

How Do We See Distance in VR?
Paul Linton, Centre for Applied Vision Research, City, University of London

As VR becomes increasingly concerned with social interaction, one of the key questions is how we see distances in interaction space. According to the literature, the two primary sources of near distance information are 'vergence' (the angle of the eyes) and 'accommodation' (the focal power of the intraocular lens). But in two studies I demonstrate that neither vergence nor accommodation functions as an effective cue to distance, even at near distances. In both studies I had subjects fixate on a surface for 30 seconds. Unbeknownst to them, I varied their vergence to anywhere between 20 cm and 50 cm. I also introduced appropriate accommodation cues in the second study. And yet these manipulations of vergence and accommodation had very little effect on their reaching responses to newly presented dot stimuli: a gain of 0.16 (vs. the 0.86+ suggested by the literature). This leads me to conclude that cognition, rather than perception, may be playing a greater role in the estimation of distances than previously thought.


Poster Abstracts P10

Optogenetic vision restoration in the living macaque
Juliette E. McGregor, Tyler Godat, Keith Parkins, Kamal Dhakal, Sarah Walters, Jennifer Strazzeri, Brittany Bateman, David R. Williams, William H. Merigan

Optogenetics offers the prospect of restoring light sensitivity to ganglion cells when photoreceptor input has been lost due to disease or injury. Channelrhodopsin-mediated activity in primate retinal ganglion cells (RGCs) has previously been demonstrated ex vivo, but not in the living primate. We demonstrate channelrhodopsin-mediated RGC activity in an in vivo macaque model of retinal degeneration by high-resolution optical recording from RGCs expressing both a channelrhodopsin (ChrimsonR) and a calcium indicator (GCaMP6s).

Acknowledgements: Amber Walker, Qiang Yang, Jie Zhang, Jennifer Hunter, Stephen McAleavey, James Griffin, Jesse Schallek, Christina Schwarz, Kenny Cheong, Louis DiVincenti, Tracy Bubel, David DiLoreto, Charlie Granger, Sarah Walters and Ethan Rossi.


Poster Abstracts P11

Relative efficacy of global motion versus contrast training early after stroke for recovering contrast sensitivity in cortical blindness
Elizabeth L. Saionz, Duje Tadin, Krystel R. Huxlin

Stroke to V1 causes cortical blindness (CB). In chronic (>6 months post-stroke) CB patients, this is characterized by a complete deficit in contrast sensitivity (CS) and complex motion discrimination in the CB field. Global direction discrimination (GDD) training in chronic CB recovers GDD, but CS remains impaired. Here, we investigate the effect of training on CS in subacute (<3 months post-stroke) CB. 7 CBs trained on GDD; 2 CBs had normal blind-field GDD and trained on static orientation discrimination with varying contrast. We assessed change in static and motion CS with the qCSF. We found that GDD, always reduced in chronics, may be preserved in subacutes along with motion CS. GDD training improved CSFs only for motion, but contrast training led to robust improvement in static CSFs. Contrast-trained subacutes also improved more on luminance detection vs. GDD-trained subacutes. Thus, early training takes advantage of preserved vision in subacute CB, inducing greater recovery. Such training may be critical to prevent degradation of the enhanced perceptual abilities present in the early post-stroke period.

Huxlin KR, Martin T, Kelly K, Riley M, Friedman DI, Burgin WS & Hayhoe M (2009). Perceptual relearning of complex visual motion after V1 damage in humans. Journal of Neuroscience, 29(13), 3981-3991.
Das A, Tadin D & Huxlin KR (2014). Beyond Blindsight: Properties of Visual Relearning in Cortically Blind Fields. Journal of Neuroscience, 34(35), 11652-11664.

Acknowledgements: NIH grants (UL1 TR002001 to ELS, R01 EY027314 to KRH, Core Center Grant P30 EY001319 to the CVS, T32 GM007356 to the MSTP, TL1 TR002000 to the TBS Graduate Program), unrestricted grant from the Research to Prevent Blindness Foundation to FEI.


Poster Abstracts P12

Adaptive optics ophthalmoscopy of macaque photoreceptors reveals the slowing of two-photon autofluorescence kinetics during systemic hypoxia
Sarah Walters, Christina Schwarz, Amber Walker, Louis DiVincenti, Jennifer J. Hunter

The kinetics of two-photon excited fluorescence (TPEF) from photoreceptors in response to visual stimulation are indicative of all-trans-retinol production and clearance, and show promise as an objective, non-invasive measure of visual cycle function in both healthy and diseased retina. As changes in oxygen supply play a role in many diseases leading to retinal degeneration, we induced systemic hypoxia in macaque as a model of altered physiological state and tracked the TPEF kinetics of photoreceptors using adaptive optics (AO) ophthalmoscopy. Macaques were anesthetized, paralyzed, and ventilated with 100% O2 (pre-hypoxia). Repeatedly, systemic hypoxia was induced by ventilating with 10% O2/90% N2, followed by recovery with 100% O2 (post-hypoxia). An AO scanning light ophthalmoscope was used to collect TPEF (ex: 730 nm, em: <550 nm) in each condition from the photoreceptors of 3 macaques, and TPEF time courses were fitted with an exponential function. Hypoxia produced no significant change in the fractional TPEF increase; however, it significantly slowed the TPEF response, increasing the time constant by 11 ± 2% on average. TPEF responses were not significantly different pre- and post-hypoxia. Systemically reduced oxygen supply slows the time course of TPEF in photoreceptors, yet the total TPEF increase is unaffected, potentially indicating visual cycle slowing in response to hypoxia. This demonstration broadens the utility of two-photon AO ophthalmoscopy to detect changes in visual cycle function that occur with disease or altered physiological state.


Poster Abstracts P13

Presence and cybersickness in virtual reality are modulated by gaming experience and narrative
Séamas Weech, Sophie Kenny, Markus Lenizki, Michael Barnett-Cowan

Minimizing cybersickness and maximizing presence are important considerations in the design of virtual reality applications. Can top-down interventions be used to drive high presence and low sickness without altering game mechanics? We gathered data from a diverse convenience sample (N=153) over one week at a public museum. Participants explored an interactive virtual environment for 7 min after listening to a short (~40 sec) narrative context that was either low or enriched. We collected self-reported presence and sickness, as well as demographics, gaming experience, engagement, and task performance. Presence was associated with lower sickness severity, although the relationship was weaker for regular gamers. Participants who played games for less than five hours a week reported stronger sickness symptoms if they were assigned to the low-narrative condition. Conversely, regular gamers were unaffected by narrative context. The results show the potential for modulation of cybersickness with narrative intervention.

Supported by funding from NSERC and Oculus VR. We also thank RAs: Ambika Bansal, Aysha Basharat, Judy Ehrentraut, Katie Fleming, Sarah Hawkshaw, Laura Jimenez, Shivarny Maheswaran, Claudia Martin Calderon, Carolin Sachgau, Manasi Shah, and Frank Tran.


Poster Abstracts P14

Enhancing Password Recollection Performance using Augmented Reality with the Method of Loci
Zhizhuo Yang, Reynold Bailey, Joe Geigel, Alberto Scicali

Recalling passwords and other sequences of letters and digits has become a routine activity of modern life. To ease the difficulty of remembering passwords, we explore whether human memory performance can be improved by leveraging Augmented Reality (AR) during memorization. In this paper, we seek to use visual augmentation to reinforce the association of each character of a password with an object or spatial region in the real-world 3D environment. This approach is known as the method of loci. A Microsoft HoloLens was used to provide the user with real-time visualization overlaid on the physical environment. Users can associate each digit of the password with an object in the environment with a simple voice command, and reorienting towards one of these regions will display the corresponding digit. We hypothesize that this will make the memorization process more efficient. We conduct a user study where participants were asked to recall randomly generated numeric passwords using the following methods: memorization using the participant's own devised method, the method of loci, and the method of loci with AR visualization. We measure the accuracy of password recollection and response time, and report the subjective feedback.

[1] C.-H. Chien, C.-H. Chen, and T.-S. Jeng. An interactive augmented reality system for learning anatomy structure. In Proceedings of the International MultiConference of Engineers and Computer Scientists, volume 1. International Association of Engineers, Hong Kong, China, 2010.
[2] K. Ha, Z. Chen, W. Hu, W. Richter, P. Pillai, and M. Satyanarayanan. Towards wearable cognitive assistance. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys '14, pages 68-81, New York, NY, USA, 2014. ACM.
[3] P. Hutton. History as an Art of Memory. University of Vermont, 1993.
[4] M.-C. Juan, M. Mendez-Lopez, E. Perez-Hernandez, and S. Albiol-Perez. Augmented reality for the assessment of children's spatial memory in real settings. PLOS ONE, 9(12):1-26, 12 2014.
[5] V. I. Levenshtein. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, volume 10, pages 707-710, 1966.
[6] V. Levenshtein. Binary codes capable of correcting spurious insertions and deletions of ones. Problems of Information Transmission, 1(1):8-17, 1965.
[7] G. Navarro. A guided tour to approximate string matching. ACM Computing Surveys (CSUR), 33(1):31-88, 2001.
[8] J. O'Keefe and L. Nadel. The hippocampus as a cognitive map. Clarendon Press, 1978.
[9] O. Rosello, M. Exposito, and P. Maes. NeverMind: Using augmented reality for memorization. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pages 215-216. ACM, 2016.
[10] S. Sridharan, B. John, D. Pollard, and R. Bailey. Gaze guidance for improved password recollection. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, pages 237-240. ACM, 2016.
[11] J. Yang. Towards cognitive assistance with wearable augmented reality. PhD thesis, 2016.

The authors would like to thank the Golisano College of Computing and Information Sciences at Rochester Institute of Technology for providing equipment and working space.
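
[Editor's note: recollection accuracy against the true password can be scored with the Levenshtein edit distance cited in [5]-[7]; a standard dynamic-programming implementation is sketched below. The normalized accuracy formula at the end is an assumption, not necessarily the study's exact metric.]

def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    turning string a into string b (Levenshtein, 1965/1966)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical scoring of a recalled password against the actual one:
# accuracy = 1 - levenshtein(recalled, actual) / max(len(actual), 1)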


Poster Abstracts P15

Optimizing sEMG control of wrist movement in the Microsoft HoloLens HEART* project
Mingjian Zhang, Reza Rawassizadeh, Mohammed (Ehsan) Hoque, Ania C. Busza

Stroke is a major cause of adult disability in the United States. Current treatment paradigms for post-stroke motor recovery generally focus on encouraging the patient to do multiple repetitive exercises daily with the affected limb, which presumably stimulates neuroplasticity. In recent years, there has been increased promise in using new technologies in rehabilitation games to motivate patients to perform more exercises. Unfortunately, such systems are often not usable for patients with severe weakness who are not able to lift the affected side against gravity. In this project, we are designing a rehabilitation exercise system for patients with severe weakness. Our system uses electromyographic (EMG) signals from the forearm of patients with weakness due to stroke to control a virtual limb which is presented to patients using Mixed Reality (Microsoft HoloLens). We have developed an initial prototype where EMG fluctuations in the user's forearm control wrist movement in the virtual arm displayed to the wearer. We are now optimizing the system to make the model arm movement patterns feel more realistic to the wearer. In this poster, we present our current work using different machine learning algorithms on EMG signals from healthy controls for transforming EMG signal to smooth wrist movement in our arm model displayed with the Microsoft HoloLens.

Acknowledgements: University of Rochester VR/AR pilot grant program
