
  • Slide 1
  • Gerald Nielson
  • Slide 2
  • Overview: What is Augmented Reality? Types of AR. Computer Vision overview. Future of Augmented Reality. AR development.
  • Slide 3
  • Augmented Virtuality - the merging of real-world objects into virtual worlds. Virtual Reality - a computer-simulated environment that can simulate physical presence in places in the real world.
  • Slide 4
  • What is Augmented Reality? Augmented, meaning: amplified, improved, enhanced.
  • Slide 5
  • What is AR? Augmented Reality is a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics, or GPS data.
  • Slide 6
  • AR examples on TV: the NFL first-down marker (a team of four runs it during the broadcast); the NHL hockey puck tracer; NASCAR Racef/x.
  • Slide 7
  • Examples on the internet: Virtual Shopper, which tracks the user's movements and displays a virtual model of clothing; the Ray-Ban Virtual Mirror; the USPS mailbox size tool.
  • Slide 8
  • Useful Applications: many useful applications have been developed, with a lot of room for improvement. Guide-less museum tours; BMW augmented reality - http://www.youtube.com/watch?v=P9KPJlA5yds
  • Slide 9
  • Mobile AR: many mobile phones have everything needed to create augmented reality. Camera - to determine what is being viewed; GPS - location data; compass - what direction you are facing; accelerometer - orientation; internet connection - provides relevant data. And many people have them: 1/3 of American adults own smartphones.
  • Slide 10
  • Mobile AR: the Layar browser - the world's leading mobile augmented reality platform - has thousands of layers with different applications. Location-based layers can help the user find nearby locations (example: BWW). Others: Google Goggles, Wikitude.
  • Slide 11
  • Stella Artois Le Bar Guide: displays the location and information of all bars that serve Stella Artois. Many more apps like this one: Yelp, designed to help people connect with local businesses; Wikipedia, which displays information from Wikipedia entries near your location.
  • Slide 12
  • Types of AR: marker based and GPS based. Currently, most applications are aimed at marketing and entertainment; as the technology develops, augmented reality will make its way into other areas.
  • Slide 13
  • Types of AR - marker based: using a camera, these applications recognize a marker in the real world and calculate its position and orientation to augment the reality. In simple words, they overlay the marker/image with some content or information. Examples: the USPS Priority Mail simulator; advertisements.
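As an illustration of that registration step, here is a minimal sketch using OpenCV's ArUco module (assuming OpenCV 4.7+; the calibration values, marker size, and image path are placeholders, not values from the slides):

    import cv2
    import numpy as np

    # Placeholder calibration; a real AR app would calibrate the camera first.
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)
    marker_len = 0.05  # assumed marker side length, in meters

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    frame = cv2.imread("frame.png")  # one camera frame (placeholder path)
    corners, ids, _ = detector.detectMarkers(frame)

    if ids is not None:
        # 3D corners of the marker in its own coordinate system
        h = marker_len / 2
        obj_pts = np.array([[-h, h, 0], [h, h, 0],
                            [h, -h, 0], [-h, -h, 0]], dtype=np.float32)
        # Recover the marker's position and orientation relative to the camera;
        # rvec/tvec are what an AR app uses to anchor overlaid content.
        ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2),
                                      camera_matrix, dist_coeffs)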
  • Slide 14
  • Types of AR - GPS based: these applications take advantage of the Global Positioning System (GPS) in your phone. They use the phone's position to find landmarks and other points of interest (POI) near you. Examples: Stella Artois Le Bar Guide (iPhone only); Yelp for local businesses; many restaurant finders.
  • Slide 15
  • Types - GPS based: once the POI or landmark is revealed, the user can get additional information about it or directions to reach it. Things like a restaurant's ratings, distance, and menus or specials can be displayed.
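A minimal sketch of the GPS-based idea, filtering a hypothetical POI list by great-circle (haversine) distance from the phone's reported position (all names and coordinates below are made up for illustration):

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two lat/lon points, in kilometers
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + \
            cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    pois = [("Bar A", 44.98, -93.27), ("Bar B", 44.95, -93.10)]  # fake POI data
    phone_lat, phone_lon = 44.97, -93.26  # from the phone's GPS

    # Keep POIs within 2 km, nearest first, ready to display with ratings etc.
    nearby = sorted((haversine_km(phone_lat, phone_lon, lat, lon), name)
                    for name, lat, lon in pois)
    nearby = [(name, dist) for dist, name in nearby if dist < 2.0]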
  • Slide 16
  • Overview of Computer Vision: computer vision has great potential for augmented reality. Instead of relying on specific markers, natural features can be used to register the camera, i.e. enable the camera to recognize certain images. This requires detecting and matching features from specific points in the images.
  • Slide 17
  • Overview of Computer Vision - terms: Marker - used to specify where/what to place the information or content. Natural features - points/parts of an object being viewed. Detector - used to search images for points that are repeatable. Descriptor - used to analyze the image for points to be used in the matching step; it characterizes the point/region.
  • Slide 18
  • What is a feature? A feature is a specific location in the image - a unique point, edge, or corner - used to find a small set of corresponding locations in different images.
  • Slide 19
  • Features can be points, regions, straight lines, or edges.
  • Slide 20
  • Feature Detection and Matching: many methods exist to detect, describe, and match features. Points, edges, regions, and straight lines of images can all be used. Four steps are involved in the feature detection and matching process.
  • Slide 21
  • What is a Detector? Detectors find the points, taken from the image being viewed, from which the descriptor is created. A detector needs to be repeatable, meaning the same feature is detected in two or more different images of the same scene (despite lighting/viewpoint changes).
  • Slide 22
  • Accounting for changes between views: occlusion; lighting changes; viewpoint changes; changes in contrast/clarity.
  • Slide 23
  • What is a Descriptor? A description of the distinctive point from the image, stored in the database/application/service. It contains information on the change of color or intensity of an image - 2-dimensional measurements - and is used for matching.
  • Slide 24
  • Feature Detection and Matching: stable points are detected in the image. Each interest point is represented by a feature vector - a description of the point. The descriptor needs to be distinctive - distinguishing for this particular point.
  • Slide 25
  • Feature Detection and Matching - four steps: feature detection, feature description, feature matching, feature tracking. Many techniques are used: SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and many more.
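In practice the first two steps are a single call in libraries such as OpenCV; a minimal sketch with SIFT (shown instead of SURF, which lives in the separate opencv-contrib package; the image path is a placeholder):

    import cv2

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
    sift = cv2.SIFT_create()

    # Step 1 (detection) and step 2 (description) in one call:
    keypoints, descriptors = sift.detectAndCompute(img, None)

    # Each keypoint carries a location, scale, and orientation; each descriptor
    # is a 128-dimensional feature vector summarizing local gradient structure.
    print(len(keypoints), descriptors.shape)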
  • Slide 26
  • Feature detection - what makes a good feature? Some patches can be localized or matched with higher accuracy than others.
  • Slide 27
  • Problems for different image points: the two images (yellow and red) are overlaid, and the red arrow is the distance between the centers of the points. (a) stable (corner-like) flow, easily matched; (b) the classic aperture problem (barber-pole illusion); (c) a textureless region.
  • Slide 28
  • Simplest matching method: it is unknown which other image location(s) the feature will end up being matched against, so its stability/repeatability can only be computed by comparing the image point against itself, using the auto-correlation function or surface.
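A sketch of that idea in NumPy: slide a patch against itself and sum squared differences, tracing out the auto-correlation surface (the patch location, size, and shift range are arbitrary choices, and the point is assumed to lie well inside the image):

    import numpy as np

    def autocorr_surface(image, y, x, size=8, max_shift=5):
        # Sum of squared differences between a patch and shifted copies of
        # itself. A sharp, unique minimum at zero shift marks a repeatable,
        # corner-like point; a trough along one direction signals the
        # aperture problem.
        patch = image[y:y + size, x:x + size].astype(float)
        surface = np.zeros((2 * max_shift + 1, 2 * max_shift + 1))
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = image[y + dy:y + dy + size,
                                x + dx:x + dx + size].astype(float)
                surface[dy + max_shift, dx + max_shift] = ((patch - shifted) ** 2).sum()
        return surface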
  • Slide 29
  • Compare the image patch against itself: the original image (a) is marked with three red crosses to mark where these surfaces were computed. This shows how unique/repeatable a certain point is.
  • Slide 30
  • Auto-correlation: three different auto-correlation surfaces, shown as both grayscale images and surface plots. Patch (b) is from the flower bed (a good unique minimum), patch (c) is from the roof edge (a one-dimensional aperture problem), and patch (d) is from the cloud (no good peak).
  • Slide 31
  • Obtaining Image Information: an image gradient is a directional change in the intensity or color in an image. Image gradients may be used to extract information from images.
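For example, gradients and their magnitude/orientation can be computed directly with NumPy (a minimal sketch; image is any 2-D grayscale array):

    import numpy as np

    def gradient_info(image):
        # Per-pixel directional change along rows (dy) and columns (dx)
        dy, dx = np.gradient(image.astype(float))
        magnitude = np.hypot(dx, dy)       # how strong the change is
        orientation = np.arctan2(dy, dx)   # which direction it points, in radians
        return magnitude, orientation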
  • Slide 32
  • Using wavelet responses - SURF: to build the descriptor, a grid is laid over the interest point. For each square, the wavelet responses are computed - the sums Σdx, Σ|dx|, Σdy, and Σ|dy| (the amount of change).
  • Slide 33
  • Using Image Gradients: the descriptor entries represent the intensity pattern. Left: a uniform region; all values are relatively low. Middle: in the presence of frequencies in the x direction, the value of Σ|dx| is high, but all others remain low. Right: if the intensity gradually increases in the x direction, both Σdx and Σ|dx| are high.
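A sketch of those four sums for one grid cell, using plain finite differences in place of true Haar wavelet responses (real SURF computes Haar responses over an integral image; this simplification only illustrates what the four entries capture):

    import numpy as np

    def cell_descriptor(cell):
        # Approximate the wavelet responses with per-pixel differences
        dy, dx = np.gradient(cell.astype(float))
        # The four entries for this cell: sum dx, sum |dx|, sum dy, sum |dy|
        return np.array([dx.sum(), np.abs(dx).sum(), dy.sum(), np.abs(dy).sum()])

    # A uniform cell yields low values everywhere; vertical stripes yield a
    # high sum |dx| with near-zero sum dx, matching the patterns above.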
  • Slide 34
  • GAFD - Gravity-Aligned Feature Descriptors: recently researched by Metaio. More and more mobile devices are being equipped with inertial sensors (accelerometers). Using the gravity vector to help describe the orientation of a feature speeds up descriptor computation and the matching process.
  • Slide 35
  • GAFD - approaches to take advantage of gravity: regular feature descriptors with relative gravity orientation; gravity-aligned feature descriptors; gravity-aligned feature descriptors with relative local orientation. Local orientation is computed from neighboring pixels, usually so that it provides the same region at any viewpoint and view direction.
  • Slide 36
  • GAFD: O_g - global orientation (blue); O_l - local orientation (red). O_l is usually computed so that it will provide the same normalized region at any viewpoint.
  • Slide 37
  • Using the local orientation for descriptor alignment (b) leaves the relative global orientation as part of the descriptor: O_gl = O_g - O_l. Alignment with the global orientation (c) allows the descriptor to be enriched with the relative local orientation. GAFD in action.
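A sketch of that orientation bookkeeping: given the gravity direction projected into the image (O_g) and a feature's pixel-based local orientation (O_l), the relative global orientation is their wrapped angle difference (a simplified illustration of the idea, not Metaio's implementation; angles in radians):

    import math

    def relative_global_orientation(o_g, o_l):
        # O_gl = O_g - O_l, wrapped into (-pi, pi]
        d = o_g - o_l
        return math.atan2(math.sin(d), math.cos(d))

    # Aligning descriptors to o_g (read cheaply from the accelerometer) instead
    # of computing o_l from pixels is what speeds up description and matching.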
  • Slide 38
  • Last step - Feature Matching: once features and their descriptors have been found and calculated from two or more images, the next step is to establish feature matches between these images. Method used t
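One widely used method is brute-force nearest-neighbor matching with Lowe's ratio test; a minimal sketch on SIFT descriptors from two images (the paths are placeholders, and the 0.75 ratio threshold is a conventional choice):

    import cv2

    sift = cv2.SIFT_create()
    img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
    img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)
    _, desc1 = sift.detectAndCompute(img1, None)
    _, desc2 = sift.detectAndCompute(img2, None)

    # For each descriptor in image 1, find its two nearest neighbors in image 2
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(desc1, desc2, k=2)

    # Lowe's ratio test: keep a match only if it is clearly better than the
    # runner-up, which discards ambiguous correspondences.
    good = [m for m, n in candidates if m.distance < 0.75 * n.distance]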
