

An 18 megapixel 4.3″ 1443 ppi 120 Hz OLED display for wide field of view high acuity head mounted displays

Carlin Vieri (SID Member), Grace Lee (SID Member), Nikhil Balram (SID Fellow), Sang Hoon Jung (SID Member), Joon Young Yang (SID Member), Soo Young Yoon (SID Member), In Byeong Kang (SID Member)

Abstract — We developed and fabricated the world's highest resolution (18 megapixel, 1443 ppi) OLED-on-glass display panel. The design uses a white OLED with color filter structure for high density pixelization and an n-type LTPS backplane for faster response time than mobile phone displays. A custom high bandwidth driver IC was fabricated. We developed a foveated pixel pipeline appropriate for virtual reality and augmented reality applications, especially mobile systems.

Keywords — Ultra-high resolution, OLED, high ppi, foveated rendering, virtual reality (VR), augmented reality (AR).

    DOI # 10.1002/jsid.658

    1 Introduction

1.1 Virtual reality displays and the human visual system

Virtual and augmented reality offer the promise of amazing immersive experiences. Virtual reality (VR) can take you to new places, and augmented reality can bring these places to you.1 Enabling these amazing immersive experiences requires great displays that come as close as possible to matching the capabilities of the human visual system (HVS).2

These displays require lots of pixels, high pixel density, fast response time, high refresh rate, short illumination duty cycle, and of course reasonable brightness, contrast, and color gamut.1

The primary objective of this work is to develop mobile head-mounted display (HMD) prototypes that provide a visual experience that matches the HVS as closely as possible. Developing such displays requires overcoming a number of significant challenges. Mobile organic light-emitting diode (OLED) displays offer excellent front-of-screen image quality, but they need a number of major improvements to approach the capabilities of the HVS when used in a VR headset. For example, the display diagonal per eye needs to be between 2 and 6″ depending on the design specifications of the headset. A headset providing high immersion and HVS-like acuity requires a wide field of view (FoV) over 100°, a display with 1000 to 2200 pixels per inch (ppi), and 15–25 million pixels per eye. Making an OLED display on glass (as opposed to microdisplays on silicon) with these attributes poses significant challenges in materials and processes. Driving this class of display poses another big challenge for the

circuitry and interfaces, especially given the space and power constraints of a mobile (untethered) system. Several of these challenges are addressed in this work.

1.2 Key challenges of virtual reality displays

1.2.1 Pixel pitch, pixel count, and optics*

For head-mounted VR systems to approach the visual acuity and FoV of the HVS, they can make use of significantly more pixels than handheld displays, and even most large-format displays. Typical human FoV for each eye is approximately 160° (horizontal) by 150° (vertical).3 At an acuity of 60 pixels per degree (ppd), or 20/20 Snellen acuity, covering this full FoV requires 9600 × 9000 pixels per eye. This of course assumes a uniform acuity over the full FoV, which is an overestimate for the HVS3 even considering eye roll and may be beyond the resolving limit for some optical systems, but provides a useful upper bound. Alternate assumptions and analysis may give different results for maximum pixel counts.

Techniques for dealing with spatially varying acuity, both in the HVS and the HMD's optics, are addressed in later sections of this paper. It is possible to design a panel with a spatially varying pixel pitch to match these acuity variations, but possibly at the cost of undesirable nonuniformity, driving or manufacturing complexity, and other drawbacks. For this

Received 02/20/18; accepted 03/18/18.
Carlin Vieri, Grace Lee and Nikhil Balram are with Google LLC, Mountain View, CA, USA; e-mail: [email protected].
Sang Hoon Jung, Joon Young Yang, Soo Young Yoon and In Byeong Kang are with LG Display Co., Ltd., Seoul, Korea.
© Copyright 2018 Society for Information Display 1071-0922/18/0658$1.00.

*The term "resolution" is often used to mean either display pixel pitch or pixel count. In HMDs, "resolution" more accurately refers to cycles per unit angle that can be resolved. This paper uses "pixel count" and "pixel pitch" or "ppi" for panel attributes and "resolution" for ability to resolve features.

    Journal of the SID, 2018


work, we assume the pixel pitch over the full pixel array is constant, although the HVS and/or the optical system may be unable to clearly resolve all the pixels over the full array.

Pixel pitch may be calculated by considering the optical system that creates a magnified virtual image for the viewer. In the center of the optics, the spacing between two pixel centers (the pixel pitch) subtends the angle θ and may be calculated from the following equation:

pixel pitch = 2 × focal length × tan(θ/2)

Typical VR optics may have a focal length of roughly 40 mm.4 For 60 ppd, θ is 1 arc minute, and pixel pitch at this focal length should therefore be 11.6 μm, or 2183 ppi. For comparison, a modern smartphone display may only be 400 to 800 ppi.
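As a check on these numbers, the pixel pitch formula can be evaluated directly (a quick sketch; the 40 mm focal length is the illustrative value from the text):

```python
import math

focal_length_mm = 40.0   # typical VR lens focal length (from the text)
theta_deg = 1.0 / 60.0   # 60 ppd means one pixel per arc minute

# pixel pitch = 2 x focal length x tan(theta/2)
pitch_mm = 2 * focal_length_mm * math.tan(math.radians(theta_deg) / 2)
pitch_um = pitch_mm * 1000
ppi = 25.4 / pitch_mm    # 25.4 mm per inch

print(f"pitch = {pitch_um:.1f} um, {ppi:.0f} ppi")  # ~11.6 um, ~2183 ppi
```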

Constructing an optical system capable of resolving 11.6 μm features over a 160° FoV is extremely challenging. Lenses may become large and heavy, have substantial distortion or aberrations across the FoV, and have a small eyebox. System-level tradeoffs between pixel size, FoV, optics, HMD size, and other factors should be considered carefully.

For our system, we tried to balance these tradeoffs, particularly the optical system acuity and FoV. A comparison between the calculated parameters of the "upper bound" display described previously, and our prototype panel is shown in Table 1. We chose a FoV of 120° × 96° per eye and central acuity of 40 ppd, for a pixel count of 4800 × 3840. This pixel count is half WHUXGA, so a full system with two displays matches the WHUXGA pixel count. Additional details of panel driving tradeoffs are provided in Section 3.
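The chosen pixel count follows directly from FoV × acuity, and the WHUXGA claim is easy to verify (a small sketch of the arithmetic):

```python
fov_h, fov_v = 120, 96   # degrees per eye
acuity = 40              # pixels per degree

pixels = (fov_h * acuity, fov_v * acuity)
print(pixels)            # (4800, 3840)

# Two such panels together carry the same pixel count as WHUXGA (7680 x 4800).
assert 2 * pixels[0] * pixels[1] == 7680 * 4800
```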

1.2.2 Display addressability and interconnect bandwidth

Driving so many pixels presents engineering hurdles in both addressing the pixel array and the interconnect bandwidth required between the display and rendering system.

One metric for driving a line-at-a-time pixel array (representative of most modern LCD and OLED panels) is the time available to update a single line. During this line time, the row enable circuitry must transition from off to on; the analog pixel values must be driven along the column lines, through the pixel logic, and to a storage or display node; and the row enable circuitry must store the pixel values by transitioning from on to off. Other pixel array architectures are of course possible; many of them will have similar although distinct constraints.

The line time is a function of the panel refresh rate and the total number of lines in a frame, including any blanking lines. VR displays often refresh above 60 Hz to avoid flicker and reduce motion-to-photon latency.5 VR displays may also use short persistence illumination6 to reduce motion blur. Short persistence (low duty cycle) illumination is effectively an alternative to very high (500 Hz to 1 kHz) refresh rates, which may be infeasible for the rendering system. Reducing motion blur using low persistence is generally preferred over high refresh rates, but it has two drawbacks. First, keeping emission duty cycle short can lower display brightness. Second, data loading may need to be paused via a vertical blanking porch during the emission period. This increases the total required bandwidth of the interface. For this example, the display is driven at 120 Hz (8.3 ms/frame) and 20% of the frame time (1.7 ms) is used for illumination. No additional time is allocated for pixel transition time although it may be required for some display technologies, such as LCDs.

System mechanical constraints may require VR displays to have a portrait orientation. This further limits the time available per line. For the 9600 × 9000 pixel theoretical display mentioned previously, the portrait mode line time may be calculated as:

line time = (1/120 × 0.8) / 9600 ≈ 694 ns

This is extremely short. For comparison, the line time for a 4k/60 (landscape) display is approximately 7.5 μs, more than 10× longer. To support shorter line times, the display must use very fast transistors and wires. Capacitive loading of pixel transistors, RC time constants of row and column wires, and voltage swings must all be optimized. Using a refresh rate of 75 or 90 Hz rather than 120 Hz and reducing the number of active lines below the theoretical maximum needed by the HVS are simple ways to reduce this constraint, at the cost of slightly increased latency and slightly reduced acuity. Lowering illumination duration may unacceptably lower display brightness or require high current densities for OLED components, but it is also an option.
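The line time comparison above can be sketched in a few lines (the 4k/60 total line count, including blanking, is an assumed round figure, not a value from the text):

```python
def line_time_s(refresh_hz, write_fraction, lines):
    """Time available per line: frame time x addressing fraction / line count."""
    return (1.0 / refresh_hz) * write_fraction / lines

# Upper bound panel, driven in portrait: 9600 lines, 80% of the frame for writing.
t_vr = line_time_s(120, 0.8, 9600)
print(f"{t_vr * 1e9:.0f} ns")   # ~694 ns

# 4k/60 landscape, assuming ~2222 total lines (2160 active plus blanking, hypothetical).
t_4k = line_time_s(60, 1.0, 2222)
print(f"{t_4k * 1e6:.1f} us")   # ~7.5 us
```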

Interconnect bandwidth between the rendering system and display is quite large. Assuming 15% overhead for horizontal porch and the above mentioned 20% vertical porch, the total number of pixels (active plus porches, keeping in mind the portrait orientation) is 11,520 × 10,350. At 120 Hz, the pixel clock is as follows:

pixel clock = 11,520 × 10,350 × 120 Hz ≈ 14.3 GHz

For comparison, a 4k/60 pixel clock is under 600 MHz, more than 20 times slower.

TABLE 1 — Comparison of "upper bound" display and our prototype display.

Specification            Upper bound    As built
Pixel count (h × v)      9600 × 9000    4800 × 3840
Acuity (ppd)             60             40
Pixels per inch (ppi)    2183           1443
Pixel pitch (μm)         11.6           17.6
FoV (°, h × v)           160 × 150      120 × 96

    Vieri et al. / An 18 megapixel OLED for HMDs

The total data rate to the display is a function of the pixel clock and the number of bits per pixel. At 24 bits per pixel, total data rate is 343 Gb/s. For comparison, DisplayPort 1.4 supports an uncompressed payload data rate of 25.92 Gb/s.7

Again, the theoretical upper bound VR display requires more than 10 times the bandwidth.
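The upper bound bandwidth figures can be reproduced directly (a sketch of the arithmetic given in the text):

```python
total_h, total_v = 11_520, 10_350   # active pixels plus porches, portrait drive
refresh_hz = 120
bits_per_pixel = 24

pixel_clock_hz = total_h * total_v * refresh_hz
data_rate_gbps = pixel_clock_hz * bits_per_pixel / 1e9

print(f"pixel clock = {pixel_clock_hz / 1e9:.1f} GHz")       # ~14.3 GHz
print(f"data rate   = {data_rate_gbps:.0f} Gb/s")            # ~343 Gb/s

dp14_gbps = 25.92  # DisplayPort 1.4 uncompressed payload rate
print(f"{data_rate_gbps / dp14_gbps:.1f}x DisplayPort 1.4")  # more than 13x
```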

A thoughtfully practical system, such as the 4800 × 3840 panel discussed earlier, requires significantly less bandwidth. Other techniques may also be applied to this bandwidth challenge. Limiting the refresh rate and total number of vertical lines (including blanking) lowers bandwidth substantially. Reducing total horizontal pixels (including blanking) also helps. Compression, such as Display Stream Compression (DSC),8 can provide a factor of three or more bandwidth reduction. Subpixel rendering (e.g., using two 10-bit subpixels per pixel rather than three 8-bit subpixels per pixel) can give a 20% bandwidth reduction. And foveation techniques, which will be discussed in detail at a later point, can provide large bandwidth benefits in VR systems.
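Stacking these reductions for the 4800 × 3840 panel shows how quickly the link budget becomes tractable (an illustrative estimate; the porch fractions are carried over from the upper bound example, not panel specifications):

```python
# Portrait drive: 3840 pixels per line, 4800 lines, plus assumed porch overheads.
total_h = int(3840 * 1.15)       # 15% horizontal porch (assumed, as in the text's example)
total_v = int(4800 * 1.20)       # 20% vertical porch for the emission period
raw_gbps = total_h * total_v * 120 * 24 / 1e9

subpixel = raw_gbps * 20 / 24    # subpixel rendering: two 10-bit subpixels per pixel
with_dsc = subpixel / 3          # DSC at roughly 3:1

# Result fits within DisplayPort 1.4's 25.92 Gb/s payload.
print(f"raw {raw_gbps:.0f} Gb/s -> subpixel {subpixel:.0f} -> DSC {with_dsc:.0f} Gb/s")
```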

1.2.3 Other challenges

High performance VR displays have a few other challenges relative to direct view mobile or large format displays. Uniformity requirements are strict in head-tracked systems to avoid "dirty window" artifacts.9,10 In the pixel pitch range under consideration, very limited space is available for uniformity compensation logic within the pixel array. Displays should be designed either not to require much compensation logic, or it should be moved upstream.

The useful viewing cone of the display depends on the optical system. The lenses may only collect light from a narrow angle, for example ±30°. Therefore, VR displays need not support the wide viewing angle expected of mobile device displays. Maintaining uniform spectral and luminance output within the viewing cone is more important in order to minimize color variation over the FoV in the HMD. Concentrating the display emission energy into this desired cone is useful both for power efficiency and to reduce stray light in the HMD, but shaping the emission cone is difficult in high resolution OLED displays. HMDs may also have high brightness requirements due to losses in the optical system.

Other front-of-screen metrics such as color gamut and contrast ratio may be similar to conventional displays.

2 Designing a high performance virtual reality display

2.1 Panel design and driving

2.1.1 Panel structure

We built a 4.3″ 1443 ppi OLED-on-glass display with a pixel format of 3840 × 4800, a pixel pitch of 17.6 μm, and a FoV appropriate for an immersive HMD computing system. When integrated with a high performance optical system with, for example, a 40 mm focal length, the resulting image spans approximately 120° (H) by 100° (V) per eye, with an acuity of 40 ppd, corresponding to 20/30 on a standard Snellen eye chart.

The display uses two subpixels per pixel: one green subpixel and either a red or blue subpixel. This subpixel arrangement is widely used in mobile phone displays. Each subpixel is 17.6 μm × 8.8 μm. Fabrication of small pixels for displays over 1000 ppi is an extreme challenge with conventional Fine Metal Mask systems.11 Advanced Fine Metal Mask methods can make micrometer-sized holes, but usually have a wide dead zone between subpixels, making fabrication below 10 μm pixel pitch extremely difficult. To avoid these issues and have a lower risk path to mass production, we use a structure with white OLED and color filters. This approach is used in commercial OLED TV panels12,13 and in OLED on silicon microdisplays.14 Current photolithography technology in an LTPS line can also achieve color filter patterning at pixel densities over 1000 ppi.

It is desirable for VR HMD panels to emit uniform color light over a narrow viewing cone. The conventional approach is to bond color filter glass to a white OLED substrate, but this creates a bigger cell gap that exacerbates color mixing.15 For this display, we addressed this issue with a new color filter deposition process.

In the conventional glass–glass bonding between color filter and white OLED, the OLED cell gap, black matrix, bank open size, and alignment control between anode and color filter are important factors that determine the viewing cone characteristics of the display. To improve the color uniformity over a narrow viewing cone, we decided to pattern the color filter directly on the encapsulation layer. This can both improve the alignment between the color filter and the anode as well as make the OLED cell gap thinner. Figure 1 shows the color filter on encapsulation structure used for

    FIGURE 1 — Cross-section of high ppi OLED display for VR.


this display. Because the color filter process is carried out after the OLED process, low temperature materials and processes are essential. OLED material can be damaged above 100°C, so the color filter and black matrix materials were treated below 90°C. We did not see any performance degradation from the low temperature-cured color filter material, but the material is sensitive, so the process window is narrow.

2.1.2 Panel configuration

Figure 2 shows our 4.3″ 1443 ppi OLED panel configuration, with the first two pixels (four subpixels) shown enlarged. Subpixel count is 3840 × 2 × 4800 per panel, so the total pixel count the driving system should support is WHUXGA (7680 × 4800) when two panels are used for an HMD. To provide a wider horizontal FoV, this panel is used in a landscape orientation in an HMD although it is driven in a portrait orientation.

Two driver ICs and a flexible printed circuit are located on one edge, and 4800 stage scan drivers are located on the top and bottom of the panel. The custom driver IC developed for this panel has 3840 channels and supports the high bandwidth and tight pixel pitch requirements of this display. The maximum supported total data rate to the panel (both driver ICs) is over 80 Gb/s. The panel is driven by 32 parallel differential lanes, with each lane running at up to approximately 2.6 Gb/s.
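The quoted interface figures are self-consistent, as a quick check shows (a sketch; lane-level overheads such as line encoding are ignored):

```python
lanes = 32
lane_gbps = 2.6   # approximate per-lane rate from the text

aggregate = lanes * lane_gbps
print(f"aggregate link capacity ~= {aggregate:.1f} Gb/s")  # ~83 Gb/s, i.e. "over 80 Gb/s"
```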

As shown in Fig. 3, the panels should be provided as a pair. To maximize viewable pixels towards the nose, at least one side of the panel should be designed with a narrow bezel. We designed one side of the panel without any circuits or power lines, and the bezel width along that edge is 1.7 mm. The scan drivers support bidirectional driving for various image compositions of left and right panels.

2.1.3 Viewing angle

As previously described, viewing cone performance is related to OLED cell gap, black matrix area, bank open area, and misalignment between the color filters and anodes. Also, because the subpixels are rectangular, subpixel orientation gives fundamentally different viewing cones between horizontal and vertical orientations. For this panel, we define viewing angle in terms of color shift: Δu′v′ ≤ 0.02.

The viewing angle along the long axis of a subpixel is wider than along the short axis, as shown in Fig. 4. Our display has a native 4:5 portrait aspect ratio, but it is used in a landscape orientation in an HMD to achieve appropriate horizontal and vertical FoV. Our pixel orientation is represented in Fig. 4(b).

Figure 5 shows Δu′v′ measurement results of horizontal viewing angle dependence. Green exhibits the best viewing angle because it has no contrasting color subpixels along the horizontal direction, as shown in Fig. 4(b). Blue exhibits the smallest viewing angle, but its Δu′v′ remains below 0.02 at ±30°. The viewing angle of white is ±55° for Δu′v′ equal to 0.02.

2.1.4 Panel driving for VR

Higher refresh rates reduce motion-to-photon latency of VR displays. This display was designed to refresh at up to 120 Hz. To reduce motion blur, this display also supports short persistence illumination. For example, Fig. 6 shows a global shutter drive scheme with approximately 80% of the frame time used for writing pixel data and 20% for light emission. Pixels do not emit light until the pixel array writing is complete. After addressing, the full pixel array emits light simultaneously. At 120 Hz refresh rate, our display supports an illumination duration of up to 1.65 ms. The panel and driving circuitry were designed to support the peak current draw during the illumination time.

High density and fast driving are challenges for a TFT backplane. VR displays require fast optical response time to reduce motion artifacts. The response time of an OLED display is related to TFT design and pixel circuit characteristics. For mobile OLED displays, p-type LTPS technology is considered mainstream, but it is susceptible to a "ghost image" artifact that appears when the display is unable to reach the target brightness level in the first frame after changing the image. In order to achieve high resolution and fast driving speed, n-type LTPS TFTs, which have higher mobility and lower hysteresis characteristics than p-type, were chosen for the TFT backplane.9,16

    2.2 Foveated rendering and transport

Head-mounted display systems differ from direct-view displays in a number of ways that impact how content can be rendered and displayed on them. HMDs include optics (lenses, etc.) that have spatially varying resolving performance; for example, the center of a lens usually has sharper image quality than the periphery. Additionally, if the system has a very wide FoV, the periphery of the image may be outside

FIGURE 2 — Panel configuration of 4.3″ VR OLED.


the area to which the user can comfortably roll their eyes to view with their fovea. HMDs are also usually head-tracked, so the user is able to turn their head to keep content of interest near the center of their FoV. These factors all support image "foveation" for HMDs, in which only a subset

of pixels are rendered and displayed at high resolution while the others use lower resolution. The total number of pixels rendered is much smaller than the native pixel count of the display; therefore, lower bandwidth is required, and low power mobile application processors can drive high acuity, high pixel count HMDs. Foveated rendering and transport are critical elements for implementation of standalone VR HMDs using this 4.3″ OLED display.

    FIGURE 3 — Panel configuration of two OLED displays for a VR headset.

    FIGURE 4 — Viewing angle dependence on subpixel orientation.

    FIGURE 5 — Measured Δu’v’ over horizontal viewing angle.

    FIGURE 6 — Global shuttering for short persistence illumination.


With the use of eye tracking, the foveated (high acuity) region can be made very small (typically less than ±15°) relative to the overall FoV.17 However, even without eye tracking, the image may be separated into regions with different acuity so the image matches the natural roll-off of the system optics and the HVS's low peripheral acuity.

The term "foveation" as used here has two parts: foveated rendering and foveated transport. Foveated rendering is a technique to reduce rendering computation in the GPU. Foveated transport is a technique for arranging the rendered pixel data for transmission from the GPU to the display. The display logic then processes the image data of the different regions to create an image at the native pixel count of the display. A conventional (unfoveated) image is typically sent as a serialized raster, with horizontal and vertical blanking regions. In a foveated system, multiple regions with different resolutions must be rendered and transmitted. In the system developed here, the regions are concatenated at the GPU into a single image frame with a nonstandard pixel count, along with a few bytes of image metadata to direct image reconstruction, and blanking regions.

2.2.1 Foveated rendering

Foveated rendering reduces the computation load on the GPU by separating the image to be rendered into higher and lower resolution regions. Multiple rendering passes are made for each frame generated by the application. For the same head pose and same scene, two (or more) renders are generated: a low acuity render that uses a relatively low pixel count to represent a wide FoV, and a high acuity render that uses a relatively high pixel count to represent a narrow FoV.18

In Fig. 7, the lower acuity (LA) region is shown in green and the higher acuity (HA) region is shown in yellow. Because parts of the LA region are overlapped by the HA region, we blank the occluded pixels, shown in black, to reduce rendering overhead. After rasterization, both the LA and HA content are warped to correct for the lens distortion in the HMD, as shown in the second stage of the figure. This warping causes a barrel distortion in the resulting images to counteract the pincushion artifact of a magnifying VR lens. Since the output distorted images are no longer rectangular, we render the original imagery at a larger pixel count so that a rectangular region can be cropped in the distorted image that matches the desired display pixel count, shown in the third stage.

Each region is processed independently, allowing for easy scalability to more than two regions if necessary. Lastly, the GPU composites these output renderings into a single display frame formatted for foveated transport.

2.2.2 Foveated transport

Once the GPU has rendered the different regions, the pixel data is reshaped for transport. Consider for example an image with two rendered regions: one high acuity, one LA. The high acuity (HA) region may be relatively small, for example, 640 × 640 pixels. The LA region may be larger, for example, 1280 × 1600 pixels. These two regions are combined into a single image frame by reshaping the HA pixel data to be the same width as the LA pixel data. In this case, the 640 × 640 pixels are arranged in a block that is 1280 × 320 pixels. This is not a scaling operation: the pixels are not modified, only the arrangement is changed. This HA block is prepended to the LA block, making an overall image that is 1280 × 1920. A line of metadata is added at the top, as is another blank line between the HA and LA blocks to keep the total number of lines even (which simplifies parts of the system). The total image sent to the display electronics is 1280 × 1922. For other possible resolutions, if the HA region does not fit evenly in the LA width, zero-padding pixels may be added. The concatenated image arrangement is shown in Fig. 8.
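The reshaping step can be sketched with NumPy (an illustration only; the paper does not specify the exact packing order, so packing two consecutive 640-pixel HA rows per transport line is an assumption):

```python
import numpy as np

# Example region sizes from the text (RGB, 8 bits per channel).
ha = np.zeros((640, 640, 3), dtype=np.uint8)    # high acuity render
la = np.zeros((1600, 1280, 3), dtype=np.uint8)  # low acuity render

# Rearrange (not scale) the HA block to the LA width: 640 x 640 -> 320 x 1280.
ha_block = ha.reshape(320, 1280, 3)

meta = np.zeros((1, 1280, 3), dtype=np.uint8)   # one line of metadata
pad = np.zeros((1, 1280, 3), dtype=np.uint8)    # blank line keeps total line count even

frame = np.vstack([meta, ha_block, pad, la])
print(frame.shape)                               # (1922, 1280, 3): 1280 x 1922 as sent
```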

The concatenated image may be sent over a physical layer, such as MIPI DSI or DisplayPort, in the conventional way. It may also be compressed to reduce physical layer bandwidth using DSC8 or other compression algorithms. Note that compression algorithms dependent on spatial correlations may have difficulty with the reshaped regions, but one-dimensional compression should still perform well.

The metadata may contain information about the size of the HA and LA regions and the position of the HA region in the final processed image. Since the metadata is sent with the image data, no additional synchronization or timestamps

    FIGURE 7 — Foveated rendering creates separate low and high acuity regions that are combined fortransport and blended later.


are required. If the system includes eye tracking, the rendering system may change the position of the HA region every frame as the eyeball position changes.

The foveated rendering, rearrangement for foveated transport, metadata calculation and insertion, optional compression, and physical layer transmission may all be performed on conventional GPU hardware. No hardware modifications are required.

At the panel, custom logic is required to reconstruct the image for presentation at the panel's native pixel count. The panel's foveation logic receives the foveated frame image data. The metadata is parsed to extract frame attributes. All of the HA image data is buffered. The LA image data is passed through upscaling logic and a few lines are buffered.

In this example, the data is upscaled by 3× in both the x and y directions. Suitable upscaling algorithms include bilinear or nearest neighbor. The input 1280 × 1600 image is therefore upscaled to 3840 × 4800, the native pixel count of our display. The HA region is composited at the appropriate location (defined by values sent in the metadata). Logic may be added to blend the HA and LA regions, or the blending may be performed around the perimeter of the HA region during the software rendering process. The resulting image is sent to the driver ICs with a conventional raster scan.
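A software model of this reconstruction step might look like the following (a sketch using nearest-neighbor upscaling; the function name and the HA placement are illustrative, not from the paper):

```python
import numpy as np

def reconstruct(la, ha, ha_pos, scale=3):
    """Upscale the LA region and composite the HA block at the metadata-supplied position."""
    # Nearest-neighbor upscale by pixel replication (bilinear is another option).
    full = np.repeat(np.repeat(la, scale, axis=0), scale, axis=1)
    top, left = ha_pos           # position carried in the frame metadata
    h, w = ha.shape[:2]
    full[top:top + h, left:left + w] = ha
    return full

la = np.zeros((1600, 1280, 3), dtype=np.uint8)    # low acuity input
ha = np.full((640, 640, 3), 255, dtype=np.uint8)  # high acuity input
out = reconstruct(la, ha, (2000, 1600))           # hypothetical HA placement
print(out.shape)                                  # (4800, 3840, 3): panel native pixel count
```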

The system should be configured to use an appropriate size and location for the HA region. If the system is eye-tracked, the HA region should move with the viewer's gaze, and in this case may also be quite small, subtending less than 15° of the total FoV.

Note that in this example LA data is transmitted for the entire pixel array, including in the image region that will be overwritten with the HA image data. Optimizations are possible both to avoid rendering this part of the LA image and to reduce transmission bandwidth by not transmitting this overlapping LA data. It is also possible to extend this scheme to more than two regions. Intermediate regions should be upscaled by intermediate values. For example, the HA region may still be passed to the display unscaled, but a "middle acuity" region might be upscaled by 2× in x and y, and the LA region upscaled by 4× or more in x and y. As additional regions are added, the overhead of overlapping regions increases and should be avoided.

    2.2.3 Display foveation logic implementation

    The foveation electronics for this display were implemented in an FPGA, suitable for porting to an ASIC. The input is the foveated transport video stream (either DisplayPort or MIPI DSI). Logic in the FPGA converts the video stream to the appropriate format for our display, as shown in Fig. 9. The incoming image data is partially buffered but is not stored in a full frame buffer. VR systems typically have strict latency requirements, so the foveation logic must minimize latency. Since frame rate conversion is not possible without a frame buffer, the frame rates of the input and output streams must be locked. Logic was added to synchronize the output stream to a frequency-locked, phase-offset copy of the input vertical sync signal.

    FIGURE 8 — Foveated transport packages the LA and HA regions into a single frame.

    FIGURE 9 — Block diagram of foveation logic.

    Journal of the SID, 2018

    3 Results

    3.1 Panel performance

    A photograph of our display is shown in Fig. 10. The display specifications are in Table 2.

    Figure 11 shows the display response time measurement when the image changes from black to white. One frame time is 8.33 ms, as it is driven at 120 Hz. Addressing data takes 6.68 ms, and OLED light emission uses 1.65 ms. The entire pixel array is turned on simultaneously after data addressing is complete. The response time after the global illumination is turned on is around 10 μs. The brightness of the first frame reaches the target brightness because of the n-type LTPS backplane. A p-type LTPS-based OLED display may take two or three frames to reach the target brightness. Even though an n-type LTPS backplane needs more process steps and higher temperature conditions, it can provide outstanding temporal characteristics for high performance VR systems.
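    The frame timing quoted above can be verified with simple arithmetic (a consistency check of the stated numbers, not panel driving code):

```python
# Consistency check of the 120 Hz global-illumination frame timing.
frame_time_ms = 1000 / 120                   # 8.33 ms per frame at 120 Hz
addressing_ms = 6.68                         # data addressing time from the text
emission_ms = frame_time_ms - addressing_ms  # remaining time for OLED emission
duty = emission_ms / frame_time_ms           # illumination duty cycle (~20%)
```

The ~20% duty cycle at 120 Hz matches the 20% illumination duration used for the brightness specification in Table 2.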

    An OLED LTPS backplane needs mura compensation. The internal compensation methods used in mobile phone OLED displays are not suitable for high ppi panels. We employed an external compensation approach, and Fig. 12 shows photographs of our panel's image quality before and after mura compensation.

    Figure 13 shows an enlargement of the image shown in Fig. 10. The image shown was photographed through VR optics, although no distortion correction was applied to the image. The image quality is very good, with no visible screen door effect even when viewed through high quality optics in a wide FoV HMD.

    3.2 Panel driving using foveated rendering and transport

    We implemented the foveated rendering software on a standard mobile SoC and the foveation logic in an FPGA. Foveated rendering implementation details and performance optimizations are beyond the scope of this paper.18

    The MIPI DSI interface between the mobile SoC and FPGA was limited to 6 Gb/s (uncompressed), which implies a 250 MHz pixel clock at 24 bits/pixel. We settled on foveated transport pixel counts near 1280 × 1920/75 Hz to fit within this bandwidth limitation. Both our SoC and FPGA can support DSC for up to a 3× increase in pixel count, but this image size is well matched to the GPU's rendering capability. A higher performance SoC could be used with DSC in future systems to increase image size without requiring higher interface bandwidth.
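    As a rough check of this link budget (plain arithmetic on the figures quoted above, not interface code):

```python
# MIPI DSI link budget: 6 Gb/s uncompressed at 24 bits/pixel.
link_bw_gbps = 6.0
bits_per_pixel = 24
pixel_clock_hz = link_bw_gbps * 1e9 / bits_per_pixel  # 250 MHz pixel clock

# Active pixel rate of the chosen 1280 x 1920 / 75 Hz transport stream;
# the remaining headroom covers blanking and protocol overhead.
active_rate_hz = 1280 * 1920 * 75                     # 184.32 MHz
```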

    The FPGA foveation processing logic is similar to a conventional image upscaler. For our display, the output data rates are quite high, but the input rate matches existing mobile display panels. Recall from the discussion above that the theoretical 9600 × 9000/120 Hz display required a 14.3 GHz pixel clock and 343 Gb/s to the display. Our implementation optimizes a number of parameters to reduce data rates.

    First, our panel pixel count is 3840 × 4800, providing a substantial bandwidth reduction while still matching our optical system's capabilities. Second, while our display is capable of 120 Hz refresh, we operate at 75 Hz to allow more complex rendering on our mobile SoC. Third, logic in the FPGA performs subpixel rendering, so the bandwidth to the driver ICs is 20 bits/pixel (10 bits/subpixel × 2 subpixels/pixel) rather than 24 bits/pixel (8 bits/subpixel × 3 subpixels/pixel).

    To achieve our brightness target, the illumination duration is 20% of the frame time, a persistence of 2.7 ms at our 75 Hz refresh rate. The line time may be calculated as previously:

    line time = ((1/75) × 0.8) / 4800 = 2.2 μs

    FIGURE 10 — Photograph of 4.3″ OLED panel.

    TABLE 2 — 4.3″ High resolution OLED-on-glass specifications.

    Attribute          Value
    Size (diagonal)    4.3″
    Subpixel count     3840 × 2 (either RG or BG) × 4800
    Pixel pitch        17.6 μm (1443 ppi)
    Brightness         150 cd/m² @ 20% duty
    Contrast           >15,000:1
    Color depth        10 bits
    Viewing angle²     30° (H), 15° (V)
    Refresh rate       120 Hz

    ²Viewing angle is defined in terms of color shift: Δu′v′ ≤ 0.02.

    Vieri et al. / An 18 megapixel OLED for HMDs

    This is 3.4× faster than a 4k/60 display. Vertical blanking consumes 1200 lines, and horizontal blanking requires 512 pixels. The pixel clock may also be calculated as previously:

    pixel clock = (3840 + 512) × (4800 + 1200) × 75 Hz = 1.96 GHz

    This is a challenging pixel clock, 3.3× faster than 4k/60, but it is 7× slower than the theoretical display. Bandwidth to the driver ICs is helped by the SPR logic:

    driver IC payload bandwidth = 1.96 GHz × 20 bits/pixel = 39.2 Gb/s

    The interface between our FPGA and driver ICs requires 4 bits of overhead for each 20-bit pixel (similar to 8b/10b encoding), so total bandwidth is higher than the payload bandwidth but more efficient than 8b/10b systems. The driver IC interface is spread over 32 differential pairs, with each pair running at 1.47 GHz (including overhead), which is reasonable for interconnect to chip-on-glass driver ICs.
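    The timing arithmetic above can be reproduced in a few lines (a consistency check using the figures quoted in the text):

```python
# Panel timing at 75 Hz operation, reproducing the calculations above.
refresh_hz = 75
active_x, active_y = 3840, 4800   # native pixel count
blank_x, blank_y = 512, 1200      # horizontal / vertical blanking

# Line time: 80% of the frame is used for addressing, over 4800 lines.
line_time_s = (1 / refresh_hz) * 0.8 / active_y                # ~2.2 us

# Pixel clock including blanking.
pixel_clock_hz = (active_x + blank_x) * (active_y + blank_y) * refresh_hz

# Driver IC payload bandwidth after subpixel rendering (20 bits/pixel).
payload_gbps = pixel_clock_hz * 20 / 1e9                       # ~39.2 Gb/s

# 4 bits of coding overhead per 20-bit pixel, spread over 32 pairs.
lane_rate_ghz = pixel_clock_hz * 24 / 32 / 1e9                 # ~1.47 GHz
```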

    The FPGA and driver ICs have been tested at 120 Hz (3.13 GHz pixel clock; 62.7 Gb/s bandwidth; 2.35 GHz driver IC link), but 75 Hz refresh reduces load on the GPU while still providing reasonable latency.

    FIGURE 11 — Response time measurement of prototype 4.3″ OLED.


    FIGURE 12 — Photographs before (a) and after (b) mura compensation.

    FIGURE 13 — Photograph of 4.3″ OLED fabricated panel through VR optics.


    Within the FPGA, we found the vertical part of the LA upscaler had the most challenging timing constraints. This is similar to conventional upscalers. We implemented a bilinear upscaler and composited the HA region without blending, since our foveated rendering software creates a transition zone around the HA perimeter. MIPI DSI and DisplayPort receivers are commonly implemented in FPGAs, and timing closure was relatively straightforward. The FPGA outputs to the driver ICs have high total bandwidth, but individual lanes are well within the capabilities of FPGA gigabit transceivers.

    The FPGA logic memory requirements were also reasonable. Our FPGA includes an embedded microcontroller, so some memory is allocated for it. A few of the logic blocks require line buffers at the display's native pixel count. Each line buffer requires 3840 × 2 × 10 = 76.8 kb. The HA region buffer requires 640 × 640 × 2 × 10 = 8.2 Mb. To support multiple, larger regions, each region may need a buffer of 10–12 Mb.
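    The buffer sizing works out as follows (simple arithmetic on the figures above, using 10 bits per subpixel and 2 subpixels per pixel after subpixel rendering):

```python
# Buffer sizing for the FPGA foveation logic.
bits_per_pixel = 2 * 10  # 2 subpixels/pixel x 10 bits/subpixel after SPR

# One line buffer at the display's native width of 3840 pixels.
line_buffer_kb = 3840 * bits_per_pixel / 1e3     # 76.8 kb

# Buffer for the full 640 x 640 high-acuity region.
ha_buffer_mb = 640 * 640 * bits_per_pixel / 1e6  # ~8.2 Mb
```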

    Figure 14 shows a photograph of the display taken through a magnifying loupe, showing the boundary between the high acuity region and the upscaled low acuity region. No blending has been applied to the perimeter of the high acuity region so that the boundary may be seen.

    4 Conclusions

    We have designed and fabricated a very high pixel count (>18 MP), ultra-high ppi (1443 ppi) OLED display for VR applications. This is currently the world's highest resolution OLED-on-glass display. White OLED material and color filters were used to meet the high ppi requirements, and an n-type LTPS backplane was used to meet the panel driving and image ghosting requirements. Foveation logic was implemented in an FPGA to convert the low bandwidth foveated image rendered on a mobile processor to the high bandwidth stream required by the display. The result is a stunning visual experience in a mobile VR system.

    Acknowledgments

    The authors would like to thank Florian Kainz for the photograph used to drive the display in Fig. 11. We would also like to thank Dr. David Hoffman for helpful reviewing and editing and Dr. Eric Turner for contributions in the foveated rendering section.

    References

    1. C. Bavor, "Enabling rich and immersive experiences in virtual and augmented reality," keynote speech, Display Week 2017.

    2. N. Balram, "Designing great displays for virtual and augmented reality," keynote, IDMC 2017.

    3. H. M. Traquair, An Introduction to Clinical Perimetry, Chpt. 1, Henry Kimpton, London (1938), pp. 4–5.

    4. I. Goradia et al., "A review paper on Oculus Rift & Project Morpheus," International Journal of Current Engineering and Technology, 4, No. 5, 3196–3200 (2014).

    5. J. E. Farrell et al., "Predicting flicker thresholds for video display terminals," Proc. SID, 28, No. 4, 449–453 (1987).

    6. A. B. Watson, "High frame rates and human vision: a view through the window of visibility," SMPTE Motion Imaging Journal, 122, No. 2, 18–32 (2013).

    7. Video Electronics Standards Association, VESA DisplayPort Standard, Version 1.4 (2016). DP_v1.4_mem.pdf, downloaded from vesa.org.

    8. F. Walls and A. MacInnes, "VESA display stream compression: an overview," SID International Symposium, 360–363 (2014).

    9. R. Chaji and A. Nathan, "LTPS vs oxide backplanes for AMOLED displays: system design considerations and compensation techniques," SID Symposium Digest of Technical Papers, 45, No. 1, 153–156 (2014).

    10. H. J. Shin et al., "Novel OLED display technologies for large-size UHD OLED TVs," SID Symposium Digest of Technical Papers, 46 (2015), https://doi.org/10.1002/sdtp.10225.

    11. C. C. Chen et al., "Novel approaches to realize high-resolution AMOLED display," J. Soc. Inf. Disp., 23, No. 6, 240–245 (2015).

    12. J. S. Yoon et al., "New pixel structure with high g-to-g response time for large size and high resolution OLED TVs," SID Digest, 1010–1013 (2013).

    13. Y. K. Jung et al., "3 stacked top emitting white OLED for high resolution OLED TV," SID Digest (2016).

    14. Y. Onoyama et al., "0.5-inch XGA micro-OLED display on a silicon backplane with high-definition technologies," SID Digest, 950–953 (2012).

    15. K. Yokoyama et al., "Ultra-high-resolution 1058-ppi OLED displays with 2.78-in size using CAAC-IGZO FETs with tandem OLED device and single OLED device," Journal of the SID, 24, No. 3, 159–167 (2016).

    16. K. A. Stewart and J. F. Wager, "Thin-film transistor mobility limits considerations," J. Soc. Inf. Disp., 24, No. 6, 386–393 (2016).

    17. A. T. Bahill et al., "Most naturally occurring human saccades have magnitudes of 15 degrees or less," Investigative Ophthalmology & Visual Science, 14, No. 6, 468–469 (1975).

    18. B. Bastani et al., "Foveated pipeline for AR/VR head-mounted displays," Information Display, 33, No. 6 (2017).

    FIGURE 14 — Photograph of 4.3″ OLED showing foveation regions.



    Carlin Vieri received his BS degree in Electrical Engineering and Computer Science from UC Berkeley and MS and PhD degrees, also in EECS, from the Massachusetts Institute of Technology. He has spent the last 20 years working on various display technologies, including field sequential LCOS microdisplays, multimode transflective LCDs, large area high pixel count displays, and OLED and LCD displays and electronics for mobile and wearable devices. He has been at Google since 2013.

    Grace Lee received her PhD and BS degrees in Chemical Engineering from National Taiwan University and her MS degree in Biomedical Engineering from National Yang-Ming University, Taiwan. Grace's life-long passion is to develop new tools to enhance human cognitive ability. In pursuit of her passion, for the past 15 years, Grace has driven advances in display manufacturing techniques and new product/process optimization and integration that included displays for TVs, monitors, notebooks, and tablets. Grace joined Alphabet's X in 2014 and Google's Daydream team in 2016, where she currently manages advanced-display development projects for AR/VR.

    Dr. Nikhil Balram is senior director of engineering for AR/VR at Google. He has won numerous awards, including most recently the 2016 Otto Schade Prize from the Society for Information Display. Products and technologies developed by teams led by him have been used by millions of people. Dr. Balram is also an adjunct professor of electrical engineering at Carnegie Mellon University (CMU), a guest professor of design and innovation at the Indian Institute of Technology in Gandhinagar, India, and a former visiting professor of vision science at the University of California, Berkeley. He has 120 US and international patents granted or pending, more than 60 technical publications, and has given over 30 keynote speeches at major conferences and events worldwide. He received his BS, MS, and PhD in electrical engineering from CMU.

    Sang Hoon Jung received his BS, MS, and PhD degrees in Electrical Engineering from Seoul National University, Korea, in 1999, 2001, and 2005, respectively. He has been developing OLED display technologies since he joined LG Display in 2005. He is a senior research engineer at LG Display Laboratory, and he is currently focusing on high resolution OLED technology.

    Joon Young Yang, PhD, joined LG Electronics in 1995 and was a team leader at LG Display for 9 years, working on TFT processes, devices, and panel designs. Since 2016, he has been head of the OLED advanced research division, with responsibility for OLED devices, transparent OLED panels, and flexible OLED panels. He received his MS and PhD degrees in electrical engineering from Korea University, Seoul, Korea.

    Soo Young Yoon is the director of LG Display Laboratory. Previously, he developed large-area transparent flexible OLED displays as the leader of the OLED research division. He is currently responsible for next-generation display development and core technology as well as OLED. He received his PhD in Physics from Hanyang University, Korea, in 1999 and worked at Philips Research Center, UK, until 2002. Since then, he has been working at LG Display and has developed many display-related technologies.

    In Byeong Kang received the BS and MS degrees in Electronic Engineering from Hanyang University, Seoul, Korea, the PhD degree in Electrical Engineering from the University of South Australia, Adelaide, SA, Australia, in 1998, and the MBA degree from Helsinki University, Helsinki, Finland, in 2004. He is currently the Executive Vice President and Chief Technology Officer of LG Display, Seoul, Korea.
