portable visual function diagnostics
TRANSCRIPT
Petteri Teikari, PhD – http://petteri-teikari.com/
Portable Visual Function Diagnostics
Deep learning based data-driven ophthalmology
beyond unimodal “magical” scalar measures
Future Trends for Healthcare
at Healthtech Funding Forum – Advancing Innovation in Digital Health, September 27, 2017 | Wellcome Trust, London, UK
Dr Vishal Gulati, responsible for healthcare deals at Draper Esprit plc, a listed Patient Capital VC firm, interpreted for “Healthcare Design”:
“If one were to design healthcare systems now, they would not look like the contemporary ones. … Just look at how the Chinese are automating their healthcare, and at startups that target health in general rather than solving problems at the hospital only once people have already got sick.”
Lost in Thought — The Limits of the Human Mind and the Future of Medicine. Ziad Obermeyer, M.D., and Thomas H. Lee, M.D. N Engl J Med 2017; 377:1209-1211, September 28, 2017. DOI: 10.1056/NEJMp1705348
If a root cause of our challenges is complexity, the solutions are unlikely to be simple. Asking doctors to work harder or get smarter won’t help.
There is little doubt that algorithms will transform the thinking underlying medicine. The only question is whether this transformation will be driven by forces from within or outside the field. If medicine wishes to stay in control of its own future, physicians will not only have to embrace algorithms, they will also have to excel at developing and evaluating them, bringing machine-learning methods into the medical domain.
Future Trends for Healthcare
AI Is Your Doctor’s Next Best Friend. Mike McCormick, Jan 19 2017. https://mccormick.vc/ai-is-your-doctors-next-best-friend-2bb33e7cf4e8
Where will intelligent machines affect healthcare?
Eventually machine intelligence will touch virtually all aspects of healthcare. Four areas already being affected are…
● Diagnostics and detection: Examples: radiology, tissue analysis, genomic insights, chatbots, and patient monitoring via external sensors, wearables and implantables.
● Treatment and patient care: Examples: personalized precision drugs and treatment plans, remote patient monitoring, and automated real-time treatment adjustments.
● Drug development: Deep learning will augment the pharmaceutical industry’s increasingly costly R&D processes by identifying patterns in molecular interactions at previously unheard of levels of granularity and efficiency. Machine learning will also better match patients to clinical trials leading to better patient outcomes and faster drug approvals.
● Informatics, system-design and data management: Machines will bring efficiency to the interactions that take place within the complex web of stakeholders and processes that comprise modern healthcare systems.
Barriers and risk factors
Healthcare, perhaps more than most industries, presents several barriers and risk factors to new technologies and would-be disruptors:
High stakes: The literal life-and-death nature of healthcare makes for a tiny margin of error in patient-facing technologies.
Legal and regulatory issues: Healthcare is among the world’s most heavily regulated industries.
Data security and access: Making data accessible yet secure is crucial.
Data quality: Though the healthcare industry is sitting on ever-growing mountains of data, the quality and relevance of the datasets isn’t always great, and accessing meaningful datasets can be challenging, particularly for startups.
Causal complexity in biology and disease: Our understanding of biological systems and diseases is incomplete. The cellular progression of cancers and the complexity of moment-to-moment neural interactions, for example, are processes we’re far from fully understanding.
Misaligned incentives: Disparate stakeholders are not always incentivized to share data or play nicely with one another.
Bioethical considerations: These issues will become stickier and more difficult to parse as the lines between biology and technology blur. For example, what will be the ethical implications of advanced genetic engineering that allows for “designer” babies crafted to their parents’ exact specifications?
Future Trends for Healthcare
The FDA (and regulators in general) is slowly waking up to the situation
"When you start adding analytical AI for any image analysis—think of detecting cancer or some other serious disease—at that point people need to know when that detection means something and is real," Bakul Patel, FDA’s associate director for digital health, says.
https://spectrum.ieee.org/the-human-os/biomedical/devices/fda-assembles-team-to-oversee-ai-revolution-in-health
New AI Device for Diabetes Eye Screening to Complete FDA Clinical Trial
IDx, an early-stage medical device company focused on developing software-based algorithms that can identify disease in medical images is currently conducting an FDA clinical trial to obtain clearance for its first product, IDx-DR by the end of summer 2017. IDx-DR is a screening solution for diabetic retinopathy. IDx also has algorithms in development for the detection of macular degeneration, glaucoma, Alzheimer’s disease, cardiovascular disease, and stroke risk. http://hitconsultant.net/2017/07/06/new-ai-device-diabetes-eye-screening/
Future Trends for Healthcare
Digitalizing hospital processes
The Hospital of the Future is a Network. February 17, 2017 – Jeroen Tas, Chief Innovation & Strategy Officer at Philips. https://www.linkedin.com/pulse/hospital-future-network-connecting-care-continuous-health-jeroen-tas/
How Google DeepMind's Streams app is laying the foundations for artificial intelligence-powered healthcare in the NHS. Wednesday 10 May 2017. http://www.cityam.com/264463/google-deepminds-streams-app-laying-foundations-artificial
DeepMind's work with the NHS to help alert doctors to patients whose health is at risk is laying the groundwork for delivering information that one day will be powered by artificial intelligence.
The Google-owned British pioneer is working with the Royal Free London NHS Trust on a smartphone app called Streams. It currently uses an NHS-created algorithm to provide information to clinicians relating to acute kidney injury (AKI), with DeepMind creating the method of delivery, which includes "breaking news"-style notifications. It does not use AI, despite the company's expertise in the technology.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5333321/
Solving interoperability and the data pipeline would allow third-party “AI apps” and truly enable efficient deep learning mining of patient records, for example through FHIR (the “Fire” API).
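As a rough illustration of what such third-party access looks like, the sketch below builds a FHIR R4 Observation search query and extracts numeric values from a returned searchset Bundle. The server URL, patient ID and observation code are all hypothetical placeholders, and only the Python standard library is used.

```python
import json
from urllib.parse import urlencode

# Hypothetical FHIR server base URL -- not a real endpoint.
FHIR_BASE = "https://fhir.example.org/r4"

def observation_search_url(patient_id: str, code: str) -> str:
    """Build a FHIR R4 Observation search URL (code is a placeholder)."""
    params = urlencode({"patient": patient_id, "code": code, "_format": "json"})
    return f"{FHIR_BASE}/Observation?{params}"

def extract_values(bundle_json: str) -> list:
    """Pull numeric valueQuantity entries out of a FHIR searchset Bundle."""
    bundle = json.loads(bundle_json)
    values = []
    for entry in bundle.get("entry", []):
        qty = entry.get("resource", {}).get("valueQuantity", {})
        if "value" in qty:
            values.append(qty["value"])
    return values

# Minimal fake Bundle, shaped like a FHIR server's search response.
sample = json.dumps({
    "resourceType": "Bundle", "type": "searchset",
    "entry": [{"resource": {"resourceType": "Observation",
                            "valueQuantity": {"value": 21.5, "unit": "mm[Hg]"}}}],
})
print(observation_search_url("123", "1234-5"))
print(extract_values(sample))  # [21.5]
```

The point of a standard like FHIR is exactly this: a third-party "AI app" only needs the resource schema, not knowledge of each hospital's internal EHR layout.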
Future Trends for Healthcare
Systems thinking, rather than module-based optimization that reduces overall system efficiency
https://hbr.org/2017/06/hospitals-are-dramatically-overpaying-for-their-technology
Command Center to Improve Patient Flow
http://www.hopkinsmedicine.org/news/articles/command-center-to-improve-patient-flow
https://www.ahcmedia.com/articles/139933-hopkins-command-center-improves-quality-with-coordination
http://www.modernhealthcare.com/article/20161126/MAGAZINE/311269980
Future Trends for Healthcare
Deep learning image analysis for even one type of image, let alone full EHR mining, is constrained by poor data infrastructure and bad curation, with missing segmentation and pathology class labels.
Labeling needs medical expertise, making the process harder than, for example, crowdsourcing dog-vs-cat labels.
Crowdsourcing to Evaluate Fundus Photographs for the Presence of Glaucoma Wang et al. (2017) doi: 10.1097/IJG.0000000000000660
To assess the accuracy of crowdsourcing for grading optic nerve images for glaucoma using Amazon Mechanical Turk before and after training modules.
Gamification of electron-microscope segmentation through the EyeWire project run by Sebastian Seung. A Videogame That Recruits Players to Map the Brain | WIRED. EyeWire, A Game to Map the Brain, from MIT
Voxeleron's Orion offers well-designed augmented intelligence for efficient collaboration between the AI and the person segmenting the retinal layers (without yet gamifying the experience)
By ROWLAND MANTHORPE 23 Sep 2017
http://www.wired.co.uk/article/harri-valpola-curious-ai-artificial-intelligence-third-wave
Harri Valpola, 44, is founder of The Curious AI Company (co-founded with Antti Rasmus, Timo Haanpää and Mathias Berglund), a 20-person artificial intelligence startup based in Helsinki that focuses on semi-supervised learning. It has just raised $3.67 million in funding – small change compared to many tech funding rounds, but an impressive sum for a company that has no products and is only interested in research.
https://doi.org/10.1016/j.patcog.2016.09.030
Future Trends for Healthcare
Rethinking medicine – novel ways to deliver healthcare. By Christina Farr, April 7, 2017
https://www.technologyreview.com/s/604053/can-digital-therapeutics-be-as-good-as-drugs/
Jose Hamilton: “The "real" digital therapy won't be a competitor to biological therapy. But a platform where psychological treatments (digital or personal), biological (pills) or even physical would be optimized, personalized and accountable.“
To distinguish themselves from “wellness” gadgets, digital therapeutics companies tend to carry out clinical tests and sometimes seek regulatory approvals
Future Trends for Healthcare
Rethinking medicine – how the clinical profession should change, with those not changing perishing away
Digital evangelists argue that intelligent machines will be able to incorporate the latest data and research immediately, but that is both questionable and a potential weakness. Clinical trials vary in scale and quality, and indiscriminate inclusion would inevitably lead to mistakes. Digital hardliners would argue that machines should judge the quality of the research, but for the foreseeable future the expertise of doctors will be essential to deciding the validity of new approaches.
So perhaps one of the most powerful effects of artificial intelligence will be, perversely, to make healthcare more human and personal. It will remove the dependency on doctors’ fallible memory and incomplete knowledge, and free them to use machine-generated information to work with patients to shape their specific treatment.
11 March 2017https://www.theguardian.com/healthcare-network/2017/mar/11/artificial-intelligence-nhs-doctor-patient-relationship
https://www.newyorker.com/magazine/2017/04/03/ai-versus-md
Geoffrey Hinton now qualifies the provocation. “The role of radiologists will evolve from doing perceptual things that could probably be done by a highly trained pigeon to doing far more cognitive things,” he told me. His prognosis for the future of automated medicine is based on a simple principle: “Take any old classification problem where you have a lot of data, and it’s going to be solved by deep learning. There’s going to be thousands of applications of deep learning.”
Future Trends for Healthcare
Rethinking medicine – making doctors more human again
https://www.technologyreview.com/s/609060/put-humans-at-the-center-of-ai/
Future Trends for Healthcare
All the “digital natives” will get into the play
January 11, 2017: Nokia's vision for digital health: from AI analytics to connected hairbrushes. Nokia bought French health device manufacturer Withings earlier this year to take on the IoT healthcare market. What's next for the company? http://www.zdnet.com/article/nokias-vision-for-digital-health-from-ai-analytics-to-connected-hairbrushes/
January 24, 2017: The Chan Zuckerberg Initiative, a philanthropic initiative from Facebook CEO Mark Zuckerberg and his wife Dr. Priscilla Chan, a pediatrician, has acquired a startup, Meta, focused on using AI and machine learning to sift through recently published scientific studies. The Chan Zuckerberg Initiative is a limited liability company focused on the ambitious goal to "cure, prevent, or manage all diseases by the end of the century." At least $3 billion will be allocated toward that goal, all coming out of Chan and Zuckerberg's Facebook shares. http://www.mobihealthnews.com/content/chan-zuckerberg-initiative-acquires-ai-startup-meta-will-offer-its-services-free
April 30, 2017: Google to commercialize artificial intelligence to detect diseases. Lily Peng, product manager of the medical imaging team at Google Research, shared how the US tech giant is using deep learning to train machines to analyze medical images and automatically detect pathological cues, be it swollen blood vessels in the eye or cancerous tumors, during a video conference with the South Korean media hosted by Google Korea.http://m.theinvestor.co.kr/view.php?ud=20170430000162
June 7, 2017: Apple wants a piece of the artificial intelligence pie. Apple’s ResearchKit, which uses iPhones to collect health information and then makes the data available for research, is showing promise after scientists published data on seizures, asthma attacks and heart disease using the tool. While Apple still faces challenges applying ResearchKit’s results to a broader population (most consumers of Apple products are younger, well-off and well-educated), the company seems determined to carve out a niche in healthcare and AI could help its efforts.http://www.healthcaredive.com/news/apple-wants-a-piece-of-the-artificial-intelligence-pie/444393/
July 28, 2017: Here's what to make of Amazon's potential connected health play … Amazon's potential advantages in the connected health device market likely outnumber its disadvantages. http://uk.businessinsider.com/amazons-healthcare-play-2017-7?r=US&IR=T
September 25, 2017: Microsoft hires Iain Buchan, world leader in digital healthcare, to take personalised health to the next level. https://news.microsoft.com/en-gb/2017/09/25/microsoft-hires-world-leader-in-digital-healthcare-to-take-personalised-health-to-the-next-level/
Future Trends for Healthcare
Data is the new gold
And for healthcare (especially public systems) to stay competitive, organizations should think through their data strategy along with monetization schemes
https://hbr.org/2017/06/to-survive-health-care-data-providers-need-to-stop-selling-data
“Most data-driven healthcare IT (HCIT) providers aren’t going to survive. Their business models are at serious risk of failure in the next three to five years. To beat those odds, they need to evolve dramatically, and fast, to a point where they are not selling data at all.”
Future Trends for Eye Care
Future Trends for Eye Care
A similarly slow awakening to “machine medicine” in ophthalmology as in healthcare in general
BIG DATA: CURRENT STATUS AND FUTURE DIRECTIONS. AGENDA, ARVO 2017 | Baltimore, MD. Organizers: Michael F. Chiang, MD, Anne L. Coleman, MD, PhD, FARVO and Seth Blackshaw, PhD
“As highlighted in the perspective by Obermeyer and Lee (2017) on the previous slide, ophthalmologist training needs to keep up to date with the machine learning revolution.
With big data, the role of diagnoses has to be re-valued as even more fine-grained phenotyping becomes possible. Would a patient with DR and glaucoma be just DR + glaucoma, or something slightly different with these coexisting pathologies?”
Future Trends for Eye Care
AI-driven drug discovery for eye care as well
Silicon Valley Computational Drug Startup Takes on Glaucoma
By Tekla S. Perry - Posted 13 Mar 2017 https://spectrum.ieee.org/view-from-the-valley/at-work/start-ups/silicon-valley-computational-drug-startup-takes-on-glaucoma
TwoXAR (Andrew A. Radin) announced a partnership with Santen Inc., the U.S. subsidiary of Japanese ophthalmology company Santen Pharmaceutical, to collaborate on identifying new drug candidates for the treatment of glaucoma.
Benevolent AI is currently largest private AI firm in Europe
21 March 2017https://www.cnbc.com/video/2017/03/21/benevolent-ai-is-currently-largest-private-ai-firm-in-europe.html
http://www.wired.co.uk/article/benevolent-ai-london-unicorn-pharma-startup
Future Trends for Eye Care
Battle of the egos on many fronts.
Optometrists want to upskill themselves and, essentially, make money from surgeries and lucrative anti-VEGF injections
“Both optometrists and ophthalmologists point to experiences in Oklahoma to support their positions.
Bryant said that in Oklahoma, which has allowed expanded work for optometrists the longest, there were only two reported complaints for more than 25,000 procedures.
The ophthalmologists point to a research paper published last October in the medical journal JAMA Ophthalmology (Stein et al. 2016) that found that patients who had a certain type of laser surgery to treat glaucoma had to go back for treatment on the same eye 35.9 percent of the time when an optometrist did the work, as opposed to 15.1 percent of the time when an ophthalmologist did it.”
http://www.newsobserver.com/news/politics-government/state-politics/article131198204.html
MAR 02, 2017 - AAO
Optometrists in Florida Take Brazen Step Toward Primary-Care Provider Status
Proposal sets the bar for audacious assaults on patient safety by attempting to place 100,000-plus non-surgeons on equal footing with ophthalmologists
http://optometrytimes.modernmedicine.com/optometrytimes/news/intravitreal-injections-optometrists
The article, “Implementation of a Nurse-Delivered Intravitreal Injection Service” was published in the June 2014 issue of Eye. The purpose of this study was “to introduce nurse-delivered intravitreal injections to increase medical retina treatment capacity in the United Kingdom.” … “Our preliminary results of a series of 4,000 nurse-delivered injections associated without serious vision-threatening complication is indicative that this procedure can be safely administered by a nurse.” No cases of post-intravitreal anti-VEGF endophthalmitis occurred in this study.
Future Trends for Eye Care
Maintaining a healthy lifestyle as the most obvious first treatment step.
Not every approach needs to be a high-tech, highly scalable digital service
Aerobic Exercise for Neuroprotection "Aerobic exercise is known to lower intraocular pressure (IOP), which we know protects retinal ganglion cells," says Harry A. Quigley, MD, professor and director of glaucoma services at the Wilmer Eye Institute at Johns Hopkins University in Baltimore. "And short-term studies show it may improve blood flow to the retina and optic nerve as well."http://www.glaucoma.org/treatment/aerobic-exercise-for-neuroprotection.php
http://dx.doi.org/10.1111/acel.12512
“These data provide new insight into the mechanisms underlying exercise-mediated protection of retinal cells. We found that daily forced exercise, initiated 24 h after an acute RGC-specific injury in middle-aged mice, led to a substantial improvement in RGC function and survival.”
Lifestyle, Nutrition, and Glaucoma. Louis R Pasquale, Jae Hee Kang. Journal of Glaucoma: August 2009 - Volume 18 - Issue 6 - pp 423-428. doi: 10.1097/IJG.0b013e31818d3899
In this review, we have examined the evidence on whether environmental factors are related to developing glaucoma. How do we answer the questions from newly diagnosed glaucoma patients on lifestyle behaviors and their relation to POAG? There is even scarcer data on lifestyle factors and their influence on disease progression. However, rather than default to the view that patients should simply comply with medical therapy and follow-up recommendations (which of course is true), we also suggest advocating for activities consistent with overall good health such as avoidance of smoking, moderate exercise and a diet high in fruits and vegetables. The weight of the current medical evidence is not sufficiently strong to make broad recommendations regarding activities that glaucoma patients should avoid because they elevate IOP, such as certain yoga positions, playing high wind instruments for long periods of time, and drinking large amounts of caffeinated coffee.
Future Trends for Eye Care
Where is the innovation happening?
According to the 2017 Centre for World University Rankings (CWUR) rankings by subject, UCL Institute of Ophthalmology (Moorfields Hospital) is the best place in the world to study ophthalmology
http://cwur.org/2017/subjects.php#Ophthalmology
University College London (Moorfields)
Harvard University
Johns Hopkins University (Wilmer Eye Institute)
University of Melbourne
National University of Singapore (Singapore Eye Research Institute, SERI)
University of Sydney
University of Southern California
University of Miami (Bascom Palmer Eye Institute)
University of California, Los Angeles (UCLA Stein Eye Institute Westwood)
University of California, San Diego (Shiley Eye Institute)
Focus on novel portable diagnostic tools
de facto “standard” of the portable future
https://theophthalmologist.com/issues/0116/the-eye-exams-quantum-leap/
http://dx.doi.org/10.1167/tvst.6.4.16
Test 6: Pupillometry. Pupil reactions were assessed using simultaneous OCT capture of the anterior segments including the iris plane. Each eye was stimulated independently and sequentially with a single, bright, 250-ms flash of white light. B-scan recordings were captured at regular intervals of 350 ms prior to stimulation and for 4000 ms post-stimulation. Measurements of the pupil circumference could subsequently be calculated to identify pupil abnormalities and relative afferent pupillary defects.
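The acquisition schedule in this protocol can be sketched as below. The pre-stimulus lead-in duration is an assumption (the text only says scans start "prior to stimulation"); the 350 ms interval, 250 ms flash and 4000 ms post-stimulus window come from the description above.

```python
import numpy as np

SCAN_INTERVAL_MS = 350     # B-scan every 350 ms
POST_STIM_MS = 4000        # record until 4000 ms after stimulus onset
PRE_STIM_MS = 700          # assumed lead-in; not specified in the protocol
FLASH_DURATION_MS = 250    # single bright white flash at t = 0

def bscan_timestamps():
    """Timestamps (ms, stimulus onset = 0) at which B-scans are captured."""
    return np.arange(-PRE_STIM_MS, POST_STIM_MS + 1, SCAN_INTERVAL_MS)

def pupil_circumference_mm(diameter_mm: float) -> float:
    """Circumference from the B-scan-derived diameter, assuming a circular pupil."""
    return float(np.pi * diameter_mm)

t = bscan_timestamps()
print(len(t), t[0], t[-1])          # 14 -700 3850
print(round(pupil_circumference_mm(4.0), 2))
```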
User Experience
Portable Diagnostics
Easier to carry around one single device doing most of the tasks rather than dedicated devices for each task.
In hospital settings, one can then use dedicated devices for higher diagnostic capability after the portable “pre-screening”.
Structural Measures
Fundus Imaging
with some variants
Annidis RHA Multispectral fundus imaging system http://www.annidis.com/page/technology
Optomed Aurora portable fundus imaging. http://www.annidis.com/page/technology
Optos ultra-wide fundus imaging (“high-end imaging”). http://www.nikon.com/about/technology/product/retinal-imaging/index.htm
Do-it-yourself smartphone fundus camera – DIYretCAM. https://dx.doi.org/10.4103%2F0301-4738.194325
(a) The do-it-yourself smartphone fundus camera used as a handheld device. (b) The camera can be held at the condensing lens and supported with the other hand on the camera. (c) As in indirect ophthalmoscopy, scleral depression is done after stabilizing the camera.
Multi-spectral imaging for in vivo imaging of oxygen tension and amyloid-β. Dr. Tos TJM Berendschot, Prof. dr. Carroll AB Webers, University Eye Clinic Maastricht
Fundus Imaging
Smartphones getting more and more ubiquitous.
Clip-ons and embedded electronics allow cloud-based teleophthalmology both with automated AI approaches and augmented with human expert ophthalmologists for hard-to-reach areas
Peek Vision developed a smartphone app and lens attachment, Peek Retina, to capture sharp images of the back of the eye.https://www.fastcompany.com/3062154/smartphones-are-leading-the-global-charge-against-blindness
An ophthalmology resident at the University of Illinois at Chicago College of Medicine has invented an inexpensive, handheld camera that can photograph the retina without need for pupil dilation, described in the “Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer” paper in J Ophth. He says the camera can be replicated from parts easily found online for about $185.
https://blogs.nvidia.com/blog/2016/02/17/deep-learning-4/. https://youtu.be/wa9OdaRMgO8
As the founder of SocialEyes, Nicholas Bedworth is delivering healthcare at scale via embedded NVIDIA hardware (such as the Jetson TX2) in places where doctors are scarce and there is no internet for a cloud connection
Eye Spy: SocialEyes Uses Deep Learning to Spot Serious Eye Problems
Centralized cloud GPU inference allows higher throughputs when good enough internet is available for example using the V100 GPUs from NVIDIA (and the even faster generations after that)
SLO – Scanning Light Ophthalmoscope
Adaptive optics scanning light ophthalmoscopy (AOSLO) is an emerging technology for improving in vivo imaging of the human retinal microvasculature, allowing unprecedented visualization of retinal microvascular structure, measurements of blood flow velocity, and microvascular network mapping.
Human retinal microvascular imaging using adaptive optics scanning light ophthalmoscopyChui et al. (2016) https://doi.org/10.1186/s40942-016-0037-8
Distribution differences of macular cones measured by AOSLO: Variation in slope from fovea to periphery more pronounced than differences in total conesAnn E. Elsner, Toco Y.P. Chui, Lei Feng, Hong Xin Song, Joel A. Papay, Stephen A. Burns (2017) https://doi.org/10.1016/j.visres.2016.06.015
Cone density varies among individuals by more than just a scalar factor.
Imaging Foveal Microvasculature: Optical Coherence Tomography Angiography Versus Adaptive Optics Scanning Light Ophthalmoscope Fluorescein AngiographyShelley Mo; Brian Krawitz; Eleni Efstathiadis; Lawrence Geyman; Rishard Weitz; Toco Y. P. Chui; Joseph Carroll; Alfredo Dubra; Richard B. Rosen (2016) http://dx.doi.org/10.1167/iovs.15-18932
Optical coherence tomography angiography is comparable to AOSLO FA at imaging the foveal microvasculature except for differences in FAZ area, lumen diameter, and some qualitative features. These results, together with its ease of use, short acquisition time, and avoidance of potentially phototoxic blue light, support OCTA as a tool for monitoring ocular pathology and detecting early disease.
Photoreceptor-Based Biomarkers in AOSLO Retinal ImagingKatie M. Litts; Robert F. Cooper; Jacque L. Duncan; Joseph Carroll (2017) http://dx.doi.org/10.1167/iovs.17-21868
Resolving cone inner and outer segment structure with AOSLO. Shown are confocal (A) and split-detection (B) images from the parafoveal retina of a patient with CNGA3-associated ACHM. The color-merged image (C) has the confocal image displayed in green and the split-detection image in red.
OCT
with various variants again
(2016) https://doi.org/10.1364/BOE.7.001783
https://dx.doi.org/10.1167/tvst.3.3.10
Biomedical Optics Express Vol. 8, Issue 4, pp. 2287-2300 (2017)https://doi.org/10.1364/BOE.8.002287
Intelligent Imaging
Embed deep learning into the device and the image acquisition process to minimize operator-dependent image quality degradations (see e.g. the OSCAR-IB study)
Think of the consumer AI-driven camera systems as inspiration
Better for the patient and the operator that the camera automatically re-acquires images, or even reconstructs the image from partially good-quality shots, rather than the suboptimal quality being noticed only after the patient has already left the hospital
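A minimal sketch of such an automatic quality gate, using variance of a discrete Laplacian as a stand-in no-reference focus metric; a real device would likely use a learned quality model, and the threshold here is illustrative.

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian -- a classic no-reference focus metric."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def acquire_until_sharp(capture, threshold: float, max_tries: int = 5):
    """Re-acquire automatically until a frame passes the focus gate."""
    for _ in range(max_tries):
        frame = capture()
        if sharpness(frame) >= threshold:
            return frame
    return None  # flag for review instead of silently keeping a bad frame

# Demo with synthetic frames: a featureless frame fails, a textured one passes.
rng = np.random.default_rng(0)
flat = np.ones((64, 64))
textured = rng.random((64, 64))
print(sharpness(flat) == 0.0, sharpness(textured) > 0.0)  # True True
```

In an "intelligent imaging" loop the `capture` callable would trigger the camera; here it is left abstract on purpose.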
4 October 2017: Google has announced a new add-on for the Pixel 2 camera called Clips. The camera is hands-free and works like a photographer you’ve hired for an event. The AI captures moments for you, and then you decide how to use the images later when you can look through them. An AI engine snaps photos when you are not even looking or paying attention.
https://www.fastcompany.com/3059281/introducing-hover-an-ai-powered-indoor-safe-camera-drone
Plan3D: Viewpoint and Trajectory Optimization for Aerial Multi-View Stereo ReconstructionBenjamin Hepp, Matthias Nießner, Otmar Hilliges(Submitted on 25 May 2017)https://arxiv.org/abs/1705.09314
Imaging
In the wild – in “high street” optometry and in hospitals
OCT ROLLOUT IN EVERY SPECSAVERS ANNOUNCED
The multiple will ensure all 740 of its UK practices have an OCT device installed within the next two years. 22 May 2017 by Emily McCormick. https://www.aop.org.uk/ot/industry/high-street/2017/05/22/oct-rollout-in-every-specsavers-announced
WHAT'S GOING ON IN TECH?
Revealing the latest technology launches for the practice. 07 Mar 2017 by Laurence Derbyshire. https://www.aop.org.uk/ot/industry/equipment-and-suppliers/2017/03/07/whats-going-on-in-tech
https://youtu.be/JS550fDKyOE OT spoke to a range of equipment companies (Optos, Heidelberg, Carl Zeiss and Bib Ophthalmic Instruments) and experts at 100% Optical to find out about the latest technology launches for the practice.
In practice, opticians do not have enough skilled staff for image analysis, and the whole imaging process should be made deep learning-driven, with consistently high quality independent of the device operator.
This creates opportunities to sell automated image analysis software to opticians, reducing their labor costs, improving diagnostic quality and creating cross-selling opportunities.
https://www.mivision.com.au/taking-financial-control-of-your-practice/
Imaging
Beyond retinopathies
The retina, being the most easily measured part of the brain, makes sense to use as a proxy for other pathologies
Eye Scans to Detect Cancer and Alzheimer’s Disease. The Human OS | Biomedical | Diagnostics. By Megan Scudellari, posted 31 Aug 2017
https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/eye-scans-to-detect-cancer-and-alzheimers-disease
At the University of Washington, a team led by computer scientist Shwetak Patel created a smartphone app (BiliScreen) to screen for pancreatic cancer with a quick selfie. Developed over the last year and a half, the team recently tested their system in a clinical study of 70 people. They were able to identify cases of concern with 89.7 percent sensitivity and 96.8 percent accuracy.
At Cedars-Sinai and NeuroVision Imaging LLC in California, researchers have developed a sophisticated camera and retinal imaging approach to detect early signs of Alzheimer’s disease (AD). The camera captures beta-amyloid plaques tagged with curcumin. This system, recently detailed in a proof-of-concept trial published in the journal JCI Insight, relies on a specialized ophthalmic camera that is not yet available on a smartphone.
Photo: Dennis Wise/University of Washington
Functional Measures
Ocular Blood Flow
with laser speckle flowgraphy (LSFG) or AOSLO
https://doi.org/10.1155/2017/2969064
Changes in Retinal Vessel Architecture and Blood Flow in Multiple Sclerosis (P6.401)Richard Nicholas, Adam Dubis, Ashwini Nandoskar, Jeremy Chataway and John Greenwoodhttp://www.neurology.org/content/88/16_Supplement/P6.401.short
Waveform Analysis of Ocular Blood Flow and the Early Detection of Normal Tension GlaucomaShiga et al. (2013) IOVSdoi: 10.1167/iovs.13-12930
Softcare Co., Ltd. is pleased to announce that it received 510(k) clearance from the U.S. Food and Drug Administration (FDA) for LSFG-NAVI. http://www.softcare-ltd.co.jp/510k_clearance.html
Boston MicromachinesApaeros Retinal Imaging System – Small-vessel blood flowhttp://www.bostonmicromachines.com/qualitative-measures-of-small-vessel-blood-flow-.html
Pupillometry
A technically “simple” method that can be deployed easily into a virtual reality headset or a smartphone clip-on.
A headset keeps the eye–camera distance constant (without needing the more expensive telecentric lenses), reducing the error of the same pupil appearing smaller or larger due to distance variations.
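The distance error that the headset removes follows from the pinhole-camera model, in which apparent size scales as 1/distance. The focal length and pixel pitch below are made-up illustrative numbers, not specs of any real clip-on.

```python
# Pinhole-camera model: apparent pupil size in pixels scales as 1/distance,
# so a fixed headset working distance removes this magnification error.
def apparent_pupil_px(pupil_mm: float, distance_mm: float,
                      focal_mm: float = 4.0, px_per_mm: float = 800.0) -> float:
    """Image-plane pupil diameter in pixels (assumed focal length / pixel pitch)."""
    return pupil_mm * (focal_mm / distance_mm) * px_per_mm

# The same 4 mm pupil measured at 50 mm vs 60 mm working distance:
same_pupil_near = apparent_pupil_px(4.0, distance_mm=50.0)
same_pupil_far = apparent_pupil_px(4.0, distance_mm=60.0)
print(round(same_pupil_near, 1), round(same_pupil_far, 1))  # 256.0 213.3
```

A 20% change in working distance thus produces a ~17% change in apparent pupil size, which a handheld (non-headset) setup would have to correct for explicitly.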
Pupillometry
Evolution of the technique
OCT-based pupillometry: Chopra et al. (2017) http://dx.doi.org/10.1167/tvst.6.4.16
Video-based pupillometry: Murray et al. (1981) https://doi.org/10.1016/0165-0270(81)90024-8
Iris reflectivity-based pupillometry: Stark et al. (1959) https://doi.org/10.1109/JRPROC.1959.287206
850–950 nm infrared lighting, for example from LEDs, is used to illuminate the eye, and the pupil boundary is computed from the video via computer vision techniques
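A toy version of this dark-pupil approach: threshold the IR frame and take the equivalent-circle diameter of the dark region. Real systems use more robust boundary fitting (e.g. the Starburst algorithm mentioned later in these slides); the threshold and intensities here are synthetic.

```python
import numpy as np

def pupil_diameter_px(ir_frame: np.ndarray, threshold: float = 0.2) -> float:
    """Estimate pupil diameter from an IR frame via dark-pupil thresholding:
    under 850-950 nm illumination the pupil is the darkest region."""
    mask = ir_frame < threshold             # dark pixels = pupil candidate
    area = mask.sum()
    return 2.0 * np.sqrt(area / np.pi)      # equivalent-circle diameter

# Synthetic IR frame: bright iris/sclera (0.8) with a dark pupil disk (0.05).
yy, xx = np.mgrid[:200, :200]
frame = np.full((200, 200), 0.8)
frame[(yy - 100) ** 2 + (xx - 100) ** 2 <= 40 ** 2] = 0.05
print(round(pupil_diameter_px(frame)))      # ~80 px for a radius-40 disk
```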
Pupillometry
4-step “end-to-end” deep learning pipeline
https://www.slideshare.net/PetteriTeikariPhD/pupillometry-for-clinical-diagnosis
1) Pupil segmentation from raw video: use deep learning (e.g. Mariakis et al. 2017) for the semantic segmentation with uncertainty (see e.g. Kendall and Gal 2017), and propagate this to the following steps. Images from Li et al. (2005) "Starburst Algorithm".
2) Artifact identification: (blind) artifact identification with a data-driven approach.
3) PLR denoising / reconstruction: reconstruct the PLR signal with data-driven “priors” and artifact-related uncertainty.
4) PLR classification (e.g. “glaucoma suspect”): easier to classify the PLR correctly jointly with the signal cleaning part.
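The four steps can be sketched end-to-end with trivial stand-ins: where the slides propose learned models, the code below substitutes a mean-intensity "segmentation", a blink rule, linear interpolation and a threshold classifier, all illustrative only.

```python
import numpy as np

# Toy sketch of the 4-step PLR pipeline; steps 1-4 would be learned models.
def segment(frames):                       # 1) pupil segmentation per frame
    return np.array([f.mean() for f in frames])   # stand-in diameter estimate

def flag_artifacts(trace):                 # 2) artifact (e.g. blink) identification
    return trace <= 0.0                    # blinks collapse the measured diameter

def reconstruct(trace, bad):               # 3) denoise / reconstruct over artifacts
    good = ~bad
    return np.interp(np.arange(len(trace)), np.flatnonzero(good), trace[good])

def classify(trace, threshold=0.5):        # 4) PLR classification (toy rule:
    # weak light-induced constriction -> flag as suspect)
    return "glaucoma suspect" if trace.min() > threshold else "normal PLR"

frames = [np.full((4, 4), v) for v in [3.0, 2.0, 0.0, 1.5, 3.0]]  # 0.0 = blink
trace = segment(frames)
clean = reconstruct(trace, flag_artifacts(trace))
print(clean.tolist(), classify(clean))
```

The point the slides make survives even in this toy: classification operates on the reconstructed trace, so segmentation and cleaning errors propagate into the diagnosis, which is why joint training with uncertainty is attractive.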
PupillometryComponents
Temporal resolution: high-frequency biomarkers require high sampling rates.
Spatial resolution: low-amplitude biomarkers require more pixels from the camera.
Blink and artifacts in general
“Real pupil size”
Measured pupil size:
- Discretization noise (in this case, pupil width in pixels)
- Instrumentation noise (image noise, and uncertainty of the boundary algorithm)
- Physiological noise (“focus fluctuations”, and mental processes)
- “Pupil noise” (“pupillary unrest”, used as an index of alertness for example; Stanten and Stark 1966)
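The discretization-noise component above can be made concrete: a continuous pupil width is recorded as a whole number of pixels, so the quantization step (mm per pixel, hypothetical values below) bounds the error:

```python
import numpy as np

# Discretization-noise sketch (hypothetical camera geometry): a continuous
# pupil width is recorded as a whole number of pixels, so the quantization
# step (mm per pixel) bounds the measurement error.

def quantize_width(width_mm, mm_per_px):
    """Pupil width as the camera records it, rounded to whole pixels."""
    return np.round(width_mm / mm_per_px) * mm_per_px

true_mm = 4.37
coarse = quantize_width(true_mm, mm_per_px=0.10)  # low-resolution camera
fine = quantize_width(true_mm, mm_per_px=0.01)    # 10x more pixels on pupil

print(coarse, round(abs(coarse - true_mm), 3))
print(fine, round(abs(fine - true_mm), 3))
```

Ten times more pixels across the pupil shrinks the worst-case quantization error tenfold, which matters for the low-amplitude biomarkers noted above.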
PupillometrySpatial and temporal characteristics of pupillary size
Pupil noise (hippus, pupillary unrest) is characterized as random noise on top of the “mean pupil size” signal in the frequency range of 0.05-0.3 Hz (Stark 1959; Usui and Stark 1982). Experiments have indicated skewing of the pupil noise spectrum from Gaussian white noise at high and low pupil areas. This skewing has been attributed to the multiplier gain dependence on the expansive range nonlinearity and the length-tension relationship of the iris muscles (Usui and Stark 1978; Usui and Stark 1982). In practice, pupil neuromuscular dynamics shape the high-frequency cutoff for noise pulses and large and small sinusoids, whereas retinal adaptation accounts for the low-frequency asymptotes (Stanten and Stark 1966; Stark 1984).
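A synthetic version of this unrest is handy for testing denoising pipelines: shape white noise in the frequency domain to the reported 0.05-0.3 Hz hippus band and add it to a constant mean pupil size (the 0.1 mm noise amplitude and 30 Hz sampling rate here are hypothetical):

```python
import numpy as np

# Sketch: synthesize band-limited "pupillary unrest" by shaping white noise
# in the frequency domain to the reported 0.05-0.3 Hz hippus band, then
# adding it to a constant mean pupil size (all amplitudes hypothetical).

rng = np.random.default_rng(1)
fs, n = 30.0, 30 * 60          # 30 Hz sampling, 60 s of signal
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

spectrum = np.fft.rfft(rng.standard_normal(n))
spectrum[(freqs < 0.05) | (freqs > 0.3)] = 0.0   # keep only the hippus band
hippus = np.fft.irfft(spectrum, n)
hippus *= 0.1 / hippus.std()                     # scale to ~0.1 mm noise

pupil_mm = 4.0 + hippus                          # mean size plus unrest
print(round(pupil_mm.mean(), 2), round(pupil_mm.std(), 2))
```

A realistic simulator would additionally skew this spectrum at extreme pupil areas, per the Usui and Stark observations above.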
PupillometrySpatial and temporal characteristics of pupillary size
Adhikari et al. (2016) http://dx.doi.org/10.1038/srep33373
Maynard et al. (2015) http://dx.doi.org/10.1167/iovs.15-17357
Pupil size does not change especially fast, and the modulation frequencies (e.g. of sinusoidal light stimuli) are not very high, especially in melanopsin studies. Thus, it makes more sense to opt for slower sampling rates and higher-resolution cameras if one cannot get both at a reasonable price.
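The trade-off follows from the Nyquist criterion: a biomarker at f_max Hz needs a sampling rate above 2 × f_max, so slow melanopsin-driven responses leave budget for a higher-resolution but slower camera. The oversampling margin below is a hypothetical design choice:

```python
# Sketch of the sampling-rate trade-off: by the Nyquist criterion a biomarker
# at f_max Hz needs a sampling rate above 2 * f_max; the oversampling margin
# used here (5x, to preserve waveform shape) is a hypothetical choice.

def min_sampling_rate(f_max_hz, margin=5.0):
    """Nyquist rate with an oversampling margin for waveform shape."""
    return 2.0 * f_max_hz * margin

# Hippus band tops out around 0.3 Hz; melanopsin stimuli are slower still.
print(min_sampling_rate(0.3))    # a few fps is already generous
print(min_sampling_rate(2.0))    # faster transients need ~20 fps
```

Even with a generous margin, a few frames per second suffices for hippus-band biomarkers, freeing bandwidth for spatial resolution.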
PupillometryExample of “spatial” PLR use, also beyond retinopathies
Alzheimer’s disease in the human eye. Clinical tests that identify ocular and visual information processing deficit as biomarkersChang L. Y. L., Lowe J., Ardiles A., Lim J., Grey A. C., Robertson K., et al. . (2014). . Alzheimers Dement. 10, 251–261. 10.1016/j.jalz.2013.06.004
Method followup of above: Infrared Video Pupillography Coupled with Smart Phone LED for Measurement of Pupillary Light ReflexLily Yu-Li Chang, Jason Turuwhenua, Tian Yuan Qu, Joanna M. Black, and Monica L. AcostaFront Integr Neurosci. 2017; 11: 6.Published online 2017 Mar 7. doi: 10.3389/fnint.2017.00006with Point Grey Firefly USB 2.0 infrared camera (752 x 480, 60 fps, $275)
Correlations Between Hourly Pupillometer Readings and Intracranial Pressure ValuesMcNett, Molly; Moran, Cristina; Janki, Clare; Gianakis, Anastasia, Journal of Neuroscience Nursing: August 2017 - Volume 49 - Issue 4 - p 229–234. 10.1097/JNN.0000000000000290
The use and uptake of pupillometers in the Intensive Care UnitMatthew Hao Lee, Biswadev Mitra, Jiun Kae Pui, MarkFitzgeraldAustralian Critical Care Available online 17 July 2017 https://doi.org/10.1016/j.aucc.2017.06.003
Use of Digital Pupillometry to Measure Sedative Response to PropofolHaddock et al (2017)The Ochsner Journal: Fall 2017, Vol. 17, No. 3, pp. 250-253 http://www.ochsnerjournal.org/doi/abs/10.1043/1524-5012-17.3.250?code=occl-site
Infrared pupillometry helps to detect and predict delirium in the post-anesthesia care unitEric Yang, Matthias Kreuzer, September Hesse, Paran Davari, Simon C. Lee, Paul S. García Journal of Clinical Monitoring and Computing (2017), https://doi.org/10.1007/s10877-017-0009-z
PupillometryFocus on non-invasive trauma evaluation
Highlighting also the increased use of crowdfunding platforms for scientific purposes.
Still waiting for the “Scientific ICO” rollouts...
Dr. Charlene Ong, at Harvard University Hospitals Massachusetts General and Brigham and Women’s, hopes to find a safe and effective method to catch signs of brain swelling earlier.
PupillometryExample of “temporal” PLR use, also beyond retinopathies
Pupillary Motility: Bringing Neuroscience to the Psychiatry Clinic of the FutureSimona Graur and Greg Siegle Current Neurology and Neuroscience Reports August 2013, 13:365https://doi.org/10.1007/s11910-013-0365-0
Reduced Pupillary Unrest: Autonomic Nervous System Abnormality in Diabetes MellitusAstradur B Hreidarsson and Hans Jorgen G Gundersen Diabetes 1988 Apr; 37(4): 446-451https://doi.org/10.2337/diab.37.4.446
Fatigue and cognition: Pupillary responses to problem solving in early multiple sclerosis ‐patientsR Egg, B Högl, S Glatzl, R Beer, T Berger Multiple Sclerosis Journal Volume: 8 issue: 3, page(s): 256-260https://doi.org/10.1191/1352458502ms793oa
Using pupil size and heart rate to infer affective states during behavioral neurophysiology and neuropsychology experimentsSigrid A. de Rodez Benavent et al. | Brain and Behavior 2017 http://dx.doi.org/10.1002/brb3.717
Pupillary unrest correlates with arousal symptoms and motor signs in Parkinson diseaseSamay Jain et al. (2011) Movement Disordershttp://dx.doi.org/10.1002/mds.23628
Comparison of the antidepressants reboxetine, fluvoxamine and amitriptyline upon spontaneous pupillary fluctuations in healthy human volunteersM. A. Phillips, P. Bitsios, E. Szabadi, C. M. BradshawPsychopharmacology March 2000, Volume 149, Issue 1, pp 72–76https://doi.org/10.1007/s002139900334
Assessing Pain Using the Variation Coefficient of Pupillary DiameterDavid J. Charier et al. (2017) The Journal of Pain Available online 13 July 2017https://doi.org/10.1016/j.jpain.2017.06.006
PupillometryCamera selection in practice
33 series - GigE monochrome industrial camerashttps://www.theimagingsource.com/products/industrial-cameras/gige-monochrome/- Windows and Linux software included
~$730
~$330
~$439
~$495
~$730
oemcameras.com
by Micah Singleton@MicahSingleton Feb 7, 2017, 11:15am EST
https://www.theverge.com/circuitbreaker/2017/2/7/14532610/sony-smartphone-camera-sensor-1000-fpshttps://www.sony.net/SonyInfo/News/Press/201702/17-013E/index.html
Ultra-High Resolution Monochrome Camerashttp://www.adept.net.au/cameras/ultraMono.shtml
Genie NANO XL-M5100: 5120 x 5120 px, CMOS mono, 20 fps, GigE
HS-20000M: 5120 x 3840 px, CMOS, 32 fps, 10 GigE
Flare 48MP30-CX: 7920 x 6004 px, CMOS, 30.9 fps
EoSens 25CXP+: 5120 x 5120 px, CMOS, 80 fps, CXP-6 CoaXPress
CP80-25-M-72: 5120 x 5120 px, CMOS, 72 fps, CoaXPress
Phantom VEO4K PL (for film): 4096 x 2304 px, CMOS, 1000 fps
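The interface choices in this list follow from raw sensor bandwidth; a quick back-of-the-envelope calculation (8-bit monochrome assumed) shows why the faster models need CoaXPress or 10 GigE links:

```python
# Sketch: raw uncompressed data rate for the cameras listed above
# (8-bit mono assumed), which explains the interface each one needs.

def data_rate_gbit_s(width_px, height_px, fps, bits_per_px=8):
    """Uncompressed sensor data rate in gigabits per second."""
    return width_px * height_px * fps * bits_per_px / 1e9

print(round(data_rate_gbit_s(5120, 5120, 20), 2))   # Genie NANO XL: ~4.19
print(round(data_rate_gbit_s(5120, 5120, 80), 2))   # EoSens 25CXP+: ~16.78
```

At 20 fps the 26-megapixel sensor already exceeds plain gigabit Ethernet, and at 80 fps it approaches 17 Gbit/s, well into CoaXPress territory.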
IO Industries Flare 48MP Camera Demohttps://youtu.be/WvW9532k81M
https://youtu.be/dFdU-JjypWs
PupillometryCamera specifications explained
oemcameras.com
PupillometryCamera calibration and comparison rig
If you want to become a PLR powerhouse lab, you could measure the same subjects in clinical settings with, for example, three different quality levels, and learn quality improvement with deep learning.
1) Very low-cost camera phone
2a) Entry-level industrial camera2b) High-end smartphone
3) High-end industrial camera
See a similar idea for training a deep learning network for 3D reconstruction from indoor scans:
Slide 10 of https://www.slideshare.net/PetteriTeikariPhD/dataset-creation-for-deep-learningbased-geometric-computer-vision-problems
An example of “path multiplication” with a beamsplitter for a slit lamp setup. This allows simultaneous exam by the clinician and recording of the exam
PupillometryCommercial landscape
Established: NeurOptics NPi®-200 Pupillometer at https://www.neurocriticalcare.org/
Open-source: Pupil Labs eye tracker with HoloLens, HTC Vive and Oculus add-ons available - https://doi.org/10.1145/2638728.2641695
Emerging: “The PUPIL Study”- Automated, Quantitative Pupil Assessment Using Binocular OCTSponsor: University College, London (Pearse Keane) https://clinicaltrials.gov/ct2/show/NCT03081468
Emerging: “PupilScreen: Using Smartphones to Assess Traumatic Brain Injury”. Mariakakis et al. (2017) http://doi.org/10.1145/3131896
Emerging: “BrightLamp uses the camera and torch of your phone to measure your pupil's response to light stimulus to diagnose concussion.”
www.brightlamp.org | techcrunch.com/2017/05/16
“We have a total of 8 people on the team: 2 computer vision specialists, regulatory, financial, legal, app dev, marketing, ML/diagnostics. All of which have very specific roles in growing a startup.”
reddit.com
EyeMovementsSimilar “end-to-end” pipeline as for pupillometry (and even the same hardware)
One can “easily” integrate various measures into a single VR headset
https://github.com/pupil-labs/hmd-eyes
Orlosky et al. (2017) https://doi.org/10.1109/TVCG.2017.2657018
EyeMovementsUseful in virtual reality environments in general
Allows foveated rendering (LoD, level-of-detail), which again reduces computation requirements for convincing VR rendering
Foveated rendering is a process that combines eye tracking and software to adjust the way a VR experience is rendered in real time. With foveated rendering, the PC running your Vive with 7invensun eye tracker only has to render the greatest detail in the small area on which your eyes are directly focused.
https://uploadvr.com/7invensun-eye-tracker-for-vive/
Games like this don’t just look incredible because of ‘hyper-realism’ but because their engineers use all sorts of tricks [LOD’ing, or Level of Detail; Mipmapping; frustum culling, etc.] to save memory.
https://kotaku.com/horizon-zero-dawn-uses-all-sorts-of-clever-tricks-to-lo-1794385026
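The foveated-rendering idea above reduces to mapping angular distance from the tracked gaze point to a level of detail; the eccentricity thresholds and three LOD tiers below are hypothetical:

```python
# Sketch of foveated level-of-detail selection (thresholds hypothetical):
# render full detail only near the tracked gaze point, and progressively
# coarser detail with angular eccentricity.

def lod_for_eccentricity(ecc_deg):
    """Map angular distance from gaze (degrees) to a detail level."""
    if ecc_deg < 5.0:
        return 0        # foveal region: full resolution
    if ecc_deg < 15.0:
        return 1        # parafoveal: half resolution
    return 2            # periphery: quarter resolution

print([lod_for_eccentricity(e) for e in (2.0, 10.0, 30.0)])  # [0, 1, 2]
```

A real renderer would drive shading rate or mipmap bias from this level per screen tile, recomputed every frame from the eye tracker's gaze estimate.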
EyeMovementsBeyond retinopathies
http://doi.org/10.1002/mds.27105
Many high-prevalence neurological disorders involve dysfunctions of oculomotor control and attention, including attention deficit hyperactivity disorder (ADHD), fetal alcohol spectrum disorder (FASD), and Parkinson’s disease (PD).
https://doi.org/10.1007/s00415-012-6631-2
https://doi.org/10.1016/j.bpsc.2016.12.009
https://doi.org/10.1016/j.cobeha.2016.03.008
https://doi.org/10.1016/j.bandc.2008.08.026
EyeMovementsCommercial landscape
Unlocking the potential of eye tracking technologyPosted Feb 19, 2017 by Ben Dickson (@bendee983)https://techcrunch.com/2017/02/19/unlocking-the-potential-of-eye-tracking-technology/
A new brain health app from Neurotrack warns users of memory decline
Posted Dec 1, 2016 by Lora Kolodny (@lorakolodny)https://techcrunch.com/2016/12/01/neurotrack-takes-brain-scans-home/https://qz.com/604397/a-simple-five-minute-test-could-make-earlier-diagnosis-of-alzheimers-possible/
Oculus acquires eye-tracking startup The Eye Tribe
28 Dec 2016 - The startup has developed a $99 eye tracking device developer kitshttps://techcrunch.com/2016/12/28/the-eye-tribe-oculus/
The iPhone 8 Could Soon Usher in Eye-Tracking Ads
Sep 5th, 2017https://www.macobserver.com/columns-opinions/editorial/iphone-8-eye-tracking-ads/
Better drugs could also lead to “better quality of life for the patient, which would ultimately lower health care costs,” says Samir Kaul, the founding managing director of Khosla Ventures. Khosla is the lead investor for Neurotrack’s $6.5 million in new funding, announced Jan. 27. If it passes muster in ongoing studies, Neurotrack’s online test could be a cheap, easy, non-invasive way to detect Alzheimer’s in advance of symptoms.
The Palo Alto, California-based company is inviting physicians to offer the eye-tracking test to their patients. It’s also developing a personalized lifestyle program for prospective patients. The program will be based on emerging research in Finland (FINGER) and elsewhere suggesting that diet, exercise, cognitive training, sleep and stress management could preserve brain health and help prevent Alzheimer’s.
Electro-physiologyElectroretinography (ERG) and electro-oculogram (EOG)
The systems used to be hard to use, with long dark adaptations and long setup times.
Recently, new devices such as EvokeDx and RETEval have made things easier in practice.
http://konanmedical.com/evokedx/
https://youtu.be/aT6dCD_5p5k
Multifocal ERGhttp://vsri.ucdavis.edu/research/electrophysiology
Wearable electrooculography (EOG) goggles. The Swiss Federal Institute of Technology (ETH) Zurich developed these goggles to track relative eye movements.https://doi.org/10.1109/MPRV.2010.86
Electro-physiologyCorrelating function with structure
Documenta Ophthalmologica April 2017, Volume 134, Issue 2, pp 111–128|
Comparing three different modes of electroretinography in experimental glaucoma: diagnostic performance and correlation to structureLaura Wilsey, Sowjanya Gowrisankaran, Grant Cull, Christy Hardin, Claude F. Burgoyne, Brad Fortunehttps://doi.org/10.1007/s10633-017-9578-x
Arq Bras Oftalmol. 2017 Mar-Apr;80(2):118-121. doi: 10.5935/0004-2749.20170028
Structure-functional correlation using adaptive optics, OCT, and microperimetry in a case of occult macular dystrophy.Viana KÍ, Messias A, Siqueira RC, Rodrigues MW, Jorge R
Optical coherence tomography (1-3), adaptive optics (A-H), and microperimetry (2-4) images of a patient with occult macular dystrophy. (Arrows) Loss of continuity of the outer photoreceptor layer in the central foveal region. (Yellow asterisk) Reduced ring photoreceptor density in the foveal region. (2) Reduction of ring sensitivity in the central foveal region. (Red asterisk) Reduced photoreceptor density in a central foveal region. (4) Reduced central sensitivity in the fovea.
Electro-physiologyClinical Uses http://dx.doi.org/10.1007/BF01206208
https://doi.org/10.1159/000450958https://doi.org/10.1016/j.taap.2015.10.008
https://doi.org/10.1016/j.pharmthera.2017.02.009
http://dx.doi.org/10.3233/JAD-150798
Electro-physiologyRelatively simple to develop and integrate into the same VR-type headset used for pupillometry and eye movement measurements
A filter setting of 1–200 Hz appears most sensitive to detect glaucomatous damage if using a two-global-flash mfERG: High frequencies of 100–300 Hz also contain information that differentiates glaucoma from normal and thus should be included in the analysis.
A 50 Hz notch filter allows grossly contaminated waveforms to be analyzed in a meaningful manner. With a 50 Hz filter, glaucoma patients still differed significantly from normal.
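The filter settings described above can be sketched with a plain FFT mask: keep the 1-200 Hz band and notch out 50 Hz mains interference. The synthetic mfERG-like trace below is purely illustrative, not clinical data:

```python
import numpy as np

# Sketch of the filtering described above, done with a plain FFT mask:
# keep 1-200 Hz and notch out 50 Hz mains interference from a synthetic
# mfERG-like trace (signal content is illustrative, not clinical data).

fs, n = 1000.0, 1000                 # 1 kHz sampling, 1 s of signal
t = np.arange(n) / fs
trace = np.sin(2 * np.pi * 30 * t) + 2.0 * np.sin(2 * np.pi * 50 * t)

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spec = np.fft.rfft(trace)
spec[(freqs < 1.0) | (freqs > 200.0)] = 0.0       # 1-200 Hz band-pass
spec[np.abs(freqs - 50.0) < 1.0] = 0.0            # 50 Hz notch

# The 30 Hz component survives; the 50 Hz mains contamination is removed.
print(round(np.abs(spec[30]) * 2 / n, 2), round(np.abs(spec[50]) * 2 / n, 2))
```

In practice one would use a proper IIR/FIR design rather than a hard FFT mask (which rings on transients), but the passband and notch placement are the same.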
Visual Evoked PotentialsEssentially an EEG headset with reduced electrode count and active dry electrodes
The nGoggle, a Portable Brain-Computer Interface for Assessment of Visual Function and glaucoma diagnosis – Nakanishi et al. (2017) http://dx.doi.org/10.1001/jamaophthalmol.2017.0738
REINVENT: A low-cost, virtual reality brain-computer interface for severe stroke upper limb motor recoverySpicer et al. (2017)https://doi.org/10.1109/VR.2017.7892338
A Virtual-Reality Based Neurofeedback Game Framework for Depression Rehabilitation using Pervasive Three-Electrode EEG CollectorCai et al. (2017)https://doi.org/10.1145/3127404.3127433
HTC Vive Modified With Neurable Reads Your Mind At SIGGRAPHhttps://youtu.be/47WHqDNckI8 https://www.extremetech.com/extreme/254816-eeg-virtual-reality-matrix-just-around-corner
A feasibility study on SSVEP-based interaction with motivating and immersive virtual and augmented realityJosef Faller, Brendan Z. Allison, Clemens Brunner, Reinhold Scherer, Dieter Schmalstieg, Gert Pfurtscheller, Christa Neuper
(Submitted on 15 Jan 2017)https://arxiv.org/abs/1701.03981
Magneto-retinographyWith Diamond Magnetometry
(instead of costly SQUID magnetometers used in magnetoencephalography [MEG] for example).
Eventually allowing non-contact electrical measurements, once the price drops well below $150k per instrument?
Today, Matthew Dale and Gavin Morley at the University of Warwick in the U.K. say that diamond sensors are poised to revolutionize the way physicians use magnetic field measurements in diagnostic medicine. They map out the state of the art in this area and say that the business opportunity is significant.
https://arxiv.org/abs/1705.01994
https://www.technologyreview.com/s/607871/how-diamond-sensors-are-set-to-revolutionize-medical-diagnostics/
There are around 100 SQUID MEG systems installed worldwide, at a cost of over $1M each. The MCG market should be much larger if the instrumentation was affordable and portable, because MCG has been shown to be superior to ECG and hence other non-invasive approaches for the diagnosis of coronary artery disease (CAD) [Kwong et al. 2013, Fenici et al. 2005 and 2013]. CAD is the most common type of heart disease and is the leading cause of death in the United States in both men and women.
Several companies have tried and failed to commercialize SQUID-based MCG, held back by the cost of a cryogen-based system. We estimate that 100,000 MCG systems could be sold if the functionality were the same as existing SQUID systems and the price was below $150k. This is based on there being over 100,000 hospitals in China, India, the EU, Japan and the USA.
Diamond magnetometers are at technology readiness level (TRL) 7: the technology has been demonstrated and is moving towards being put on sale. However, this has not yet reached the sensitivity needed for MCG, so an MCG system based on diamond is at TRL 4-5 (technology development).
Visual FieldsA very common measurement, even though it can be stressful for the patient, and this psychophysical measurement is noisy (noise reduced by using log units)
Similarly, visual field measurements fail to detect early changes in glaucoma, as the brain can compensate for the neurodegeneration of RGCs
ARVO 2017 poster for deep learning in visual field assessment
2846 — B0449 A deep-learning based automatic glaucoma identification. Serife Seda S. Kucur1 , M. Abegg2 , S. Wolf2 , R. Sznitman1. 1 ARTORG Center, University of Bern, Bern, Switzerland; 2 Department of Opthalmology, Inselspital Bern, Bern, Switzerland https://doi.org/10.1016/j.ophtha.2017.06.028
Visual Field Testing with Head-Mounted Perimeter ‘imo’Matsumoto et al. (2016) https://doi.org/10.1371/journal.pone.0161974
The perimeter imo has completely isolated optical systems for the right and left eyes. Stimulus presentation is also independently performed for each eye.
Visual field examination method using virtual reality glasses compared with the Humphrey perimeterTsapakis et al. (2017)doi: 10.2147/OPTH.S131160
Effect of cognitive demand on functional visual field performance in senior drivers with glaucomaGangeddula, Viswa, et al (2017)https://doi.org/10.3389/fnagi.2017.00286
Visual FunctionTowards assessing real-life function, simultaneously for diagnosis and disease progression purposes
Using virtual reality to cause a subject to correct for perceived motion has revealed that glaucoma patients’ reactions are more erratic than those of healthy individuals
Diniz-Filho, et al. (2015)doi: 10.1016/j.ophtha.2015.02.010
Christopher Kent, Senior EditorPUBLISHED 6 JULY 2015
Virtual Reality: A New Frontier in Eye Care?
https://www.reviewofophthalmology.com/article/virtual-reality-a-new-frontier-in-eye-care
Daga, Fábio B., et al. "Wayfinding and Glaucoma: A Virtual Reality Experiment." Investigative Ophthalmology & Visual Science 58.9 (2017): 3343-3349. doi: 10.1167/iovs.17-21849
The VEHuNT consisted of a cave automatic virtual environment (CAVE) used to present an immersive VR environment to study wayfinding tasks.
Using virtual/augmented reality to simulate visual impairments
By Dr. Pete R Jones http://www.ucl.ac.uk/~smgxprj/projects.html
VisionDisordersMyopia, amblyopia, etc.
Virtual reality is still far from being a mainstream technology. But when Facebook bought VR headset-maker Oculus for $2 billion last year, it signaled to the world that virtual reality was no longer sci-fi, kicking off a frenzy of experimentation.
For James Blaha, who's struggled all his life with strabismus (a vision condition more commonly known as crossed eyes, which can lead to amblyopia), virtual reality offered a potential cure, and he's built a venture-backed company, See Vividly (Vivid Vision software), based on this promise.
https://qz.com/489048/an-entrepreneur-is-using-virtual-reality-headsets-to-try-to-cure-vision-disorders/
The Cure/Diagnosis and The Cause?
http://dx.doi.org/10.3109/02713683.2016.1158271
https://endmyopia.org/virtual-reality-the-next-myopia-tsunami/
https://essilorusa.com/newsroom/virtual-reality-bad-fo-the-eye
Gamification of diagnosis and service design for eyecare.
Patients can be given the headsets in the waiting room, making their waiting time less boring and the “process” more efficient.
Similarly, it can be hard to keep children attentive; gamification might help them focus.
Amblyopia treatment of adults with dichoptic training using the virtual reality Oculus Rift head mounted display: preliminary resultsPeter Žiak, Anders Holm, Juraj Halička, Peter Mojžiš and David P Piñero BMC Ophthalmology 2017 17:105https://doi.org/10.1186/s12886-017-0501-8
Other Clinical Eye Measures
IntraocularPressureBoucard et al. 2016: “The classic view of glaucoma is that of an eye disease in which elevated intraocular pressure (IOP) mechanically damages the optic nerve (ON), causing the death of retinal ganglion cells (RGCs). Indeed, in high-pressure glaucoma (HPG, the most common form of glaucoma), RGC and ON damage are associated with an elevated IOP (>21 mmHg).[1] However, this view cannot be complete, as glaucoma with normal levels of IOP is commonly reported as well. In such normal-pressure glaucoma (NPG), damage occurs to the ON without the eye pressure exceeding the normal range. By definition, NPG only differs from HPG in that the IOP is consistently below 22 mmHg.[1] Moreover, rather than being a disease restricted to the eye, damage of the RGCs extends to the axons that form the primary visual pathways.”
The patient wears the SENSIMED Triggerfish® system up to 24 hours and assumes normal activities including sleep periods. The SENSIMED Triggerfish® Sensor is a soft disposable silicone contact lens embedding a micro-sensor that captures spontaneous circumferential changes at the corneoscleral area.
http://www.sensimed.ch/
Journal of Glaucoma. August 22, 2016. doi: 10.1097/IJG.0000000000000517
The range of IOP fluctuation was larger in the eyes with normal-tension glaucoma (NTG) than in the nonglaucoma eyes. This larger fluctuation might be one of the reasons underlying the aggravation of the visual field by NTG. Measurements of 24-hour continuous IOP might be one of the useful methods to distinguish NTG from nonglaucoma eyes.
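The 24-hour fluctuation-range measure discussed above is simply the peak-to-trough spread of the monitoring profile. The sketch below uses hypothetical hourly readings in arbitrary units (a contact-lens sensor like Triggerfish reports a profile rather than calibrated mmHg):

```python
import numpy as np

# Sketch of the 24-hour fluctuation-range measure discussed above, on
# hypothetical hourly IOP-like readings (arbitrary units).

def fluctuation_range(readings):
    """Peak-to-trough range of a 24-hour monitoring profile."""
    readings = np.asarray(readings, dtype=float)
    return readings.max() - readings.min()

# Hypothetical diurnal profiles: larger swings in the NTG-like eye.
ntg_profile = 15.0 + 4.0 * np.sin(np.linspace(0, 2 * np.pi, 24))
control_profile = 15.0 + 1.5 * np.sin(np.linspace(0, 2 * np.pi, 24))

print(round(fluctuation_range(ntg_profile), 1))
print(round(fluctuation_range(control_profile), 1))
```

A single office measurement every 6 months samples just one point of such a profile and misses the range entirely.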
Daily variation of intraocular pressure: measurement once every 6 months might not capture all relevant information
IOP TransientCyclical strain vs. constant strain (i.e. punching your eye with a fist vs. applying constant pressure over a longer time)
IOP variability by Crawford Downs at World Glaucoma Conference 2017Helsinki Finland, at “New frontiers in glaucoma” session, Saturday July 1, 2017https://youtu.be/1nrV3zztisk | https://youtu.be/QNDzq5Rp5RA
IOP vs. CSFOIntraocular pressure, cerebrospinal pressure or trans lamina pressure difference or what?
http://dx.doi.org/10.1016/j.preteyeres.2015.01.002
IntraocularPressureAlready devices with both FDA and CE approvals for clinical use
German medical device company Implandata has received CE Marking for EyeMate, the first approved IOP-monitoring system to offer 24-hour pressure measurements in patients with primary open-angle glaucoma (POAG).
The sensor consists of eight pressure-sensitive capacitors and a circular microcoil antenna, which is coupled to an external handheld unit that displays readings to the patient and sends real-time data to the physician over the internet. An associated smartphone app can be used to display IOP history and set medication alerts.
https://www.aao.org/headline/continuous-iop-monitoring-implant-approved-in-euro
OMICsGenomics, proteomics, metabolomics, lipidomics, etc
Progress in Retinal and Eye Research Volume 58, May 2017, Pages 89-114: 15.
Characterizing the “POAGome”: A bioinformatics-driven approach to primary open-angle glaucomaDanford et al. (2017) https://doi.org/10.1016/j.preteyeres.2017.02.001
Sci Rep. 2017; 7: 41595.https://dx.doi.org/10.1038/srep41595
An Ocular Protein Triad Can Classify Four Complex Retinal DiseasesKuiper et al. (2017)
In the era where molecular assessment has improved dramatically, we aimed at the identification of biomarkers in 175 ocular fluids to classify four archetypical ocular conditions affecting the retina (age-related macular degeneration, idiopathic non-infectious uveitis, primary vitreoretinal lymphoma, and rhegmatogenous retinal detachment) with one single test.
IOVS July 2017, Vol.58, BIO88-BIO98.
Omics Biomarkers in OphthalmologySusette Lauwen; Eiko K. de Jong; Dirk J. Lefeber; Anneke I. den Hollanderhttps://doi.org/10.1167/iovs.17-21809
Here, we review the application of omics techniques in eye diseases, focusing on age-related macular degeneration (AMD), diabetic retinopathy (DR), retinal detachment (RD), myopia, glaucoma, Fuchs' corneal dystrophy (FCD), cataract, keratoconus, and dry eyes. We observe that genomic analyses were mainly successful in AMD research (almost half of the genomic heritability has been explained), whereas large parts of disease variability or risk remain unsolved in most of the other diseases.
Expert Review of Ophthalmology Volume 11, 2016 - Issue 2
GWAS in myopia: insights into disease and implications for the clinicKatie M Williams & Christopher J HammondDepartment of Ophthalmology, King’s College London, London, UK; Department of Twin Research & Genetic Epidemiology, King’s College London, London, UK
In this review we focus on what a genome-wide association study involves, what studies have been performed in relation to myopia to date, and what they ultimately tell us about myopia variance and functional pathways leading to pathogenesis. The current limitations of genome-wide association studies are reviewed and potential means to improve our understanding of the genetic factors for myopia are described.
Overview of the different layers within a biological system contributing to multifactorial diseases and their relation to each other. For each layer, the name of the corresponding omics technique is indicated in blue boxes.
Unimodal ModelMany proof-of-concept (PoC) studies have been published showing that deep learning can replace humans in low-level unimodal tasks such as image analysis
Unimodal ModelThe typical ophthalmologic approach to tackle diagnostics.
Doctors are eager to find new scalar measures that best quantify the pathology
As much as it would be nice to have scalar variables and simple decision trees, this might not be a realistic way to model complex pathogenesis
Examples:
OCT BMO-MRW (Kabbara et al. 2017):“To compare the cube and radial scan patterns of the spectral domain optical coherence tomography (SD-OCT) for quantifying the Bruch's membrane opening minimum rim width (BMO-MRW). The BMO-MRW diagnostic accuracy for glaucoma detection and rates of change derived from the two scan patterns were compared.”
IOP (Chan et al. 2017):“A UK study of 8623 Norfolk residents has found that the use of intraocular pressure (IOP) to detect glaucoma is ‘inaccurate and probably not viable’ … The researchers also report that no single IOP threshold provided adequate sensitivity and specificity for diagnosis of glaucoma. … ‘The evidence around the performance of IOP for either screening or case-finding is not strong,’ Paul Foster emphasised.
UnimodalDiagnostics
JAMA. 2016;316(22):2402-2410. doi:10.1001/jama.2016.17216
One million anonymised eye scans from Moorfields Eye Hospital will be used to train an artificial intelligence (AI) system from Google DeepMind.
A spokesperson for DeepMind Health told Business Insider:
"DeepMind has reimbursed Moorfields for the direct costs they have incurred de-personalising and manually segmenting eye scans prior to transfer … "We hope this work will help doctors faster analyse the 3,000 eye scans Moorfields carries out every week.”
UnimodalDiseaseManagement
Investigative Ophthalmology & Visual Science June 2017, Vol.58, BIO141-BIO150. doi:10.1167/iovs.17-21789
UnimodalTreatment“Personalized precision” medicine
doi: 10.1016/j.preteyeres.2015.07.007
Multimodal ModelGoing beyond human capabilities, the next generation models will be incorporating “full clinical knowledge” of the patient
Multimodal Model
Traditional numerical methods for “small data” problems might not be enough for the emergence of “network medicine”
Holly F. Ainsworth et al. (2017):The use of causal inference techniques to integrate omics and GWAS data has the potential to improve biological understanding of the pathways leading to disease. Our study demonstrates the suitability of various methods for performing causal inference under several biologically plausible scenarios
Ewen Callaway (2017):“Biologists are likely to find that larger studies turn up more and more genetic variants – or “hits” - that have minuscule influences on disease” - Jonathan Pritchard, Stanford University
Multimodal ModelPersonalized precision medicine based on multiple measures with EHR mining.
Example for diagnostics | Example for management | Example for treatment
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease.
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
http://doi.org/10.1038/srep26094 https://github.com/greenelab/deep-review/issues/63
https://syncedreview.com/2017/02/26/deep-patient-improving-prognosis-with-electronic-health-records-by-deep-learning/ on-demand.gputechconf.com
Multimodal ModelPersonalized precision medicine based on multiple measures with EHR mining.
Example for diagnostics | Example for management | Example for treatment
The future of health diagnostics. Current diagnostics are based on a “snapshot” in time and limited data points. In the future, large datasets acquired over time through constant monitoring will be analyzed to establish baselines and trends, enabling preventative interventions. Part of this figure reuses a drawing previously published in Swedish et al, 2015. Copyright © 2017 ACM, Inc. Adapted from Swedish T, Roesch K, Lee IK, Rastogi K, Bernstein S, Raskar R. EyeSelfie: Self directed eye alignment using reciprocal eye box imaging. ACM Trans Graph. 2015;34(4):58.39 Karin Roesch, Tristan Swedish, and Ramesh Raskar (2017): “Automated retinal imaging and trend analysis – a tool for health monitoring”https://dx.doi.org/10.2147/OPTH.S116265
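The baseline-and-trend monitoring described above can be sketched simply: learn a personal baseline from an initial window of a longitudinal measurement and flag later readings that drift beyond it. The window length, threshold, and synthetic series below are all hypothetical:

```python
import numpy as np

# Sketch of baseline-and-trend monitoring: learn a personal baseline from an
# initial window of a longitudinal measurement and flag later readings that
# drift beyond it (window length and threshold are hypothetical choices).

def flag_drift(series, baseline_n=30, k=3.0):
    """Flag readings more than k baseline standard deviations from baseline."""
    series = np.asarray(series, dtype=float)
    base_mean = series[:baseline_n].mean()
    base_std = series[:baseline_n].std()
    return np.abs(series - base_mean) > k * base_std

rng = np.random.default_rng(2)
series = rng.normal(10.0, 0.2, 60)       # stable personal baseline
series[50:] += 2.0                       # simulated slow pathological drift
print(flag_drift(series)[:30].any(), flag_drift(series)[50:].all())
```

A production system would model seasonality and measurement noise explicitly, but the principle of a person-specific baseline rather than a population cutoff is the same.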
Multimodal ModelPersonalized precision medicine based on multiple measures with EHR mining.
Example for diagnostics | Example for management | Example for treatment
Classification of advanced stages of Parkinson’s disease: translation into stratified treatmentsJournal of Neural Transmission August 2017, Volume 124, Issue 8, pp 1015–1027Rejko Krüger, Jochen Klucken, Daniel Weiss, Lars Tönges, Pierre Kolber, Stefan Unterecker, Michael Lorrain, Horst Baas,Thomas Müller, Peter Riederer https://doi.org/10.1007/s00702-017-1707-x
Precision Medicine in Pediatric Oncology: Translating Genomic Discoveries into Optimized TherapiesAmerican Association for Cancer Research June 9, 2017Thai Hoa Tran, Avanthi Tayi Shah and Mignon L. Lohhttps://doi.org/10.1158/1078-0432.CCR-16-0115
Relative frequency of genomic alterations in neuroblastoma at diagnosis compared with relapse
UsabilityMake it as easy as possible to use for the clinician, the technician, and the eye-selfie taker
Image ManagementIntegrate deep learning algorithms to existing image management software
The easiest for the end-user: Kide Systems – Optoflow
https://www.kidesystems.com/optoflow/
Digisight Paxoshttps://www.digisight.net/ds/
Breaking Down Silos Across SpecialtiesSingapore National Eye Centre advanced its technology to capture all images and fully integrate with its EMR.http://go.merge.com/2017-Q1-OC-CS-Singapore-National-Eye-Centre_LP-Singapore-CS.html?utm_source=CS_Singapore
https://www.aao.org/eyenet/article/image-management-systems-what-you-need-to-know
Interpretability
Visualize which part of the image, the 1D signal, or the whole “disease network” causes the deep learning system to flag a patient as at risk.
Ophthalmology Retina, July–August 2017, Volume 1, Issue 4, Pages 322–327
Cecilia S. Lee, MD, Doug M. Baughman, BS, Aaron Y. Lee, MD, MSCIDepartment of Ophthalmology, University of Washington School of Medicine, Seattle, Washington. http://dx.doi.org/10.1016/j.oret.2016.12.009
An occlusion test (Zeiler and Fergus, 2014) was performed to identify the areas contributing most to the neural network’s assignment of the AMD category. A blank 20 × 20-pixel box was systematically moved across every possible position in the image, and the output probability was recorded at each position. The largest drop in probability marks the region of interest that contributed most to the deep learning algorithm’s decision.
Examples of identification of pathology by the deep learning algorithm. Optical coherence tomography images showing age-related macular degeneration (AMD) pathology (A, B, C) are used as input images, and hotspots (D, E, F) are identified using an occlusion test from the deep learning algorithm. The intensity of the color is determined by the drop in the probability of being labeled AMD when occluded.
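The occlusion test above is simple to sketch. A minimal version is below; `model_predict` is a hypothetical stand-in for whatever trained classifier (here, the Lee et al. AMD network) returns the probability of the target label for a single image:

```python
import numpy as np

def occlusion_map(model_predict, image, box=20, stride=20):
    """Occlusion sensitivity test (Zeiler & Fergus, 2014).

    Slides a blank box across the image; the drop in the model's
    predicted probability at each position measures how much that
    region contributed to the classification. `model_predict` maps
    an image to a probability of the target class (e.g. AMD).
    """
    h, w = image.shape[:2]
    baseline = model_predict(image)
    rows = (h - box) // stride + 1
    cols = (w - box) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - box + 1, stride)):
        for j, x in enumerate(range(0, w - box + 1, stride)):
            occluded = image.copy()
            occluded[y:y + box, x:x + box] = 0  # blank 20x20 patch
            # larger drop = region more important to the prediction
            heatmap[i, j] = baseline - model_predict(occluded)
    return heatmap
```

Upsampling the heatmap back to image resolution and overlaying it as color intensity gives the hotspot panels (D, E, F) described in the caption above.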
Interpretability
Not only medical algorithms benefit from opening the “black box”.
Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999.
The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.
https://youtu.be/bLqJHjXihK8
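The trade-off Tishby describes can be written as an objective over a compressed representation \(T\) of the input \(X\) with labels \(Y\) (notation is ours, summarizing the 1999 information-bottleneck formulation, not taken from the talk itself):

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

where \(I(\cdot\,;\cdot)\) is mutual information: the first term “squeezes” the representation by discarding details of \(X\), while \(\beta\) controls how much information predictive of \(Y\) must be retained through the bottleneck.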
Extra Resources
Shallow Introduction for Deep Learning Retinal Image Analysis
https://www.slideshare.net/PetteriTeikariPhD/shallow-introduction-for-deep-learning-retinal-image-analysis
Artificial Intelligence in Ophthalmology
https://www.slideshare.net/PetteriTeikariPhD/artificial-intelligence-in-ophthalmology
Data-Driven Ophthalmology
https://www.slideshare.net/PetteriTeikariPhD/datadriven-ophthalmology
Understanding the Investors: Medical AI Startups
https://www.slideshare.net/PetteriTeikariPhD/understanding-the-investors-medical-ai-startups