
NATIONAL GEOSPATIAL-INTELLIGENCE AGENCY 17.2 Small Business Innovation Research (SBIR)

Proposal Submission Instructions

GENERAL INFORMATION

The National Geospatial-Intelligence Agency has a responsibility to provide the products and services that decision makers, warfighters, and first responders need, when they need it most. As a member of the Intelligence Community and the Department of Defense, NGA supports a unique mission set. We are committed to acquiring, developing and maintaining the proper technology, people and processes that will enable overall mission success.

Geospatial intelligence, or GEOINT, is the exploitation and analysis of imagery and geospatial information to describe, assess and visually depict physical features and geographically referenced activities on the Earth. GEOINT consists of imagery, imagery intelligence and geospatial information.

With our unique mission set, NGA pursues research that will help guarantee the information edge over potential adversaries. Information on NGA’s SBIR Program can be found on the NGA SBIR website at: https://www.nga.mil/Partners/ResearchandGrants/SmallBusinessInnovationResearch/Pages/default.aspx. Additional information pertaining to the National Geospatial-Intelligence Agency’s mission can be obtained by viewing the website at http://www.nga.mil/.

Inquiries of a general nature or questions concerning the administration of the SBIR program should be addressed to:

National Geospatial-Intelligence Agency
Attn: SBIR Program Manager, IDO, MS: S75-RA
7500 GEOINT Dr., Springfield, VA 22150-7500
Email: [email protected]

For technical questions and communications with Topic Authors, see DoD Instructions, Section 4.15. For general inquiries or problems with electronic submission, contact the DoD SBIR Help Desk at [email protected] or 1-800-348-0787 between 9:00 am and 6:00 pm ET.

PHASE I PROPOSAL INFORMATION

Follow the instructions in the DoD SBIR Program BAA for program requirements and proposal submission instructions at http://www.acq.osd.mil/osbp/sbir/solicitations/index.shtml.

NGA has developed topics to which small businesses may respond in this fiscal year 2017 SBIR Phase I iteration. These topics are described on the following pages. The maximum amount of SBIR funding for a Phase I award is $100,000 and the maximum period of performance for a Phase I award is nine months. While NGA participates in the majority of SBIR program options, NGA does not participate in the Fast Track, Commercialization Pilot, Discretionary Technical Assistance (DTA) or Phase II Enhancement programs.

The entire SBIR proposal submission (consisting of a Proposal Cover Sheet, the Technical Volume, Cost Volume, and Company Commercialization Report) must be submitted electronically through the DoD SBIR/STTR Proposal Submission system located at https://sbir.defensebusiness.org/ for it to be evaluated.


The Proposal Technical Volume must be no more than 20 pages in length. The Cover Sheet, Cost Volume and Company Commercialization Report do not count against the 20-page Technical Volume page limit. Any Technical Volume that exceeds the page limit will not be considered for review. The proposal must not contain any type smaller than 10-point font size (except as legend on reduced drawings, but not tables).

Selection of Phase I proposals will be in accordance with the evaluation procedures and criteria discussed in this BAA (refer to section 6.0 of the BAA).

The NGA SBIR Program reserves the right to limit awards under any topic, and only those proposals of superior scientific and technical quality in the judgment of the technical evaluation team will be funded. The offeror must be responsive to the topic requirements, as solicited.

Federally Funded Research and Development Centers (FFRDCs) and other government contractors who have signed Non-Disclosure Agreements may be used in the evaluation of your proposal. NGA typically provides a firm-fixed-price, level-of-effort contract for Phase I awards. The type of contract is at the discretion of the Contracting Officer.

Phase I contracts will include a requirement to produce one-page monthly status reports and a more detailed interim report not later than 7 ½ months after award. These reports shall include the following sections:

• A summary of the results of the Phase I research to date
• A summary of the Phase I tasks not yet completed, with an estimated completion date for each task
• A statement of potential applications and benefits of the research.

The interim report shall be no more than 750 words long. The report shall be prepared single spaced in 12-point Times New Roman font, with at least a one-inch margin on top, bottom, and sides, on 8 ½” by 11” paper. The pages shall be numbered.

PHASE II GUIDELINES

Phase II is the demonstration of the technology found feasible in Phase I. All NGA SBIR Phase I awardees from this BAA will be allowed to submit a Phase II proposal for evaluation and possible selection.

Small businesses submitting a Phase II Proposal must use the DoD SBIR electronic proposal submission system (https://sbir.defensebusiness.org/). This site contains step-by-step instructions for the preparation and submission of the Proposal Cover Sheets, the Company Commercialization Report, the Cost Volume, and how to upload the Technical Volume. For general inquiries or problems with proposal electronic submission, contact the DoD SBIR/STTR Help Desk at (1-800-348-0787) or Help Desk email at [email protected] (9:00 am to 6:00 pm ET).

NGA SBIR Phase II Proposals have four Volumes: Proposal Cover Sheets, Technical Volume, Cost Volume and Company Commercialization Report. The Technical Volume has a 40-page limit including: table of contents, pages intentionally left blank, references, letters of support, appendices, technical portions of subcontract documents (e.g., statements of work and resumes) and any attachments. Do not include blank pages, duplicate the electronically generated Cover Sheets or put information normally associated with the Technical Volume in other sections of the proposal as these will count toward the 40-page limit.


Technical Volumes that exceed the 40-page limit will be reviewed only to the last word on the 40th page. Information beyond the 40th page will not be reviewed or considered in evaluating the offeror’s proposal. To the extent that mandatory technical content is not contained in the first 40 pages of the proposal, the evaluator may deem the proposal as non-responsive and score it accordingly.

The NGA SBIR Program will evaluate and select Phase II proposals using the evaluation criteria in Section 8.0 of the DoD SBIR Program BAA. Due to limited funding, the NGA SBIR Program reserves the right to limit awards under any topic and only proposals considered to be of superior quality will be funded. NGA typically provides a cost plus fixed fee contract as a Phase II award. The type of contract is at the discretion of the Contracting Officer.

Phase II proposals shall be limited to $1,000,000 over a two-year period with a Period of Performance not exceeding 24 months. A work breakdown structure that shows the number of hours and labor category broken out by task and subtask, as well as the start and end dates for each task and subtask, shall be included.

Phase II contracts shall include a requirement to produce one-page monthly status and financial reports, an interim report not later than 10 months after contract award, a prototype demonstration not later than 23 months after contract award and a final report not later than 24 months after contract award. These reports shall include the following sections:

• A summary of the results of the Phase II research to date
• A summary of the Phase II tasks not yet completed, with an estimated completion date for each task
• A statement of potential applications and benefits of the research.

The report shall be no more than 750 words long. The report shall be prepared single spaced in 12 point Times New Roman font, with at least a one inch margin on top, bottom, and sides, on 8 ½” by 11” paper. The pages shall be numbered.

NGA SBIR 17.2 Topic Index


NGA172-001  Accurate 3D Model Generation from Multiple Images
NGA172-002  Low-Shot Detection in Remote Sensing Imagery
NGA172-003  Signal Matching via Computationally Efficient Hashing
NGA172-004  Advanced Image Segmentation for Radar Imagery
NGA172-005  Super Resolution of Satellite Imagery using Multi-Sensor Fusion
NGA172-006  Elevation Data in Urban Environments
NGA172-007  Conflation of 3D Foundation Data
NGA172-008  Enabling temporal response of Organic Light Emitting Diode (OLED) 4K displays for smooth and continuous high MTF visualization
NGA172-009  Improved Image Processing for Low Resolution Imagery with Inter-Frame Pose Variation


NGA SBIR 17.2 Topic Descriptions

NGA172-001 TITLE: Accurate 3D Model Generation from Multiple Images

TECHNOLOGY AREA(S): Electronics, Information Systems, Sensors

OBJECTIVE: Extract high-resolution 3D models from images of textured and non-textured surfaces in an uncontrolled environment.

DESCRIPTION: A primary objective of computer vision is to enable reconstructing 3D scene geometry and camera motion from a set of images [1]. Several toolsets exist that rely on a controlled collection of images from a single calibrated and known camera to generate a model [2] [3] [4]. This research is unique and will focus on maximizing the level of accuracy and detail in an uncontrolled environment [5]. The environmental limitations include a combination of limited images [6], limited viewing angles, and missing camera metadata. The research goal is to generate a highly accurate model of a large object, such as an automobile, using any combination of the following conditions:

Scenario #1: Known Camera in Non-Controlled Environment:
• Standard DSLR camera
• No more than 15 images
• Range greater than 5m
• Access limited to <180 degrees

Scenario #2: Known Camera in Non-Controlled Environment:
• Standard video camera
• No more than 30 seconds of video
• Range greater than 5m
• Access limited to <180 degrees

Scenario #3: Internet Images [5] [6] [7]:
• Automobile must be a different color than in the previous scenarios
• No camera metadata

The desired end result is a documented workflow that can generate accurate 3D mesh models for many different types of large objects. Accuracy is measured as the difference from a manually constructed CAD model of the object. The processing architecture should be implemented using commodity hardware and leverage open-source code and projects if applicable. All results should include a measurement of the accuracy of the model generated. Collection limitations and best practices shall be documented in the final project report, along with innovative approaches used to capture data to improve model accuracy.
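Since the topic requires every result to carry an accuracy figure, below is a minimal sketch of one way such a score could be computed: cloud-to-cloud nearest-neighbor distances between surface samples of the reconstruction and of the reference CAD model. The metric choice, the synthetic sphere data, and the assumption that alignment (e.g., via ICP) happens upstream are all illustrative, not a mandated method.

```python
# A minimal sketch (not a mandated method): score a reconstructed surface
# against reference CAD samples by nearest-neighbor distance. Alignment of
# the two point sets (e.g., via ICP) is assumed to have happened upstream.
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_error(recon_pts, cad_pts):
    """recon_pts, cad_pts: (N, 3) surface samples in one frame and unit."""
    tree = cKDTree(cad_pts)            # index the reference surface samples
    dists, _ = tree.query(recon_pts)   # distance of each recon point to CAD
    return {"mean": float(dists.mean()),
            "rmse": float(np.sqrt((dists ** 2).mean())),
            "p95": float(np.percentile(dists, 95))}

# Toy check: a unit sphere as the "CAD" model vs. a noisy partial reconstruction.
rng = np.random.default_rng(0)
cad = rng.normal(size=(5000, 3))
cad /= np.linalg.norm(cad, axis=1, keepdims=True)
recon = cad[:2000] + rng.normal(scale=0.01, size=(2000, 3))
print(reconstruction_error(recon, cad))
```

Reporting a percentile alongside the mean is one way to expose localized failures (e.g., an untextured panel reconstructed poorly) that an average alone would hide.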

PHASE I: Document algorithms and/or tools that generate an accurate 3D mesh along with calculated model error and deliver as a proof-of-concept software deliverable. The final project report shall document best practices and potential ideas for innovative approaches to capture data and improve model accuracy.

PHASE II: Build a prototype workflow using innovative data collection approaches that minimize model error and/or reduce manual data manipulation. Software shall be delivered as a proof-of-concept software deliverable.

PHASE III DUAL USE APPLICATIONS: Military Application: Object Modeling, Surveillance, Technical Intelligence, Photogrammetry. Commercial Application: Photogrammetry, Object Modeling.

REFERENCES:


1. F. Dellaert, S. Seitz, C. Thorpe and S. Thrun, "Structure from Motion without Correspondence," in IEEE Computer Vision and Pattern Recognition, 2000.

2. P. Moulon, "openMVG," [Online]. Available: http://imagine.enpc.fr/~moulonp/openMVG.

3. N. Snavely, "Bundler," [Online]. Available: http://www.cs.cornell.edu/~snavely/bundler.

4. C. Wu, "VisualSFM," [Online]. Available: http://www.ccwu.me/vsfm.

5. H. M. Nguyen, W. C. Burkhard, P. Delmas, C. Lutteroth and W. van der Mark, "High resolution 3D content creation using unconstrained and uncalibrated cameras," in 6th International Conference on Human System Interactions, 2013.

6. R. Padmapriya, M. Li and X. Li, "Efficient dense 3D reconstruction using image pairs," in 10th International Conference on Computer Science & Education, 2015.

7. S. Yu, Y. Qi and X. Shen, "Multi-view Stereo Reconstruction for Internet Photos," in International Conference on Virtual Reality and Visualization, 2011.

8. S. Agarwal, N. Snavely, I. Simon, S. M. Seitz and R. Szeliski, "Building Rome in a Day," in 12th International Conference on Computer Vision, 2009.

9. Y. Furukawa, B. Curless, S. M. Seitz and R. Szeliski, "Towards Internet-scale multi-view stereo," in Computer Vision and Pattern Recognition, 2010.

KEYWORDS: image processing, structure from motion, 3D modeling, computer vision, photogrammetry

NGA172-002 TITLE: Low-Shot Detection in Remote Sensing Imagery

TECHNOLOGY AREA(S): Battlespace, Electronics, Information Systems, Sensors

OBJECTIVE: Develop novel techniques to identify and locate uncommon targets in overhead and aerial imagery, specifically when few prior examples are available. Initial focus will be on panchromatic electro-optical (EO) imagery with a subsequent extension to multi-spectral imagery (MSI) and/or synthetic aperture radar (SAR) imagery.

DESCRIPTION: The National Geospatial-Intelligence Agency (NGA) produces timely, accurate and actionable geospatial intelligence (GEOINT) to support U.S. national security. To continue to be effective in today's environment of simultaneous rapid growth in available imagery data and in the number and complexity of intelligence targets, NGA must continue to improve its automated and semi-automated methods. Of particular concern is furthering NGA's ability to identify and locate uncommon intelligence targets in large search regions.

Recent advances in computer vision due to deep learning have dramatically improved the state-of-the-art in techniques such as object detection and semantic segmentation, to include scenarios where little data is available for training (i.e., low-shot). While these approaches have shown encouraging results on NGA problems, little research has been performed which is specific to the remote sensing domain: a deficiency which this effort aims to address.

It is the intention of the government to provide labeled data (pending availability and releasability) to assist with algorithm development and capability demonstration; however, the performer is encouraged to identify and use imagery from other sources as well. Baseline performance will be established by comparison against a state-of-the-art detection network (e.g., Feature Pyramid Networks, YOLO9000) pre-trained on ImageNet or COCO. Performance evaluation will be conducted using metrics derived from precision-recall curves.
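To make that scoring concrete, here is a minimal sketch of deriving a precision-recall curve and average precision from ranked detections. It assumes detections have already been matched to ground truth upstream (e.g., by an IoU threshold); the step-rule AP and the toy inputs are illustrative assumptions, not the program's prescribed protocol.

```python
# A minimal sketch of precision-recall scoring for a detector. Inputs are
# hypothetical: `scores` are detection confidences and `matched` flags
# whether each detection hit a ground-truth object (IoU matching assumed
# done upstream). AP uses the simple stepwise area rule.
import numpy as np

def precision_recall_ap(scores, matched, n_ground_truth):
    order = np.argsort(scores)[::-1]              # rank detections by confidence
    hits = np.asarray(matched, dtype=bool)[order]
    tp = np.cumsum(hits)                          # true positives so far
    fp = np.cumsum(~hits)                         # false positives so far
    recall = tp / n_ground_truth
    precision = tp / (tp + fp)
    # Average precision: area under the stepwise PR curve.
    ap = recall[0] * precision[0] + np.sum((recall[1:] - recall[:-1]) * precision[1:])
    return precision, recall, float(ap)

scores = [0.9, 0.8, 0.7, 0.6, 0.5]
matched = [True, False, True, True, False]
p, r, ap = precision_recall_ap(scores, matched, n_ground_truth=4)
print(f"AP = {ap:.3f}")
```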

PHASE I: Address deep learning low-shot detection in panchromatic EO with consideration for extension to MSI and/or SAR in Phase II. Phase I will result in a proof of concept algorithm suite for low-shot detection and thorough documentation of conducted experiments and results in a final report.

PHASE II: Extend Phase I capabilities to MSI and/or SAR to include researching methodologies to simultaneously exploit multiple modalities and/or develop zero-shot capabilities (i.e., no prior available examples). Develop enhancements to address identified deficiencies from Phase I or those identified when processing MSI and/or SAR data. Deliver updates to the proof of concept algorithm suite and technical reports. Phase II will result in a prototype end-to-end implementation of the Phase I low-shot detection system extended to process MSI and/or SAR imagery and a comprehensive final report.

PHASE III DUAL USE APPLICATIONS: Technology enabling the automated search for uncommon objects in overhead imagery would be widely applicable across the government and commercial sectors. Military applications include national security, targeting, and intelligence. Commercially, it will apply to urban planning, geology, anthropology, economics, and search and rescue; and all other domains that benefit from identifying objects in overhead imagery.

REFERENCES:
1. Hariharan B. and Girshick R. Low-shot Visual Object Recognition by Shrinking and Hallucinating Features. arXiv preprint arXiv:1606.02819v2. 2016 Nov 30. (Updated on 5/16/17.)

2. Wang Y. and Hebert M. Combining low-density separators with CNNs. In Advances in Neural Information Processing Systems 2016 (pp.244-252).

3. Lin T. et al., Feature Pyramid Networks for Object Detection. arXiv preprint arXiv:1612.03144, 2016 Dec 9.

4. Xie M., Jean N., Burke M., Lobell D., Ermon S. Transfer learning from deep features for remote sensing and poverty mapping. arXiv preprint arXiv:1510.00098. 2015 Oct 1.

5. Shrivastava A et al. Learning from Simulated and Unsupervised Images through Adversarial Training. arXiv preprint arXiv: 1612.07828. 2016 Dec 22. (Added on 5/16/17.)

6. Bousmalis K, Silberman N, Dohan D, Erhan D, Krishnan D. Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks. arXiv preprint arXiv:1612.05424. 2016 Dec 16. (Added on 5/16/17.)

7. Redmon J, Farhadi A. YOLO9000: Better, Faster, Stronger. arXiv preprint arXiv:1612.08242. 2016 Dec 25. (Added on 5/16/17.)

8. Lin TY, et al. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision, 2014 Sept 6 (pp. 740-755). Springer International Publishing. (Added on 5/16/17.)

9. Deng J, et al. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on 2009 Jun 20 (pp. 248-255). IEEE. (Added on 5/16/17.)

KEYWORDS: computer vision, machine learning, deep learning, detection, segmentation, low shot learning, image processing

NGA172-003 TITLE: Signal Matching via Computationally Efficient Hashing


TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: Develop a fast automated signal matching algorithm based on hashing algorithms.

DESCRIPTION: Hashing-based algorithms for audio matching were introduced by Shazam Entertainment, Ltd. in 2000 to help customers identify music tracks from short audio clips collected on cell phones [1] [2]. The algorithm matches a single clip against a massive archive of possible songs in less than a second. It works by encoding each full song as an "audio fingerprint" and hashing the fingerprints for fast lookup. When a clip is searched, it is likewise encoded and hashed before being scored against the archive, producing a list of possible matching songs. The documented approach has limited success against discrete time-series data that does not generate sharp and reliable peaks in a spectrogram.

Mining large volumes of data for similar items has many different automated solutions [3]. A current algorithm that works well and is widely used is normalized cross-correlation [4]. A major downside of normalized cross-correlation is that its runtime scales with both the number of points in the input and the number of points in the archive.

The goal of this project is to develop a hash-based approach that builds upon prior work but extends the algorithm to support a wide range of single-band discrete time-series data. We are looking for an open technology development [5] that implements creative algorithms to process single-band discrete time-series data that are robust, tolerant to noise, amplitude invariant, and reliable. The desired end result is an automated algorithm that can run in real time against a large archive. The processing architecture should demonstrate scalability and be implemented using commodity hardware. The expected output is a list of the top matches in sorted order. The government recommends using stock ticker, EKG, or similar data to compare a chip of time to a historical archive and discover similar signatures within the archive.
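A minimal sketch of the fingerprint-and-hash pattern the description outlines, transplanted from audio to a generic 1D time series, is below. Peaks in a short-time spectrum are paired into (f1, f2, Δt) hashes, stored in an inverted index, and a query votes for the archive offsets it matches. The window size, peak picker, fan-out, and toy sine-wave archive are all illustrative assumptions rather than a proposed solution.

```python
# A minimal sketch of hash-based signal matching in the Shazam style,
# applied to a generic 1D time series. All parameters are illustrative.
import numpy as np
from collections import defaultdict

def fingerprints(signal, win=64, hop=32, peaks_per_frame=3, fan_out=5):
    """Return (hash, frame_index) pairs for a 1D signal."""
    peak_list = []
    for t, i in enumerate(range(0, len(signal) - win, hop)):
        mag = np.abs(np.fft.rfft(signal[i:i + win] * np.hanning(win)))
        for f in np.argsort(mag)[-peaks_per_frame:]:   # strongest bins
            peak_list.append((t, int(f)))
    hashes = []
    for i, (t1, f1) in enumerate(peak_list):
        for t2, f2 in peak_list[i + 1:i + 1 + fan_out]:  # pair nearby peaks
            hashes.append(((f1, f2, t2 - t1), t1))
    return hashes

def build_index(archive):
    """archive: {name: 1D np.ndarray}. Returns an inverted hash index."""
    index = defaultdict(list)
    for name, sig in archive.items():
        for h, t in fingerprints(sig):
            index[h].append((name, t))
    return index

def match(index, clip):
    """Vote for (series, time offset) pairs; consistent offsets win."""
    votes = defaultdict(int)
    for h, t_clip in fingerprints(clip):
        for name, t_arch in index.get(h, []):
            votes[(name, t_arch - t_clip)] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 200, 4000)) + 0.1 * rng.normal(size=4000)
idx = build_index({"series_A": series})
print(match(idx, series[1000:1500])[:3])   # top candidates, best offset first
```

Because lookup cost depends on the number of colliding hashes rather than the archive length, this structure is what lets matching stay near constant time as the archive grows; the research challenge the topic names is making the fingerprint itself robust for signals without sharp spectrogram peaks.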

PHASE I: Demonstrate an innovative algorithm that provides a rapid search capability via an API against discrete time-series data, document results in a final report, and test in a proof-of-concept scalable platform.

PHASE II: Implement the Phase I proof-of-concept algorithm against real sensor data and enhance it to correct any deficiencies identified in Phase I, with documentation of the test results and the updated algorithm.

PHASE III DUAL USE APPLICATIONS: Military Application: Data Labeling, Surveillance, Technical Intelligence. Commercial Application: Voice Matching, Pattern Recognition.

REFERENCES:
1. A. L.-C. Wang, "An Industrial Strength Audio Search Algorithm."

2. J. Haitsma and T. Kalker, "A highly robust audio fingerprinting system," in International Conference on Music Information Retrieval, Paris, France, 2002.

3. J. Leskovec, A. Rajaraman and J. D. Ullman, Mining of Massive Datasets, 2014.

4. T. Vigen, Spurious Correlations, Hachette Books, 2015.

5. DoD CIO Office, "Open Technology Development: Lessons Learned and Best Practices for Military Software," 2011.

KEYWORDS: signal processing, image processing, signal matching, audio fingerprinting, pattern recognition

NGA172-004 TITLE: Advanced Image Segmentation for Radar Imagery


TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: Construct a computer vision system for synthetic aperture radar imagery (SAR) to demonstrate segmentation into terrain types.

DESCRIPTION: NGA Research seeks image segmentation algorithms for SAR imagery to identify different terrain types (roads, forest, agricultural, urban). These algorithms should be robust against varying collection geometry and image quality with performance that is comparable to human analysis of SAR imagery.

Rather than computer vision systems that rely on tightly controlled and constrained operating conditions, the goal is to develop and demonstrate an approach that is able to classify terrain type and land use in complex scenes with varying image quality and in differing collection geometries. The proposed approaches are expected to lead to systems that can be readily repurposed to imagery from different radar systems. Preference is for fully automated image segmentation systems. Applications could include improved systems for Intelligence, Surveillance, Reconnaissance (ISR), Automated Target Recognition (ATR), attention focusing systems, and automated detection systems.
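As a concrete starting point, the sketch below shows a per-pixel baseline in the spirit of reference [1] at the end of this topic: local texture statistics feeding a random forest classifier. The feature set, window size, and synthetic speckle stand-in are illustrative assumptions; real SAR work would add speckle filtering, richer features, and spatial context.

```python
# A minimal per-pixel terrain-classification baseline for SAR-like imagery.
# Feature choices and the toy data are illustrative assumptions only.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def texture_features(img, win=9):
    """Stack local mean and local variance as an (H, W, 2) feature image."""
    mean = uniform_filter(img, win)
    var = uniform_filter(img ** 2, win) - mean ** 2
    return np.dstack([mean, var])

# Hypothetical inputs: a SAR magnitude image plus an analyst-labeled terrain
# mask (e.g., 0 = road, 1 = forest, 2 = urban), both of shape (H, W).
rng = np.random.default_rng(0)
sar = rng.gamma(shape=1.0, scale=1.0, size=(128, 128))  # speckle-like stand-in
labels = (sar > 1.0).astype(int)                        # toy 2-class mask

feats = texture_features(sar).reshape(-1, 2)
clf = RandomForestClassifier(n_estimators=50).fit(feats, labels.ravel())
segmentation = clf.predict(feats).reshape(sar.shape)    # per-pixel terrain map
```

A baseline of this kind is useful mainly as the yardstick against which the robustness goals above (varying collection geometry and image quality) are measured.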

PHASE I: Produce an architecture and system design. Identify the algorithms to be developed/adapted and where they fit in the design, the metrics by which the technology will be evaluated, and the level of performance against those metrics expected in the evaluation of the working prototype developed by the end of Phase II. Describe each of these elements in detail in the final report.

PHASE II: Construct a working prototype based on the system design developed in Phase I that incorporates all key components of the proposed approach. Evaluate the prototype using prototype data and measure its performance against the metrics defined in Phase I. Demonstrate that the approach can be adapted to varying image quality and other challenges and is adaptable to radar imagery from a variety of systems. Phase II deliverables are a demonstration of the working prototype and a final report. The final report should describe the as-built architecture, algorithm description document(s) and design of the prototype, the results of the prototype evaluation, and a description of the work needed to mature the technology to a point suitable for use in commercial and/or DoD applications.

PHASE III DUAL USE APPLICATIONS: DoD applications include automated surveillance, tracking, and autonomous detection and intelligence analysis of imagery. Commercial applications are expected to include security systems, automated classification and discovery of images, robotic vision, and traffic monitoring.

REFERENCES:
1. S. Piramanayagam, W. Schwartzkopf, F. W. Koehler, and E. Saber, "Classification of remote sensed images using random forests and deep learning framework," in Proc. SPIE 10004, Image and Signal Processing for Remote Sensing XXII, October 18, 2016. doi:10.1117/12.2243169.

KEYWORDS: computer vision, machine learning, image processing, radar, SAR, scene recognition, scene understanding, image segmentation.

NGA172-005 TITLE: Super Resolution of Satellite Imagery using Multi-Sensor Fusion

TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: Develop algorithms for enhancing the resolution of satellite imagery from one source by leveraging information from other satellite image sources that may have different resolution and different phenomenology.


DESCRIPTION: Analysis of satellite imagery is always limited by image resolution. No matter the resolution of the available imagery, there is always additional information of value that lies just beyond the sensor's ability to resolve. Super resolution is the enhancement of imagery to improve resolution beyond what the sensor collects. Typically, super resolution algorithms synthesize additional detail by exploiting the redundancy of multiple similar images, such as frames of video. Other approaches involve using models of objects to interpolate missing data by projecting edges or using high-pass filtering. These approaches fail to address the common case in which multiple sources of data are available, such as imagery taken at different times by different cameras or different sensors, but essentially of the same scene. With the proliferation of different kinds of satellite sensors for both military and commercial use, the latter scenario is becoming more common and more salient. For example, commercial small satellite companies like Planet aim to provide imagery with daily revisit rates. DigitalGlobe imagery requires several days between revisits, but can provide greater resolution.

The government seeks implementable algorithms that provide a super resolution capability able to synthesize plausible higher resolution images, enabling the interpretation and extraction of information that is not normally present in any single presentation of the data. The algorithmic approach should allow for improvement as a function of location or terrain type, based on reinforcement from an operator's approval, disapproval, or selection of a candidate super-resolved image; on new truth obtained from subsequent higher resolution collection; on other training approaches; or on a combination of these.
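For orientation, below is a minimal sketch of one classical fusion baseline consistent with this description: injecting high-frequency detail from a co-located higher-resolution image into an upsampled low-resolution image, pan-sharpening style. Co-registration is assumed done upstream, and the blur scale, scale factor, and synthetic scene are illustrative assumptions, not the learned, trainable approach the topic ultimately seeks.

```python
# A minimal multi-source fusion sketch: upsample the coarse image and add
# high-pass detail from a co-registered sharper source. Illustrative only.
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def fuse_super_resolve(low_res, high_res, sigma=2.0):
    """low_res: (h, w); high_res: (H, W) co-registered, with H/h = W/w."""
    scale = high_res.shape[0] / low_res.shape[0]
    upsampled = zoom(low_res, scale, order=3)             # spline upsample
    detail = high_res - gaussian_filter(high_res, sigma)  # high-pass of sharp source
    return upsampled + detail                             # plausible detail injection

rng = np.random.default_rng(0)
scene = gaussian_filter(rng.normal(size=(128, 128)), 3)   # synthetic ground scene
low = scene[::4, ::4]                                      # 4x coarser "sensor"
fused = fuse_super_resolve(low, scene)
```

The weakness of this baseline, detail borrowed from a different sensor or time may not be physically consistent with the coarse image, is exactly where the operator-feedback and training mechanisms described above come in.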

PHASE I: The small business will develop a proof of concept solution for super resolution of low resolution satellite imagery using examples of co-located satellite imagery from different sources. They will demonstrate the feasibility of the algorithm to generate plausible solutions that agree with training data. The company will propose a design for algorithm enhancements to permit improvement of the super-resolved imagery through a training process.

PHASE II: The small business will develop a demonstration system to implement a prototype multi-source super resolution and demonstrate that the algorithmic approach generates accurate increased interpretability of scenes, and can improve based on further training.

PHASE III DUAL USE APPLICATIONS: Government agencies such as NGA will desire the capability to support geospatial-intelligence analysis using available imagery. Private sector commercial applications will leverage remote sensing as a burgeoning field with large commercial military markets. Improved image resolution will provide value to image analysis in markets such as precision agriculture, financial intelligence, and environmental monitoring.

REFERENCES:
1. Liebel, Lukas, and Marco Korner. "Single-image super resolution for multispectral remote sensing data using convolutional neural networks." XXIII ISPRS Congress proceedings, pp. 883-890. 2016.

2. Dong, Chao, et al. "Learning a deep convolutional network for image super-resolution." European Conference on Computer Vision. Springer International Publishing, 2014.

3. Dong, Chao, et al. "Image super-resolution using deep convolutional networks." IEEE transactions on pattern analysis and machine intelligence 38.2 (2016): 295-307.

KEYWORDS: Superresolution, imagery, resolution

NGA172-006 TITLE: Elevation Data in Urban Environments

TECHNOLOGY AREA(S): Information Systems


OBJECTIVE: Develop algorithms and techniques that provide superior digital elevation data constituting the surfaces of fixed structures in urban or highly structured environments using remote sensing image data.

DESCRIPTION: Current methods to produce digital elevation maps using remote sensing have difficulty in urban settings or areas with significant vertical structures and features. Much of the digital elevation model (DEM) data used in both commercial and national databases attempts to accurately represent the base ground topography, without regard for vertical obstructions that effectively change the overlying surface.

Computational techniques for generating 3D scene geometry from stereo or multi-look electro-optical imagery are mature. These techniques typically produce dense point clouds of 3D locations consisting of unstructured, noisy data. In many cases, surfaces blend together continuously and smoothly even when sharp discontinuities in surface normals are known to exist due to the locations of buildings and other structures. The challenge is to create accurate surface boundary representations from the point clouds using models and known information about the locations and shapes of structures. While techniques exist to suppress noise, convert point clouds to boundary representations, and simplify those representations, the increasing availability of foundation data that can be associated with the point clouds provides the opportunity to produce more accurate broad surface representations. The association of point cloud data to foundation features can be mediated by maps, electro-optical (EO) imagery, and/or synthetic aperture radar (SAR) data.

It is expected that sensor fusion techniques should be able to improve elevation maps, for example employing a combination of visible and synthetic aperture radar (SAR) image data. SAR is complementary to EO as an imaging modality. Geometrically, the location ambiguity of each sample has a different structure (a range / cross range circle as opposed to an elevation / azimuth ray). With appropriate algorithms, it should be possible to exploit the complementary properties of both SAR and EO imagery to produce a more robust and higher fidelity geometric description than could be achieved with either modality alone.
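To illustrate the model-based step the description calls for, here is a minimal sketch of recovering one sharp planar facet (e.g., a flat roof) from a noisy point cloud with RANSAC. The tolerance, iteration count, and toy cloud are illustrative assumptions; production code would extract many planes and enforce building and road priors before building boundary representations.

```python
# A minimal RANSAC plane extraction sketch for noisy urban point clouds.
# Thresholds and the toy data are illustrative assumptions only.
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.05, rng=None):
    """Return a boolean inlier mask for the best-supported plane."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)   # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Toy cloud: a flat roof plane at z = 10 plus scattered clutter.
rng = np.random.default_rng(1)
roof = np.column_stack([rng.uniform(0, 20, 500), rng.uniform(0, 20, 500),
                        10 + rng.normal(0, 0.02, 500)])
clutter = rng.uniform(0, 20, (200, 3))
mask = ransac_plane(np.vstack([roof, clutter]))
print(f"plane inliers: {mask.sum()}")
```

Fitting explicit surface models like this, rather than smoothing the cloud, is what preserves the sharp surface-normal discontinuities at building edges that the description highlights.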

PHASE I: Develop algorithms and techniques to enhance and make more robust 3D elevation maps in regions with dense vertical structures such as urban environments. Assess the performance of the developed technique and the utility for both visual exploitation and value added processing such as change detection.

PHASE II: Develop an efficient implementation capable of processing large images. Test on a larger sample of government-provided data. Tune or adapt the algorithms to increase accuracy.

PHASE III DUAL USE APPLICATIONS: Apart from military use, accurate representation of structures and obstructions will be increasingly important to commercial firms that need to map and navigate urban canyons for signal propagation and for package delivery systems operating in the 3D environment.

REFERENCES:
1. O. Ruiz, S. Arroyave, D. Acosta, "Fitting of Analytic Surfaces to Noisy Point Clouds," American Journal of Computational Mathematics, 2013, 3, 18-26; http://dx.doi.org/10.4236/ajcm.2013.31A004. Published online April 2013 (http://www.scirp.org/journal/ajcm).

2. M. Awrangjeb, G. Lu, C. Fraser, "Automated Building Extraction from Lidar Data Covering Complex Urban Scenes," International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-3, 2014, http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-3/25/

KEYWORDS: Point clouds, elevation, digital elevation maps, DEM, 3D data, sensor fusion

NGA172-007 TITLE: Conflation of 3D Foundation Data


TECHNOLOGY AREA(S): Information Systems

OBJECTIVE: Provide highly accurate foundation data of buildings and linear structures by conflating information from multiple available images and volunteered information.

DESCRIPTION: Much manual labor is involved in assessing and correcting errors in geospatial information concerning roads, railroads, buildings, and other prominent features to produce accurate maps. Further, information about the 3D positions of points associated with these features is increasingly important for multiple applications. Fortunately, there is a proliferation of sources of information that can be used to curate accurate position information, although errors still need to be corrected due to noise and discrepancies that can arise from these multiple sources. Conflation refers to processes that aggregate data from the different sources, producing the most accurate assessment from the available data by taking into account potential causes of error, discounting outliers, and applying reasoning from models. For example, railroad lines (like roads) are typically highly limited in vertical grade rates, and footprints of building structures do not generally (but not always!) intersect with the spatial extent of roads.

Information sources available for the identification of the location of foundation features include (1) volunteered geographical information (VGI), (2) features extracted automatically from publicly available overhead imagery, such as the automated identification of building rooftops, (3) vector, polyline, and polygon data, or raster scan data, in maps and charts, (4) geometric data extracted from pairs or multiple overhead images by finding correspondences among points, as well as (5) other information that can include imagery, video, and maps that can be used to validate and accurately locate foundation features. Each of these sources represents research and development processes that have now progressed to relatively mature states. At issue are methods to conflate these sources in a way that provides greater accuracy than any single source alone, so as to automate the currently manual process of correcting errors and improving the accuracy of foundation features.

Ideally, foundation data would provide machine-readable representations of shapes of structures that can span 3D volumes, 2D manifolds with boundary (surfaces lying in 3D, such as ribbons), and 1D curves lying in 3D space, as appropriate to the foundation feature. Buildings, for example, are 3D volumes, whereas roads are more generally 2D surfaces. Individual information sources for foundation features are not only error prone, but they also rarely represent the information in these ideal formats. Accordingly, the conflation process should provide as much of the representation in the ideal format as possible. However, if full 3D shape information of a building is not available (which is likely to be the usual situation), then an approximate representation of the building might be based on its footprint and a height. Similarly, a road might be represented by its center-line and a width. Shortcuts such as these need to be documented and represented in a flexible format that respects various extensions to foundation feature management systems.
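As a toy illustration of the conflation step itself, the sketch below fuses position estimates of a single feature from several sources with a robust median/MAD outlier screen followed by a precision-weighted average. The source accuracies, the 3-sigma screen, and the sample coordinates are all illustrative assumptions, not a prescribed method.

```python
# A minimal conflation sketch: discount outliers robustly, then take a
# precision-weighted average of the surviving sources. Illustrative only.
import numpy as np

def conflate_positions(positions, sigmas):
    """positions: (N, 2) map coordinates of one feature from N sources;
    sigmas: (N,) reported 1-sigma accuracy of each source."""
    positions = np.asarray(positions, float)
    sigmas = np.asarray(sigmas, float)
    med = np.median(positions, axis=0)
    mad = np.median(np.abs(positions - med), axis=0) + 1e-9
    # Keep sources within roughly 3 sigma of the robust center.
    keep = (np.abs(positions - med) < 3 * 1.4826 * mad).all(axis=1)
    w = 1.0 / sigmas[keep] ** 2                 # precision weighting
    return (positions[keep] * w[:, None]).sum(axis=0) / w.sum()

# Four sources agree to within meters; a fifth (a bad VGI tag) is wildly off.
obs = [[100.2, 50.1], [100.1, 50.2], [100.3, 50.0], [100.2, 50.2], [140.0, 10.0]]
acc = [1.0, 2.0, 1.5, 1.0, 5.0]
print(conflate_positions(obs, acc))
```

A real conflation system would replace the scalar screen with the model-based reasoning described above (grade-rate limits, building/road disjointness) and operate on whole geometries rather than single points.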

PHASE I: Designate a set of foundation feature types, assemble information sources, document discrepancies, and design approaches to conflating information to improve representation and accuracy of the chosen foundation features in those datasets.

PHASE II: Develop and implement software to access information sources and conflate data to produce representations of a class of foundation features, to include roads and buildings, and demonstrate accuracy improvements from the conflation process with metrics.

PHASE III DUAL USE APPLICATIONS: The National Geospatial-Intelligence Agency regularly contracts for foundation feature development and is interested in accurate representations, while commercial uses of publicly available satellite imagery will increasingly require highly accurate representations for multiple practical applications.

REFERENCES:
1. Ohori, K. A.; Ledoux, H.; Biljecki, F.; Stoter, J. "Modeling a 3D City Model and Its Levels of Detail as a True 4D Model." ISPRS Int. J. Geo-Inf. 4 (2015) 1055-1075. Web.


2. Huh, Yong, Sungchul Yang, Chillo Ga, Kiyun Yu, and Wenzhong Shi. "Line Segment Confidence Region-based String Matching Method for Map Conflation." ISPRS Journal of Photogrammetry and Remote Sensing 78 (2013): 69-84. Web.

KEYWORDS: Conflation, 3D Feature Extraction, Foundation Data, Volunteered Geographic Information

NGA172-008 TITLE: Enabling temporal response of Organic Light Emitting Diode (OLED) 4K displays for smooth and continuous high MTF visualization

TECHNOLOGY AREA(S): Human Systems, Information Systems

OBJECTIVE: Develop novel ways to update display electronics to maximize native OLED performance within exploitation displays for high-end imagery visualization needs. Design and breadboard prototype techniques that demonstrate sustained, dynamic, sharp pixel-edge rendering by OLED primitives with novel drive circuits, interfaced through an interoperable DisplayPort interface.

DESCRIPTION: Despite recent consequential advances in applied OLED material science, the back-end electronic driver ecosystem for organic light emitting diodes (OLEDs) remains inadequate for the pixel fidelity required in high-motion visualization.

Despite techniques to interpolate and augment frame-rate pixels, the presentation and rendering of remote sensing imagery and high-speed video on typical commercial displays is inadequate for effortless cognitive imagery analysis. Due to their native temporal response, OLEDs provide a unique opportunity to natively render high-speed, high-quality pixels for analysts. To remedy this deficiency, research is required to: (1) establish latency diagrams with contemporary OLED displays, and (2) design timing and circuit models to engineer electronics that fully address and render visual content in motion imagery, retaining full edge definition over the requisite operational envelope for addressability, frame rate, and smooth and continuous scene updates.

The standard technique for CRT displays was to fully document the frequency band pass, electron gun options, and transition timelines associated with spot size, sweep rates, frame rates, beam blanking, retrace, and phosphor persistence. The result was a continuous presentation of remotely sensed imagery with movement tuned to retrace frame rates, yielding to the human vision system a visually smooth and continuous motion of raster pixels while maintaining relatively sharp edges. The transition of systems from CRT to AMLCD and recently to OLED technologies has yielded display rendering devices whose back-end electronics have not been tuned to parametric models of the temporal characteristics that the new display technology can provide. Especially in the case of OLED technology, pixel response characteristics should permit far greater motion display fidelity, absent blur or other visually degrading artifacts.
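The stakes of that tuning can be seen in the simple hold-type blur arithmetic sketched below: for an eye tracking an object moving at v pixels per frame, perceived blur grows with the fraction of the frame a pixel is held plus the pixel's own response time. All numbers here are illustrative assumptions, not measured OLED characteristics.

```python
# A minimal sketch of hold-type motion-blur arithmetic for a tracked edge.
# Parameter values are illustrative assumptions, not measured data.
def perceived_blur_px(speed_px_per_frame, frame_ms, hold_ms, response_ms):
    """Approximate blur extent (in pixels) for an eye-tracked moving edge."""
    effective_ms = min(hold_ms + response_ms, frame_ms)  # light emission window
    return speed_px_per_frame * effective_ms / frame_ms

frame_ms = 1000 / 60                      # 60 Hz frame period
for hold_ms, label in [(frame_ms, "full sample-and-hold"),
                       (2.0, "short-persistence drive")]:
    blur = perceived_blur_px(speed_px_per_frame=16, frame_ms=frame_ms,
                             hold_ms=hold_ms, response_ms=0.1)  # fast OLED pixel
    print(f"{label}: ~{blur:.1f} px of blur")
```

The point of the sketch is that with microsecond-class pixel response, the dominant blur term is the drive scheme's hold time, which is exactly why the topic targets the back-end electronics rather than the OLED material itself.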

Accordingly, there is a need to design and breadboard a superior OLED electronic interface that takes advantage of, and is tuned to, superior temporal models yielding key performance characteristics. Performance data on rendering thresholds using OLEDs and standardized visual motion test targets will be required in order to ensure uniformity of performance and accurate transfer of visual information to human analysts.

The National Geospatial-Intelligence Agency seeks novel approaches that provide the necessary understanding of how new electronics can leverage OLED systems to render high performance motion imagery data in a way that maximizes the interpretability of the total information content.

PHASE I: Develop a timing diagram of the native response of OLEDs and drive circuits for smooth, continuous, sharp-pixel definition in the rendering of motion imagery, such as full motion video from airborne assets. Design proof-of-concepts for the electronics according to parametric models that include timing latency, circuit interfaces, and information for interoperability with DisplayPort interfaces.


PHASE II: Build and demonstrate an electronic prototype (breadboard) sufficient to evaluate and demonstrate a 4K OLED display with improved pixel-in-motion performance, yielding measurable gains in visual detection, tracking, and cognition of objects in motion. Using test targets and imagery datasets, verify performance and compare with commercial off-the-shelf monitor products.

PHASE III DUAL USE APPLICATIONS: The commercialization of electronics that maximize the utility of OLED display systems would serve an extremely broad range of military and civilian applications where the visual detection of particular objects in motion is a critical analytic function requiring high-end display capabilities. Applications include remote sensing, inspection, analysis and playback of sporting events, rocket launch visualizations, analysis in destructive testing, and other activities where high temporal frame-rate visualization is important.

REFERENCES:
1. Display Performance Standard, NGA STND.0015, Version 3.1, 13 July 2010, National Geospatial-Intelligence Agency (NGA), National Center for Geospatial Intelligence Standards, see https://nsgreg.nga.mil/doc/view?i=1649 (accessed 1/17/2016).

2. Silosky MS, Marsh RM, Scherzinger AL. Imaging acquisition display performance: an evaluation and discussion of performance metrics and procedures. J Appl Clin Med Phys. 2016 Jul 08; 17(4):6220.

3. Silosky M, Marsh RM. Characterization of Luminance and Color Properties of 6-MP Wide Screen Displays. J Digit Imaging. 2016 Feb; 29(1):7-13. doi: 10.1007/s10278-015-9811-7.

4. Society for Information Display (SID), Definitions and Standards Committee, International Committee for Display Metrology (ICDM), Information Display Measurements Standard, Version 1.03, June 1, 2012.

5. National Information Display Laboratory (NIDL) Publication No. 171795-036, Display Monitor Measurement Methods under Discussion by EIA (Electronic Industries Association) Committee JT-20, Part 1: Monochrome CRT Monitor Performance, Draft Version 2.0, July 12, 1995. NIDL Publication No. 171795-037.

6. DisplayPort (DP) audio/video standard, Version 1.4, 01 March 2016, Video Electronics Standards Association (VESA), http://www.vesa.org/featured-articles/vesa-publishes-displayport-standard-version-1-4/

7. Fang Fang, Huseyin Boyaci, Daniel Kersten, and Scott O. Murray, Attention-Dependent Representation of a Size Illusion in Human V1, Current Biology 18, 1707-1712, November 11, 2008. DOI 10.1016/j.cub.2008.09.025.

8. Andrew Dillon et al., "Visual Search and Reading Tasks Using ClearType and Regular Displays," in SIGCHI Conference on Human Factors in Computing Systems, ACM Press, 2006, pp. 503-11.

9. Lee Gugerty et al., "Sub-pixel Addressing Improves Performance," ACM Transactions on Applied Perception, Vol. 1, no. 2, 2004, pp. 81-101.

KEYWORDS: display temporal performance; motion imagery; OLEDs

NGA172-009 TITLE: Improved Image Processing for Low Resolution Imagery with Inter-Frame Pose Variation


TECHNOLOGY AREA(S): Sensors

OBJECTIVE: Develop robust, automated super resolution image processing techniques to use on low resolution video of objects under a changing pose.

DESCRIPTION: Super resolution for image processing has been around since the 1980s, and multiple techniques exist consisting of various registration, interpolation and restoration methods, with the majority of research and development focused on rigid or static scene applications. Some research is available on using super resolution for moving objects [1] [2] [3], including an approach that addresses translation, rotation and zooming motion [4], but these techniques are not available as automated algorithms that can run against real-world data, nor have they been tested on representative simulated datasets for our area of interest.

Currently available super resolution image processing techniques for low resolution IR sensor data apply bias subtraction and image registration; use Drizzle algorithms to recover sampling losses in point structure targets; and then use generic deconvolution algorithms to compensate for effects from Drizzle itself, focal plane charge diffusion, and optical aberrations and diffraction. While the Drizzle algorithm is simple and fast, it requires image registration that is accurate to a small fraction of a pixel and requires a relatively large number of frames [5]. Additionally, the Drizzle algorithm currently applies only the translation transform from the image registration algorithm. When run against non-static scenes, these super-resolution algorithms create image artifacts that limit imagery analysis.

We are looking for an open technology development [6] that implements non-proprietary algorithms that are less sensitive to image registration errors and account for translation, scale, and rotational transforms of the imagery; the algorithms can build on existing techniques or use alternative techniques. The offeror should consider and place higher priority on implementations that run in a cloud processing environment. The desired end result is an automated algorithm that can run in our large scale near-real-time offline processing environment. The expected output is a super resolved image with 4-10X improved resolution over the original low resolution data that allows for improved measurements of object dimensions. The proposal should specify the datasets to be used for algorithm verification and validation. The government can provide MATLAB code of the existing technique.
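For context, a minimal sketch of the shift-and-add idea underlying Drizzle is shown below: each registered low-resolution frame deposits its samples onto a finer grid, and accumulated weights normalize the result. The known sub-pixel shifts stand in for a real registration step, true Drizzle's drop-size ("pixfrac") refinement is omitted, and the deconvolution stage the description mentions is out of scope.

```python
# A minimal shift-and-add (Drizzle-style) super-resolution sketch.
# Shifts are assumed known to sub-pixel accuracy from upstream registration.
import numpy as np

def shift_and_add(frames, shifts, scale=4):
    """frames: list of (h, w) arrays; shifts: per-frame (dy, dx) offsets
    in low-res pixels; returns an (h*scale, w*scale) estimate."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample center onto the high-res grid.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)     # deposit samples
        np.add.at(weight, (hy, hx), 1.0)    # track coverage
    return acc / np.maximum(weight, 1e-9)

# Hypothetical usage with quarter-pixel-shifted frames:
# hi = shift_and_add(frames, [(0, 0), (0.25, 0), (0, 0.25), (0.25, 0.25)])
```

Note that this baseline handles translation only, which is precisely the limitation the topic asks offerors to overcome by also accounting for scale and rotation under inter-frame pose variation.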

PHASE I: The expected product of Phase I is a super-resolution algorithm that takes as input an image sequence containing an object against a uniform background with varying pose, and outputs a super resolved image with 4-10X improved resolution over the original low resolution data that allows for measurement of the dimensions of objects, along with an estimated accuracy of the product, documented in a final report and implemented in a proof-of-concept software deliverable.

PHASE II: The expected output of Phase II is a prototype automated implementation of the Phase I proof-of-concept algorithm, tested against real sensor data and enhanced to correct any deficiencies identified in Phase I, with documentation of the test results and updated algorithm.

PHASE III DUAL USE APPLICATIONS: Military Application: Surveillance, Technical Intelligence. Commercial Application: Security and police surveillance, Medical imaging.

REFERENCES:
1. Antoine Letienne, Frederic Champagnat, Caroline Kulcsar, Guy Le Besnerais, Patrick Viaris De Lesegno, "Fast Super-Resolution on Moving Objects in Video Sequences," 16th European Signal Processing Conference, August 2008.

2. A. W. M. VanEekeren, K. Schutte, J. Dijk, D. de Lange, L. van Vliet, "Super-resolution on moving objects and background," IEEE International Conference on Image Processing, Atlanta, October 2006, pg. 2709-2712.

3. Frederick Wheeler, Anthony Hoogs, "Moving Vehicle Registration and Super-Resolution," IEEE 36th Applied Imagery Pattern Recognition Workshop, 2007, pg. 101-107.


4. Yushuang Tian, Kim-Hui Yap, Li Chen, "L1-norm Multi-frame Super-resolution from Images with Zooming Motion," IEEE 13th International Workshop on Multimedia Signal Processing, 2011, pg. 1-6.

5. Eric Shields, Drew Kouri, "An Advanced Iterative Algorithm for Super-Resolution," OPIR Tech Forum, August 2014.

6. DoD CIO's Office, "Open Technology Development (OTD): Lessons Learned & Best Practices for Military Software," 2011, https://dodcio.defense.gov/Portals/0/Documents/FOSS/OTD-lessons-learned-military-signed.pdf.

KEYWORDS: image processing, super resolution, low resolution video, Drizzle, varying pose
