NASA SBIR 2019-I Solicitation

Proposal Summary


PROPOSAL NUMBER:
 19-1-S5.03-4162
SUBTOPIC TITLE:
 Bridging the Gap of Applying Machine Learning to Earth Science
PROPOSAL TITLE:
 Multi-Resolution Deep Learning for Land Use Applications
SMALL BUSINESS CONCERN (Firm Name, Mail Address, City/State/Zip, Phone)
GeoVisual Technologies, Inc.
9191 Sheridan Boulevard, Suite 300
Westminster, CO 80031-3025
(720) 323-3399

Principal Investigator (Name, E-mail, Mail Address, City/State/Zip, Phone)

Name:
Jeffrey Orrey
E-mail:
jeffo@geovisual-analytics.com
Address:
1215 Spruce St., Suite 201, Boulder, CO 80302-4257
Phone:
(303) 955-1575

Business Official (Name, E-mail, Mail Address, City/State/Zip, Phone)

Name:
Jeffrey Orrey
E-mail:
jeffo@geovisual-analytics.com
Address:
1215 Spruce St., Suite 201, Boulder, CO 80302-4257
Phone:
(303) 955-1575
Estimated Technology Readiness Level (TRL):
Begin: 1
End: 5
Technical Abstract (Limit 2000 characters, approximately 200 words)

Increased spatial and temporal resolution of remotely sensed multispectral imagery is crucial for improved monitoring of land surface dynamics in heterogeneous landscapes undergoing rapid change. Given orbital constraints, satellite imaging sensors such as MODIS and Landsat 8 OLI exhibit tradeoffs between frequent/coarse and sparse/fine scenes, and spatiotemporal fusion techniques have been developed to synthesize images with improved spatial and temporal resolution from such complementary satellite pairs. In contrast, imagery from manned fixed-wing aircraft and UAVs can be acquired both frequently and at high resolution over limited areas. Land surface monitoring would greatly benefit from the ability to combine imagery from these disparate platforms, but their inconsistent or irregular revisit times and their variability in resolution and spectral bands make existing spatiotemporal fusion techniques inadequate for combining them effectively.

This project will exploit recent advances in deep learning to combine imagery from disparate satellite and airborne platforms, using multi-resolution image time series and transferring fine-resolution knowledge gained from higher-resolution training images to lower-resolution test scenes. We will test the feasibility of the system to provide improved classification of vegetative land cover and estimates of fractional vegetation cover, particularly for agricultural areas that frequently change on a small spatial scale. During Phase I, we will use an unmanned aerial vehicle (UAV) to make weekly multispectral image collections during the growing cycle of several agricultural crops and combine the scenes with Landsat 8 OLI and Sentinel-2 satellite imagery. We will spatially and temporally subsample the high-resolution UAV imagery to simulate imagery acquired from a variety of aerial and additional satellite platforms and compare classifier performance across different spatial resolutions and repeat periods.
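
As an illustration of the subsampling step described above, the following is a minimal sketch (not project code; the array shapes, resampling factors, and function names are illustrative assumptions) of how high-resolution UAV imagery could be spatially block-averaged and a weekly acquisition series temporally thinned to mimic coarser platforms such as Landsat 8 OLI or Sentinel-2:

import datetime as dt
import numpy as np

def spatial_downsample(image, factor):
    # Block-average a (bands, H, W) multispectral array by an integer factor,
    # approximating the footprint of a coarser-resolution sensor.
    bands, h, w = image.shape
    h_c, w_c = h // factor, w // factor
    trimmed = image[:, :h_c * factor, :w_c * factor]
    blocks = trimmed.reshape(bands, h_c, factor, w_c, factor)
    return blocks.mean(axis=(2, 4))

def temporal_subsample(dates, revisit_days):
    # Keep only acquisitions at least `revisit_days` apart, simulating a
    # platform with a sparser repeat cycle.
    kept, last = [], None
    for i, d in enumerate(dates):
        if last is None or (d - last).days >= revisit_days:
            kept.append(i)
            last = d
    return kept

if __name__ == "__main__":
    uav = np.random.rand(5, 1200, 1200).astype(np.float32)   # synthetic 5-band UAV scene
    coarse = spatial_downsample(uav, factor=100)              # e.g., ~0.1 m pixels to a ~10 m Sentinel-2-like grid
    weekly = [dt.date(2019, 6, 1) + dt.timedelta(days=7 * i) for i in range(12)]
    kept = temporal_subsample(weekly, revisit_days=16)        # e.g., a Landsat-like 16-day revisit
    print(coarse.shape, kept)

Classifier performance on the original versus the degraded series would then be compared to quantify the effect of spatial resolution and repeat period.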

Potential NASA Applications (Limit 1500 characters, approximately 150 words)

Related follow-on opportunities for NASA program infusion include integration with the TOPS-SIMS irrigation management program at the Ecological Forecasting Lab at NASA Ames, and with NASA Goddard's Harvest consortium, led by the University of Maryland, to enhance the use of satellite data in decision making related to food security and agriculture. We will also target its use in more general land cover and land use change (LCLUC) classification applications, such as earth system simulations at the NASA Center for Climate Simulation.

Potential Non-NASA Applications (Limit 1500 characters, approximately 150 words)

Regional forecasting of specialty crop production volumes for the fresh vegetable industry.

Duration: 6 months

Form Generated on 06/16/2019 23:37:17