CCPortal
DOI: 10.1016/j.rse.2019.111599
Soybean yield prediction from UAV using multimodal data fusion and deep learning
Maimaitijiang M.; Sagan V.; Sidike P.; Hartling S.; Esposito F.; Fritschi F.B.
Publication Year: 2020
ISSN: 0034-4257
Volume: 237
Abstract: Preharvest crop yield prediction is critical for grain policy making and food security. Early estimation of yield at field or plot scale also contributes to high-throughput plant phenotyping and precision agriculture. New developments in Unmanned Aerial Vehicle (UAV) platforms and sensor technology facilitate cost-effective, simultaneous multi-sensor/multimodal data collection at very high spatial and spectral resolutions. The objective of this study is to evaluate the power of UAV-based multimodal data fusion using RGB, multispectral, and thermal sensors to estimate soybean (Glycine max) grain yield within a Deep Neural Network (DNN) framework. RGB, multispectral, and thermal images were collected using a low-cost multi-sensor UAV from a test site in Columbia, Missouri, USA. Multimodal information, including canopy spectral, structural, thermal, and texture features, was extracted and combined to predict grain yield using Partial Least Squares Regression (PLSR), Random Forest Regression (RFR), Support Vector Regression (SVR), an input-level feature fusion DNN (DNN-F1), and an intermediate-level feature fusion DNN (DNN-F2). The results can be summarized in three messages: (1) multimodal data fusion improves yield prediction accuracy and is more adaptable to spatial variation; (2) DNN-based models improve prediction accuracy: the highest accuracy was obtained by DNN-F2, with an R2 of 0.720 and a relative root mean square error (RMSE%) of 15.9%; (3) DNN-based models were less prone to saturation effects and performed more adaptively in predicting grain yield across the Dwight, Pana, and AG3432 soybean genotypes in this study. Furthermore, DNN-based models demonstrated consistent performance over space, with less spatial dependency and variation. This study indicates that multimodal data fusion from a low-cost UAV within a DNN framework can provide relatively accurate and robust estimates of crop yield, and can deliver valuable insight for high-throughput phenotyping and crop field management with high spatial precision. © 2019 Elsevier Inc.
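The abstract distinguishes input-level fusion (DNN-F1), where all modality features are concatenated before entering a single network, from intermediate-level fusion (DNN-F2), where each modality is first encoded by its own subnetwork and the learned representations are then merged. A minimal numpy sketch of the two topologies follows; the feature dimensions, layer sizes, and variable names are illustrative assumptions, not the architecture or hyperparameters reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors for one plot (sizes are
# illustrative): spectral (e.g. vegetation indices), structural (canopy
# height/volume), thermal (canopy temperature), texture statistics.
spectral  = rng.normal(size=8)
structure = rng.normal(size=4)
thermal   = rng.normal(size=2)
texture   = rng.normal(size=6)
modalities = [spectral, structure, thermal, texture]

def dense(x, w, b):
    """One fully connected layer with ReLU activation."""
    return np.maximum(0.0, w @ x + b)

# --- DNN-F1: input-level fusion -------------------------------------------
# All modalities are concatenated first, then fed to a single network.
x_f1 = np.concatenate(modalities)                    # 20 raw features
w1, b1 = rng.normal(size=(16, 20)), np.zeros(16)
h_f1 = dense(x_f1, w1, b1)                           # shared representation

# --- DNN-F2: intermediate-level fusion ------------------------------------
# Each modality gets its own encoder; the per-modality representations
# are concatenated and fused by a subsequent shared layer.
encoders = [(rng.normal(size=(4, m.size)), np.zeros(4)) for m in modalities]
h_parts = [dense(m, w, b) for m, (w, b) in zip(modalities, encoders)]
x_f2 = np.concatenate(h_parts)                       # 16 learned features
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
h_f2 = dense(x_f2, w2, b2)                           # fused representation

# A final linear layer maps either representation to a yield estimate.
w_out = rng.normal(size=16)
yield_f1, yield_f2 = w_out @ h_f1, w_out @ h_f2
```

The design difference is that DNN-F2 lets each modality learn its own nonlinear encoding before fusion, which is one plausible reason the paper finds it less prone to saturation than a single network over raw concatenated features.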
Keywords: Data fusion; Deep learning; Multimodality; Phenotyping; Remote sensing; Spatial autocorrelation; Yield prediction
Language: English
Scopus Keywords: autocorrelation; crop yield; data set; food security; machine learning; numerical model; phenotype; policy making; prediction; remote sensing; soybean; spectral resolution; unmanned vehicle; Columbia [Missouri]; Missouri; United States; Glycine max
Journal: Remote Sensing of Environment
Document Type: Journal article
Item Identifier: http://gcip.llas.ac.cn/handle/2XKMVOVA/179507
Author Affiliations: Department of Earth and Atmospheric Sciences, Saint Louis University, St. Louis, MO 63108, United States; Department of Electrical and Computer Engineering, Purdue University Northwest, Hammond, IN 46323, United States; Department of Computer Science, Saint Louis University, St. Louis, MO 63108, United States; Division of Plant Sciences, University of Missouri, Columbia, MO 65211, United States
Recommended Citation
GB/T 7714
Maimaitijiang M., Sagan V., Sidike P., et al. Soybean yield prediction from UAV using multimodal data fusion and deep learning[J]. Remote Sensing of Environment, 2020, 237.
APA Maimaitijiang M., Sagan V., Sidike P., Hartling S., Esposito F., & Fritschi F. B. (2020). Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sensing of Environment, 237.
MLA Maimaitijiang M., et al. "Soybean yield prediction from UAV using multimodal data fusion and deep learning." Remote Sensing of Environment 237 (2020).
Files in This Item
No files are associated with this item.

Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.