DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner Comments
The examiner notes that claim 1 recites a contingent limitation: identifying if one or more machine-learning models contain at least one uncertainty. Contingent limitations are not required to be met because they are conditioned on a prior condition (MPEP 2111.04). The examiner recommends that the applicant amend the independent claim to remove the contingency. For the purposes of compact prosecution, the contingent limitation is interpreted as having its prior condition met.
Response to Arguments
Applicant’s arguments filed 12/22/2025 (“Remarks/Arguments”) have been fully considered but are not persuasive.
Regarding the 35 U.S.C. 103 rejections, applicant's arguments with respect to the prior art rejections have been fully considered but are not persuasive. Additionally, applicant has amended elements of canceled claim 6 into claim 1 but does not argue that Xu fails to teach the amended elements. Therefore, Xu still teaches the amended elements of now-canceled claim 6, as set forth in the previous Office Action.
Alleged Zhang reference not teaching limitations of claim 1.
In Remarks/Arguments pg. 9, applicant contends:
“Zhang has very little to do with the present amended claims. Zhang seemingly teaches a reinforcement learning (RL) technique that is very dependent on spatial ensembles. The purpose is to provide accurate crop health maps BUT by providing automated flight patterns. While Zhang does discuss agricultural maps but the innovation is concentrated on flight times and costs associated with them. Specifically, Zhang discusses the problems and provide solutions to introduce autonomous aerial scouting system. There are very specific spatial concerns to pilot software autonomously. In addition, the features provided in claim 1 below are not provided in Zhang such as: (all claim 1 limitations)”
The relevant claim limitations appear to be: training a plurality of machine-learning models for farming a region containing uncertainty, wherein uncertainty is defined as information about said region not being known due to lack of sensor or human presentation and said region being remote,…and wherein each of one or more machine-learning models is trained for a specific farm field region utilizing training data for the plurality of machine-learning models; …identifying, during the training, another region having a similarity to said specific farm field, wherein the identifying one of the farm field regions having a similarly comprises clustering farm field regions based upon a similar aspects of the farm field regions, in claim 1. As noted in the previous Office Action, Zhang teaches:
(Zhang, abstract and see Figure 3, “Unmanned aerial systems (UAS) are increasingly used in precision agriculture to collect crop health related data. UAS can capture data more often and more cost-effectively than sending human scouts into the field…Our approach uses reinforcement learning (RL) and convolutional neural networks (CNN) to accurately and autonomously sample the field. To develop and test the approach, we ran flight simulations on an aerial image dataset collected from an 80-acre corn field. The excess green vegetation Index was used as a proxy for crop health condition. Compared to the conventional UAS scouting approach, the proposed scouting approach sampled 40% of the field”).
(Zhang, pg. 5, “We chose to modify the VGG16 [28] neural network as our CNN model to predict crop health conditions. Our design trained multiple CNN models, one for each of the eight neighbors adjacent to the center management zone in a 3 × 3 grid.”).
(Zhang, pg. 5, “To generate a real-time crop health prediction map of a field for each flight step of a UAS mission, two questions need to be answered. The first question is how to fulfill an entire crop health prediction map by applying spatial CNN models to the unknown area. To solve this, we introduced the concept of a reference dataset. The reference dataset leverages the fact that the texture of a crop field is similar throughout, which means you can potentially find two zones that are quite similar given enough samples.”).
In other words, the examiner respectfully disagrees that Zhang has nothing to do with the claimed invention. As seen in the abstract and Figure 3, Zhang is concerned with using unmanned aerial systems to reduce the cost of gathering crop information from unknown farming areas. Additionally, Zhang shows the use of CNN models to model specific farm field regions and to identify clusters of farm field regions based on similar farm field region aspects. Therefore, Zhang teaches the following limitations of claim 1: training a plurality of machine-learning models for farming a region containing uncertainty, wherein uncertainty is defined as information about said region not being known due to lack of sensor or human presentation and said region being remote,…and wherein each of one or more machine-learning models is trained for a specific farm field region utilizing training data for the plurality of machine-learning models; …identifying, during the training, another region having a similarity to said specific farm field, wherein the identifying one of the farm field regions having a similarly comprises clustering farm field regions based upon a similar aspects of the farm field regions. The cited sections above show support for Zhang teaching the mentioned limitations, as set forth in the previous Office Action. Regarding the other limitations of claim 1, Hao, Jeffery, and Xu teach the remaining limitations, and the combination of all four references teaches claim 1. Therefore, applicant’s arguments are not persuasive.
Alleged Jeffery reference not teaching limitations of claim 1.
In Remarks/Arguments pg. 10-11, applicant contends:
“Jeffrey does not cure the deficiencies of Zhang as it seemingly teaches an adaptive oracle-trained learning framework for automatically building and maintaining models that are developed using machine learning algorithms based on crowd input alone. In embodiments, the framework leverages at least one oracle (e.g., a crowd) for automatic generation of high-quality training data to use in deriving a model. Once a model is trained, the framework monitors the performance of the model and, in embodiments, leverages active learning and the oracle to generate feedback about the changing data for modifying training data sets while maintaining data quality to enable incremental adaptation of the model.”
The relevant claim limitations appear to be: identifying a plurality of types of data needed for updating at least one of the plurality of machine-learning models to address at least one uncertainty within the at least one of the plurality of machine-learning model, wherein the identifying comprises determining a type of data that is needed for and similar across a subset of the plurality of machine-learning models; collecting at least one of the plurality of types of data and clustering similar data together, wherein the clustering optimizes a cost associated with collection the at least one of the plurality of types of data; identifying if one or more machine-learning models contain at least one uncertainty;…triggering data collection, when uncertainty is identified, to refine one or more models that contained uncertainty, and re-training the subset of the plurality of machine-learning models utilizing the at least one of the plurality of types of data, in claim 1. As noted in the previous Office Action, Jeffery teaches:
(Jeffery, ⁋65, “FIG. 1B illustrates a second embodiment of an example system that can be configured to implement an adaptive oracle-trained learning framework 100B that is further configured to include a training data manager component 156 for curating the training data 120 used to train and/or re-train the predictive model 130. In various embodiments, curating the training data 120 may include one or a combination of determining the composition of the training data set 120 and determining when to re-train the model 130.”).
(Jeffery, ⁋66, “In embodiments in which the input data instances are multi-dimensional data, the criteria used by the training data manager 156 for selecting the optimal subset of training data samples may be based at least in part on a feature analysis used to generate the initial training data set from which the model is derived, as described previously with reference to FIGS. 3-4” and Jeffery ⁋32, “implement an adaptive oracle-trained learning framework for automatically building and maintaining models that are developed using machine learning algorithms. In embodiments, the framework leverages at least one oracle (e.g., a crowd) for automatic generation of high-quality training data to use in deriving a model. Once a model is trained, the framework monitors the performance of the model and, in embodiments, leverages active learning and the oracle to generate feedback about the changing data for modifying training data sets while maintaining data quality to enable incremental adaptation of the model.”).
(Jeffery, ⁋45, “Turning to FIG. 3 for illustration, in embodiments, an input multi-dimensional data instance having k attributes is represented by a feature vector x 305 having k elements (x1, x2, . . . , xk), where each element in feature vector x represents the value of a corresponding attribute. Each of the elements is assigned to a particular cluster/distribution of the corresponding attribute using a clustering/distribution algorithm”).
(Jeffery, ⁋42, “In some embodiments, the feature analysis includes clustering the collected data instances into homogeneous groups across multiple dimensions using an unsupervised learning approach that is dependent on the distribution of the input data”).
In other words, the examiner respectfully disagrees with applicant's contention that Jeffery does not cure the deficiencies of Zhang. In the cited sections above, Jeffery teaches a system in which training data is curated by determining what type of data a model needs, and the model is then retrained with the curated training data in an incremental learning process. As shown above, Zhang already teaches training with uncertainty data related to a remote region. Therefore, it would have been obvious to one of ordinary skill in the art to combine Jeffery's teaching of incremental learning using a curated training dataset with Zhang's teaching of training with uncertainty data, because incrementally updating models with new data improves model accuracy in the face of changing input data and reduces the cost of training a new model from scratch (cf. Jeffery, ⁋6-8). Additionally, Jeffery teaches the following limitations of claim 1: identifying a plurality of types of data needed for updating at least one of the plurality of machine-learning models to address at least one uncertainty within the at least one of the plurality of machine-learning model, wherein the identifying comprises determining a type of data that is needed for and similar across a subset of the plurality of machine-learning models; collecting at least one of the plurality of types of data and clustering similar data together, wherein the clustering optimizes a cost associated with collection the at least one of the plurality of types of data; identifying if one or more machine-learning models contain at least one uncertainty;…triggering data collection, when uncertainty is identified, to refine one or more models that contained uncertainty, and re-training the subset of the plurality of machine-learning models utilizing the at least one of the plurality of types of data. The cited sections above show support for Jeffery teaching the mentioned limitations, as set forth in the previous Office Action.
Regarding the other limitations of claim 1, Hao and Xu teach the remaining limitations not taught by Zhang and Jeffery, and the combination of all four references teaches claim 1. Therefore, applicant’s arguments are not persuasive.
Alleged Hao reference not teaching limitations of claim 1.
In Remarks/Arguments pg. 11, applicant contends:
“Hao does not cure the deficiencies of Zhang or Jeffrey. Hao is a research paper that discusses techniques to provide training samples for crop mapping from remotely sensed images in regions that are difficult to acquire information about. Hao uses identified and well documented information to compare crop (wheat) regions in different areas. In this paper, a transfer learning (TL) workflow is proposed to use the classification model trained in contiguous U.S.A. (CONUS) to identify crop types in other regions. However, there is no uncertainty with the information that is being gathered. In the present amended claims, uncertainty of data is a factor to be considered that is not a concern of Hao.”
The relevant claim limitations appear to be: wherein the utilizing training data comprises identifying one of the farm field regions having a similarity to another of the farm field regions and transferring training data of the machine-learning model for the one of the farm field regions to the machine-learning model for the another of the another of the farm field regions; …generating at least one graph for each farm field region based upon similar identified (i) spatial aspects across a field region and (ii) temporal aspects across the field region; in claim 1. As noted in the previous Office Action, Hao teaches:
(Hao, Section 1, “Thus, the objective of this paper are (1) to propose a transfer-learning (TL) workflow for crop classification in the ground sample shortage regions, by training a machine-learning classifier with CDL high-confidence pixels and corresponding remote sensing images and using this trained classifier to identify crop types in other study regions, (2) to estimate the effect of time series length on classification accuracies, and evaluate the potential of in-season crop identification using TL.”).
(Hao, see Figure 5 and its description below, Fig. 5. Wall-to-wall comparison of TL and LO results with DOY 60– 210 and DOY 60– 330 monthly NDVI time series in HS. (a) (b) and (c) are the NDVI color composite images with the locations of validation samples collected in each sub-region. The NDVI images are composed by R: NDVI of DOY 270, G: NDVI of DOY 210, B: NDVI of DOY 150. (d) (e) and (f) are the classification result derived by TL using DOY 60– 210 NDVI time series and (g) (h) and (i) are classification results derived by LO using DOY 60– 210 NDVI time series. (j) (k) and (l) are classification results derived by TL using DOY 60– 330 NDVI time series and (m) (n) and (l) are classification result derived by LO using DOY 60– 330 NDVI time series.”).
In other words, the examiner respectfully disagrees that Hao does not teach limitations of claim 1. As shown above, Zhang already teaches training with uncertainty data related to a remote region; Hao was relied upon to teach the transfer-learning and graph-generation elements of claim 1. Hao teaches a system in which training data is transferred from a known farm region to another region with a training-data shortage, based on similarities between the farming regions, so that knowledge of the known region can be applied to the region with a sample shortage. Additionally, Hao teaches generating graphs that show spatial and temporal aspects across field regions, as seen in Figure 5. Therefore, Hao teaches the following limitations of claim 1: wherein the utilizing training data comprises identifying one of the farm field regions having a similarity to another of the farm field regions and transferring training data of the machine-learning model for the one of the farm field regions to the machine-learning model for the another of the another of the farm field regions; …generating at least one graph for each farm field region based upon similar identified (i) spatial aspects across a field region and (ii) temporal aspects across the field region. The cited sections above show support for Hao teaching the mentioned limitations, as set forth in the previous Office Action. Therefore, applicant’s arguments are not persuasive.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-2, 4, 8-13, 15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, et al., Non-Patent Literature “Whole-Field Reinforcement Learning: A Fully Autonomous Aerial Scouting Method for Precision Agriculture” (“Zhang”) in view of Hao, et al., Non-Patent Literature “Transfer Learning for Crop classification with Cropland Data Layer data (CDL) as training samples” (“Hao”) and further in view of Jeffery, et al., US Pre-Grant Publication 2019/0378044A1 (“Jeffery”) and Xu, et al., Non-Patent Literature “Farmland Extraction from High Spatial Resolution Remote Sensing Images Based on Stratified Scale Pre-Estimation” (“Xu”).
Regarding claim 1, Zhang discloses:
A computer implemented method, (Zhang, pg. 4, “It is assumed that UAS have access to edge computing systems powerful enough for RL and CNN inference. Edge servers or laptops [A computer implemented method,] sufficiently augment compute available on UAS and wireless networks allow data transfer between UAS and compute devices.”).
comprising: training a plurality of machine-learning models for farming a region containing uncertainty, wherein uncertainty is defined as information about said region not being known due to lack of sensor or human presentation and said region being remote, (Zhang, abstract and see Figure 3, “Unmanned aerial systems (UAS) are increasingly used in precision agriculture to collect crop health related data. UAS can capture data more often and more cost-effectively than sending human scouts into the field [wherein uncertainty is defined as information about said region not being known due to lack of sensor or human presentation]…Our approach uses reinforcement learning (RL) and convolutional neural networks (CNN) to accurately and autonomously sample the field [comprising: training a plurality of machine-learning models for farming a region containing uncertainty,]. To develop and test the approach, we ran flight simulations on an aerial image dataset collected from an 80-acre corn field. The excess green vegetation Index was used as a proxy for crop health condition. Compared to the conventional UAS scouting approach, the proposed scouting approach sampled 40% of the field; Figure 3 shows that there are unknown zones which are interpreted as fields with uncertainty and are remote (i.e. and said region being remote,)”).
and wherein each of one or more machine-learning models is trained for a specific farm field region utilizing training data for the plurality of machine-learning models; (Zhang, pg. 5, “We chose to modify the VGG16 [28] neural network as our CNN model to predict crop health conditions. Our design trained multiple CNN models, one for each of the eight neighbors adjacent to the center management zone in a 3 × 3 grid [and wherein each of one or more machine-learning models is trained for a specific farm field region utilizing training data for the plurality of machine-learning models;].”).
identifying, during the training, another region having a similarity to said specific farm field, wherein the identifying one of the farm field regions having a similarly comprises clustering farm field regions based upon a similar aspects of the farm field regions, (Zhang, pg. 5, “To generate a real-time crop health prediction map of a field for each flight step of a UAS mission, two questions need to be answered. The first question is how to fulfill an entire crop health prediction map by applying spatial CNN models to the unknown area. To solve this, we introduced the concept of a reference dataset. The reference dataset leverages the fact that the texture of a crop field is similar throughout, which means you can potentially find two zones that are quite similar given enough samples; finding similarities between fields is interpreted as clustering (i.e. identifying, during the training, another region having a similarity to said specific farm field, wherein the identifying one of the farm field regions having a similarly comprises clustering farm field regions based upon a similar aspects of the farm field regions,).”).
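For illustration only, and not as part of the prosecution record: the per-neighbor modeling and reference-dataset lookup that the cited passages of Zhang describe might be sketched roughly as follows. All names here are hypothetical, and a trivial mean-offset predictor stands in for Zhang's CNNs.

```python
import numpy as np

# The eight neighbor positions around the center zone of a 3x3 grid
# (Zhang trains one CNN per neighbor position).
NEIGHBOR_OFFSETS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)]

def train_neighbor_models(field, sampled):
    """Fit one simple predictor per neighbor position from sampled zones.

    `field` is a 2-D array of zone values (e.g., a vegetation index);
    `sampled` is a list of (row, col) zones the UAS has actually visited.
    """
    models = {}
    rows, cols = field.shape
    for dr, dc in NEIGHBOR_OFFSETS:
        pairs = [(field[r, c], field[r + dr, c + dc])
                 for r, c in sampled
                 if 0 <= r + dr < rows and 0 <= c + dc < cols]
        centers, neighbors = zip(*pairs)
        # Stand-in "model": mean offset from center value to neighbor value.
        models[(dr, dc)] = np.mean(np.array(neighbors) - np.array(centers))
    return models

def most_similar_zone(reference, query):
    """Reference-dataset lookup: index of the zone most similar to `query`."""
    return int(np.argmin(np.abs(np.asarray(reference) - query)))
```

The lookup function mirrors Zhang's reference-dataset idea: because field texture is similar throughout, a known zone resembling an unknown one can be found given enough samples.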
While Zhang teaches a system for training multiple models for a farming region containing uncertainty, Zhang does not explicitly teach:
wherein the similar aspects are weighted based upon an importance across the field regions;
wherein the utilizing training data comprises identifying one of the farm field regions having a similarity to another of the farm field regions and transferring training data of the machine-learning model for the one of the farm field regions to the machine-learning model for the another of the another of the farm field regions;
identifying a plurality of types of data needed for updating at least one of the plurality of machine-learning models to address at least one uncertainty within the at least one of the plurality of machine-learning model, wherein the identifying comprises determining a type of data that is needed for and similar across a subset of the plurality of machine-learning models;
collecting at least one of the plurality of types of data and clustering similar data together, wherein the clustering optimizes a cost associated with collection the at least one of the plurality of types of data;
generating at least one graph for each farm field region based upon similar identified (i) spatial aspects across a field region and (ii) temporal aspects across the field region;
identifying if one or more machine-learning models contain at least one uncertainty; triggering data collection, when uncertainty is identified, to refine one or more models that contained uncertainty, and re-training the subset of the plurality of machine-learning models utilizing the at least one of the plurality of types of data.
Hao teaches:
wherein the utilizing training data comprises identifying one of the farm field regions having a similarity to another of the farm field regions and transferring training data of the machine-learning model for the one of the farm field regions to the machine-learning model for the another of the another of the farm field regions; (Hao, Section 1, “Thus, the objective of this paper are (1) to propose a transfer-learning (TL) workflow for crop classification in the ground sample shortage regions, by training a machine-learning classifier with CDL high-confidence pixels and corresponding remote sensing images and using this trained classifier to identify crop types in other study regions [wherein the utilizing training data comprises identifying one of the farm field regions having a similarity to another of the farm field regions and transferring training data of the machine-learning model for the one of the farm field regions to the machine- learning model for the another of the another of the farm field regions], (2) to estimate the effect of time series length on classification accuracies, and evaluate the potential of in-season crop identification using TL.”).
generating at least one graph for each farm field region based upon similar identified (i) spatial aspects across a field region and (ii) temporal aspects across the field region; (Hao, see Figure 5 and its description below,
“Fig. 5. Wall-to-wall comparison of TL and LO results with DOY 60– 210 and DOY 60– 330 monthly NDVI time series in HS. (a) (b) and (c) are the NDVI color composite images with the locations of validation samples collected in each sub-region. The NDVI images are composed by R: NDVI of DOY 270, G: NDVI of DOY 210, B: NDVI of DOY 150. (d) (e) and (f) are the classification result derived by TL using DOY 60– 210 NDVI time series and (g) (h) and (i) are classification results derived by LO using DOY 60– 210 NDVI time series. (j) (k) and (l) are classification results derived by TL using DOY 60– 330 NDVI time series and (m) (n) and (l) are classification result derived by LO using DOY 60– 330 NDVI time series.”; the three columns represent 3 different sub-regions and are interpreted as a set of similar farms. The spatial similarities are interpreted as being in the same sub-region since being placed in a sub-region is interpreted as having similarity within a bigger region. DOY, or day of year, is interpreted as the temporal similarities (i.e. generating at least one graph for each farm field region based upon similar identified (i) spatial aspects across a field region and (ii) temporal aspects across the field region;)).
Zhang and Hao are in the same field of endeavor (i.e., farming applications using machine learning). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Zhang and Hao to teach the above limitation(s). The motivation for doing so is that transfer learning allows the knowledge of a known region to be applied to a region with a sample shortage, and utilizing spatial-temporal graphs aids in visualizing similarities between different regions (cf. Hao, abstract, “Training samples is fundamental for crop mapping from remotely sensed images, but difficult to acquire in many regions through ground survey, causing significant challenge for crop mapping in these regions. In this paper, a transfer learning (TL) workflow is proposed to use the classification model trained in contiguous U.S.A. (CONUS) to identify crop types in other regions.”).
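For illustration only, and not as part of the record: the transfer-learning idea relied upon above, training where labeled samples are plentiful and then classifying a sample-shortage region, might be sketched as follows, with a nearest-centroid classifier standing in for the classifiers Hao actually trains. All names are hypothetical.

```python
import numpy as np

def fit_centroids(features, labels):
    """Train on the data-rich source region: one centroid per crop class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def transfer_classify(centroids, target_features):
    """Apply the source-trained classifier to a sample-shortage target region."""
    classes = list(centroids)
    stacked = np.stack([centroids[c] for c in classes])   # (n_classes, n_dims)
    # Euclidean distance from every target sample to every class centroid.
    dists = np.linalg.norm(target_features[:, None, :] - stacked[None], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]
```

The key point mirrored from Hao is that no labels from the target region are consulted: the model fit on the source region is applied directly to the region lacking ground samples.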
While Zhang in view of Hao teaches a system for training multiple models for a farming region containing uncertainty using transfer learning, the combination does not explicitly teach:
wherein the similar aspects are weighted based upon an importance across the field regions;
identifying a plurality of types of data needed for updating at least one of the plurality of machine-learning models to address at least one uncertainty within the at least one of the plurality of machine-learning model, wherein the identifying comprises determining a type of data that is needed for and similar across a subset of the plurality of machine-learning models;
collecting at least one of the plurality of types of data and clustering similar data together, wherein the clustering optimizes a cost associated with collection the at least one of the plurality of types of data;
identifying if one or more machine-learning models contain at least one uncertainty; triggering data collection, when uncertainty is identified, to refine one or more models that contained uncertainty, and re-training the subset of the plurality of machine-learning models utilizing the at least one of the plurality of types of data.
Jeffery teaches:
identifying a plurality of types of data needed for updating at least one of the plurality of machine-learning models to address at least one uncertainty within the at least one of the plurality of machine-learning model, (Jeffery, ⁋65, “FIG. 1B illustrates a second embodiment of an example system that can be configured to implement an adaptive oracle-trained learning framework 100B that is further configured to include a training data manager component 156 for curating the training data 120 used to train and/or re-train the predictive model 130 [for updating at least one of the plurality of machine-learning models]. In various embodiments, curating the training data 120 may include one or a combination of determining the composition [identifying a plurality of types of data needed] of the training data set 120 and determining when to re-train the model 130 [to address at least one uncertainty within the at least one of the plurality of machine-learning model,].”).
wherein the identifying comprises determining a type of data that is needed for and similar across a subset of the plurality of machine-learning models; (Jeffery, ⁋66, “In embodiments in which the input data instances are multi-dimensional data, the criteria used by the training data manager 156 for selecting the optimal subset of training data samples may be based at least in part on a feature analysis used to generate the initial training data set from which the model is derived, as described previously with reference to FIGS. 3-4; Figure 3 shows that the input data is clustered based on similarity (i.e. wherein the identifying comprises determining a type of data that is needed for and similar)” and Jeffery ⁋32, “implement an adaptive oracle-trained learning framework for automatically building and maintaining models that are developed using machine learning algorithms. In embodiments, the framework leverages at least one oracle (e.g., a crowd) for automatic generation of high-quality training data to use in deriving a model. Once a model is trained, the framework monitors the performance of the model and, in embodiments, leverages active learning and the oracle to generate feedback about the changing data for modifying training data sets while maintaining data quality to enable incremental adaptation of the model; updating a single model out of multiple models is interpreted as a subset of the plurality of models (i.e. across a subset of the plurality of machine-learning models;).”).
collecting at least one of the plurality of types of data and clustering similar data together, (Jeffery, ⁋45, “Turning to FIG. 3 for illustration, in embodiments, an input multi-dimensional data instance having k attributes is represented by a feature vector x 305 having k elements (x1, x2, . . . , xk), where each element in feature vector x represents the value of a corresponding attribute. Each of the elements is assigned to a particular cluster/distribution of the corresponding attribute using a clustering/distribution algorithm [collecting at least one of the plurality of types of data and clustering similar data together,]”).
wherein the clustering optimizes a cost associated with collection of the at least one of the plurality of types of data; (Jeffery, ⁋42, “In some embodiments, the feature analysis includes clustering the collected data instances into homogeneous groups across multiple dimensions using an unsupervised learning approach that is dependent on the distribution of the input data; using an unsupervised approach for the clustering is interpreted as having a cost or objective function to guide the unsupervised approach (i.e. wherein the clustering optimizes a cost associated with collection of the at least one of the plurality of types of data;)”).
identifying if one or more machine-learning models contain at least one uncertainty; triggering data collection, when uncertainty is identified, to refine one or more models that contained uncertainty, and re-training the subset of the plurality of machine-learning models utilizing the at least one of the plurality of types of data. (Jeffery, ⁋65, “FIG. 1B illustrates a second embodiment of an example system that can be configured to implement an adaptive oracle-trained learning framework 100B that is further configured to include a training data manager component 156 for curating the training data 120 used to train and/or re-train the predictive model 130 [and re-training the subset of the plurality of machine-learning models utilizing the at least one of the plurality of types of data.]. In various embodiments, curating the training data 120 may include one or a combination of determining the composition [triggering data collection, when uncertainty is identified, to refine one or more models that contained uncertainty,] of the training data set 120 and determining when to re-train the model 130 [identifying if one or more machine-learning models contain at least one uncertainty;].”).
Zhang, in view of Hao, and Jeffery are in the same field of endeavor (i.e. machine learning). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Zhang, in view of Hao, and Jeffery to teach the above limitation(s). The motivation for doing so is that incrementally updating models with new data improves model accuracy in response to changing input data and reduces the cost of training a new model from scratch (cf. Jeffery, see ⁋6-8).
While Zhang in view of Hao and Jeffery teaches a system for training multiple models for a farming region containing uncertainty using transfer learning, the combination does not explicitly teach:
wherein the similar aspects are weighted based upon an importance across the field regions;
Xu teaches wherein the similar aspects are weighted based upon an importance across the field regions; (Xu, Section 4.2, “A small amount of mis-classification or missed-extraction of farmland parcels still exists in the experiment. The influence factors of FEA can be considered from two aspects based on the regional division. Firstly, spectral similarity between high vegetation covered farmland and woodland (vegetation except farmland) is the main factor that causes mis-classification in the farmland region. For example, in the Quickbird image-based experiment, the vegetation located in the middle of the image is mis-classified as farmland. Secondly, farmland in the urban region is not the dominant object and the low vegetation covered farmland is often confused with construction land. The category confusion degrades the FEA, which can be presented by Table 5.”; the spectral similarity and dominant object aspects are interpreted as aspects with weighted importance across the field regions because these similarity aspects cause the most uncertainty in classification (i.e. wherein the similar aspects are weighted based upon an importance across the field regions;)).
Zhang, in view of Hao and Jeffery, and Xu are in the same field of endeavor (i.e. remote sensing). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Zhang, in view of Hao and Jeffery, and Xu to teach the above limitation(s). The motivation for doing so is that weighting similar features by importance improves model robustness (cf. Xu, Section 5, “Regional division on a coarse scale can extract the farmland region on a rough scale, which not only improves the efficiency of farmland extraction, but also ensures the method’s universality”).
Regarding claim 2, Zhang in view of Hao, Jeffery, and Xu teaches the computer implemented method of claim 1. Hao further teaches wherein a farm field region comprises a set of similar farms. (Hao, see Figure 5 and its description below,
[Hao, Figure 5: media_image1.png, 983 × 623, greyscale]
“Fig. 5. Wall-to-wall comparison of TL and LO results with DOY 60– 210 and DOY 60– 330 monthly NDVI time series in HS. (a) (b) and (c) are the NDVI color composite images with the locations of validation samples collected in each sub-region. The NDVI images are composed by R: NDVI of DOY 270, G: NDVI of DOY 210, B: NDVI of DOY 150. (d) (e) and (f) are the classification result derived by TL using DOY 60– 210 NDVI time series and (g) (h) and (i) are classification results derived by LO using DOY 60– 210 NDVI time series. (j) (k) and (l) are classification results derived by TL using DOY 60– 330 NDVI time series and (m) (n) and (l) are classification result derived by LO using DOY 60– 330 NDVI time series.”; the three columns represent 3 different sub-regions and are interpreted as a set of similar farms. (i.e. wherein a farm field region comprises a set of similar farms.)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Hao with the teachings of Zhang and Jeffery for the same reasons disclosed in claim 1.
Regarding claim 4, Zhang in view of Hao, Jeffery, and Xu teaches the computer implemented method of claim 2. Hao further teaches wherein the transferring training data comprises updating the at least one graph for each farm field region with the training data. (Hao, see Figure 5 above and its description below, “Fig. 5. Wall-to-wall comparison of TL and LO results with DOY 60– 210 and DOY 60– 330 monthly NDVI time series in HS. (a) (b) and (c) are the NDVI color composite images with the locations of validation samples collected in each sub-region. The NDVI images are composed by R: NDVI of DOY 270, G: NDVI of DOY 210, B: NDVI of DOY 150. (d) (e) and (f) are the classification result derived by TL using DOY 60– 210 NDVI time series and (g) (h) and (i) are classification results derived by LO using DOY 60– 210 NDVI time series. (j) (k) and (l) are classification results derived by TL using DOY 60– 330 NDVI time series and (m) (n) and (l) are classification result derived by LO using DOY 60– 330 NDVI time series.”; the three columns represent 3 different sub-regions, and panels (d), (e), and (f) are the results of the transfer learning, which are interpreted as updating the region graphs with training data (i.e. wherein the transferring training data comprises updating the at least one graph for each farm field region with the training data)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Hao with the teachings of Zhang and Jeffery for the same reasons disclosed in claim 1.
Regarding claim 8, Zhang in view of Hao, Jeffery, and Xu teaches the computer implemented method of claim 1. Jeffery further teaches wherein the re-training comprises iteratively performing the identifying, collecting, and retraining until a level of the at least one uncertainty reaches a predetermined value. (Jeffery, ⁋68, “In embodiments, selecting a set of labeled data instances from the labeled data reservoir 155 [wherein the re-training comprises iteratively performing the identifying, collecting,] is based on a determination that re-training the model 130 with updated training data likely will result in improved model performance. In some embodiments, this determination is based at least in part on analyzing the distribution and quality of the training data. For example, in some embodiments in which the predictive model 130 is a classifier, the selection may be based at least in part on maintenance and/or improvement of class balance in the training data (e.g., adding training examples of rare categories) [and retraining until a level of the at least one uncertainty reaches a predetermined value.].”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Jeffery with the teachings of Zhang and Hao for the same reasons disclosed in claim 1.
Regarding claim 9, Zhang in view of Hao, Jeffery, and Xu teaches the computer implemented method of claim 1. Zhang further teaches wherein the training comprises utilizing at least one of: historical remote sensing indices, weather data, farming practices, and crop health (Zhang, pg. 3, “Whole-field RL uses a full history of images captured by a UAS during a scouting mission and implements complex CNN models and a RL algorithm to extrapolate a whole-field crop health map from sensed data [wherein the training comprises utilizing at least one of: historical remote sensing indices, weather data, farming practices, and crop health.].”).
Regarding claim 10, Zhang in view of Hao, Jeffery, and Xu teaches the computer implemented method of claim 1. Jeffery further teaches wherein the data comprises crowd-sourced data. (Jeffery, ⁋32, “an adaptive oracle-trained learning framework for automatically building and maintaining models that are developed using machine learning algorithms. In embodiments, the framework leverages at least one oracle (e.g., a crowd) [wherein the data comprises crowd-sourced data.] for automatic generation of high-quality training data to use in deriving a model.”).
Zhang, Hao, and Jeffery are all in the same field of endeavor (i.e. machine learning). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Zhang, Hao, and Jeffery to teach the above limitation(s). The motivation for doing so is that using crowdsourced data as a data source ensures that the data is verified before use (cf. Jeffery, ⁋32, “in embodiments, leverages active learning and the oracle to generate feedback about the changing data for modifying training data sets while maintaining data quality to enable incremental adaptation of the model.”).
Regarding claim 11, the claim is similar to claim 1. Zhang further teaches the additional limitations An apparatus, comprising: at least one processor; and a computer readable storage medium having a computer readable program code embodied therewith and executable by the at least one processor; wherein the computer readable program code is configured to (Zhang, pg. 4, “It is assumed that UAS have access to edge computing systems powerful enough for RL and CNN inference. Edge servers or laptops sufficiently augment compute available on UAS and wireless networks allow data transfer between UAS and compute devices.”; it is well known in the art that a laptop contains a processor and a computer readable storage medium containing program code (i.e. An apparatus, comprising: at least one processor; and a computer readable storage medium having a computer readable program code embodied therewith and executable by the at least one processor; wherein the computer readable program code is configured to)).
Regarding claim 12, the claim is similar to claim 1. Zhang further teaches the additional limitations A computer program product, comprising: a computer readable storage medium having a computer readable program code embodied therewith and executable by the at least one processor; wherein the computer readable program code is configured to (Zhang, pg. 4, “It is assumed that UAS have access to edge computing systems powerful enough for RL and CNN inference. Edge servers or laptops sufficiently augment compute available on UAS and wireless networks allow data transfer between UAS and compute devices.”; it is well known in the art that a laptop contains a processor and a computer readable storage medium containing program code (i.e. A computer program product, comprising: a computer readable storage medium having a computer readable program code embodied therewith and executable by the at least one processor; wherein the computer readable program code is configured to)).
Regarding claims 13, 15, 19, and 20, the claims are similar to claims 2, 4, 8, and 9. Therefore, the claims are rejected under the same rationales.
Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, et al., Non-Patent Literature “Whole-Field Reinforcement Learning: A Fully Autonomous Aerial Scouting Method for Precision Agriculture” (“Zhang”) in view of Hao, et al., Non-Patent Literature “Transfer Learning for Crop classification with Cropland Data Layer data (CDL) as training samples” (“Hao”) and further in view of Jeffery, et al., US Pre-Grant Publication 2019/0378044A1 (“Jeffery”), Xu, et al., Non-Patent Literature “Farmland Extraction from High Spatial Resolution Remote Sensing Images Based on Stratified Scale Pre-Estimation” (“Xu”), and Rydberg, et al., Non-Patent Literature “Integrated method for boundary delineation of agricultural fields in multispectral satellite images” (“Rydberg”).
Regarding claim 3, Zhang in view of Hao, Jeffery, and Xu teaches the computer implemented method of claim 2. While Zhang in view of Hao, Jeffery, and Xu teaches a system for training multiple models for a farming region containing uncertainty using transfer learning and re-training, the combination does not explicitly teach wherein the generating comprises identifying edge information between neighboring farms in the field region.
Rydberg teaches wherein the generating comprises identifying edge information between neighboring farms in the field region. (Rydberg, see Figure 5 below,
[Rydberg, Figure 5: media_image2.png, 768 × 936, greyscale]
In Figure 5, the satellite image of the farmland is processed to identify the neighboring farm fields by their edge information (i.e. wherein the generating comprises identifying edge information between neighboring farms in the field region)).
Zhang, in view of Hao, Jeffery, and Xu, and Rydberg are in the same field of endeavor (i.e. remote sensing). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Zhang, in view of Hao, Jeffery, and Xu, and Rydberg to teach the above limitation(s). The motivation for doing so is to increase the accuracy of crop classification (cf. Rydberg, Section 1, “Agricultural statistics are often obtained per field, which makes it essential to accurately define field boundaries of agricultural land parcels. Crop classification produces better results from a per-field classification than from a per-pixel classification”).
Regarding claim 14, the claim is similar to claim 3 and rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wiles, et al., US9563852B1 discloses a system of precision agriculture that identifies pest occurrence of farming fields based on similarities with neighboring fields.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S WU whose telephone number is (571)270-0939. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached on 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.S.W./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148