DETAILED ACTION
This Action is in response to Applicant’s response filed on 01/05/2026. Claims 1-2, 4-15, 17-20 and newly added claims 21-22 are pending in the present application. Claims 3 and 16 are canceled. This Action is made FINAL.
Response to Arguments
Applicant's arguments filed on 01/05/2026 have been fully considered but they are not persuasive. In the present application, applicant argues: “These cited passages of Hassanzadeh describe preprocessing, filtering, normalization, coordinate transformation, smoothing, and ECDF-based statistical processing of yield and agronomic data, but do not teach or suggest using a trained encoder model. As amended, claim 1 sets forth "processing a portion of the image data using a trained encoder model to generate an intermediate representation of the instance of agricultural satellite image data." Hassanzadeh fails to teach or suggest using a trained encoder model to generate intermediate representations. Hassanzadeh never discloses any training step, any training data, or any learning process associated with the alleged encoder model. Notably, while Hassanzadeh describes preprocessing operations that are applied to received data, Hassanzadeh does not teach or suggest any training of an encoder model, any use of training data, or any learning process by which model parameters are adjusted based on examples. The preprocessing operations in Hassanzadeh are applied using predetermined algorithms and parameters, rather than a trained encoder model as required by claim 1.” (Remark Pages 10-11)
Examiner respectfully disagrees. With respect to the Applicant’s arguments that “These cited passages of Hassanzadeh describe preprocessing, filtering, normalization, coordinate transformation, smoothing, and ECDF-based statistical processing of yield and agronomic data, but do not teach or suggest using a trained encoder model. As amended, claim 1 sets forth "processing a portion of the image data using a trained encoder model to generate an intermediate representation of the instance of agricultural satellite image data." Hassanzadeh fails to teach or suggest using a trained encoder model to generate intermediate representations. Hassanzadeh never discloses any training step, any training data, or any learning process associated with the alleged encoder model.” (Remark Page 10)
Hassanzadeh discloses that the agricultural intelligence computer system 130 may use a preconfigured agronomic model to calculate agronomic properties related to currently received location and crop information for one or more fields. The preconfigured agronomic model is based upon previously processed field data, including but not limited to, identification data, harvest data, fertilizer data, and weather data. The preconfigured agronomic model may have been cross validated to ensure accuracy of the model. Cross validation may include comparison to ground truthing that compares predicted results with actual results on a field, which is read as “a trained encoder model”. (Paragraph 124) Also, Hassanzadeh teaches in Fig. 3 that agronomic model creation may implement multivariate regression techniques to create preconfigured agronomic data models … the agricultural intelligence computer system 130 is configured or programmed to store the preconfigured agronomic data models for future field data evaluation, which is considered a “trained encoder model”. (Paragraph 130)
Furthermore, Hassanzadeh discloses that selecting one or more of blocks 704, 706, 708 may be based on manual or machine-based inspection of the received data as part of block 702 … Preprocessing of permanent characteristics data may include adjusting the soil samples to the resolution of samples per acre that was reported in the longitude and latitude coordinate system if the received data was sampled in a different resolution, and programmatically projecting the soil samples data onto UTM coordinates. Missing sample values may be interpolated at the UTM coordinates from the available data using a Gaussian process model with a constant trend whose parameters are obtained with maximum likelihood estimation, which is interpreted as “processing a portion of the image data using a trained encoder model to generate an intermediate representation of the instance of agricultural satellite image data”. For purposes of examination, the “preconfigured agronomic model and/or Gaussian process model with parameters obtained with maximum likelihood estimation” is interpreted as “a trained encoder model”. (Paragraphs 186-195)
The Examiner states that the Applicant is interpreting the claim narrowly compared to the prior art cited in the Non-Final Office Action and, in light of MPEP 2111, the Examiner has interpreted the claims properly. Specifically, during patent prosecution, the pending claims must be “given their broadest reasonable interpretation consistent with the specification.” The Examiner has interpreted the claim language in reference to the specification. Because the applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified. Although the cited reference is different from the invention disclosed, the language of Applicant's claims is sufficiently broad to reasonably read on the cited reference. A broad reading does not constitute “teaching away.”
Further, it has been held that nonpreferred embodiments failing to assert discovery beyond that known in the art do not constitute a “teaching away” unless such disclosure criticizes, discredits, or otherwise discourages the solution claimed. In re Susi, 440 F.2d 442, 169 USPQ 423 (CCPA 1971); In re Gurley, 27 F.3d 551, 554, 31 USPQ2d 1130, 1132 (Fed. Cir. 1994); In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004) (see MPEP § 2123).
It has been shown that these limitations are taught in the Hassanzadeh reference. If the applicant intends to differentiate between the Hassanzadeh reference and the present application, then such differences should be made explicit in the claims. As a result, the argued features are written such that they read upon the cited references; therefore, the previous rejection still applies.
Claim Status
Claim 19 is objected to because of minor informalities.
Claim(s) 1-2, 4, 10, 13-15 and 20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by HASSANZADEH et al (U.S. 20180132422 A1; Hassanzadeh).
Claim(s) 5, 11-12, 17 and 21-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over HASSANZADEH et al (U.S. 20180132422 A1; Hassanzadeh), in view of Guo et al (U.S. 20190228224 A1; Guo).
Claims 6-9 and 18-19 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Objections
Claim 19 is objected to because of the following informalities:
In claim 19, line 3, “at least one of an the unlabeled instance” should read “at least one of the unlabeled instance”.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-2, 4, 10, 13-15 and 20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by HASSANZADEH et al (U.S. 20180132422 A1; Hassanzadeh).
Regarding claim 1, Hassanzadeh discloses a method implemented by one or more processors, (Paragraph 134: “Main memory 406 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 404. Such instructions, when stored in non-transitory storage media accessible to processor 404, render computer system 400 into a special-purpose machine that is customized to perform the operations specified in the instructions.”; Paragraph 181: “FIG. 7 depicts an example embodiment of a management zone creation pipeline”) the method comprising:
identifying a set of agricultural satellite image data, where each instance of agricultural satellite image data includes image data capturing at least a portion of an agricultural plot; (Fig. 7 and Paragraphs 182-184: “ Block 701 represents program instructions for storing data representing transient and permanent characteristics of an agricultural field. … Permanent characteristics data may be provided as soil maps 701b, soil survey maps 701c, topology maps 701d, baresoil maps 701e, and satellite images 701f … Block 702 represents program instructions for receiving data. In block 702, data is received; for example, system 130 (FIG. 1) receives yield data and permanent characteristics data as part of the field data 106. … yield data may also include metadata such as a field boundary, a field size, and a location of each sub-field within the field”)
for each instance of agricultural satellite image data, in the set of agricultural satellite image data, processing a portion of the image data using a trained encoder model to generate an intermediate representation of the instance of agricultural satellite image data; (Fig. 7 and Paragraph 186: “Blocks 704, 706 and 708 represent program instructions for preprocessing, density processing and data smoothing of the received yield data. Preprocessing may also include identifying, and removing, the yield observations if multiple crops were planted within the field in the same season.”; Paragraph 195: “Preprocessing of permanent characteristics data may include adjusting the soil samples to the resolution of samples per acre that was reported in the longitude and latitude coordinate system if the received data was sampled in a different resolution, and programmatically projecting the soil samples data onto UTM coordinates. Missing sample values may be interpolated at the UTM coordinates from the available data using a Gaussian process model with a constant trend whose parameters are obtained with maximum likelihood estimation”; Paragraphs 124-130)
processing each of the intermediate representations of the agricultural satellite image data to generate a plurality of clusters of the agricultural satellite image data; for each cluster, in the plurality of clusters: identifying a centroid of the cluster, wherein the centroid corresponds to one or more of the intermediate representations of the agricultural satellite image data in the cluster; (Fig. 7 and Paragraphs 208-209: “Clustering is performed on data representing transient and permanent characteristic of an agricultural field to determine a plurality of cluster labels associated with pixels represented by the preprocessed data. In an embodiment, k-means clustering may be used. … In block 710, preprocessed data representing transient and permanent characteristic of an agricultural field is used to delineate a set of management zones for an agricultural field. The set of delineated management zones may be represented using stored digital zone data, and created by applying centroid-based approaches, such as the K-means approach, or a fuzzy C-means approach.”)
generating output indicating a location of the agricultural plot captured in the agricultural satellite image data corresponding to the centroid of the cluster. (Fig. 7 and Paragraphs 211-214: “The process executed in block 714 may be repeated one or more times until no small zones are identified in the set of management zones … In block 718, a set of management zones is post-processed. Post-processing of the management zones may include eliminating the zones that are fragmented or unusable.”)
Regarding claim 2, Hassanzadeh discloses further including: for each cluster, in the plurality of clusters: deploying a ground truth collection entity, to the location of the agricultural plot captured in the agricultural satellite image data corresponding to the centroid of the cluster, to collect additional image data of the agricultural plot; and generating a label for the instance of agricultural satellite image data based on the additional image data collected at the agricultural plot, wherein the label indicates the one or more crops captured in the instance of agricultural satellite image data. (Paragraph 124: “The preconfigured agronomic model may have been cross validated to ensure accuracy of the model. Cross validation may include comparison to ground truthing that compares predicted results with actual results on a field, such as a comparison of precipitation estimate with a rain gauge or sensor providing weather data at the same or nearby location or an estimate of nitrogen content with a soil sample measurement.” Paragraph 200: “Block 701 represents program instructions for storing data representing transient and permanent characteristics of an agricultural field … Transient characteristics data may include yield data 701a. Permanent characteristics data may be provided as soil maps 701b, soil survey maps 701c, topology maps 701d, baresoil maps 701e, and satellite images 701f. … Received data may include information about yield of crops harvested from an agricultural field within one year or multiple years.”; it shows that “yield data 701a” is interpreted as the additional image data; Paragraph 208: “Clustering is performed on data representing transient and permanent characteristic of an agricultural field to determine a plurality of cluster labels associated with pixels represented by the preprocessed data.”)
Regarding claim 4, Hassanzadeh discloses the ground truth collection entity is an unmanned aerial vehicle. (Paragraphs 71-73: “ Examples of field data 106 include (a) identification data … (i) imagery data (for example, imagery and light spectrum information from an agricultural apparatus sensor, camera, computer, smartphone, tablet, unmanned aerial vehicle, planes or satellite),”).
Regarding claim 10, Hassanzadeh discloses processing each of the intermediate representations of the agricultural satellite image data to generate the plurality of clusters of the agricultural satellite image data includes: processing each of the intermediate representations of the agricultural satellite image data using k-means clustering to generate the plurality of clusters of the agricultural satellite image data. (Paragraphs 218-220: “a clustering algorithm is applied to the smoothed training yield maps with different number of classes and for each class. … Examples of a clustering approach may include centroid-based multivariate clustering approaches, such as a K-means approach and a fuzzy C-means approach.”)
Regarding claim 13, Hassanzadeh discloses a non-transitory computer-readable storage medium storing instructions executable by at least one processor of a computing system to at least: (Paragraph 134: “Main memory 406 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 404. Such instructions, when stored in non-transitory storage media accessible to processor 404, render computer system 400 into a special-purpose machine that is customized to perform the operations specified in the instructions.”; Paragraph 181: “FIG. 7 depicts an example embodiment of a management zone creation pipeline”)
identify a set of agricultural satellite image data, where each instance of agricultural satellite image data includes image data capturing at least a portion of an agricultural plot; (Fig. 7 and Paragraphs 182-184: “ Block 701 represents program instructions for storing data representing transient and permanent characteristics of an agricultural field. … Permanent characteristics data may be provided as soil maps 701b, soil survey maps 701c, topology maps 701d, baresoil maps 701e, and satellite images 701f … Block 702 represents program instructions for receiving data. In block 702, data is received; for example, system 130 (FIG. 1) receives yield data and permanent characteristics data as part of the field data 106. … yield data may also include metadata such as a field boundary, a field size, and a location of each sub-field within the field”)
for each instance of agricultural satellite image data, in the set of agricultural satellite image data, process a portion of the image data using a trained encoder model to generate an intermediate representation of the instance of agricultural satellite image data; (Fig. 7 and Paragraph 186: “Blocks 704, 706 and 708 represent program instructions for preprocessing, density processing and data smoothing of the received yield data. Preprocessing may also include identifying, and removing, the yield observations if multiple crops were planted within the field in the same season.”; Paragraph 195: “Preprocessing of permanent characteristics data may include adjusting the soil samples to the resolution of samples per acre that was reported in the longitude and latitude coordinate system if the received data was sampled in a different resolution, and programmatically projecting the soil samples data onto UTM coordinates. Missing sample values may be interpolated at the UTM coordinates from the available data using a Gaussian process model with a constant trend whose parameters are obtained with maximum likelihood estimation”; Paragraphs 124-130)
process each of the intermediate representations of the agricultural satellite image data to generate a plurality of clusters of the agricultural satellite image data; for each cluster, in the plurality of clusters: identify a centroid of the cluster, wherein the centroid corresponds to one or more of the intermediate representations of the agricultural satellite image data in the cluster; (Fig. 7 and Paragraphs 208-209: “Clustering is performed on data representing transient and permanent characteristic of an agricultural field to determine a plurality of cluster labels associated with pixels represented by the preprocessed data. In an embodiment, k-means clustering may be used. … In block 710, preprocessed data representing transient and permanent characteristic of an agricultural field is used to delineate a set of management zones for an agricultural field. The set of delineated management zones may be represented using stored digital zone data, and created by applying centroid-based approaches, such as the K-means approach, or a fuzzy C-means approach.”)
generate output indicating a location of the agricultural plot captured in the agricultural satellite image data corresponding to the centroid of the cluster. (Fig. 7 and Paragraphs 211-214: “The process executed in block 714 may be repeated one or more times until no small zones are identified in the set of management zones … In block 718, a set of management zones is post-processed. Post-processing of the management zones may include eliminating the zones that are fragmented or unusable.”)
Regarding claim 14, Hassanzadeh discloses for each cluster, in the plurality of clusters: deploy a ground truth collection entity, to the location of the agricultural plot captured in the agricultural satellite image data corresponding to the centroid of the cluster, to collect additional image data of the agricultural plot; and generate a label for the instance of agricultural satellite image data based on the additional image data collected at the agricultural plot, wherein the label indicates the one or more crops captured in the instance of agricultural satellite image data. (Paragraph 124: “The preconfigured agronomic model may have been cross validated to ensure accuracy of the model. Cross validation may include comparison to ground truthing that compares predicted results with actual results on a field, such as a comparison of precipitation estimate with a rain gauge or sensor providing weather data at the same or nearby location or an estimate of nitrogen content with a soil sample measurement.” Paragraph 200: “Block 701 represents program instructions for storing data representing transient and permanent characteristics of an agricultural field … Transient characteristics data may include yield data 701a. Permanent characteristics data may be provided as soil maps 701b, soil survey maps 701c, topology maps 701d, baresoil maps 701e, and satellite images 701f. … Received data may include information about yield of crops harvested from an agricultural field within one year or multiple years.”; it shows that “yield data 701a” is interpreted as the additional image data; Paragraph 208: “Clustering is performed on data representing transient and permanent characteristic of an agricultural field to determine a plurality of cluster labels associated with pixels represented by the preprocessed data.”)
Regarding claim 15, Hassanzadeh discloses the ground truth collection entity is a human reviewer. (Paragraph 157: “Yield data may also include additional information such as a field boundary, a field size, and a location of each sub-field within the field. Yield data may be provided from different sources. Examples of the sources may include research partners, agricultural agencies, agricultural organizations, growers, governmental agencies, and others.”; Paragraph 210: “the computer system may display information about the set of first management zones to a crop grower in a graphical user interface that is programmed with widgets or controls to allow the grower to remove undesirable fragmented small zones, or to merge the fragmented small zones with larger zones.”; Paragraph 276)
Regarding claim 20, Hassanzadeh discloses the instructions for processing each of the intermediate representations of the agricultural satellite image data to generate the plurality of clusters of the agricultural satellite image data cause one or more of the at least one processor to: process each of the intermediate representations of the agricultural satellite image data using k-means clustering to generate the plurality of clusters of the agricultural satellite image data. (Paragraphs 218-220: “a clustering algorithm is applied to the smoothed training yield maps with different number of classes and for each class. … Examples of a clustering approach may include centroid-based multivariate clustering approaches, such as a K-means approach and a fuzzy C-means approach.”)
Regarding claim 21, Hassanzadeh discloses further including processing, using active learning, unlabeled instances of agricultural satellite image data or corresponding intermediate representations of the agricultural satellite image data to select one or more additional instances of agricultural satellite image data for labeling.
Regarding claim 22, Hassanzadeh discloses the instructions cause one or more of the at least one processor to process, using active learning, unlabeled instances of agricultural satellite image data or corresponding intermediate representations of the agricultural satellite image data to select one or more additional instances of agricultural satellite image data for labeling.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 5, 11-12, 17 and 21-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over HASSANZADEH et al (U.S. 20180132422 A1; Hassanzadeh), in view of Guo et al (U.S. 20190228224 A1; Guo).
Regarding claim 5, Hassanzadeh discloses further including: for each of the labels generated based on additional image data collected at the locations of the agricultural plots captured in the agricultural satellite image data corresponding to the centroid of the clusters: (Paragraph 124: “The preconfigured agronomic model may have been cross validated to ensure accuracy of the model. Cross validation may include comparison to ground truthing that compares predicted results with actual results on a field, such as a comparison of precipitation estimate with a rain gauge or sensor providing weather data at the same or nearby location or an estimate of nitrogen content with a soil sample measurement.” Paragraph 200: “Block 701 represents program instructions for storing data representing transient and permanent characteristics of an agricultural field … Transient characteristics data may include yield data 701a. Permanent characteristics data may be provided as soil maps 701b, soil survey maps 701c, topology maps 701d, baresoil maps 701e, and satellite images 701f. … Received data may include information about yield of crops harvested from an agricultural field within one year or multiple years.”; it shows that “yield data 701a” is interpreted as the additional image data; Paragraph 208: “Clustering is performed on data representing transient and permanent characteristic of an agricultural field to determine a plurality of cluster labels associated with pixels represented by the preprocessed data.”)
However, Hassanzadeh does not disclose processing the corresponding instance of agricultural satellite image data using a crop classification model to generate predicted crop output, wherein the predicted crop output indicates one or more crops captured in the instance of agricultural satellite image data; comparing the predicted crop output and the label generated based on the additional image data collected at the location corresponding to the instance of agricultural satellite image data; updating one or more portions of the crop classification model based on comparing the predicted crop output and the label generated based on the additional image data collected at the location corresponding to the instance of agricultural satellite image data.
Guo discloses for each of the labels generated based on additional image data collected at the locations of the agricultural plots captured in the agricultural satellite image data corresponding to the centroid of the clusters: (Paragraph 37: “Training logic 124 may facilitate selection of images, presentation of selected images for human labelling, use of human labeled images, obtaining governmental/publicly available crop boundary and/or crop type identified data, and/or the like. Ground truth data may also be referred to as training data, model building data, model training data”; Paragraph 84: “In addition to use of the crop/non-crop heat map, the crop boundary location determination may also be in accordance with prior knowledge information, application of de-noising techniques, application of clustering and region growing techniques,”)
processing the corresponding instance of agricultural satellite image data using a crop classification model to generate predicted crop output, wherein the predicted crop output indicates one or more crops captured in the instance of agricultural satellite image data; (Figs. 2 and 5; Paragraph 89: “FIG. 5 depicts a flow diagram illustrating an example process 500 that may be implemented by the system 100 to perform crop type classification using an existing crop type classification model and modifying the crop type classification model on an as needed basis, blocks 502, 504, 506, 508 may be similar to respective blocks 210, 212, 214, 216 of FIG. 2, except that the image sets for which the crop type classification is performed may be associated with a geographical region and/or time period that differs from the geographical region and/or time period associated with the crop type model used in block 506.”)
comparing the predicted crop output and the label generated based on the additional image data collected at the location corresponding to the instance of agricultural satellite image data; (Figs. 2 and 5: Paragraph 92: “At block 514, training logic 124 may be configured to evaluate the accuracy of at least a subset of crop types predicted using the existing crop type model in block 508 by comparison against crop types identified in the (filtered) ground truth data provided in blocks 510, 512. In some embodiments, crop type(s) classified for the same (or nearly the same) geographical areas in the two sets of identified crop type data may be compared to each other.”) and
updating one or more portions of the crop classification model based on comparing the predicted crop output and the label generated based on the additional image data collected at the location corresponding to the instance of agricultural satellite image data. (Figs. 2 and 5; Paragraphs 93-94: “ If the accuracy of the predicted crop types equals or exceeds a threshold (yes branch of block 514), then process 500 may proceed to blocks 516-522. … if crop type classification are to be updated (yes branch of block 518), then process 500 may return to block 502. … If the accuracy of the predicted crop boundaries is less than a threshold (no branch of block 514), then process 500 may proceed to block 524. … At block 524, the training logic 124 may be configured to generate a new crop type model based on (filtered) ground truth data of block 512 applied to one or more machine learning techniques/systems. Block 524 may be similar to block 206 of FIG. 2.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hassanzadeh by including the crop type classification model taught by Guo, to provide a system, apparatus, and method for crop type classification in images; thus, one of ordinary skill in the art would have been motivated to combine the references since this would improve the accuracy of the crop type prediction and identify agricultural land, on a sufficiently granular level, for one or more particular geographical regions and the crop(s) growing on the agricultural land.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 11, Hassanzadeh discloses all of the claimed invention except wherein processing each of the intermediate representations of the agricultural satellite image data using k-means clustering to generate the plurality of clusters of the agricultural satellite image data is unsupervised clustering.
Guo discloses processing each of the intermediate representations of the agricultural satellite image data using k-means clustering to generate the plurality of clusters of the agricultural satellite image data is unsupervised clustering. (Paragraph 85: “Non-supervised clustering and region growing techniques may be used to reclassify stray pixels from non-crop to crop or vice versa in areas in which a few pixels deviate from a significantly larger number of pixels surrounding them.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hassanzadeh by including the non-supervised clustering and region growing techniques taught by Guo, to arrive at a system, apparatus, and method for crop type classification in images; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve the determination or refinement of the crop boundaries.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 12, Hassanzadeh discloses all of the claimed invention except the encoder model is an encoder portion of a trained recurrent neural network transformer (RNN-T) model, wherein the RNN-T model is trained for crop classification using supervised learning.
Guo discloses the encoder model is an encoder portion of a trained recurrent neural network transformer (RNN-T) model, wherein the RNN-T model is trained for crop classification using supervised learning. (Paragraphs 78-80: “The machine learning technique/system may comprise, for example, a convolutional neural network (CNN) or supervised learning system. The crop/non-crop model may be configured to provide a probabilistic prediction of crop or non-crop for each pixel corresponding to a particular geographic location associated with an image set provided as the input.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hassanzadeh by including the machine learning model taught by Guo, to arrive at a system, apparatus, and method for crop type classification in images; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve the identification of crop boundaries in images for which crop boundaries may be unknown.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 17, Hassanzadeh discloses for each of the labels generated based on additional image data collected at the locations of the agricultural plots captured in the agricultural satellite image data corresponding to the centroid of the clusters: (Paragraph 124: “The preconfigured agronomic model may have been cross validated to ensure accuracy of the model. Cross validation may include comparison to ground truthing that compares predicted results with actual results on a field, such as a comparison of precipitation estimate with a rain gauge or sensor providing weather data at the same or nearby location or an estimate of nitrogen content with a soil sample measurement.” Paragraph 200: “Block 701 represents program instructions for storing data representing transient and permanent characteristics of an agricultural field … Transient characteristics data may include yield data 701a. Permanent characteristics data may be provided as soil maps 701b, soil survey maps 701c, topology maps 701d, baresoil maps 701e, and satellite images 701f. … Received data may include information about yield of crops harvested from an agricultural field within one year or multiple years.”, which shows that “yield data 701a” is interpreted as additional image data; Paragraph 208: “Clustering is performed on data representing transient and permanent characteristic of an agricultural field to determine a plurality of cluster labels associated with pixels represented by the preprocessed data.”)
However, Hassanzadeh does not disclose process the corresponding instance of agricultural satellite image data using a crop classification model to generate predicted crop output, wherein the predicted crop output indicates one or more crops captured in the instance of agricultural satellite image data; compare the predicted crop output and the label generated based on the additional image data collected at the location corresponding to the instance of agricultural satellite image data; and update one or more portions of the crop classification model based on comparing the predicted crop output and the label generated based on the additional image data collected at the location corresponding to the instance of agricultural satellite image data.
Guo discloses for each of the labels generated based on additional image data collected at the locations of the agricultural plots captured in the agricultural satellite image data corresponding to the centroid of the clusters: (Paragraph 37: “Training logic 124 may facilitate selection of images, presentation of selected images for human labelling, use of human labeled images, obtaining governmental/publicly available crop boundary and/or crop type identified data, and/or the like. Ground truth data may also be referred to as training data, model building data, model training data”; Paragraph 84: “In addition to use of the crop/non-crop heat map, the crop boundary location determination may also be in accordance with prior knowledge information, application of de-noising techniques, application of clustering and region growing techniques,”)
process the corresponding instance of agricultural satellite image data using a crop classification model to generate predicted crop output, wherein the predicted crop output indicates one or more crops captured in the instance of agricultural satellite image data; (Figs. 2 and 5; Paragraph 89: “FIG. 5 depicts a flow diagram illustrating an example process 500 that may be implemented by the system 100 to perform crop type classification using an existing crop type classification model and modifying the crop type classification model on an as needed basis, blocks 502, 504, 506, 508 may be similar to respective blocks 210, 212, 214, 216 of FIG. 2, except that the image sets for which the crop type classification is performed may be associated with a geographical region and/or time period that differs from the geographical region and/or time period associated with the crop type model used in block 506.”)
compare the predicted crop output and the label generated based on the additional image data collected at the location corresponding to the instance of agricultural satellite image data; (Figs. 2 and 5: Paragraph 92: “At block 514, training logic 124 may be configured to evaluate the accuracy of at least a subset of crop types predicted using the existing crop type model in block 508 by comparison against crop types identified in the (filtered) ground truth data provided in blocks 510, 512. In some embodiments, crop type(s) classified for the same (or nearly the same) geographical areas in the two sets of identified crop type data may be compared to each other.”) and
update one or more portions of the crop classification model based on comparing the predicted crop output and the label generated based on the additional image data collected at the location corresponding to the instance of agricultural satellite image data. (Figs. 2 and 5; Paragraphs 93-94: “ If the accuracy of the predicted crop types equals or exceeds a threshold (yes branch of block 514), then process 500 may proceed to blocks 516-522. … if crop type classification are to be updated (yes branch of block 518), then process 500 may return to block 502. … If the accuracy of the predicted crop boundaries is less than a threshold (no branch of block 514), then process 500 may proceed to block 524. … At block 524, the training logic 124 may be configured to generate a new crop type model based on (filtered) ground truth data of block 512 applied to one or more machine learning techniques/systems. Block 524 may be similar to block 206 of FIG. 2.”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hassanzadeh by including the crop type classification model taught by Guo, to arrive at a system, apparatus, and method for crop type classification in images; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve the accuracy of the crop type prediction and identify agricultural land on a sufficiently granular level for one or more particular geographical regions and the crop(s) growing on the agricultural land.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 21, Hassanzadeh discloses all of the claimed invention except processing, using active learning, unlabeled instances of agricultural satellite image data or corresponding intermediate representations of the agricultural satellite image data to select one or more additional instances of agricultural satellite image data for labeling.
Guo discloses processing, using active learning, unlabeled instances of agricultural satellite image data or corresponding intermediate representations of the agricultural satellite image data to select one or more additional instances of agricultural satellite image data for labeling. (Figs. 4-5 and Paragraphs 78-80: “The machine learning technique/system may comprise, for example, a convolutional neural network (CNN) or supervised learning system. The crop/non-crop model may be configured to provide a probabilistic prediction of crop or non-crop for each pixel corresponding to a particular geographic location associated with an image set provided as the input. … the machine learning technique/system may learn which land surface features in images are indicative of crops or not crops. … If the model's accuracy is less than the pre-determined threshold (no branch of block 408), then process 400 may return to block 402 to obtain/receive additional ground truth data to apply to the machine learning techniques/systems to refine the current crop/non-crop model. Providing additional ground truth data to the machine learning techniques/systems comprises providing additional supervised learning data so that the crop/non-crop model may be better configured to predict whether a pixel depicts a crop (or is located within a crop field) or not a crop (or is not located within a crop field).”, showing that “additional supervised learning data” is interpreted as “one or more additional instances of agricultural satellite image data for labeling”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hassanzadeh by including the crop type model taught by Guo, to arrive at a system, apparatus, and method for crop type classification in images; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve the accuracy of the crop type prediction and identify agricultural land on a sufficiently granular level for one or more particular geographical regions and the crop(s) growing on the agricultural land.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 22, Hassanzadeh discloses all of the claimed invention except process, using active learning, unlabeled instances of agricultural satellite image data or corresponding intermediate representations of the agricultural satellite image data to select one or more additional instances of agricultural satellite image data for labeling.
Guo discloses process, using active learning, unlabeled instances of agricultural satellite image data or corresponding intermediate representations of the agricultural satellite image data to select one or more additional instances of agricultural satellite image data for labeling. (Fig. 4 and Paragraphs 78-80: “The machine learning technique/system may comprise, for example, a convolutional neural network (CNN) or supervised learning system. The crop/non-crop model may be configured to provide a probabilistic prediction of crop or non-crop for each pixel corresponding to a particular geographic location associated with an image set provided as the input. … the machine learning technique/system may learn which land surface features in images are indicative of crops or not crops. … If the model's accuracy is less than the pre-determined threshold (no branch of block 408), then process 400 may return to block 402 to obtain/receive additional ground truth data to apply to the machine learning techniques/systems to refine the current crop/non-crop model. Providing additional ground truth data to the machine learning techniques/systems comprises providing additional supervised learning data so that the crop/non-crop model may be better configured to predict whether a pixel depicts a crop (or is located within a crop field) or not a crop (or is not located within a crop field).”, showing that “additional supervised learning data” is interpreted as “one or more additional instances of agricultural satellite image data for labeling”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hassanzadeh by including the crop type model taught by Guo, to arrive at a system, apparatus, and method for crop type classification in images; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve the accuracy of the crop type prediction and identify agricultural land on a sufficiently granular level for one or more particular geographical regions and the crop(s) growing on the agricultural land.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Allowable Subject Matter
Claims 6-9 and 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 6, Hassanzadeh, Guo and Xian do not disclose further comprising:
processing, using active learning, the unlabeled instances of agricultural satellite image data and/or the corresponding intermediate representations of the agricultural satellite image data, in the set of agricultural satellite image data, to select one or more additional instances of agricultural satellite image data to label; for each of the additional instances of agricultural satellite image data selected to label: generating additional output indicating the location of an additional agricultural plot captured in the additional instance of agricultural satellite image data; deploying an additional ground truth collection entity, to the location of the additional agricultural plot captured in the additional instance of agricultural satellite image, to collect further image data of the additional agricultural plot; generating an additional label for the additional instance of agricultural satellite image data based on the further image data collected at the additional agricultural plot, wherein the additional label indicates the one or more crops captured in the additional instance of agricultural satellite image data; processing the additional instance of agricultural satellite image data using the crop classification model to generate additional crop prediction output, wherein the additional crop prediction output indicates the one or more crops captured in the additional agricultural plot captured in the additional instance of agricultural image data; and updating one or more portions of the crop classification model based on comparing the additional label and the additional crop prediction output.
Claim 6 contains allowable subject matter by the limitation further including: processing, using active learning, at least one of an unlabeled instances of agricultural satellite image data or the corresponding intermediate representations of the agricultural satellite image data, in the set of agricultural satellite image data, to select one or more additional instances of agricultural satellite image data to label; for each of the additional instances of agricultural satellite image data selected to label: generating additional output indicating the location of an additional agricultural plot captured in the additional instance of agricultural satellite image data; deploying an additional ground truth collection entity, to the location of the additional agricultural plot captured in the additional instance of agricultural satellite image, to collect further image data of the additional agricultural plot; generating an additional label for the additional instance of agricultural satellite image data based on the further image data collected at the additional agricultural plot, wherein the additional label indicates the one or more crops captured in the additional instance of agricultural satellite image data; processing the additional instance of agricultural satellite image data using the crop classification model to generate additional crop prediction output, wherein the additional crop prediction output indicates the one or more crops captured in the additional agricultural plot captured in the additional instance of agricultural image data; and updating one or more portions of the crop classification model based on comparing the additional label and the additional crop prediction output.
Regarding claim 7, Hassanzadeh, Guo and Xian do not disclose further including: processing, using active learning, at least one of the unlabeled instance of agricultural satellite image data or the corresponding intermediate representations of the agricultural satellite image data, in the set of agricultural satellite image data, to select one or more further instances of agricultural satellite image data to label.
Claim 7 would be allowable because it depends on claim 6.
Regarding claim 8, Hassanzadeh, Guo and Xian do not disclose wherein processing, using active learning, at least one of the unlabeled instance of agricultural satellite image data or the corresponding intermediate representations of the agricultural satellite image data, in the set of agricultural satellite image data, to select one or more additional instances of agricultural satellite image data to label includes: for each unlabeled instance of agricultural image data: processing the unlabeled instance of agricultural satellite image data using the crop prediction model to generate candidate output, wherein the candidate output includes a confidence measure indicating the probability one or more crops are captured in the corresponding unlabeled instance of agricultural satellite image data; and selecting one or more of the additional instances of agricultural satellite image data based on the corresponding confidence measures.
Claim 8 would be allowable because it depends on claim 6.
Regarding claim 9, Hassanzadeh, Guo and Xian do not disclose wherein processing, using active learning, at least one of the unlabeled instance of agricultural satellite image data or the corresponding intermediate representations of the agricultural satellite image data, in the set of agricultural satellite image data, to select one or more additional instances of agricultural satellite image data to label includes: identifying one or more unlabeled instances of agricultural satellite image data at a border of two or more clusters; and selecting the one or more additional instances of agricultural satellite image data to label based on the identified one or more unlabeled instances of agricultural satellite image data at the border of two or more clusters.
Claim 9 would be allowable because it depends on claim 6.
Regarding claim 18, Hassanzadeh, Guo and Xian do not disclose the instructions cause one or more of the at least one processor to: process, using active learning, at least one of an unlabeled instance of agricultural satellite image data or the corresponding intermediate representations of the agricultural satellite image data, in the set of agricultural satellite image data, to select one or more additional instances of agricultural satellite image data to label; for each of the additional instances of agricultural satellite image data selected to label: generate additional output indicating the location of an additional agricultural plot captured in the additional instance of agricultural satellite image data; deploying an additional ground truth collection entity, to the location of the additional agricultural plot captured in the additional instance of agricultural satellite image, to collect further image data of the additional agricultural plot; generate an additional label for the additional instance of agricultural satellite image data based on the further image data collected at the additional agricultural plot, wherein the additional label indicates the one or more crops captured in the additional instance of agricultural satellite image data; process the additional instance of agricultural satellite image data using the crop classification model to generate additional crop prediction output, wherein the additional crop prediction output indicates the one or more crops captured in the additional agricultural plot captured in the additional instance of agricultural image data; and update one or more portions of the crop classification model based on comparing the additional label and the additional crop prediction output.
Claim 18 contains allowable subject matter by the limitation the instructions cause one or more of the at least one processor to: process, using active learning, at least one of an unlabeled instance of agricultural satellite image data or the corresponding intermediate representations of the agricultural satellite image data, in the set of agricultural satellite image data, to select one or more additional instances of agricultural satellite image data to label; for each of the additional instances of agricultural satellite image data selected to label: generate additional output indicating the location of an additional agricultural plot captured in the additional instance of agricultural satellite image data; deploying an additional ground truth collection entity, to the location of the additional agricultural plot captured in the additional instance of agricultural satellite image, to collect further image data of the additional agricultural plot; generate an additional label for the additional instance of agricultural satellite image data based on the further image data collected at the additional agricultural plot, wherein the additional label indicates the one or more crops captured in the additional instance of agricultural satellite image data; process the additional instance of agricultural satellite image data using the crop classification model to generate additional crop prediction output, wherein the additional crop prediction output indicates the one or more crops captured in the additional agricultural plot captured in the additional instance of agricultural image data; and update one or more portions of the crop classification model based on comparing the additional label and the additional crop prediction output.
Regarding claim 19, Hassanzadeh, Guo and Xian do not disclose wherein the instructions cause one or more of the at least one processor to: process, using active learning, at least one of the unlabeled instance of agricultural satellite image data or the corresponding intermediate representations of the agricultural satellite image data, in the set of agricultural satellite image data, to select one or more further instances of agricultural satellite image data to label.
Claim 19 would be allowable because it depends on claim 18.
Relevant Prior Art Directed to State of Art
Xian et al (U.S. 20210019522 A1), “Automatic Crop Classification System and Method”, teaches about methods and systems used for the classification of a crop grown within an agricultural field using remotely-sensed image data. It also teaches that the method involves unsupervised pixel clustering, which includes gathering pixel values and assigning them to clusters to produce a pixel distribution signal. The pixel distribution signals of the remotely-sensed image data over the growing season are summed to generate a temporal representation of a management zone. Location information of the management zone is added to the temporal data and ingested into a Recurrent Neural Network (RNN). The output of the model is a prediction of the crop type grown in the management zone over the growing season.
Dutta et al (U.S. 20200302223 A1), “Artificial Intelligence-Based Generation Of Sequencing Metadata”, teaches about using neural networks to determine analyte metadata by (i) processing input image data derived from a sequence of image sets through a neural network and generating an alternative representation of the input image data, the input image data having an array of units that depicts analytes and their surrounding background, (ii) processing the alternative representation through an output layer and generating an output value for each unit in the array, (iii) thresholding output values of the units and classifying a first subset of the units as background units depicting the surrounding background, and (iv) locating peaks in the output values of the units and classifying a second subset of the units as center units containing centers of the analytes.
Young (U.S. 20210103728 A1), “Hybrid Vision System for Crop Land Navigation”, teaches about machine vision as applied to recognition of plants and weeds in agriculture. It also teaches an autonomous mobile vehicle that is equipped with sensing and vision capability and is programmed to selectively switch between two forms of automatic hybrid product recognition based on machine vision. A first mode of hybrid recognition uses machine vision techniques to recognize plant material while the autonomous vehicle traverses an agricultural field; the first mode provides coarse image recognition based on rapid travel through a field while scanning objects for recognition, in combination with a simplified algorithm for recognition. A second mode of hybrid recognition is more computationally intense, is directed to accurate discrimination between field crops and weeds, and is executed while the vehicle is stationary.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Duy A Tran whose telephone number is (571)272-4887. The examiner can normally be reached Monday-Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL R MISTRY can be reached at (313)-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUY TRAN/Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674