Prosecution Insights
Last updated: April 19, 2026
Application No. 18/226,215

Methods And Systems For Use In Mapping Tillage Based On Remote Data

Non-Final OA (§103, §112)
Filed: Jul 25, 2023
Examiner: LE, SARAH
Art Unit: 2614
Tech Center: 2600 (Communications)
Assignee: Climate LLC
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (above average; 172 granted / 258 resolved; +4.7% vs TC avg)
Interview Lift: +33.4% (allowance among resolved cases with interview vs. without)
Typical Timeline: 3y 1m avg prosecution; 22 currently pending
Career History: 280 total applications across all art units

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 258 resolved cases.

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Election/Restrictions

Due to discovered art, the Examiner has withdrawn the restriction/election requirement mailed on 07/30/2025.

Claim Objections

Claims 6, 8, and 16 are objected to because of the following informalities: claims 6, 8, and 16 recite the limitation “the model” in line 1; it should be “the trained model”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 11 and 14-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 11 recites the limitation “perform one or more of the steps in the claims above.” in lines 3-4. It is unclear which claims are being referred to. Claim 11 also recites two periods (.), in line 4 and line 12. Claim 11 appears to be incorrectly drafted and must be rewritten to correct these issues. Claim 14 recites the limitation “any 13” in line 1. It is unclear what “any 13” refers to. Claim 15 is rejected based on its dependency from claim 14.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

1. Claims 1-5 and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Guan et al. (IDS), U.S. Patent Application Publication No. 2019/0087682 (“Guan”), in view of She et al., U.S. Patent Application Publication No. 2020/0125844 (“She”), further in view of Beeson, Peter C., et al., "Multispectral satellite mapping of crop residue cover and tillage intensity in Iowa."
Journal of Soil and Water Conservation 71.5 (2016): 385-395.(“Beeson”) Regarding independent claim 1, Guan teaches a computer-implemented method for use in processing image data associated with fields ([0110] At block 305, the agricultural intelligence computer system 130 is configured or programmed to implement agronomic data preprocessing of field data received from one or more data sources. The field data received from one or more data sources may be preprocessed for the purpose of removing noise and distorting effects within the agronomic data including measured outliers that would bias received field data values. Embodiments of agronomic data preprocessing may include, but are not limited to, removing data values commonly associated with outlier data values, specific measured data points that are known to unnecessarily skew other data values, data smoothing techniques used to remove or reduce additive or multiplicative effects from noise, and other filtering or data derivation techniques used to provide clear distinctions between positive and negative data inputs.”), the method comprising: accessing, by a computing device (see at least [0066] When field data 106 is not provided directly to the agricultural intelligence computer system via one or more agricultural machines or agricultural machine devices that interacts with the agricultural intelligence computer system, the user may be prompted via one or more user interfaces on the user device (served by the agricultural intelligence computer system) to input such information. In an example embodiment, the user may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. 
In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers. In alternative embodiments, the user may specify identification data by accessing field identification data (provided as shape files or in a similar format) from the U. S. Department of Agriculture Farm Service Agency or other source via the user device and providing such field identification data to the agricultural intelligence computer system”), an image of one or more fields (see at least [0085] In one embodiment, nitrogen instructions 210 are programmed to provide tools to inform nitrogen decisions by visualizing the availability of nitrogen to crops. This enables growers to maximize yield or return on investment through optimized nitrogen application during the season. Example programmed functions include displaying images such as SSURGO images to enable drawing of application zones and/or images generated from subfield soil data, such as data obtained from sensors, at a high spatial resolution (as fine as 10 meters or smaller because of their proximity to the soil); upload of existing grower-defined zones; providing an application graph and/or a map to enable tuning application(s) of nitrogen across multiple zones; output of scripts to drive machinery; tools for mass data entry and adjustment; and/or maps for data visualization, among others. “Mass data entry,” in this context, may mean entering data once and then applying the same data to multiple fields that have been defined in the system; example data may include nitrogen application data that is the same for many fields of the same grower, but such mass data entry applies to the entry of any type of field data into the mobile computer application 200. 
For example, nitrogen instructions 210 may be programmed to accept definitions of nitrogen planting and practices programs and to accept user input specifying to apply those programs across multiple fields. “Nitrogen planting programs,” in this context, refers to a stored, named set of data that associates: a name, color code or other identifier, one or more dates of application, types of material or product for each of the dates and amounts, method of application or incorporation such as injected or knifed in, and/or amounts or rates of application for each of the dates, crop or hybrid that is the subject of the application, among others. “Nitrogen practices programs,” in this context, refers to a stored, named set of data that associates: a practices name; a previous crop; a tillage system; a date of primarily tillage; one or more previous tillage systems that were used; one or more indicators of application type, such as manure, that were used. Nitrogen instructions 210 also may be programmed to generate and cause displaying a nitrogen graph, which indicates projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted; in some embodiments, different color indicators may signal a magnitude of surplus or magnitude of shortfall. 
In one embodiment, a nitrogen graph comprises a graphical display in a computer display device comprising a plurality of rows, each row associated with and identifying a field; data specifying what crop is planted in the field, the field size, the field location, and a graphic representation of the field perimeter; in each row, a timeline by month with graphic indicators specifying each nitrogen application and amount at points correlated to month names; and numeric and/or colored indicators of surplus or shortfall, in which color indicates magnitude”), the image including multiple pixels(see at least [0125] In an embodiment, the feature extractor 500 receives as input remote sensing imagery 510 and extracts features from the image for use by the high-precision pixel classifier 501. The exact features extracted from the image may vary depending on the exact classification technique employed by the high-precision pixel classifier 501. For example, different classification schemes may work better if the features include and/or exclude specific bands from the remote sensing imagery 510. Furthermore, the features extracted may be based on the type of remote sensing imagery 510 used. For example, features which correspond to information that is not present in and cannot be derived from remote sensing imagery 510 will often not be available for consideration in the classification process. However, in other embodiments the remote sensing imagery 510 may be supplemented by additional data and features extracted from different external sources.”) each of the pixels including a value (see at least [0136] In an embodiment, the optimal cloud height estimator 507 estimates the most likely heights of the clouds indicated by the cloud mask 512, which is received as input, based on pixels within the remote sensing imagery 510 that are likely to be shadows. In most cases, dips in the NIR band are a useful metric for identifying shadows. 
Thus, in some embodiments, the optimal cloud height estimator 507 identifies candidate shadow pixels in the remote sensing imagery 510 by applying a threshold s, where if the NIR of the band is less than s, the pixel is marked as a candidate shadow pixel. The aforementioned technique generally contains a large number of false positives because many non-shadow areas also have low NIR values.”); deriving, by the computing device, at least one index value for the image (see at least [0166] “NDVI—The normalized difference vegetation index, a function of NIR and R bands:”); generating a map of tillage for the one or more fields (see at least [0066] When field data 106 is not provided directly to the agricultural intelligence computer system via one or more agricultural machines or agricultural machine devices that interacts with the agricultural intelligence computer system, the user may be prompted via one or more user interfaces on the user device (served by the agricultural intelligence computer system) to input such information. In an example embodiment, the user may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers. In alternative embodiments, the user may specify identification data by accessing field identification data (provided as shape files or in a similar format) from the U. S. Department of Agriculture Farm Service Agency or other source via the user device and providing such field identification data to the agricultural intelligence computer system.
[0067] In an example embodiment, the agricultural intelligence computer system 130 is programmed to generate and cause displaying a graphical user interface comprising a data manager for data input. After one or more fields have been identified using the methods described above, the data manager may provide one or more graphical user interface widgets which when selected can identify changes to the field, soil, crops, tillage, or nutrient practices. The data manager may include a timeline view, a spreadsheet view, and/or one or more editable programs”), using a trained model and the at least one index value for each of the pixels of the image, the map of tillage indicating a location and an intensity of the tillage for one or more segments of the one of more fields (see at least [0182] At block 615, the high-precision pixel classifier 501 is trained on labeled training data. The training of the high-precision pixel classifier 501 will differ depending on the machine learning technique used to implement the high-precision pixel classifier 501. However, there are many commercially available tools, such as Vowpal Wabbit, Spark, PyBrain, and so forth that implement a variety of machine learning techniques that could potentially be used to implement the high-precision pixel classifier 501. In some embodiments, the high-precision pixel classifier 501 includes a component that processes the labeled training data into a format expected by the utilized library, and then invokes a training routine of the library to train the machine learning model. However, although there are many well-known machine learning techniques that may be used to implement the high-precision pixel classifier 501, many classifiers have configurable settings or coefficients that may need to be adjusted to provide adequate results. For example, in the case of a SVM, the per-class penalty may be set to 5:1 and the kernel function may be set to Radial Basis Function (RBF) with (γ=0.25 and C=1.0). 
[0068] FIG. 10 depicts an example embodiment of a timeline view for data entry. Using the display depicted in FIG. 10, a user computer can input a selection of a particular field and a particular date for the addition of event. Events depicted at the top of the timeline include Nitrogen, Planting, Practices, and Soil. To add a nitrogen application event, a user computer may provide input to select the nitrogen tab. The user computer may then select a location on the timeline for a particular field in order to indicate an application of nitrogen on the selected field. In response to receiving a selection of a location on the timeline for a particular field, the data manager may display a data entry overlay, allowing the user computer to input data pertaining to nitrogen applications, planting procedures, soil application, tillage procedures, irrigation practices, or other information relating to the particular field. For example, if a user computer selects a portion of the timeline and indicates an application of nitrogen, then the data entry overlay may include fields for inputting an amount of nitrogen applied, a date of application, a type of fertilizer used, and any other information related to the application of nitrogen.”); storing, by the computing device, the map of tillage for the one or more fields in a memory (see at least [0065] Data management layer 140 may be programmed or configured to manage read operations and write operations involving the repository 160 and other functional elements of the system, including queries and result sets communicated between the functional elements of the system and the repository. Examples of data management layer 140 include JDBC, SQL server interface code, and/or HADOOP interface code, among others. Repository 160 may comprise a database. As used herein, the term “database” may refer to either a body of data, a relational database management system (RDBMS), or to both. 
As used herein, a database may comprise any collection of data including hierarchical databases, relational databases, flat file databases, object-relational databases, object oriented databases, and any other structured collection of records or data that is stored in a computer system. Examples of RDBMS's include, but are not limited to including, ORACLE®, MYSQL, IBM® DB2, MICROSOFT® SQL SERVER, SYBASE®, and POSTGRESQL databases. However, any database may be used that enables the systems and methods described herein.[0066] When field data 106 is not provided directly to the agricultural intelligence computer system via one or more agricultural machines or agricultural machine devices that interacts with the agricultural intelligence computer system, the user may be prompted via one or more user interfaces on the user device (served by the agricultural intelligence computer system) to input such information. In an example embodiment, the user may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers. In alternative embodiments, the user may specify identification data by accessing field identification data (provided as shape files or in a similar format) from the U. S. 
Department of Agriculture Farm Service Agency or other source via the user device and providing such field identification data to the agricultural intelligence computer system.”) ; and causing display of the map of tillage for the one or more fields at an output device (see at least [0066] When field data 106 is not provided directly to the agricultural intelligence computer system via one or more agricultural machines or agricultural machine devices that interacts with the agricultural intelligence computer system, the user may be prompted via one or more user interfaces on the user device (served by the agricultural intelligence computer system) to input such information. In an example embodiment, the user may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers. In alternative embodiments, the user may specify identification data by accessing field identification data (provided as shape files or in a similar format) from the U. S. Department of Agriculture Farm Service Agency or other source via the user device and providing such field identification data to the agricultural intelligence computer system. [0067] In an example embodiment, the agricultural intelligence computer system 130 is programmed to generate and cause displaying a graphical user interface comprising a data manager for data input. 
After one or more fields have been identified using the methods described above, the data manager may provide one or more graphical user interface widgets which when selected can identify changes to the field, soil, crops, tillage, or nutrient practices. The data manager may include a timeline view, a spreadsheet view, and/or one or more editable programs.[0068] FIG. 10 depicts an example embodiment of a timeline view for data entry. Using the display depicted in FIG. 10, a user computer can input a selection of a particular field and a particular date for the addition of event. Events depicted at the top of the timeline include Nitrogen, Planting, Practices, and Soil. To add a nitrogen application event, a user computer may provide input to select the nitrogen tab. The user computer may then select a location on the timeline for a particular field in order to indicate an application of nitrogen on the selected field. In response to receiving a selection of a location on the timeline for a particular field, the data manager may display a data entry overlay, allowing the user computer to input data pertaining to nitrogen applications, planting procedures, soil application, tillage procedures, irrigation practices, or other information relating to the particular field. For example, if a user computer selects a portion of the timeline and indicates an application of nitrogen, then the data entry overlay may include fields for inputting an amount of nitrogen applied, a date of application, a type of fertilizer used, and any other information related to the application of nitrogen.”) Guan is understood to be silent on the remaining limitations of claim 1. 
In the same field of endeavor, She teaches accessing, by a computing device, an image of one or more fields, the image including multiple pixels, each of the pixels including a value for each of multiple bands ( see at least [0115] At step 702, images of agronomic fields produced using one or more frequency bands are received. The images of the agronomic field may be produced by a satellite configured to capture images in a plurality of frequency bands. For example, the SENTINEL-2 satellite operated by the EUROPEAN SPACE AGENCY produces images in a plurality of frequency bands including a red frequency band, a blue frequency band, a green frequency band, a near infrared frequency band, and a water vapor frequency band. [0116] The agricultural intelligence computer system may receive a plurality of sets of images directly or indirectly from the satellite where each set of images comprises an image in each frequency band corresponding to a same location. The system may receive any number of the above described frequency bands and/or different frequency bands for use in detecting cloud and cloud shadow in an image. The data may be received as a series of pixel values for images of each frequency band, the pixel values corresponding to pixel locations. In an embodiment, each image in a set of images is of a same size with pixels corresponding to overlapping locations. For instance, the Nth pixel of the red frequency band image of an agronomic field may correspond to the Nth pixel of the blue frequency band image of the agronomic field in the same set as the red frequency band image. [0117] At step 704, corresponding data identifying cloud and cloud shadow locations in the images is received. The data may indicate, for each pixel of the images in a set of images, whether the pixel is a non-cloud pixel, a cloud pixel, or a cloud shadow pixel. 
For example, a set of images may include three frequency band images, each of which comprising 300×300 pixels with the Nth pixel of each image corresponding to the same location on the agronomic field. The corresponding data may thus indicate, for each pixel of the 300×300 pixel images, whether the pixel is a non-cloud pixel, a cloud pixel, or a cloud shadow pixel. Thus, if the system receives 300×300 pixel images of an agronomic field in three frequency bands, the system may store three 300×300 matrices, each of which corresponding to a frequency band, and a fourth 300×300 matrix where each element of the matrix is one of three values, such as 0, 1, and 2, which correspond to non-cloud pixels, cloud pixels, or cloud shadow pixels.” 0126] In an embodiment, the machine learning system is trained using a plurality of stacked matrices as inputs and a single matrix identifying locations of clouds as outputs. For example, if the machine learning system is trained on three frequency bands, each input/output pairing would include three stacked matrices as inputs and a single cloud identifying matrix as an output. Each of the three input matrices for the input/output pairing may correspond to a different frequency band and comprise pixel values for that frequency band. An example of the output single cloud identifying matrix is a matrix of 0s and 1s where a 0 indicates that the pixel does not correspond to a cloud location and a 1 indicates that the pixel corresponds to a cloud location.”); deriving, by the computing device, at least one index value for the image (see at least [0155] In an embodiment, the system generates an overlay for the image based on the data identifying clouds and cloud shadows in the image. For example, the system may compute an index value, such as an NDVI value, for each location on the field that has not been identified as including a cloud or cloud shadow. 
The system may then generate an overlay for the image that identifies the NDVI of the locations that do not include cloud or cloud shadow. Thus, if the system receives a request for display of a current image of a field with NDVI values, the system may use the methods described herein to identify cloud and cloud shadow locations and generate an overlay for the remaining locations. The system may then cause display of the image with the overlay on a requesting client computing device.”); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Guan to include identifying a pixel value for each of multiple bands as seen in She because this modification would indicate, for each pixel of the images in a set of images, whether the pixel is a non-cloud pixel, a cloud pixel, or a cloud shadow pixel ([0117] of She). Both Guan and She are understood to be silent on the remaining limitations of claim 1. In the same field of endeavor, Beeson teaches accessing, by a computing device, an image of one or more fields; deriving, by the computing device, at least one index value for the image (whole paper, see at least page 386, left column. “Currently, several satellites with broadband multispectral sensors are orbiting the Earth and can provide frequent, wide area coverage of agricultural lands. Various methods of classifying these multispectral data have been developed to identify agricultural management practices and soil properties (Bricklemyer et al. 2006, 2007; van Deventer et al. 1997). Other methods include linear logistic models (Gowda et al. 2008) and clustering and principal component analysis protocols to discriminate tillage practices and nutrient sources (Hache et al. 2007).
Minimum values of the Normalized Difference Tillage Index NDTI (minNDTI) extracted from a time series of Landsat images spanning the interval from soil preparation through early crop growth reliably tracked changes in tillage intensity over agricultural regions (Zheng et al. 2012, 2013). However, the 16 day revisit cycle of Landsat (or 8 day revisit with two Landsat) and clouds have severely limited the minNDTI approach. Pacheco et al. (2006) showed that spectral residue indices, such as Normalized Difference Index (NDI) and Modified Soil Adjusted Crop Residue Index (MSACRI), did not provide better results than supervised classification techniques like Spectral Mixing Analysis (SMA) and Spectral Angle Mapping (SAM). However, these statistical analyses are not robust and are often affected by soil type, crop residue type, and soil and residue moisture contents when the image (or scene) is collected”); generating a map of tillage for the one or more fields, the map of tillage indicating a location and an intensity of the tillage for one or more segments of the one of more fields (whole paper, see at least section Crop Residue in the South Fork Watershed. “When the entire South Fork watershed was classified, residue cover maps were produced (figure 7) and summarized (table 6) for each sensor and year. Classifications of SPOT and Landsat images produced similar proportions for each residue class, not varying by more than 6% over the three years. Both AWiFS and Deimos overestimated high residue classes for fields with corn residue, but not for fields with soybean residue. However, with only one year of data available for those two sensors, these results were inconclusive for mapping tillage intensity over large areas; Figure 7 Soil tillage intensity maps for the South Fork watershed by year ([a, d] 2009, [b, e, g] 2010, and [c, f, h] 2011) and sensor ([a, b, c] SPOT, [d, e, f] LANDSAT, [g] AWiFS, and [h] Deimos). 
Other land cover types and clouds were masked out (white)”; storing, by the computing device, the map of tillage for the one or more fields in a memory; causing display of the map of tillage for the one or more fields at an output device (whole paper, see at least Figure 7). [Image: Figure 7 of Beeson] Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Guan, as modified by She to include identifying a pixel value for each of multiple bands, with generating tillage intensity maps as seen in Beeson because this modification would monitor applications including crop residue cover and soil tillage intensity assessments (Summary and Conclusions of Beeson). Thus, the combination of Guan, She and Beeson teaches a computer-implemented method for use in processing image data associated with fields, the method comprising: accessing, by a computing device, an image of one or more fields, the image including multiple pixels, each of the pixels including a value for each of multiple bands; deriving, by the computing device, at least one index value for the image; generating a map of tillage for the one or more fields, using a trained model and the at least one index value for each of the pixels of the image, the map of tillage indicating a location and an intensity of the tillage for one or more segments of the one or more fields; storing, by the computing device, the map of tillage for the one or more fields in a memory; and causing display of the map of tillage for the one or more fields at an output device.
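For orientation only, and not part of the Office Action record: the claim 1 pipeline that the Examiner maps across Guan, She, and Beeson (per-band pixel values, a derived index, a trained model producing a tillage-intensity map) can be sketched in a few lines. Everything here is an illustrative assumption: the toy band arrays, the NDTI band choice (the SWIR-based Normalized Difference Tillage Index discussed in the literature Beeson cites), the threshold cut points, and the threshold rule standing in for a trained classifier (e.g., the SVM configuration quoted from Guan ¶[0182]).

```python
import numpy as np

# Illustrative stand-in for a trained model: fixed thresholds on a tillage
# index. A real system would use a fitted classifier; the cut points below
# are arbitrary assumptions chosen for the example.
def classify_intensity(ndti):
    intensity = np.zeros(ndti.shape, dtype=np.uint8)  # 0 = high residue / low tillage
    intensity[ndti < 0.10] = 1                        # 1 = moderate tillage
    intensity[ndti < 0.05] = 2                        # 2 = intensive tillage
    return intensity

def tillage_map(swir1, swir2):
    """Derive a per-pixel index from two bands, then map index -> class."""
    # NDTI as described in the tillage-mapping literature: a normalized
    # difference of two shortwave-infrared bands (assumed band choice).
    ndti = (swir1 - swir2) / (swir1 + swir2)
    return classify_intensity(ndti)

# Toy 2x2 "field": each pixel has a value for each band.
swir1 = np.array([[0.30, 0.30], [0.30, 0.30]])
swir2 = np.array([[0.20, 0.26], [0.29, 0.10]])
print(tillage_map(swir1, swir2))
```

The per-pixel array arithmetic mirrors the claim language: the index is derived for each pixel, and the resulting class grid is itself the location-by-location intensity map.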
Regarding Claim 2, Guan, She and Beeson teach the computer-implemented method of claim 1, wherein the multiple bands include red, blue, green and near infrared (see at least [0155] of Guan “ Many of the examples presented herein assume that the satellite imagery is capable of detecting various bands, such as the blue, green, red, red edge, and near infrared (NIR) bands at various resolutions. As a concrete example, the satellite imagery used as input to the model may be RapidEye satellite image data, which offers multispectral images at a spatial resolution of 5 m. As another example, Deimos satellite imagery may also be used as input to the techniques discussed herein. The main significant difference between Deimos and RapidEye imagery, besides spatial and temporal resolution, is the absence of blue and red-edge bands. As a result, embodiments which utilize Deimos satellite imagery may require slightly different processing than embodiments which utilize RapidEye imagery in order to compensate for the reduced feature set that is available. However, the techniques described herein are not limited to any particular type of satellite imagery and the features utilized by the techniques described herein can be adjusted to use the features available to a given type of satellite imagery.”; [0115] of She “At step 702, images of agronomic fields produced using one or more frequency bands are received. The images of the agronomic field may be produced by a satellite configured to capture images in a plurality of frequency bands. For example, the SENTINEL-2 satellite operated by the EUROPEAN SPACE AGENCY produces images in a plurality of frequency bands including a red frequency band, a blue frequency band, a green frequency band, a near infrared frequency band, and a water vapor frequency band.) In addition, the same motivation is used as the rejection for claim 1. 
Regarding claim 3, Guan, She and Beeson teach the computer-implemented method of claim 1, wherein deriving the at least one index value for the image includes deriving at least one index value for each of the pixels of the image (see at least [0164] of Guan “The exact features selected for consideration by the high-precision pixel classifier 501 and the high-recall pixel classifier 503 may be dependent on the types of classification techniques each classifier employs. For example, a classifier which utilizes logistic regression may perform better using a different set of features than a classifier which utilizes a SVM. [0165] Furthermore, not all features are necessarily supplied directly from the remote sensing imagery 510. In some cases, features can be derived from the “raw” features supplied by the imagery by creating linear or non-linear combinations of different band values. The following are non-limited examples of such derived features. In the following examples, B is the blue band, G is the green band, R is the red band, RE is the red edge band, and NIR is the near infrared band. [0166] NDVI—The normalized difference vegetation index, a function of NIR and R bands: NDVI = (NIR - R) / (NIR + R) [0167] NDWI—The normalized difference water index, a function of NIR and G bands: NDWI = (G - NIR) / (G + NIR)”; [0155] of She “In an embodiment, the system generates an overlay for the image based on the data identifying clouds and cloud shadows in the image. For example, the system may compute an index value, such as an NDVI value, for each location on the field that has not been identified as including a cloud or cloud shadow. The system may then generate an overlay for the image that identifies the NDVI of the locations that do not include cloud or cloud shadow. 
Thus, if the system receives a request for display of a current image of a field with NDVI values, the system may use the methods described herein to identify cloud and cloud shadow locations and generate an overlay for the remaining locations. The system may then cause display of the image with the overlay on a requesting client computing device.”; whole paper, see at least page 386, left column of Beeson “Currently, several satellites with broadband multispectral sensors are orbiting the Earth and can provide frequent, wide area coverage of agricultural lands. Various methods of classifying these multispectral data have been developed to identify agricultural management practices and soil properties (Bricklemyer et al. 2006, 2007; van Deventer et al. 1997). Other methods include linear logistic models (Gowda et al. 2008) and clustering and principal component analysis protocols to discriminate tillage practices and nutrient sources (Hache et al. 2007). Minimum values of the Normalized Difference Tillage Index NDTI (minNDTI) extracted from a time series of Landsat images spanning the interval from soil preparation through early crop growth reliably tracked changes in tillage intensity over agricultural regions (Zheng et al. 2012, 2013). However, the 16 day revisit cycle of Landsat (or 8 day revisit with two Landsat) and clouds have severely limited the minNDTI approach. Pacheco et al. (2006) showed that spectral residue indices, such as Normalized Difference Index (NDI) and Modified Soil Adjusted Crop Residue Index (MSACRI), did not provide better results than supervised classification techniques like Spectral Mixing Analysis (SMA) and Spectral Angle Mapping (SAM). However, these statistical analyses are not robust and are often affected by soil type, crop residue type, and soil and residue moisture contents when the image (or scene) is collected”) In addition, the same motivation is used as the rejection for claim 1. 
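The minNDTI approach that Beeson attributes to Zheng et al. can be summarized concisely: compute the Normalized Difference Tillage Index from the two shortwave-infrared bands for each cloud-free date, then take the minimum over the series spanning soil preparation through early crop growth, which tracks tillage intensity. A minimal sketch, assuming per-pixel (SWIR1, SWIR2) observations; the sample values below are illustrative only, not data from Beeson:

```python
def ndti(swir1, swir2):
    # Normalized Difference Tillage Index from two shortwave-infrared
    # band values (Landsat bands 5 and 7 in the Zheng et al. formulation).
    return (swir1 - swir2) / (swir1 + swir2)

def min_ndti(time_series):
    # time_series: list of (swir1, swir2) observations for one pixel,
    # spanning soil preparation through early crop growth. The minimum
    # tends to occur when residue/soil is most exposed.
    return min(ndti(s1, s2) for s1, s2 in time_series)

# Illustrative pixel: three cloud-free observations.
series = [(0.30, 0.20), (0.25, 0.21), (0.28, 0.18)]
print(round(min_ndti(series), 3))  # prints 0.087
```

As the quoted passage notes, the practical limit of this approach is revisit frequency and cloud cover, not the index arithmetic itself.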
Regarding claim 4, Guan, She and Beeson teach the computer-implemented method of claim 3, wherein deriving the at least one index value for each of the pixels of the image includes deriving the at least one index value for each of the pixels of the image based on the following: NDVI = (nir - red) / (nir + red); wherein nir is a near infrared band value and red is a red band value (see at least [0164] of Guan “The exact features selected for consideration by the high-precision pixel classifier 501 and the high-recall pixel classifier 503 may be dependent on the types of classification techniques each classifier employs. For example, a classifier which utilizes logistic regression may perform better using a different set of features than a classifier which utilizes a SVM. [0165] Furthermore, not all features are necessarily supplied directly from the remote sensing imagery 510. In some cases, features can be derived from the “raw” features supplied by the imagery by creating linear or non-linear combinations of different band values. The following are non-limited examples of such derived features. In the following examples, B is the blue band, G is the green band, R is the red band, RE is the red edge band, and NIR is the near infrared band. [0166] NDVI—The normalized difference vegetation index, a function of NIR and R bands: NDVI = (NIR - R) / (NIR + R) [0167] NDWI—The normalized difference water index, a function of NIR and G bands: NDWI = (G - NIR) / (G + NIR)”; [0115-0117], [0155] of She; whole paper of Beeson) In addition, the same motivation is used as the rejection for claim 1. 
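As a sanity check on the claimed formula, which matches Guan's [0166] NDVI definition, the per-pixel computation is a one-liner. The band values below are illustrative only:

```python
def ndvi(nir, red):
    # Claimed index: NDVI = (nir - red) / (nir + red),
    # with nir the near-infrared band value and red the red band value.
    return (nir - red) / (nir + red)

# Apply per pixel over an image of (nir, red) band values.
pixels = [(0.6, 0.1), (0.3, 0.3)]
index_values = [ndvi(n, r) for n, r in pixels]
print(index_values)
```

Vegetated pixels (high NIR reflectance relative to red) push the index toward 1, while bare or residue-covered soil sits near 0, which is what makes the index usable as a model input for tillage mapping.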
Regarding claim 5, Guan, She and Beeson teach the computer-implemented method of claim 1, wherein generating the map of tillage for the one or more fields includes identifying, on the map, at least one intensity of the tillage for the one or more fields (see at least [0079] of Guan “In an embodiment, field manager computing device 104 sends field data 106 to agricultural intelligence computer system 130 comprising or including, but not limited to, data values representing one or more of: a geographical location of the one or more fields, tillage information for the one or more fields, crops planted in the one or more fields, and soil data extracted from the one or more fields. Field manager computing device 104 may send field data 106 in response to user input from user 102 specifying the data values for the one or more fields. Additionally, field manager computing device 104 may automatically send field data 106 when one or more of the data values becomes available to field manager computing device 104. For example, field manager computing device 104 may be communicatively coupled to remote sensor 112 and/or application controller 114. In response to receiving data indicating that application controller 114 released water onto the one or more fields, field manager computing device 104 may send field data 106 to agricultural intelligence computer system 130 indicating that water was released on the one or more fields. Field data 106 identified in this disclosure may be input and communicated using electronic digital data that is communicated between computing devices using parameterized URLs over HTTP, or another suitable communication or messaging protocol.”; whole paper, see at least section Crop Residue in the South Fork Watershed, Figure 7 of Beeson. “When the entire South Fork watershed was classified, residue cover maps were produced (figure 7) and summarized (table 6) for each sensor and year. 
Classifications of SPOT and Landsat images produced similar proportions for each residue class, not varying by more than 6% over the three years. Both AWiFS and Deimos overestimated high residue classes for fields with corn residue, but not for fields with soybean residue. However, with only one year of data available for those two sensors, these results were inconclusive for mapping tillage intensity over large areas; Figure 7 Soil tillage intensity maps for the South Fork watershed by year ([a, d] 2009, [b, e, g] 2010, and [c, f, h] 2011) and sensor ([a, b, c] SPOT, [d, e, f] LANDSAT, [g] AWiFS, and [h] Deimos). Other land cover types and clouds were masked out (white)”) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 11, Guan, She and Beeson teach a non-transitory computer-readable storage medium including executable instructions for processing image data associated with fields, which when executed by at least one processor, cause the at least one processor to ([0109] of Guan “FIG. 3 illustrates a programmed process by which the agricultural intelligence computer system generates one or more preconfigured agronomic models using field data provided by one or more data sources. FIG. 3 may serve as an algorithm or instructions for programming the functional elements of the agricultural intelligence computer system 130 to perform the operations that are now described. [0141] Computer system 400 also includes a main memory 406, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 402 for storing information and instructions to be executed by processor 404. Main memory 406 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 404. 
Such instructions, when stored in non-transitory storage media accessible to processor 404, render computer system 400 into a special-purpose machine that is customized to perform the operations specified in the instructions. [0232] 12. One or more non-transitory computer-readable media storing instructions that, when executed by one or more computing devices, causes performance of any one of the methods recited in Clauses 1-11.”): perform one or more of the steps in the claims above (see claims rejections above); access an image of one or more fields, the image including multiple pixels (see at least [0085] of Guan; [0115-0117] of She), each of the pixels including a value for each of multiple bands (see at least [0125], [0136] of Guan; [0115-0117] of She); derive at least one index value for the image (see at least [0166] of Guan; [0155] of She; whole paper, see at least page 386, left column of Beeson); generate a map for the one or more fields (see at least [0066]-[0067] of Guan; whole paper, see at least section Crop Residue in the South Fork Watershed and Figure 7 of Beeson), using a trained model and the at least one index value for each of the pixels of the image (see at least [0182] of Guan), the map indicating a location and an intensity of at least one characteristic for one or more segments of the one or more fields (see at least [0068] of Guan; whole paper, see at least section Crop Residue in the South Fork Watershed and Figure 7 of Beeson); store the map for the one or more fields in a memory (see at least [0065]-[0066] of Guan; whole paper, see at least section Crop Residue in the South Fork Watershed and Figure 7 of Beeson); and cause display of the map for the one or more fields at an output device (see at least [0067]-[0068] of Guan; whole paper, see at least section Crop Residue in the South Fork Watershed and Figure 7 of Beeson). In addition, the same motivation is used as the rejection for claim 1. 
Regarding independent claim 12, Guan teaches a system for use in processing image data associated with fields, the system comprising a computing device ([0110] “At block 305, the agricultural intelligence computer system 130 is configured or programmed to implement agronomic data preprocessing of field data received from one or more data sources. The field data received from one or more data sources may be preprocessed for the purpose of removing noise and distorting effects within the agronomic data including measured outliers that would bias received field data values. Embodiments of agronomic data preprocessing may include, but are not limited to, removing data values commonly associated with outlier data values, specific measured data points that are known to unnecessarily skew other data values, data smoothing techniques used to remove or reduce additive or multiplicative effects from noise, and other filtering or data derivation techniques used to provide clear distinctions between positive and negative data inputs.”; [0116] “In an embodiment, the agricultural intelligence computer system 130, among other components, includes a cloud detection subsystem 170. The cloud detection subsystem 170 collects images and other information related to an area, such as an agricultural field, from the model data and field data repository 160 and/or external data 110 and determines which portions correspond to clouds and/or cloud shadows.”) configured to: the remaining limitations of claim 12 are of similar scope to claim 1, and are therefore rejected under the same rationale. Regarding claim 13, Guan, She and Beeson teach the system of claim 12; the remaining limitations of claim 13 are of similar scope to claim 2, and are therefore rejected under the same rationale. Regarding claim 14, Guan, She and Beeson teach the system of claim 13 (recited as “any 13”; see the rejection under 35 U.S.C. 112(b) above), wherein the computing device is configured as claimed; the remaining limitations of claim 14 are of similar scope to claim 3, and are therefore rejected under the same rationale. 
Regarding claim 15, Guan, She and Beeson teach the system of claim 14, wherein the computing device is configured as claimed; the remaining limitations of claim 15 are of similar scope to claim 4, and are therefore rejected under the same rationale. 2. Claims 6-7, 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Guan et al., IDS, U.S. Patent Application Publication No. 2019/0087682 (“Guan”) in view of She et al., U.S. Patent Application Publication No. 2020/0125844 (“She”), further in view of Beeson, Peter C., et al. "Multispectral satellite mapping of crop residue cover and tillage intensity in Iowa." Journal of Soil and Water Conservation 71.5 (2016): 385-395 (“Beeson”), further in view of Dai et al., IDS, U.S. Patent Application Publication No. 2021/0027088 (“Dai”). Regarding claim 6, Guan, She and Beeson teach the computer-implemented method of claim 1. Guan, She and Beeson are understood to be silent on the remaining limitations of claim 6. In the same field of endeavor, Dai teaches wherein the model includes a Residual Network (RESNET) model ([0079] “Optionally, the convolution layer and the deconvolution layer are connected by a cascade structure, and the deconvolution module 720 is further configured to acquire, through the second number of deconvolution layers, convolution processing information of the convolution layer cascaded with the corresponding deconvolution layer, and obtain a deconvolution result of the corresponding deconvolution layer by superimposing a deconvolution result of an upper layer of the corresponding deconvolution layer and the convolution processing information of the convolution layer cascaded with the corresponding deconvolution layer. The first number may be equal to the second number, or may be smaller than the second number. The deep neural network may be a residual network. 
[0080] Due to the requirements of plant protection accuracy of mobile devices, when selecting a network structure of the deep learning network model, there may be sufficient network parameters to extract the features of the farmland image, and it is also ensured that more precise farmland location information can be extracted. Therefore, in the embodiment of the application, a 101-layer Convolutional Neural Network (CNN) structure is used to ensure a sufficient amount of parameters, which can extract farmland boundary features based on any background or any shape. In specific implementation, the deep learning neural network in the embodiment of the application is a neural network with a RESNET101 convolution structure. That is, the convolution module 710 includes 101 convolution layers, and the deconvolution module 720 includes 101 deconvolution layers. The input farmland image is convolved by the convolution module 710. Each convolution layer down-samples an input image of a current layer to extract image features and output the image features to the next convolution layer until the desired farmland image features are obtained.”) Therefore, in combination with Guan, She and Beeson, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Guan such that the model includes a Residual Network (RESNET) model, as seen in Dai, because this modification would improve the robustness of farmland image recognition under the condition of satisfying the accuracy of farmland boundary recognition ([0122] of Dai). 
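Dai's RESNET teaching rests on the residual (skip) connection: a block outputs its input plus a learned transformation, out = x + F(x), which is what allows stacks as deep as RESNET101 to train stably. A toy, framework-free sketch of that connection, with a simple scaling function standing in for a convolution layer (illustrative only; not Dai's network):

```python
def conv_stub(x, weight):
    # Stand-in for a convolution layer: elementwise scaling of the
    # feature vector. A real RESNET block uses learned convolutions here.
    return [weight * v for v in x]

def residual_block(x, weight):
    # Core RESNET idea: output = input + F(input). With weight = 0 the
    # block reduces to the identity, which is what keeps gradients
    # flowing through very deep stacks.
    fx = conv_stub(x, weight)
    return [a + b for a, b in zip(x, fx)]

features = [1.0, 2.0, -0.5]
print(residual_block(features, 0.0))  # identity when F contributes nothing
print(residual_block(features, 0.1))
```

Stacking many such blocks (101 layers in Dai's RESNET101 embodiment) deepens the network without the degradation that plain deep stacks suffer.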
Regarding claim 7, Guan, She, Beeson and Dai teach the computer-implemented method of claim 6, further comprising: accessing images of multiple fields ([0066] of Guan “ When field data 106 is not provided directly to the agricultural intelligence computer system via one or more agricultural machines or agricultural machine devices that interacts with the agricultural intelligence computer system, the user may be prompted via one or more user interfaces on the user device (served by the agricultural intelligence computer system) to input such information. In an example embodiment, the user may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers. In alternative embodiments, the user may specify identification data by accessing field identification data (provided as shape files or in a similar format) from the U. S. Department of Agriculture Farm Service Agency or other source via the user device and providing such field identification data to the agricultural intelligence computer system.”; whole paper of Beeson, Figure.7); accessing tillage data associated with the multiple fields ([0069] of Guan” In an embodiment, the data manager provides an interface for creating one or more programs. “Program,” in this context, refers to a set of data pertaining to nitrogen applications, planting procedures, soil application, tillage procedures, irrigation practices, or other information that may be related to one or more fields, and that can be stored in digital data storage for reuse as a set in other operations. 
After a program has been created, it may be conceptually applied to one or more fields and references to the program may be stored in digital storage in association with data identifying the fields. Thus, instead of manually entering identical data relating to the same nitrogen applications for multiple different fields, a user computer may create a program that indicates a particular application of nitrogen and then apply the program to multiple different fields. For example, in the timeline view of FIG. 10, the top two timelines have the “Fall applied” program selected, which includes an application of 150 lbs N/ac in early April. The data manager may provide an interface for editing a program. In an embodiment, when a particular program is edited, each field that has selected the particular program is edited. For example, in FIG. 10, if the “Fall applied” program is edited to reduce the application of nitrogen to 130 lbs N/ac, the top two fields may be updated with a reduced application of nitrogen based on the edited program.”; whole paper of Beeson, Figure.7)); aggregating the images of the multiple fields and the tillage data associated with the multiple fields into a composite data set ([0079] of Guan “In an embodiment, field manager computing device 104 sends field data 106 to agricultural intelligence computer system 130 comprising or including, but not limited to, data values representing one or more of: a geographical location of the one or more fields, tillage information for the one or more fields, crops planted in the one or more fields, and soil data extracted from the one or more fields. Field manager computing device 104 may send field data 106 in response to user input from user 102 specifying the data values for the one or more fields. Additionally, field manager computing device 104 may automatically send field data 106 when one or more of the data values becomes available to field manager computing device 104. 
For example, field manager computing device 104 may be communicatively coupled to remote sensor 112 and/or application controller 114. In response to receiving data indicating that application controller 114 released water onto the one or more fields, field manager computing device 104 may send field data 106 to agricultural intelligence computer system 130 indicating that water was released on the one or more fields. Field data 106 identified in this disclosure may be input and communicated using electronic digital data that is communicated between computing devices using parameterized URLs over HTTP, or another suitable communication or messaging protocol.”; [0116] In an embodiment, the agricultural intelligence computer system 130, among other components, includes a cloud detection subsystem 170. The cloud detection subsystem 170 collects images and other information related to an area, such as an agricultural field, from the model data and field data repository 160 and/or external data 110 and determines which portions correspond to clouds and/or cloud shadows.”; whole paper , at least Figure.7 of Beeson); and prior to generating a map of tillage for the one or more fields using the trained model ([0066] of Guan “When field data 106 is not provided directly to the agricultural intelligence computer system via one or more agricultural machines or agricultural machine devices that interacts with the agricultural intelligence computer system, the user may be prompted via one or more user interfaces on the user device (served by the agricultural intelligence computer system) to input such information. In an example embodiment, the user may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. 
In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers. In alternative embodiments, the user may specify identification data by accessing field identification data (provided as shape files or in a similar format) from the U. S. Department of Agriculture Farm Service Agency or other source via the user device and providing such field identification data to the agricultural intelligence computer system. [0067] In an example embodiment, the agricultural intelligence computer system 130 is programmed to generate and cause displaying a graphical user interface comprising a data manager for data input. After one or more fields have been identified using the methods described above, the data manager may provide one or more graphical user interface widgets which when selected can identify changes to the field, soil, crops, tillage, or nutrient practices. The data manager may include a timeline view, a spreadsheet view, and/or one or more editable programs.”; [0182] At block 615, the high-precision pixel classifier 501 is trained on labeled training data. The training of the high-precision pixel classifier 501 will differ depending on the machine learning technique used to implement the high-precision pixel classifier 501. However, there are many commercially available tools, such as Vowpal Wabbit, Spark, PyBrain, and so forth that implement a variety of machine learning techniques that could potentially be used to implement the high-precision pixel classifier 501. In some embodiments, the high-precision pixel classifier 501 includes a component that processes the labeled training data into a format expected by the utilized library, and then invokes a training routine of the library to train the machine learning model. 
However, although there are many well-known machine learning techniques that may be used to implement the high-precision pixel classifier 501, many classifiers have configurable settings or coefficients that may need to be adjusted to provide adequate results. For example, in the case of a SVM, the per-class penalty may be set to 5:1 and the kernel function may be set to Radial Basis Function (RBF) with γ=0.25 and C=1.0; Figure 7 of Beeson), training the RESNET model to identify tillage in the multiple fields ([0079] of Guan “In an embodiment, field manager computing device 104 sends field data 106 to agricultural intelligence computer system 130 comprising or including, but not limited to, data values representing one or more of: a geographical location of the one or more fields, tillage information for the one or more fields, crops planted in the one or more fields, and soil data extracted from the one or more fields. Field manager computing device 104 may send field data 106 in response to user input from user 102 specifying the data values for the one or more fields. Additionally, field manager computing device 104 may automatically send field data 106 when one or more of the data values becomes available to field manager computing device 104. For example, field manager computing device 104 may be communicatively coupled to remote sensor 112 and/or application controller 114. In response to receiving data indicating that application controller 114 released water onto the one or more fields, field manager computing device 104 may send field data 106 to agricultural intelligence computer system 130 indicating that water was released on the one or more fields. 
Field data 106 identified in this disclosure may be input and communicated using electronic digital data that is communicated between computing devices using parameterized URLs over HTTP, or another suitable communication or messaging protocol.”; whole paper of Beeson, ; [0079] of Dai “Optionally, the convolution layer and the deconvolution layer are connected by a cascade structure, and the deconvolution module 720 is further configured to acquire, through the second number of deconvolution layers, convolution processing information of the convolution layer cascaded with the corresponding deconvolution layer, and obtain a deconvolution result of the corresponding deconvolution layer by superimposing a deconvolution result of an upper layer of the corresponding deconvolution layer and the convolution processing information of the convolution layer cascaded with the corresponding deconvolution layer. The first number may be equal to the second number, or may be smaller than the second number. The deep neural network may be a residual network.[0080] Due to the requirements of plant protection accuracy of mobile devices, when selecting a network structure of the deep learning network model, there may be sufficient network parameters to extract the features of the farmland image, and it is also ensured that more precise farmland location information can be extracted. Therefore, in the embodiment of the application, a 101-layer Convolutional Neural Network (CNN) structure is used to ensure a sufficient amount of parameters, which can extract farmland boundary features based on any background or any shape. In specific implementation, the deep learning neural network in the embodiment of the application is a neural network with a RESNET101 convolution structure. That is, the convolution module 710 includes 101 convolution layers, and the deconvolution module 720 includes 101 deconvolution layers. The input farmland image is convolved by the convolution module 710. 
Each convolution layer down-samples an input image of a current layer to extract image features and output the image features to the next convolution layer until the desired farmland image features are obtained.”) In addition, the same motivation is used as the rejection for claim 6. Regarding claim 16, Guan, She and Beeson teach the system of claim 12; the remaining limitations of claim 16 are of similar scope to claim 6, and are therefore rejected under the same rationale. Regarding claim 17, Guan, She, Beeson and Dai teach the system of claim 16, wherein the computing device is further configured to: the remaining limitations of claim 17 are of similar scope to claim 7, and are therefore rejected under the same rationale. 3. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Guan et al., IDS, U.S. Patent Application Publication No. 2019/0087682 (“Guan”) in view of She et al., U.S. Patent Application Publication No. 2020/0125844 (“She”), further in view of Beeson, Peter C., et al. "Multispectral satellite mapping of crop residue cover and tillage intensity in Iowa." Journal of Soil and Water Conservation 71.5 (2016): 385-395 (“Beeson”), further in view of Bengtson et al., U.S. Patent Application Publication No. 2020/0342226 (“Bengtson”). Regarding claim 8, Guan, She and Beeson teach the computer-implemented method of claim 1. Guan, She and Beeson are understood to be silent on the remaining limitations of claim 8. In the same field of endeavor, Bengtson teaches wherein the model includes an XGBoost model ([0092] “A recursive feature extraction with cross validation is used as an automated way to select the best features out of all the features provided. Some examples of machine learning techniques that may be used, include XGBoost, scikit-learn's Gradient Boosting Regressor, a Neural Network, and so on. The model is trained via K-fold cross validation. For each fold, the most important features (e.g., measured as the cumulative relative importance up to a threshold) are retained. 
The trained features are selected as those that were common to all folds.”; [0106] “To train the best model(s), it is beneficial to use the entire dataset. The imagery model may use eXtreme Gradient Boosting models (XGBoost) or a neural network to provide the best possible results for prediction. Predictions are then made by the trained model. These are the predictions that would provide the information to the user based on available imagery data. Once the predictions are made on the aggregated temporal data, they are aggregated to monthly predictions or it can be done in another manner that is best suited for the user.”; [0123] “For each variety of crop (for example, Canola, Wheat, Corn, Lentil, and Soybean), the average yields are predicted for each month. Depending on the geographic region, the months for which the yields are predicted may vary (for example, in North America, the months of April to September are of interest). The model for each month for a specific crop is trained on the entire dataset, including data up to and including that month. Machine learning techniques, for example, eXtreme Gradient Boosting (XGBoost), are used for each crop to predict average yield for a specific field.”) Therefore, in combination with Guan, She and Beeson, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Guan such that the model includes an XGBoost model, as seen in Bengtson, because this modification would predict average yield for a specific field ([0123] of Bengtson). 4. Claims 9-10, 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Guan et al., IDS, U.S. Patent Application Publication No. 2019/0087682 (“Guan”) in view of She et al., U.S. Patent Application Publication No. 2020/0125844 (“She”), further in view of Beeson, Peter C., et al. "Multispectral satellite mapping of crop residue cover and tillage intensity in Iowa." 
Journal of Soil and Water Conservation 71.5 (2016): 385-395.(“Beeson”) further in view of Ruff et al., U.S Patent Application Publication No.2020/0272971 (“Ruff”) Regarding claim 9, Guan, She and Beeson teach the computer-implemented method of claim 1, further comprising treating the one or more fields based on the map of tillage for the one or more fields ([0084] of Guan” In one embodiment, script generation instructions 205 are programmed to provide an interface for generating scripts, including variable rate (VR) fertility scripts. The interface enables growers to create scripts for field implements, such as nutrient applications, planting, and irrigation. For example, a planting script interface may comprise tools for identifying a type of seed for planting. Upon receiving a selection of the seed type, mobile computer application 200 may display one or more fields broken into management zones, such as the field map data layers created as part of digital map book instructions 206. In one embodiment, the management zones comprise soil zones along with a panel identifying each soil zone and a soil name, texture, drainage for each zone, or other field data. Mobile computer application 200 may also display tools for editing or creating such, such as graphical tools for drawing management zones, such as soil zones, over a map of one or more fields. Planting procedures may be applied to all management zones or different planting procedures may be applied to different subsets of management zones. When a script is created, mobile computer application 200 may make the script available for download in a format readable by an application controller, such as an archived or compressed format. 
Additionally and/or alternatively, a script may be sent directly to cab computer 115 from mobile computer application 200 and/or uploaded to one or more data servers and stored for further use.]; [0086] In one embodiment, the nitrogen graph may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen graph. The user may then use his optimized nitrogen graph and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts. Nitrogen instructions 210 also may be programmed to generate and cause displaying a nitrogen map, which indicates projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted; in some embodiments, different color indicators may signal a magnitude of surplus or magnitude of shortfall. The nitrogen map may display projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted for different times in the past and the future (such as daily, weekly, monthly or yearly) using numeric and/or colored indicators of surplus or shortfall, in which color indicates magnitude. In one embodiment, the nitrogen map may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen map, such as to obtain a preferred amount of surplus to shortfall. The user may then use his optimized nitrogen map and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts. 
In other embodiments, similar instructions to the nitrogen instructions 210 could be used for application of other nutrients (such as phosphorus and potassium) application of pesticide, and irrigation programs.”; [0075] of She “In one embodiment, field health instructions 214 are programmed to provide timely remote sensing images highlighting in-season crop variation and potential concerns. Example programmed functions include cloud checking, to identify possible clouds or cloud shadows; determining nitrogen indices based on field images; graphical visualization of scouting layers, including, for example, those related to field health, and viewing and/or sharing of scouting notes; and/or downloading satellite images from multiple sources and prioritizing the images for the grower, among others.[0076] In one embodiment, performance instructions 216 are programmed to provide reports, analysis, and insight tools using on-farm data for evaluation, insights and decisions. This enables the grower to seek improved outcomes for the next year through fact-based conclusions about why return on investment was at prior levels, and insight into yield-limiting factors. The performance instructions 216 may be programmed to communicate via the network(s) 109 to back-end analytics programs executed at agricultural intelligence computer system 130 and/or external data server computer 108 and configured to analyze metrics such as yield, yield differential, hybrid, population, SSURGO zone, soil test properties, or elevation, among others. Programmed reports and analysis may include yield variability analysis, treatment effect estimation, benchmarking of yield and other metrics against other growers based on anonymized data collected from many growers, or data for seeds and planting, among others.”; whole paper, at least Figure 7 of Beeson) Guan, She and Beeson are understood to be silent on the remaining limitations of claim 9. 
In the same field of endeavor, Ruff teaches treating the one or more fields based on the map of tillage for the one or more fields ([0100] In an embodiment, field manager computing device 104 sends field data 106 to agricultural intelligence computer system 130 comprising or including, but not limited to, data values representing one or more of: a geographical location of the one or more fields, tillage information for the one or more fields, crops planted in the one or more fields, and soil data extracted from the one or more fields. Field manager computing device 104 may send field data 106 in response to user input from user 102 specifying the data values for the one or more fields. Additionally, field manager computing device 104 may automatically send field data 106 when one or more of the data values becomes available to field manager computing device 104. [0195] In an embodiment, the agricultural intelligence computing system identifies evidence of existing or previous experiments on a field. Based on the evidence of existing or previous experiments on the field, the agricultural intelligence computing system may select the field as a candidate for performing a trial. The agricultural intelligence computing system may identify evidence of experiments based on sections of a field that are treated differently from the rest of the field. For instance, the agricultural intelligence computing system may identify locations in the field that have received different seed types, seeding populations, and/or product applications such as fertilizer and pesticide. If a determination is made that a field contains one or more experiments, the agricultural intelligence computing system may select the field as a candidate for participation in the trial. [0237] In an embodiment, the agricultural intelligence computing system determines where to place testing locations based on one or more management zones. 
Management zones refer to regions within an agricultural field or a plurality of agricultural fields that are expected to have similar limiting factors influencing harvested yields of crops. While management zones are generally described with respect to portions of a single field, management zones may be designed to encompass locations in a plurality of fields spanning a plurality of growers. Methods for identifying management zones are described further in U.S. Patent Pub. 2018-0046735A1. The agricultural intelligence computing system may identify benefits of using a new product, different seeds, and/or management practices for a management zone. The agricultural intelligence computing system may identify testing locations within the management zone so that effects of performing the trial can be compared to the rest of the management zone. [0238] In an embodiment, the agricultural intelligence computing system identifies management zones based on a type of trial being performed. For example, two locations on a field may comprise different soil types, but have a similar yield and a similar pest problem. For purposes of implementing a pesticide trial, the two locations may be treated as a single management zone. In contrast, for purposes of implementing a fertilizer trial which is dependent on the soil type, the two locations may be treated as different zones.”; [0253] FIG. 21 depicts an example of a grid overlay on a map used for computing short length yield variability. Map 2102 comprises a grid overlaying a map of an agricultural field. As shown in map 2102, the first vertical line is generated at a grid cell width away from the leftmost boundary of the map whereas the first horizontal line is generated at a grid cell length away from the bottommost boundary of the map. In an embodiment, the agricultural field additionally includes management zones. 
For example, map 2104 depicts a grid overlay on a map of an agricultural field which contains three management zones that are differentiated by color. The management zones refer to sections of the agricultural field which receive similar management treatment or have previously been grouped based on shared characteristics.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Guan and She, and the tillage intensity mapping of Beeson, by identifying whether a treatment was applied to each location, as seen in Ruff, because this modification would execute the trial in the identified locations, such as spraying a treatment according to a trial prescription ([0166] of Ruff). Thus, the combination of Guan, She, Beeson and Ruff teaches further comprising treating the one or more fields based on the map of tillage for the one or more fields. Regarding claim 10, Guan, She, Beeson and Ruff teach the computer-implemented method of claim 9, wherein treating the one or more fields includes applying one or more of a pesticide, a herbicide, and/or a fertilizer to the one or more fields ([0086] of Guan “In one embodiment, the nitrogen graph may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen graph. The user may then use his optimized nitrogen graph and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts. Nitrogen instructions 210 also may be programmed to generate and cause displaying a nitrogen map, which indicates projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted; in some embodiments, different color indicators may signal a magnitude of surplus or magnitude of shortfall.
The nitrogen map may display projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted for different times in the past and the future (such as daily, weekly, monthly or yearly) using numeric and/or colored indicators of surplus or shortfall, in which color indicates magnitude. In one embodiment, the nitrogen map may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen map, such as to obtain a preferred amount of surplus to shortfall. The user may then use his optimized nitrogen map and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts. In other embodiments, similar instructions to the nitrogen instructions 210 could be used for application of other nutrients (such as phosphorus and potassium) application of pesticide, and irrigation programs.”;[0076] of She “ In one embodiment, performance instructions 216 are programmed to provide reports, analysis, and insight tools using on-farm data for evaluation, insights and decisions. This enables the grower to seek improved outcomes for the next year through fact-based conclusions about why return on investment was at prior levels, and insight into yield-limiting factors. The performance instructions 216 may be programmed to communicate via the network(s) 109 to back-end analytics programs executed at agricultural intelligence computer system 130 and/or external data server computer 108 and configured to analyze metrics such as yield, yield differential, hybrid, population, SSURGO zone, soil test properties, or elevation, among others. 
Programmed reports and analysis may include yield variability analysis, treatment effect estimation, benchmarking of yield and other metrics against other growers based on anonymized data collected from many growers, or data for seeds and planting, among others.; [0195] of Ruff “ In an embodiment, the agricultural intelligence computing system identifies evidence of existing or previous experiments on a field. Based on the evidence of existing or previous experiments on the field, the agricultural intelligence computing system may select the field as a candidate for performing a trial. The agricultural intelligence computing system may identify evidence of experiments based on sections of a field that are treated differently from the rest of the field. For instance, the agricultural intelligence computing system may identify locations in the field that have received different seed types, seeding populations, and/or product applications such as fertilizer and pesticide. If a determination is made that a field contains one or more experiments, the agricultural intelligence computing system may select the field as a candidate for participation in the trial. [0238] In an embodiment, the agricultural intelligence computing system identifies management zones based on a type of trial being performed. For example, two locations on a field may comprise different soil types, but have a similar yield and a similar pest problem. For purposes of implementing a pesticide trial, the two locations may be treated as a single management zone. In contrast, for purposes of implementing a fertilizer trial which is dependent on the soil type, the two locations may be treated as different zones.”.) In addition, the same motivation is used as the rejection for claim 9. 
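The per-crop, per-month training scheme quoted above from Bengtson ([0123]), in which the model for a given month is trained on all data up to and including that month, can be sketched as follows. This is an illustrative reconstruction, not code from any cited reference: the record layout and crop names are assumptions, and a simple mean predictor stands in for the XGBoost model named in the quote.

```python
from statistics import mean

# Hypothetical (crop, month, yield) observations; values illustrative only.
RECORDS = [
    ("canola", 4, 2.1), ("canola", 5, 2.4), ("canola", 6, 2.6),
    ("wheat", 4, 3.0), ("wheat", 5, 3.3), ("wheat", 6, 3.1),
]

def train_monthly_models(records, months=range(4, 10)):
    """One model per (crop, month), trained on all data up to and
    including that month; a mean predictor stands in for XGBoost."""
    models = {}
    for crop in {r[0] for r in records}:
        for month in months:
            window = [y for c, m, y in records if c == crop and m <= month]
            if window:
                models[(crop, month)] = mean(window)  # stand-in "trained model"
    return models

models = train_monthly_models(RECORDS)
print(models[("canola", 5)])  # mean of the April and May canola yields
```

In practice the mean predictor would be replaced by a per-month `xgboost` regressor fit on the same cumulative window; the windowing logic is the point of the sketch.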
Regarding claim 18, Guan, She, Beeson teach the system of claim 12, wherein the computing device is further configured to direct operation of a farm implement at the one or more fields to treat the one or more fields with a treatment based on the map of tillage for the one or more fields(see at least [0084] of Guan “In one embodiment, script generation instructions 205 are programmed to provide an interface for generating scripts, including variable rate (VR) fertility scripts. The interface enables growers to create scripts for field implements, such as nutrient applications, planting, and irrigation. For example, a planting script interface may comprise tools for identifying a type of seed for planting. Upon receiving a selection of the seed type, mobile computer application 200 may display one or more fields broken into management zones, such as the field map data layers created as part of digital map book instructions 206. In one embodiment, the management zones comprise soil zones along with a panel identifying each soil zone and a soil name, texture, drainage for each zone, or other field data. Mobile computer application 200 may also display tools for editing or creating such, such as graphical tools for drawing management zones, such as soil zones, over a map of one or more fields. Planting procedures may be applied to all management zones or different planting procedures may be applied to different subsets of management zones. When a script is created, mobile computer application 200 may make the script available for download in a format readable by an application controller, such as an archived or compressed format. 
Additionally and/or alternatively, a script may be sent directly to cab computer 115 from mobile computer application 200 and/or uploaded to one or more data servers and stored for further use.]; [0086] In one embodiment, the nitrogen graph may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen graph. The user may then use his optimized nitrogen graph and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts. Nitrogen instructions 210 also may be programmed to generate and cause displaying a nitrogen map, which indicates projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted; in some embodiments, different color indicators may signal a magnitude of surplus or magnitude of shortfall. The nitrogen map may display projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted for different times in the past and the future (such as daily, weekly, monthly or yearly) using numeric and/or colored indicators of surplus or shortfall, in which color indicates magnitude. In one embodiment, the nitrogen map may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen map, such as to obtain a preferred amount of surplus to shortfall. The user may then use his optimized nitrogen map and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts. 
In other embodiments, similar instructions to the nitrogen instructions 210 could be used for application of other nutrients (such as phosphorus and potassium) application of pesticide, and irrigation programs.”; [0075] of She “In one embodiment, field health instructions 214 are programmed to provide timely remote sensing images highlighting in-season crop variation and potential concerns. Example programmed functions include cloud checking, to identify possible clouds or cloud shadows; determining nitrogen indices based on field images; graphical visualization of scouting layers, including, for example, those related to field health, and viewing and/or sharing of scouting notes; and/or downloading satellite images from multiple sources and prioritizing the images for the grower, among others. [0076] In one embodiment, performance instructions 216 are programmed to provide reports, analysis, and insight tools using on-farm data for evaluation, insights and decisions. This enables the grower to seek improved outcomes for the next year through fact-based conclusions about why return on investment was at prior levels, and insight into yield-limiting factors. The performance instructions 216 may be programmed to communicate via the network(s) 109 to back-end analytics programs executed at agricultural intelligence computer system 130 and/or external data server computer 108 and configured to analyze metrics such as yield, yield differential, hybrid, population, SSURGO zone, soil test properties, or elevation, among others. Programmed reports and analysis may include yield variability analysis, treatment effect estimation, benchmarking of yield and other metrics against other growers based on anonymized data collected from many growers, or data for seeds and planting, among others.; whole paper, at least Figure 7 of Beeson) In addition, the same motivation is used as the rejection for claim 1. 
Guan, She, and Beeson are understood to be silent on the remaining limitations of claim 18. In the same field of endeavor, Ruff teaches direct operation of a farm implement at the one or more fields to treat the one or more fields with a treatment based on the map of tillage for the one or more fields ([0100] In an embodiment, field manager computing device 104 sends field data 106 to agricultural intelligence computer system 130 comprising or including, but not limited to, data values representing one or more of: a geographical location of the one or more fields, tillage information for the one or more fields, crops planted in the one or more fields, and soil data extracted from the one or more fields. Field manager computing device 104 may send field data 106 in response to user input from user 102 specifying the data values for the one or more fields. Additionally, field manager computing device 104 may automatically send field data 106 when one or more of the data values becomes available to field manager computing device 104. [0195] In an embodiment, the agricultural intelligence computing system identifies evidence of existing or previous experiments on a field. Based on the evidence of existing or previous experiments on the field, the agricultural intelligence computing system may select the field as a candidate for performing a trial. The agricultural intelligence computing system may identify evidence of experiments based on sections of a field that are treated differently from the rest of the field. For instance, the agricultural intelligence computing system may identify locations in the field that have received different seed types, seeding populations, and/or product applications such as fertilizer and pesticide. If a determination is made that a field contains one or more experiments, the agricultural intelligence computing system may select the field as a candidate for participation in the trial.
[0237] In an embodiment, the agricultural intelligence computing system determines where to place testing locations based on one or more management zones. Management zones refer to regions within an agricultural field or a plurality of agricultural fields that are expected to have similar limiting factors influencing harvested yields of crops. While management zones are generally described with respect to portions of a single field, management zones may be designed to encompass locations in a plurality of fields spanning a plurality of growers. Methods for identifying management zones are described further in U.S. Patent Pub. 2018-0046735A1. The agricultural intelligence computing system may identify benefits of using a new product, different seeds, and/or management practices for a management zone. The agricultural intelligence computing system may identify testing locations within the management zone so that effects of performing the trial can be compared to the rest of the management zone. [0238] In an embodiment, the agricultural intelligence computing system identifies management zones based on a type of trial being performed. For example, two locations on a field may comprise different soil types, but have a similar yield and a similar pest problem. For purposes of implementing a pesticide trial, the two locations may be treated as a single management zone. In contrast, for purposes of implementing a fertilizer trial which is dependent on the soil type, the two locations may be treated as different zones.”; [0253] FIG. 21 depicts an example of a grid overlay on a map used for computing short length yield variability. Map 2102 comprises a grid overlaying a map of an agricultural field. As shown in map 2102, the first vertical line is generated at a grid cell width away from the leftmost boundary of the map whereas the first horizontal line is generated at a grid cell length away from the bottommost boundary of the map. 
In an embodiment, the agricultural field additionally includes management zones. For example, map 2104 depicts a grid overlay on a map of an agricultural field which contains three management zones that are differentiated by color. The management zones refer to sections of the agricultural field which receive similar management treatment or have previously been grouped based on shared characteristics.”; [0314] In an embodiment, the agricultural intelligence computing system determines the result value association based on captured data for the field. For example, the agricultural intelligence computing system may receive field data including field descriptions, soil data, planting data, fertility data, harvest and yield data, crop protection data, pest and disease data, irrigation data, tiling data, imagery, weather data, and additional management data. Based on the field data, the agricultural intelligence computing system may compute benefits to the field of using one or more products, management practices, farming equipment, or seeds. The agricultural intelligence computing system may generate a trial participation request based on the computed benefits to the field. For example, the agricultural intelligence computing system may be programmed or configured to offer the one or more products, management practices, farming equipment, or seeds at a particular percentage of computed increase in profits for the field.) In addition, the same motivation is used as the rejection for claim 9. Thus, the combination of Guan, She, Beeson and Ruff teaches wherein the computing device is further configured to direct operation of a farm implement at the one or more fields to treat the one or more fields with a treatment based on the map of tillage for the one or more fields. 
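The grid overlay quoted above from Ruff ([0253]), where the first vertical line sits one grid-cell width from the leftmost boundary and the first horizontal line one grid-cell length from the bottommost boundary, can be sketched for a rectangular field. The field bounds and cell sizes below are illustrative assumptions; Ruff publishes no code.

```python
def grid_lines(min_x, max_x, min_y, max_y, cell_w, cell_h):
    """Interior grid lines per Ruff [0253]: the first vertical line falls
    one cell width from the left boundary, the first horizontal line one
    cell length from the bottom boundary."""
    verticals, horizontals = [], []
    x = min_x + cell_w
    while x < max_x:  # only lines interior to the field
        verticals.append(x)
        x += cell_w
    y = min_y + cell_h
    while y < max_y:
        horizontals.append(y)
        y += cell_h
    return verticals, horizontals

# Hypothetical 100 x 60 field with 25 x 20 grid cells.
v, h = grid_lines(0.0, 100.0, 0.0, 60.0, 25.0, 20.0)
print(v, h)  # [25.0, 50.0, 75.0] [20.0, 40.0]
```

The resulting cells could then be intersected with management-zone polygons to compute the short-length yield variability the reference describes.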
Regarding claim 19, Guan, She, Beeson and Ruff teach the system of claim 18, wherein the treatment includes one or more of a pesticide, a herbicide, and/or a fertilizer (see at least [0086] of Guan “ In one embodiment, the nitrogen graph may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen graph. The user may then use his optimized nitrogen graph and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts. Nitrogen instructions 210 also may be programmed to generate and cause displaying a nitrogen map, which indicates projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted; in some embodiments, different color indicators may signal a magnitude of surplus or magnitude of shortfall. The nitrogen map may display projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted for different times in the past and the future (such as daily, weekly, monthly or yearly) using numeric and/or colored indicators of surplus or shortfall, in which color indicates magnitude. In one embodiment, the nitrogen map may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen map, such as to obtain a preferred amount of surplus to shortfall. The user may then use his optimized nitrogen map and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts. 
In other embodiments, similar instructions to the nitrogen instructions 210 could be used for application of other nutrients (such as phosphorus and potassium) application of pesticide, and irrigation programs.”; [0076] of She “In one embodiment, performance instructions 216 are programmed to provide reports, analysis, and insight tools using on-farm data for evaluation, insights and decisions. This enables the grower to seek improved outcomes for the next year through fact-based conclusions about why return on investment was at prior levels, and insight into yield-limiting factors. The performance instructions 216 may be programmed to communicate via the network(s) 109 to back-end analytics programs executed at agricultural intelligence computer system 130 and/or external data server computer 108 and configured to analyze metrics such as yield, yield differential, hybrid, population, SSURGO zone, soil test properties, or elevation, among others. Programmed reports and analysis may include yield variability analysis, treatment effect estimation, benchmarking of yield and other metrics against other growers based on anonymized data collected from many growers, or data for seeds and planting, among others.; [0195] of Ruff “ In an embodiment, the agricultural intelligence computing system identifies evidence of existing or previous experiments on a field. Based on the evidence of existing or previous experiments on the field, the agricultural intelligence computing system may select the field as a candidate for performing a trial. The agricultural intelligence computing system may identify evidence of experiments based on sections of a field that are treated differently from the rest of the field. For instance, the agricultural intelligence computing system may identify locations in the field that have received different seed types, seeding populations, and/or product applications such as fertilizer and pesticide. 
If a determination is made that a field contains one or more experiments, the agricultural intelligence computing system may select the field as a candidate for participation in the trial. [0238] In an embodiment, the agricultural intelligence computing system identifies management zones based on a type of trial being performed. For example, two locations on a field may comprise different soil types, but have a similar yield and a similar pest problem. For purposes of implementing a pesticide trial, the two locations may be treated as a single management zone. In contrast, for purposes of implementing a fertilizer trial which is dependent on the soil type, the two locations may be treated as different zones.”) In addition, the same motivation is used as the rejection for claim 9.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARAH LE whose telephone number is (571) 270-7842. The examiner can normally be reached Monday: 8AM-4:30PM EST, Tuesday: 8AM-3:30PM EST, Wednesday: 8AM-2:30PM EST; Thursday and Friday off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SARAH LE/Primary Examiner, Art Unit 2614

Prosecution Timeline

Jul 25, 2023: Application Filed
Jan 23, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569321: PROPOSING DENTAL RESTORATION MATERIAL PARAMETERS (2y 5m to grant; granted Mar 10, 2026)
Patent 12573128: Progressive Compression of Geometry for Graphics Processing (2y 5m to grant; granted Mar 10, 2026)
Patent 12536715: GENERATION OF STYLIZED DRAWING OF THREE-DIMENSIONAL SHAPES USING NEURAL NETWORKS (2y 5m to grant; granted Jan 27, 2026)
Patent 12505585: SYSTEMS AND METHODS FOR OVERLAY OF VIRTUAL OBJECT ON PROXY OBJECT (2y 5m to grant; granted Dec 23, 2025)
Patent 12505590: NODE LIGHTING (2y 5m to grant; granted Dec 23, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview (+33.4%): 99%
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 258 resolved cases by this examiner. Grant probability derived from career allow rate.
