Prosecution Insights
Last updated: April 19, 2026
Application No. 17/978,473

JOINING RASTER DATA AND VECTOR DATA IN AGRICULTURAL DATA PROCESSING

Final Rejection §103

Filed: Nov 01, 2022
Examiner: HANSEN, CONNOR LEVI
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Deere & Company
OA Round: 4 (Final)

Grant Probability: 75% (Favorable)
OA Rounds: 5-6
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (21 granted / 28 resolved; +13.0% vs TC avg) — grants above average
Interview Lift: +29.2% (with vs. without interview, among resolved cases with interview) — strong
Avg Prosecution: 2y 10m (typical timeline; 32 currently pending)
Total Applications: 60 across all art units (career history)
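As a quick sanity check on the dashboard arithmetic above (the interview-lift and grant-probability figures are the tool's own model outputs and are not recomputed here, only the allow rate):

```python
# Career allow rate as reported above: granted / resolved decisions.
granted, resolved = 21, 28
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")  # → 75%
```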

Statute-Specific Performance

§101: 19.1% (-20.9% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 28 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Rejections made under 35 U.S.C. 112(b) are withdrawn. Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 8-9 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wan et al. (MY 157809 A) (hereinafter, Wan) in view of Zhang et al. (US 20230222733 A1) (hereinafter, Zhang) and further in view of Guo et al. (US 2020/0126232 A1) (hereinafter, Guo).

Regarding claim 1, Wan teaches a method implemented by one or more processors (see Processor (20) in Fig. 1 and Fig. 3), the method comprising: selecting an instance of geolocated agricultural image data capturing an agricultural plot, wherein the instance of geolocated agricultural image data includes a two dimensional representation of the agricultural plot, divided into a plurality of cells (Wan, “The proposed system and method provide efficient, user-friendly and computerized system of transforming satellite image into a digital field plots map of an agriculture field”, pg. 9 of 20, lines 11-16; “Such satellite image could be obtained from IKONOS satellite service system or any other service provider.”, pg. 10 of 20, lines 7-9; see Satellite Image of Fig. 3. Two-dimensional satellite images composed of individual pixels are obtained.), and wherein each cell represents a category of agricultural data from a plurality of categories of agricultural data captured in the agricultural plot (Wan, “The combined linear feature image… (is) then subjected to Maximum Likelihood (ML) classifier in the classification process (2) to classify the image… A total of 6099 and 899 pixels were selected for paddy and farm/road path class during the field study for use in the transformation process.”, pg. 11 of 20, lines 1-11. Each pixel of the image is individually classified into categories such as paddy field or road/path.); processing the instance of geolocated agricultural image data to identify a first contiguous grouping of cells and a second contiguous grouping of cells, wherein the first contiguous grouping of cells and the second contiguous grouping of cells represent the category of agricultural data (Wan, “The classified image is then subjected to raster to vector conversion process (3) to extract the paddy field plot boundaries of the field.”, pg. 11 of 20, lines 12-14. The classified images are then processed to extract boundaries for each grouping of pixels. Figure 4a illustrates that each group categorized as a paddy field plot is individually identified for vector conversion.); generating a first vector representation of the first contiguous grouping of cells and a second vector representation of the second contiguous grouping of cells, wherein the first vector representation includes a representation of a first boundary of the first contiguous grouping of cells and the second vector representation includes a representation of a second boundary of the second contiguous grouping of cells, and the first vector representation and the second vector representation further include a representation of the category of agricultural data (Wan, “The raster to vector conversion would be able to generate lines, which outlines the boundaries of the plots.”, pg. 9 of 20, lines 14-22. Based on each of the identified boundaries, the system performs raster to vector conversion to generate vector representations for each paddy field plot. These plot boundary vector representations can be further interpreted as representing the category, i.e., a vector representation corresponding to the pixels classified as being paddy field.).

Wan does not teach joining the first vector representation and the second vector representation to generate a joint vector representation of the category of agricultural data; replacing the first contiguous grouping of cells and the second contiguous grouping of cells of the instance of geolocated agricultural image data with the joint vector representation to generate an updated instance of geolocated agricultural image data, the updated instance of geolocated agricultural image data to include the instance of geolocated agriculture image data and the joint vector representation; and performing a calculation on the updated instance of geolocated agricultural image data.
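The pixel-level grouping step the examiner maps onto Wan (classify each cell, then collect contiguous same-category regions for vectorization) can be sketched roughly as follows. This is an illustrative flood-fill toy under invented data, not Wan's actual Maximum Likelihood classification or raster-to-vector implementation; the grid and category names are made up for the example:

```python
from collections import deque

# Hypothetical grid of per-cell category labels, standing in for a
# classified satellite image (labels are illustrative, not from Wan).
GRID = [
    ["paddy", "paddy", "path",  "paddy"],
    ["paddy", "paddy", "path",  "paddy"],
    ["path",  "path",  "path",  "paddy"],
]

def contiguous_groups(grid, category):
    """Flood-fill to collect 4-connected groups of cells in `category`."""
    rows, cols = len(grid), len(grid[0])
    seen, groups = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != category or (r, c) in seen:
                continue
            group, queue = [], deque([(r, c)])
            seen.add((r, c))
            while queue:
                y, x = queue.popleft()
                group.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and grid[ny][nx] == category
                            and (ny, nx) not in seen):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            groups.append(group)
    return groups

# The grid above contains two separate contiguous "paddy" regions,
# analogous to the first and second contiguous groupings of cells.
print(len(contiguous_groups(GRID, "paddy")))  # → 2
```

Each group's cells could then be traced into a boundary polyline, which is the role the raster-to-vector conversion plays in Wan.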
However, Zhang teaches joining the first vector representation and the second vector representation to generate a joint vector representation of the category of agricultural data; replacing the first contiguous grouping of cells and the second contiguous grouping of cells of the instance of geolocated agricultural image data with the joint vector representation to generate an updated instance of geolocated agricultural image data, the updated instance of geolocated agricultural image data to include the instance of geolocated agriculture image data and the joint vector representation; and performing a calculation on the updated instance of geolocated agricultural image data (Zhang, “In S101, a to-be-processed image is acquired to obtain a grayscale image of the to-be-processed image. In S102, pixels in the grayscale image are classified according to grayscale values to obtain binary images corresponding to different landform categories. In S103, image spot contours of image spots in the binary images are extracted, and the extracted image spot contours are taken as vector graphs to obtain a vector graph set. In S104, the vector graphs corresponding to a same landform category in the vector graph set are merged according to position information, and a first landform map is obtained according to merging results corresponding to different landform categories. In S105, the vector graphs corresponding to different landform categories in the first landform map are mapped by using a preset landform type, and a mapping result is taken as a second landform map.”, pg. 2, paragraphs 0020-0024; “In this embodiment, after S103 is performed to obtain a first landform map according to merging results corresponding to different landform categories, the following content may be further included: changing a landform category of the vector graph with an area less than a first preset threshold in the first landform map to a landform category of the vector graph with an area greater than a second preset threshold adjacent thereto. That is, in this embodiment, the landform category of the vector graph in the first landform map can be changed, so that the landform category of the vector graph with a smaller area is the same as the landform category of the vector graph with a larger area adjacent thereto, thereby further improving the accuracy and consistency of the built first landform map. In this embodiment, after S104 is performed to obtain a first landform map, S105 is performed to map, by using a preset landform type... The preset landform type may include a landform color, a landform shape and so on.”, pg. 3, paragraphs 0040-0043. Pixels of an image are classified into landform categories, and contiguous groups of pixels in the same category are extracted as vector graphs. These vectors are merged into a joint vector representation for each landform type and presented as a landform map generated based on spatial positions of the image. Once this joint vector representation is generated, further calculation can be performed, such as thresholding to adjust or reclassify vector graphs based on their sizes. This allows the system to produce a more detailed and reliable map, which is then styled by applying preset colors and/or shapes for each landform category.).
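Zhang's merge-and-refine steps (S104 plus the small-area reclassification) can be sketched as follows. The region records, field names, and threshold values below are invented for illustration; Zhang operates on polygon areas and adjacency derived from actual vector geometry, which this toy only mimics:

```python
# Hypothetical "vector graph" records: each region carries a landform
# category and a cell count standing in for polygon area.
regions = [
    {"id": 1, "category": "paddy", "area": 400, "neighbors": [2]},
    {"id": 2, "category": "road",  "area": 3,   "neighbors": [1, 3]},
    {"id": 3, "category": "paddy", "area": 380, "neighbors": [2]},
]

SMALL, LARGE = 10, 100  # stand-ins for Zhang's first/second preset thresholds

def absorb_small_regions(regions):
    """Relabel any region smaller than SMALL to the category of an
    adjacent region larger than LARGE (Zhang's refinement step)."""
    by_id = {r["id"]: r for r in regions}
    for r in regions:
        if r["area"] < SMALL:
            for nid in r["neighbors"]:
                if by_id[nid]["area"] > LARGE:
                    r["category"] = by_id[nid]["category"]
                    break
    return regions

def merge_by_category(regions):
    """Group region ids per category — a joint representation per category."""
    merged = {}
    for r in regions:
        merged.setdefault(r["category"], []).append(r["id"])
    return merged

# The tiny "road" sliver is absorbed into the adjacent paddy regions,
# so all three regions merge under a single category.
print(merge_by_category(absorb_small_regions(regions)))  # → {'paddy': [1, 2, 3]}
```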
Wan teaches generating vector representations of individual paddy field plot boundaries through pixel-level classification and raster-to-vector conversion, and providing those vector representations to a Geographical Information System (GIS) for precision farming applications (Wan, “Figure 4a shows the visual representation of the result of applying Maximum Likelihood classification… and Figure 4b shows the result of converting the raster information to vector information to obtain the boundary plots. As shown in Figure 4b, the resulting image show clear demarcation of field plots boundary that could be used for GIS application in precision farming.”, pg. 11, lines 23-32). Zhang teaches generating a joint vector representation of merged vector graphs determined to be in the same category as landform maps, performing calculations to refine the merged results (see above).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Wan by including the joint vector representation process as taught by Zhang (Zhang, pg. 2, paragraphs 0020-0024, and pg. 3, paragraphs 0040-0043). The motivation for doing so would have been to join vector representations of the same category into a uniform representation, thereby reducing the amount of data which is transmitted to and processed by the GIS. The combination of Wan in view of Zhang would generate joint vector representation maps corresponding to agricultural categories, which can then be provided to the GIS for use in precision farming. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Wan with Zhang to obtain the invention as specified above.
Wan in view of Zhang does not teach wherein the calculation determines a characteristic of an agricultural operation to be performed with respect to a portion of the agricultural plot represented by the joint vector representation.

However, Guo teaches wherein the calculation determines a characteristic of an agricultural operation to be performed with respect to a portion of the agricultural plot represented by the joint vector representation (Guo, “various machine learning models may be trained to generate data indicative of predicted crop yields at a granular level. For example, given a sequence of high-elevation digital images (which may include synthetic high-elevation digital images generated using techniques described herein), crop yield may be predicted on a pixel-by-pixel basis”, pg. 2, paragraph 0014, lines 7-16. A sequence of agricultural images taken in the same location/plot are input to a machine learning model for crop yield predictions.). Wan in view of Zhang teaches generating joint vector representation maps for precision farming purposes (Wan, “As shown in Figure 4b, the resulting image show clear demarcation of field plots boundary that could be used for GIS application in precision farming.”, pg. 11 of 20, lines 29-32). Guo teaches using processed images of agricultural plots to determine a characteristic of an agricultural operation for precision farming application, namely crop yield predictions using a machine learning model (Guo, pg. 2, paragraph 0014, lines 7-16).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Wan in view of Zhang to include the machine learning model of Guo (Guo, pg. 2, paragraph 0014, lines 7-16), for crop yield predictions of the updated images.
The motivation for doing so would have been to predict the crop yield corresponding to an agricultural plot, thereby providing field status information for future decision making (as suggested by Guo, “digital images captured from high elevations, such as satellite images… are becoming increasingly important for agricultural applications, such as estimating a current state or health of a field.”, see abstract). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Wan in view of Zhang with Guo to obtain the invention as specified in claim 1.

Regarding claim 2, Wan in view of Zhang and further in view of Guo teaches the method according to claim 1, wherein the instance of geolocated agricultural image data is captured via one or more image sensors mounted on a satellite (Wan, “satellite image could be one of the sources”, pg. 6 and 7 of 20, lines 26 and 1, respectively).

Regarding claim 4, Wan in view of Zhang and further in view of Guo teaches the method according to claim 1, further including: processing a plurality of updated instances of geolocated agricultural image data, wherein each instance in the plurality of updated instances of geolocated agricultural image data captures a same geographical location; and generating a predicted crop yield for the same geographical location using an agricultural machine learning model (Guo, “various machine learning models may be trained to generate data indicative of predicted crop yields at a granular level. For example, given a sequence of high-elevation digital images (which may include synthetic high-elevation digital images generated using techniques described herein), crop yield may be predicted on a pixel-by-pixel basis”, pg.
2, paragraph 0014, lines 7-16. A sequence of agricultural images taken in the same location are input to a machine learning model for crop yield predictions.).

Regarding claim 5, Wan in view of Zhang and further in view of Guo teaches the method according to claim 1, wherein processing the instance of geolocated agricultural image data to identify the first contiguous grouping of cells and the second contiguous grouping of cells, wherein the first contiguous grouping of cells and the second contiguous grouping of cells represent the category of agricultural data includes: identifying one or more edges in the instance of geolocated agricultural image data (Wan, “the satellite image is first subjected to a linear feature (LF) extraction (10) to obtain the linear feature image of the satellite image, applying edge extraction (11)”, pg. 10 of 20, lines 9-12, see Fig. 3 step 10. The linear feature extraction is used to emphasize edges in the image.); processing a first plurality of cells in the instance of geolocated agricultural image data to generate a first candidate grouping of cells based on the identified one or more edges; processing a second plurality of cells in the instance of geolocated agricultural image data to generate a second candidate grouping of cells based on the identified one or more edges (Wan, “applying edge extraction (11) to the near-infrared band of the satellite image to obtain the linear feature band and combining the linear feature image with red, green and near-infrared bands of the satellite image (12).”, pg. 10 of 20, lines 12-17, see Fig. 4a. Edges corresponding to the near-infrared band are processed to generate boundaries for each grouping of pixels representing the agricultural plots); processing the first candidate grouping of cells and the second candidate grouping of cells to determine whether the first candidate grouping of cells and the second candidate grouping of cells represent the category of agricultural data (Wan, “The combined linear feature image, red, green and near-infrared bands are then subjected to Maximum Likelihood (ML) classifier”, pg. 11 of 20, lines 1-4. All pixels of the image are subject to classification.); and in response to determining the first candidate grouping of cells and the second candidate grouping of cells represents the category of agricultural data, identifying the first contiguous grouping of cells and the second contiguous grouping of cells based on the first candidate grouping of cells and the second candidate grouping of cells, respectively (Wan, “The classified image is then subjected to raster to vector conversion process (3) to extract the paddy field plot boundaries of the field.”, pg. 11 of 20, lines 12-14. Once the image is classified, the classification result for each identified plot is used to define the boundaries for vectorization.).

Claim 8 corresponds to claim 1, additionally reciting a non-transitory computer readable medium. Wan in view of Zhang and further in view of Guo teaches the addition of a non-transitory computer readable medium (Wan, “The computerized system (19) generally includes… a software (not shown)”, pg. 9 of 20, lines 20-23) to perform the method according to claim 1. As indicated in the analysis of claim 1, Wan in view of Zhang and further in view of Guo teaches the method according to claim 1. Therefore, claim 8 is rejected for the same reason of obviousness as claim 1.

Claim 9 corresponds to claim 2, additionally reciting a non-transitory computer readable medium.
Wan in view of Zhang and further in view of Guo teaches the addition of a non-transitory computer readable medium (see analysis of claim 8) to perform the method according to claim 2. As indicated in the analysis of claim 2, Wan in view of Zhang and further in view of Guo teaches the method according to claim 2. Therefore, claim 9 is rejected for the same reason of obviousness as claim 2.

Claim 11 corresponds to claim 4, additionally reciting a non-transitory computer readable medium. Wan in view of Zhang and further in view of Guo teaches the addition of a non-transitory computer readable medium (see analysis of claim 8) to perform the method of claim 4. As indicated in the analysis of claim 4, Wan in view of Zhang and further in view of Guo teaches the method according to claim 4. Therefore, claim 11 is rejected for the same reason of obviousness as claim 4.

Claim 12 corresponds to claim 5, additionally reciting a non-transitory computer readable medium. Wan in view of Zhang and further in view of Guo teaches the addition of a non-transitory computer readable medium (see analysis of claim 8) to perform the method according to claim 5. As indicated in the analysis of claim 5, Wan in view of Zhang and further in view of Guo teaches the method according to claim 5. Therefore, claim 12 is rejected for the same reason of obviousness as claim 5.

Claim 15 corresponds to claim 1, additionally reciting a system comprising one or more processors and memory. Wan in view of Zhang and further in view of Guo teaches the addition of a system comprising one or more processors and memory (Wan, “The computerized system (19) generally includes hardware including at least a processor (20), a memory device (21) and a storing device (22)”, pg. 9 of 20, lines 20-23) to perform the method according to claim 1. As indicated in the analysis of claim 1, Wan in view of Zhang and further in view of Guo teaches the method according to claim 1.
Therefore, claim 15 is rejected for the same reason of obviousness as claim 1.

Claim 16 corresponds to claim 2, additionally reciting a system comprising one or more processors and memory. Wan in view of Zhang and further in view of Guo teaches the addition of a system comprising one or more processors and memory (see analysis of claim 15) to perform the method according to claim 2. As indicated in the analysis of claim 2, Wan in view of Zhang and further in view of Guo teaches the method according to claim 2. Therefore, claim 16 is rejected for the same reason of obviousness as claim 2.

Claim 18 corresponds to claim 4, additionally reciting a system comprising one or more processors and memory. Wan in view of Zhang and further in view of Guo teaches the addition of a system comprising one or more processors and memory (see analysis of claim 15) to perform the method according to claim 4. As indicated in the analysis of claim 4, Wan in view of Zhang and further in view of Guo teaches the method according to claim 4. Therefore, claim 18 is rejected for the same reason of obviousness as claim 4.

Claim 19 corresponds to claim 5, additionally reciting a system comprising one or more processors and memory. Wan in view of Zhang and further in view of Guo teaches the addition of a system comprising one or more processors and memory (see analysis of claim 15) to perform the method according to claim 5. As indicated in the analysis of claim 5, Wan in view of Zhang and further in view of Guo teaches the method according to claim 5. Therefore, claim 19 is rejected for the same reason of obviousness as claim 5.

Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wan et al. (MY 157809 A) in view of Zhang et al. (US 20230222733 A1) and further in view of Guo et al. (US 2020/0126232 A1) and Han (US 2022/0237397 A1).
Regarding claim 3, Wan in view of Zhang and further in view of Guo teaches the method according to claim 1, wherein processing the instance of geolocated agricultural image data to identify the first contiguous grouping of cells and the second contiguous grouping of cells, wherein the first contiguous grouping of cells and the second contiguous grouping of cells represent the category of agricultural data includes: determining a number of cells in a first candidate grouping of cells and a second candidate grouping of cells (Wan, “The classified image is then subjected to raster to vector conversion process (3) to extract the paddy field plot boundaries of the field.”, pg. 11 of 20, lines 12-14. In identifying the vector boundaries of the plot, the number of pixels corresponding to the category would be determined).

Wan in view of Zhang and further in view of Guo does not teach determining whether the number of cells in the first candidate grouping of cells and the second candidate grouping of cells respectively satisfy a threshold value; and in response to determining the number of cells in the first candidate grouping of cells and the second candidate grouping of cells respectively satisfy the threshold value, identifying the first candidate grouping of cells and the second candidate grouping of cells as the first contiguous grouping of cells and the second contiguous grouping of cells, respectively.

However, Han teaches determining whether the number of cells in the first candidate grouping of cells and the second candidate grouping of cells respectively satisfy a threshold value; and in response to determining the number of cells in the first candidate grouping of cells and the second candidate grouping of cells respectively satisfy the threshold value, identifying the first candidate grouping of cells and the second candidate grouping of cells as the first contiguous grouping of cells and the second contiguous grouping of cells, respectively.
(Han, “In some implementations, density-based clustering is performed according to a minimum size parameter. The minimum size parameter defines the minimum size of a cluster, and can be useful to filter out small groups of remaining pixels”, pg. 3, paragraph 0028, lines 6-11. Pixels are only defined in a cluster if they meet a minimum size requirement.).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Wan in view of Zhang and further in view of Guo to include a minimum size parameter thresholding, as taught by Han (pg. 3, paragraph 0028, lines 6-11), for pixel group determination. The motivation for doing so would have been to filter out irrelevant data from the image, thereby increasing the processing speed. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Wan in view of Zhang and further in view of Guo with Han to obtain the invention as specified in claim 3.

Claim 10 corresponds to claim 3, additionally reciting a non-transitory computer readable medium. Wan in view of Zhang and further in view of Guo and Han teaches the addition of a non-transitory computer readable medium (see analysis of claim 8) to perform the method of claim 3. As indicated in the analysis of claim 3, Wan in view of Zhang and further in view of Guo and Han teaches the method according to claim 3. Therefore, claim 10 is rejected for the same reason of obviousness as claim 3.

Claim 17 corresponds to claim 3, additionally reciting a system comprising one or more processors and memory.
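The minimum-size filtering Han is cited for (discarding candidate pixel groups below a threshold before treating them as contiguous groupings) amounts to a one-line filter. A minimal sketch, with an invented threshold and invented candidate groups — Han's actual density-based clustering is not reproduced here:

```python
# Illustrative minimum size parameter; Han leaves the value implementation-specific.
MIN_CLUSTER_SIZE = 5

# Hypothetical candidate groupings of (row, col) cells.
candidate_groups = [
    [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)],  # 6 cells — kept
    [(5, 5), (5, 6)],                                  # 2 cells — filtered out
    [(8, 0), (8, 1), (9, 0), (9, 1), (9, 2)],          # 5 cells — kept
]

# Only groups meeting the minimum size survive as contiguous groupings.
kept_groups = [g for g in candidate_groups if len(g) >= MIN_CLUSTER_SIZE]
print(len(kept_groups))  # → 2
```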
Wan in view of Zhang and further in view of Guo and Han teaches the addition of a system comprising one or more processors and memory (see analysis of claim 15) to perform the method according to claim 3. As indicated in the analysis of claim 3, Wan in view of Zhang and further in view of Guo and Han teaches the method according to claim 3. Therefore, claim 17 is rejected for the same reason of obviousness as claim 3.

Claims 6, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wan et al. (MY 157809 A) in view of Zhang et al. (US 20230222733 A1) and further in view of Guo et al. (US 2020/0126232 A1) and Hafiz et al. (“A survey on instance segmentation: state of the art”, International Journal of Multimedia Information Retrieval, 2020) (hereinafter, Hafiz).

Claim 6 corresponds to claim 5, reciting the method according to claim 5, with the exception of: identifying one or more objects in the instance of geolocated agricultural image data; processing a first plurality of cells in the instance of agricultural image data to generate a first candidate grouping of cells based on the identified one or more objects; and processing a second plurality of cells in the instance of agricultural image data to generate a second candidate grouping of cells based on the identified one or more objects. Claim 6 differs from claim 5 primarily in that it performs image segmentation using identified objects rather than edges. As indicated in the analysis of claim 5, Wan in view of Zhang and further in view of Guo teaches the method of image segmentation according to edges. Wan in view of Zhang and further in view of Guo does not teach performing segmentation of the image using identified objects.

However, Hafiz teaches methods regarding instance segmentation (Hafiz, see abstract). Hafiz teaches performing image segmentation using identified objects (Hafiz, “The goal of semantic segmentation is to obtain fine inference by predicting labels for each image pixel.
Every pixel is class labelled according to the object or region within which it is enclosed.”, pg. 171, 2nd column, lines 1-3, see Fig. 1).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Wan in view of Zhang and further in view of Guo by replacing the edge-based segmentation with the object-based segmentation of Hafiz (pg. 171, 2nd column, lines 1-3, see Fig. 1). The motivation for doing so would have been to identify individual labels for all objects in the image, as suggested by Hafiz (“instance segmentation provides different labels for separate instances of objects belonging to the same object class.”, pg. 171, 2nd column, lines 4-6). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Wan in view of Zhang and further in view of Guo with Hafiz to obtain the invention as specified in claim 6.

Claim 13 corresponds to claim 6, additionally reciting a non-transitory computer readable medium. Wan in view of Zhang and further in view of Guo and Hafiz teaches the addition of a non-transitory computer readable medium (see analysis of claim 8) to perform the method of claim 6. As indicated in the analysis of claim 6, Wan in view of Zhang and further in view of Guo and Hafiz teaches the method according to claim 6. Therefore, claim 13 is rejected for the same reason of obviousness as claim 6.

Claim 20 corresponds to claim 6, additionally reciting a system comprising one or more processors and memory. Wan in view of Zhang and further in view of Guo and Hafiz teaches the addition of a system comprising one or more processors and memory (see analysis of claim 15) to perform the method according to claim 6.
As indicated in the analysis of claim 6, Wan in view of Zhang and further in view of Guo and Hafiz teaches the method according to claim 6. Therefore, claim 20 is rejected for the same reason of obviousness as claim 6.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wan et al. (MY 157809 A) in view of Zhang et al. (US 20230222733 A1) and further in view of Guo et al. (US 2020/0126232 A1) and Brumby et al. (US 2022/0415022 A1) (hereinafter, Brumby).

Regarding claim 7, Wan in view of Zhang and further in view of Guo teaches the method according to claim 1, wherein the plurality of agricultural categories includes at least one of a class of crops captured in the instance of geolocated agricultural image data or an unplanted portion of the plot captured in the instance of geolocated agricultural image data (Wan, “A total of 6099 and 899 pixels were selected for paddy and farm/road path class during the field study for use in the transformation process.”, pg. 11 of 20, lines 8-11. Pixels are categorized as either paddy or farm/road path). Wan in view of Zhang and further in view of Guo does not teach an agricultural category including a cloud captured in the instance of geolocated agricultural image data.

However, Brumby teaches methods for categorizing pixels of satellite image data (Brumby, see abstract). In particular, Brumby teaches an agricultural category including a cloud captured in the instance of geolocated agricultural image data (Brumby, “The plurality of mapping categories may comprise at least grass, flooded vegetation, crops, bare ground, snow/ice and clouds,”, pg. 2, paragraph 0011, lines 8-10, see Fig. 4A clouds 433). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Wan in view of Zhang and further in view of Guo to include an additional classification category for clouds captured in the image, as taught by Brumby (pg.
2, paragraph 0011, lines 8-10, see Fig. 4A, clouds 433). The motivation for doing so would have been to consider occlusions in the image data due to cloud coverage (as suggested by Brumby, “In some embodiments, the mapping system may account for cloud cover variability and possible gaps in data coverage by utilizing multiple observations of a particular geographic area in generating a map of a plurality of geographic areas”, pg. 4, paragraph 0047, lines 5-9). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Wan in view of Zhang and further in view of Guo with Brumby to obtain the invention as specified in claim 7.

Claim 14 corresponds to claim 7, additionally reciting a non-transitory computer readable medium. Wan in view of Zhang and further in view of Guo and Brumby teaches the addition of a non-transitory computer readable medium (see analysis of claim 8) to perform the method of claim 7. As indicated in the analysis of claim 7, Wan in view of Zhang and further in view of Guo and Brumby teaches the method according to claim 7. Therefore, claim 14 is rejected for the same reason of obviousness as claim 7.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONNOR LEVI HANSEN, whose telephone number is (703) 756-5533. The examiner can normally be reached Monday-Friday, 9:00-5:00 (ET).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CONNOR L HANSEN/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

Nov 01, 2022: Application Filed
Dec 18, 2024: Non-Final Rejection (§103)
Feb 25, 2025: Applicant Interview (Telephonic)
Feb 25, 2025: Examiner Interview Summary
Mar 24, 2025: Response Filed
Apr 16, 2025: Final Rejection (§103)
May 23, 2025: Applicant Interview (Telephonic)
May 23, 2025: Examiner Interview Summary
Jun 24, 2025: Response after Non-Final Action
Jul 23, 2025: Request for Continued Examination
Jul 24, 2025: Response after Non-Final Action
Sep 10, 2025: Non-Final Rejection (§103)
Dec 02, 2025: Examiner Interview Summary
Dec 02, 2025: Applicant Interview (Telephonic)
Dec 15, 2025: Response Filed
Feb 09, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530785: TRACKING DEVICE, TRACKING METHOD, AND RECORDING MEDIUM (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524984: HISTOGRAM OF GRADIENT GENERATION (granted Jan 13, 2026; 2y 5m to grant)
Patent 12518363: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND STORAGE MEDIUM WITH PIECEWISE LINEAR FUNCTION FOR TONE CONVERSION ON IMAGE (granted Jan 06, 2026; 2y 5m to grant)
Patent 12499648: IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM FOR DETECTING SUBJECT IN CAPTURED IMAGE (granted Dec 16, 2025; 2y 5m to grant)
Patent 12482257: REDUCING ENVIRONMENTAL INTERFERENCE FROM IMAGES (granted Nov 25, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 75%
With Interview: 99% (+29.2%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
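The base figure above follows from a simple ratio over the examiner's resolved cases (21 grants of 28 resolved gives the 75% career allow rate). The helper below is a hypothetical illustration of that arithmetic only; the tool's exact formula for the interview-adjusted 99% figure is not disclosed, so it is not reproduced here.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: grants as a share of resolved cases."""
    if resolved <= 0:
        raise ValueError("need at least one resolved case")
    return granted / resolved

# Examiner's career numbers from this page: 21 granted / 28 resolved
base_probability = allow_rate(21, 28)  # 0.75, i.e. the 75% shown above
```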
