Prosecution Insights
Last updated: April 19, 2026
Application No. 18/922,267

Agricultural Vehicles Including an Imaging Controller, and Related Methods

Non-Final OA (§103, §112)

Filed: Oct 21, 2024
Examiner: MCCLEARY, CAITLIN RENEE
Art Unit: 3669
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: AGCO International GmbH
OA Round: 1 (Non-Final)

Grant Probability: 57% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 11m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 57% (54 granted / 95 resolved; +4.8% vs TC avg)
Interview Lift: +32.0% (strong), measured across resolved cases with an interview
Avg Prosecution: 2y 11m (typical timeline)
Currently Pending: 56 applications
Total Applications: 151 (across all art units)
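These headline figures are simple ratios over the examiner's resolved cases: 54 grants out of 95 resolutions rounds to 57%, and the interview lift is the allow-rate gap between cases with and without an interview. A minimal sketch of that arithmetic, assuming hypothetical per-case records (the page's actual data pipeline is not shown):

# Sketch of the headline examiner metrics, assuming hypothetical per-case
# records; the real data source behind this page is not public.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # application issued as a patent
    had_interview: bool  # an examiner interview was held

def allow_rate(cases):
    """Career allow rate = granted / resolved."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate gap between cases with and without an interview.
    (Requires at least one case in each group.)"""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 54 grants out of 95 resolved cases -> 56.8%, displayed as 57%.
cases = [ResolvedCase(granted=(i < 54), had_interview=False) for i in range(95)]
assert round(100 * allow_rate(cases)) == 57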

Statute-Specific Performance

§101: 12.9% (-27.1% vs TC avg)
§103: 43.5% (+3.5% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§112: 27.4% (-12.6% vs TC avg)

TC average figures are estimates. Based on career data from 95 resolved cases.

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are currently pending and have been examined in this application. This communication is the first action on the merits (FAOM).

Examiner's Note

Examiner has cited particular paragraphs/columns and line numbers or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. Furthermore, the Examiner is not limited to Applicant's definition which is not specifically set forth in the disclosure.

Claim Interpretation

Use of the word "means" (or "step for") in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function.

Absence of the word "means" (or "step for") in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material, or acts to perform that function.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
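The three-prong test reads as a conjunctive checklist. Purely as an illustration (invoking 112(f) is a legal determination made on the claim language in light of the specification, not a mechanical computation), the structure of the test could be sketched as:

# Illustration only: the MPEP § 2181 three-prong test as a conjunctive check.
# All three prongs must hold for 112(f) treatment to apply.
def invokes_112f(uses_means_or_nonce_term: bool,
                 coupled_with_functional_language: bool,
                 recites_sufficient_structure: bool) -> bool:
    return (uses_means_or_nonce_term
            and coupled_with_functional_language
            and not recites_sufficient_structure)

# E.g., "navigation controller ... configured to ...": generic placeholder,
# functional language, and no structure recited in the claim itself.
assert invokes_112f(True, True, False) is True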
Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "propulsion system" in claims 1-10, "steering system" in claims 1-10, and "navigation controller" in claims 1-10 and 18-20.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

The above-referenced claim limitations have been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because "propulsion system" in claims 1-10, "steering system" in claims 1-10, and "navigation controller" in claims 1-10 and 18-20 all use a generic placeholder ("system" or "controller") coupled with functional language without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier. Since these claim limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, the claims have been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.

A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, limitations:

Propulsion system: [0042] - The vehicle 100 includes an operator cabin 106 from which an operator of the vehicle 100 may control the vehicle 100, and an engine compartment 108 housing an engine or other propulsion system for providing a motive force for moving the vehicle 100. In some embodiments, the propulsion system includes motors operably coupled to wheels of the vehicle 100.

Steering system: [0042] - The vehicle 100 includes a steering system (e.g., a steering wheel and associated steering column, universal joint, and rack-and-pinion) configured for facilitating steering and navigation of the vehicle 100.
Navigation controller: [0117-0118] - The machine-executable code may be configured to adapt the at least one processor 506 to cause the vehicle 100 to perform at least one navigation operation. For all units corresponding to a computer (hardware), the software (steps in an algorithm/flowchart) should be included to indicate proper support.

If applicant wishes to provide further explanation or dispute the examiner's interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters, in response to this Office action. If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim(s) recite(s) sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6 and 9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 6 recites "the imaging controller," and there is insufficient antecedent basis for this limitation in the claim. As best understood, the claim will be interpreted to read --the computing device--.

Claim 9 recites "the computing device controller," and there is insufficient antecedent basis for this limitation in the claim. As best understood, the claim will be interpreted to read --the computing device--.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 7-11, 13, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Theverapperuma (US 2022/0024485 A1) in view of Bharathwaj (US 2024/0282105 A1).

Regarding claim 1, Theverapperuma discloses an agricultural vehicle (see at least [0036] – autonomous vehicle 120 can be a dump truck, tractor, bulldozer, excavator, forklift), comprising: a propulsion system configured to move the agricultural vehicle (see at least [0038] – propulsion system); a steering system configured to orient the agricultural vehicle (see at least [0038] – steering system); a navigation controller operably coupled to the propulsion system and the steering system (see at least [0035, 0038, 0042] - autonomous vehicle management system 122 (also referred to as a controller system) is configured to process data describing the state of autonomous vehicle 120 and the state of the autonomous vehicle's environment, and based upon the processing, control one or more autonomous functions or operations of autonomous vehicle 120… autonomous vehicle management system 122 may issue instructions/commands to vehicle systems 112 to programmatically and autonomously control various aspects of the autonomous vehicle's motion such as the propulsion, braking, steering or navigation); a camera operably coupled to the agricultural vehicle (see at least [0007, 0039], claims 1 and 14 - camera); a radar operably coupled to the agricultural vehicle (see at least [0007, 0039], claims 1, 14, and 20 – LIDAR sensor or radar sensor); and a computing device operably coupled to the radar and the camera (see at least [0007, 0039], claims 1, 14, and 20 - sensor data can then be fed to autonomous vehicle management system 122), the computing device comprising: at least one processor; and at least one non-transitory computer-readable storage medium having instructions thereon that, when executed by the at least one processor, cause the computing device to (see at least [0007, 0043], claims 1 and 14 - autonomous vehicle management system 122… software may be stored on a non-transitory computer readable medium (e.g., on a memory device) and may be executed by one or more processors (e.g., by computer systems) to perform its functions): receive image data from the camera (see at least [0007, 0039], claims 1 and 14 – camera… at least one camera image of a physical environment… sensor data can then be fed to autonomous vehicle management system 122); receive radar data from the radar and generate a radar point cloud based on the radar data (see at least [0007, 0039], claims 1, 14, and 20 – LIDAR sensor or radar sensor… 3D representation of the physical environment as a point cloud… sensor data can then be fed to autonomous vehicle management system 122); identify one or more objects in the image data using a neural network trained using a dataset of objects to generate labeled image data (see at least [0007, 0039, 0059, 0134], claims 1 and 14 – neural network trained to identify objects in the image data… used to identify and classify objects in the environment of the autonomous vehicle 120); fuse the labeled image data with the radar point cloud to obtain fused data (see at least [0007, 0083, 0112, 0148], claims 1 and 14 – generating an output representation… combine data from a camera with data from other sensors (e.g., LIDAR point clouds)); and perform one or more navigation operations based on the fused data (see at least claims 1 and 14, [0007] – based on the output representation, determining the plan of action involving autonomously navigating and executing the plan of action).

Theverapperuma does not appear to explicitly disclose a neural network trained using a dataset of agricultural objects. Bharathwaj, in the same field of endeavor, teaches the following limitation: a neural network trained using a dataset of agricultural objects (see at least [0139]).

Since Theverapperuma teaches that the vehicle can be a tractor (Theverapperuma – [0036, 0038]), it would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Bharathwaj into the invention of Theverapperuma, with a reasonable expectation of success, for the purpose of tailoring the instance segmentation task to a particular agricultural item of interest to improve the ability to identify particular objects relating to agriculture (Bharathwaj – [0139, 0147]).

Regarding claim 3, Theverapperuma discloses wherein the computing device comprises instructions thereon that, when executed by the at least one processor, cause the computing device to perform a segmentation operation on the image data to generate the labeled image data (see at least [0007, 0039, 0059, 0084, 0134], claims 1 and 14 – segmenting a 2D image captured by a camera… segmentation is typically performed concurrently with classification (determining the class of each segment)). Theverapperuma does not appear to explicitly disclose an instance segmentation operation. Bharathwaj, in the same field of endeavor, teaches this limitation: an instance segmentation operation (see at least [0137-0139]). The motivation to combine Theverapperuma and Bharathwaj is the same as in the rejection of claim 1 above.

Regarding claim 4, Theverapperuma discloses wherein the computing device comprises instructions thereon that, when executed by the at least one processor, cause the computing device to perform the one or more navigation operations comprising at least one of reducing a speed and changing a direction of travel of the agricultural vehicle based on the fused data (see at least claims 1 and 14, [0007, 0042, 0044] – based on the output representation, determining the plan of action involving autonomously navigating and executing the plan of action… make decisions regarding actions (e.g., navigation, braking, acceleration)).

Regarding claim 7, Theverapperuma discloses wherein a field of view of the radar overlaps a field of view of the camera (see at least [0007, 0041, 0080, 0108], claims 1 and 14).

Regarding claim 8, Theverapperuma discloses wherein the computing device comprises instructions thereon that, when executed by the at least one processor, cause the computing device to synchronize the radar data with the image data (see at least [0007, 0080], claims 1 and 14).
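For orientation, the pipeline the rejection maps onto claim 1 (label objects in the camera image, build a radar point cloud, fuse the two, then navigate) can be sketched schematically. Everything below is a hypothetical placeholder, not AGCO's or Theverapperuma's implementation; the association step assumes the radar points have already been projected into pixel coordinates (see the projection sketch after claims 9-10 below):

# Hypothetical sketch of the claim-1 perception loop: labeled 2D detections
# from a camera-side neural network are fused with radar points by attaching
# each projected point to the detection box that contains it.
import numpy as np

def fuse(detections, cloud_px):
    """detections: [{"label": ..., "box": (x1, y1, x2, y2)}, ...]
    cloud_px: (N, 2) radar points already projected to pixel coordinates."""
    fused = []
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        inside = ((cloud_px[:, 0] >= x1) & (cloud_px[:, 0] <= x2) &
                  (cloud_px[:, 1] >= y1) & (cloud_px[:, 1] <= y2))
        fused.append({**det, "points": cloud_px[inside]})
    return fused

detections = [{"label": "bale", "box": (100, 80, 220, 200)}]  # from the network
cloud_px = np.array([[150.0, 120.0], [400.0, 90.0]])          # projected radar
result = fuse(detections, cloud_px)
assert len(result[0]["points"]) == 1  # only the first point falls in the box
# A navigation controller would then slow or steer based on `result`.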
Regarding claim 9, Theverapperuma discloses wherein the computing device controller comprises instructions thereon that, when executed by the at least one processor, cause the computing device to project the radar point cloud onto the classified image data to form the fused data (see at least [0007, 0083, 0100-0101, 0108], claims 1, 14, and 20 - combine data from a camera with data from another sensor, for example, to merge 2D camera images with 3D data from other sensors (e.g., LIDAR point clouds)… LIDAR-centric fusion).

Regarding claim 10, Theverapperuma discloses wherein the computing device comprises instructions thereon that, when executed by the at least one processor, cause the computing device to project the classified image data onto the radar point cloud to form the fused data (see at least [0007, 0083, 0100-0101, 0108], claims 1, 14, and 20 - combine data from a camera with data from another sensor, for example, to merge 2D camera images with 3D data from other sensors (e.g., LIDAR point clouds)… camera-centric fusion).

Regarding claim 11, all the limitations have been analyzed in view of claim 1, and it has been determined that claim 11 does not recite or define any new limitations beyond those previously recited in claim 1; therefore, claim 11 is also rejected under the same rationale as claim 1.

Regarding claim 13, Theverapperuma discloses wherein controlling one or more operations of the agricultural vehicle based on the fused data comprises performing an object avoidance operation based on the fused data (see at least [0034]).

Regarding claim 16, Theverapperuma does not appear to explicitly disclose further comprising: receiving a user input of an agricultural operation prior to receiving the image data; and labeling objects in the image data based on a predetermined set of agricultural objects selected based on the agricultural operation. However, Theverapperuma does disclose receiving a user input of an operation prior to receiving the image data (see at least [0035, 0038, 0044, 0061] - the autonomous operation may be the ability of the vehicle 120 to autonomously sense its environment and navigate or drive along a path autonomously and substantially free of any human user or manual input. Examples of other autonomous operations include, without limitation, scooping and dumping operations, moving materials or objects (e.g., moving dirt or sand from one area to another), lifting materials, driving, rolling, spreading dirt, excavating, transporting materials or objects from one point to another point, and the like… a goal to be set by an operator). Bharathwaj, in the same field of endeavor, teaches the following limitation: labeling objects in the image data based on a predetermined set of agricultural objects selected based on the agricultural operation (see at least [0139]). The motivation to combine Theverapperuma and Bharathwaj is the same as in the rejection of claim 1 above.

Regarding claim 18, all the limitations have been analyzed in view of claim 1, and it has been determined that claim 18 does not recite or define any new limitations beyond those previously recited in claim 1; therefore, claim 18 is also rejected under the same rationale as claim 1.
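Claims 9 and 10 recite projecting in either direction: the radar point cloud onto the classified image data, or the classified image data onto the point cloud. A minimal pinhole-camera sketch of the radar-to-image direction, with made-up intrinsics and an assumed identity radar-to-camera transform:

# Minimal pinhole projection of radar-frame 3D points into pixel coordinates.
# Intrinsics and extrinsics here are illustrative values, not calibrated ones.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # fx,  0, cx (hypothetical intrinsics)
              [  0.0, 800.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                 # radar-to-camera rotation (assumed aligned)
t = np.zeros(3)               # radar-to-camera translation (assumed co-located)

def project(points_radar):
    """(N, 3) radar-frame points -> (N, 2) pixel coordinates."""
    cam = points_radar @ R.T + t      # transform into the camera frame
    uvw = cam @ K.T                   # apply camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

print(project(np.array([[1.0, 0.5, 10.0]])))  # one point 10 m ahead -> [[400. 280.]]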
Regarding claim 20, Theverapperuma discloses wherein the imaging controller comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to perform the object detection operation using a neural network trained with a dataset (see at least [0035-0036, 0038, 0044, 0059, 0061]). Theverapperuma does not appear to explicitly disclose a neural network trained with an agricultural dataset. Bharathwaj, in the same field of endeavor, teaches this limitation: a neural network trained with an agricultural dataset (see at least [0139]). The motivation to combine Theverapperuma and Bharathwaj is the same as in the rejection of claim 1 above.

Claims 2, 14-15, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Theverapperuma in view of Bharathwaj and Arafat (CN 115731524 A; a machine translation is attached and is being relied upon).

Regarding claim 2, Theverapperuma discloses wherein the computing device comprises instructions thereon that, when executed by the at least one processor, cause the computing device to perform a segmentation to identify instances of objects (see at least [0007, 0034, 0039, 0059, 0084, 0134], claims 1 and 14 – segmenting a 2D image captured by a camera). Theverapperuma does not appear to explicitly disclose wherein the computing device comprises instructions thereon that, when executed by the at least one processor, cause the computing device to perform an instance segmentation operation on the fused data to identify instances of agricultural objects in the fused data.

Bharathwaj, in the same field of endeavor, teaches the following limitation: wherein the computing device comprises instructions thereon that, when executed by the at least one processor, cause the computing device to perform an instance segmentation operation on the data to identify instances of agricultural objects in the data (see at least [0137-0139]). The motivation to combine Theverapperuma and Bharathwaj is the same as in the rejection of claim 1 above.

Arafat, in the same field of endeavor, teaches the following limitation: perform an instance segmentation operation on the fused data to identify instances of objects in the fused data (see at least [0033]). Since Theverapperuma teaches that semantic segmentation can be performed on the image data and the LIDAR or radar data (Theverapperuma – [0084]), it would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Arafat into the invention of Theverapperuma, with a reasonable expectation of success, for the purpose of improving detection of objects by segmenting the fused data (Arafat – [0007, 0033]). Instance segmentation is advantageous over semantic segmentation because it is more detailed: it identifies and separates each individual object. Furthermore, instance segmentation is known to be performed on image data, LIDAR/radar data, or both, and these modifications could therefore be carried out to yield predictable results. Doing so would provide a more detailed and accurate depiction of the environment for safer control of the agricultural vehicle.

Regarding claim 14, Theverapperuma does not appear to explicitly disclose performing an image segmentation operation on the fused data to identify instances of agricultural objects in the fused data. Bharathwaj, in the same field of endeavor, teaches the following limitation: performing an image segmentation operation on the data to identify instances of agricultural objects in the data (see at least [0137-0139]). The motivation to combine Theverapperuma and Bharathwaj is the same as in the rejection of claim 1 above. Arafat, in the same field of endeavor, teaches the following limitation: performing an image segmentation operation on the fused data to identify instances of objects in the fused data (see at least [0033]). The motivation to combine Theverapperuma and Arafat is the same as in the rejection of claim 2 above.

Regarding claim 15, Theverapperuma does not appear to explicitly disclose wherein performing an image segmentation operation on the fused data comprises performing the image segmentation operation using a neural network trained with a dataset comprising image data and radar data of agricultural objects. Bharathwaj, in the same field of endeavor, teaches the following limitation: performing the image segmentation operation using a neural network trained with a dataset comprising image data and radar data of agricultural objects (see at least [0137-0139]). The motivation to combine Theverapperuma and Bharathwaj is the same as in the rejection of claim 1 above. Arafat, in the same field of endeavor, teaches the following limitation: performing the image segmentation operation on the fused data using a neural network trained with a dataset comprising image data and radar data of objects (see at least [0033]). The motivation to combine Theverapperuma and Arafat is the same as in the rejection of claim 2 above.

Regarding claim 17, Theverapperuma discloses wherein fusing the labeled image data with the radar data to form fused data comprises projecting the labeled image data onto a 3D point cloud (see at least [0007, 0083, 0100-0101, 0108], claims 1, 14, and 20 - combine data from a camera with data from another sensor, for example, to merge 2D camera images with 3D data from other sensors (e.g., LIDAR point clouds)… LIDAR-centric fusion). Theverapperuma does not appear to explicitly disclose the method further comprising: performing an image segmentation operation on the fused data. Arafat, in the same field of endeavor, teaches this limitation: performing an image segmentation operation on the fused data (see at least [0033]). The motivation to combine Theverapperuma and Arafat is the same as in the rejection of claim 2 above.

Regarding claim 19, all the limitations have been analyzed in view of claim 2, and it has been determined that claim 19 does not recite or define any new limitations beyond those previously recited in claim 2; therefore, claim 19 is also rejected under the same rationale as claim 2.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Theverapperuma in view of Bharathwaj and Carvalho (BR 10 2021 013 431 A2; a machine translation is attached and is being relied upon).

Regarding claim 5, Theverapperuma does not appear to explicitly disclose wherein the camera is configured to receive image data comprising RGB data, SWIR data, and NIR data. Carvalho, in the same field of endeavor, teaches this limitation: wherein the camera is configured to receive image data comprising RGB data, SWIR data, and NIR data (see at least [2200]).
Since Theverapperuma teaches that the vehicle can be a tractor (Theverapperuma – [0036, 0038]), it would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Carvalho into the invention of Theverapperuma, with a reasonable expectation of success, for the purpose of better evaluating the state of vegetables and organisms while also providing simplicity of use and installation (Carvalho – [100, 5500]). Selecting the camera to be a known specific type of camera capable of being used in a vehicle environment requires only routine skill in the art, and doing so would yield predictable results. Furthermore, this would only increase the abilities of the camera in tractor applications, since the camera could be used not only in the visible spectrum for object and terrain detection and classification, but also in the NIR and SWIR spectra to monitor plants.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Theverapperuma in view of Bharathwaj and Kanthan (US 2020/0387713 A1).

Regarding claim 6, Theverapperuma does not appear to explicitly disclose wherein the computing system comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to select the neural network from a set of neural networks based on an operation to be performed with the agricultural vehicle. However, Theverapperuma does disclose an operation to be performed with the agricultural vehicle (see at least [0035, 0038, 0044, 0061] - the autonomous operation may be the ability of the vehicle 120 to autonomously sense its environment and navigate or drive along a path autonomously and substantially free of any human user or manual input. Examples of other autonomous operations include, without limitation, scooping and dumping operations, moving materials or objects (e.g., moving dirt or sand from one area to another), lifting materials, driving, rolling, spreading dirt, excavating, transporting materials or objects from one point to another point, and the like… a goal to be set by an operator).

Kanthan, in the same field of endeavor, teaches the following limitation: wherein the computing system comprises instructions thereon that, when executed by the at least one processor, cause the imaging controller to select the neural network from a set of neural networks based on an operation to be performed (see at least [0049, 0071, 0101, 0108]). It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Kanthan into the invention of Theverapperuma, with a reasonable expectation of success, for the purpose of improving processing performance and accuracy of detection by selecting neural networks for subgroups of attributes of interest (Kanthan – [0049]). For example, if Theverapperuma's vehicle were a watercraft, it would be more efficient and accurate to utilize neural networks for detecting objects expected to be encountered on the water (e.g., buoys, docks, swimmers) while not utilizing neural networks that detect objects that would not be encountered on the water (e.g., bicycles). The same reasoning applies to buses, tractors, dump trucks, cars, etc.
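The claim-6 idea, per the Kanthan combination, amounts to a registry lookup: pick the network whose training set matches the operation at hand. A hypothetical sketch (the registry keys and model names are invented for illustration):

# Hypothetical sketch of selecting a neural network from a set based on the
# operation to be performed (the claim-6 idea, per the Kanthan combination).
def harvest_net(image):  # stand-in for a network trained on harvest objects
    return [{"label": "crop_row"}]

def baling_net(image):   # stand-in for a network trained on bales, windrows
    return [{"label": "bale"}]

MODEL_REGISTRY = {
    "harvesting": harvest_net,
    "baling": baling_net,
}

def select_network(operation):
    """Return the detection network matched to the requested operation."""
    return MODEL_REGISTRY[operation]

assert select_network("baling")(None)[0]["label"] == "bale"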
Regarding claim 12, Theverapperuma discloses wherein controlling one or more operations of the agricultural vehicle based on the fused data comprises: providing the fused data to a remote location (see at least [0043] – autonomous vehicle management system 122 can be remote from the autonomous vehicle 120); and generating a map of an area traversed by the agricultural vehicle (see at least [0035, 0044, 0058-0059, 0061] – generate and/or update the map of the real-time environment of the autonomous vehicle…). Theverapperuma does not appear to explicitly disclose wherein the area is a field. Madsen, in the same field of endeavor, teaches this limitation: wherein the area is a field (see at least [0016, 0046, 0048]).

It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Madsen into the invention of Theverapperuma, with a reasonable expectation of success, for the purpose of generating 3D terrain maps in agricultural/farming applications for different uses, such as determining a height of the vegetation on the terrain above the ground surface to help determine whether a crop is ready for harvesting, identifying brush that may need to be cleared from a field before planting, assessing the health of crops, or identifying moisture levels of the terrain to help identify safe paths for the vehicle to drive over to avoid equipment sinking or damaging the field, or the like (Madsen – [0046, 0048]).

Conclusion

The prior art made of record and not relied upon, considered pertinent to applicant's disclosure or directed to the state of the art, is listed on the enclosed PTO-892. The following is a brief description of relevant prior art that was cited but not applied:

Das (US 12,313,727 B1) is directed to techniques for combining data using transformer-based machine learning models. In some examples, a first transformer is used to combine a first dataset with a second dataset. The results are then combined with a third dataset, using a second transformer. Each dataset can represent data from a different sensor modality. The transformers compute scores based on queries and apply the scores to values. The first dataset can be used to generate queries for the first transformer, and the values for the first transformer can be derived from the second dataset. Similarly, the third dataset can be used to generate queries for the second transformer, and the values for the second transformer can be derived from the output of the first transformer. The output of the second transformer is therefore a combination of all three datasets and can be used for object detection, for example, determining three-dimensional boundaries of objects.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAITLIN MCCLEARY, whose telephone number is (703) 756-1674. The examiner can normally be reached Monday - Friday, 10:00 am - 7:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Navid Z. Mehdizadeh, can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
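The two-stage fusion described for Das is cross-attention chained twice: the first dataset supplies the queries against the second, and that output supplies the queries against the third. A single-head numpy sketch of that structure, omitting the learned query/key/value projections of a real transformer; the modality names and dimensions are illustrative:

# Single-head cross-attention, simplified: queries from one dataset attend
# over another used directly as keys/values (learned projections omitted).
import numpy as np

def cross_attention(q_src, kv_src):
    d = q_src.shape[-1]
    scores = q_src @ kv_src.T / np.sqrt(d)            # (Nq, Nkv) match scores
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax over kv axis
    return w @ kv_src                                 # (Nq, d) fused features

rng = np.random.default_rng(0)
camera, lidar, radar = (rng.normal(size=(n, 16)) for n in (8, 32, 12))

stage1 = cross_attention(camera, lidar)   # camera queries, lidar keys/values
stage2 = cross_attention(stage1, radar)   # stage-1 output queries the radar
print(stage2.shape)                       # (8, 16): all three modalities fused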
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/C.R.M./
Examiner, Art Unit 3669

/NAVID Z. MEHDIZADEH/
Supervisory Patent Examiner, Art Unit 3669

Prosecution Timeline

Oct 21, 2024
Application Filed
Jan 14, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589771: VEHICLE CONTROL DEVICE, STORAGE MEDIUM FOR STORING COMPUTER PROGRAM FOR VEHICLE CONTROL, AND METHOD FOR CONTROLLING VEHICLE
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12583670: LIFT ARM ASSEMBLY FOR A FRONT END LOADING REFUSE VEHICLE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12552379: STAGGERING DETERMINATION DEVICE, STAGGERING DETERMINATION METHOD, AND STORAGE MEDIUM
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12539840: SYSTEM AND METHOD FOR PROBING PROPERTIES OF A TRAILER TOWED BY A TOWING VEHICLE IN A HEAVY-DUTY VEHICLE COMBINATION
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12509934: Sensor Device
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 57%
With Interview: 89% (+32.0%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 95 resolved cases by this examiner. Grant probability is derived from the career allow rate.
