Prosecution Insights
Last updated: April 18, 2026
Application No. 18/746,108

OBSTACLE DETECTION SYSTEM, AGRICULTURAL MACHINE AND OBSTACLE DETECTION METHOD

Final Rejection — §101, §102, §103, §112

Filed: Jun 18, 2024
Examiner: HILGENDORF, DALE W
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kubota Corporation
OA Round: 2 (Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85%, above average (691 granted / 816 resolved; +32.7% vs TC average)
Interview Lift: +21.2% higher allowance among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 5m average prosecution; 31 applications currently pending
Career History: 847 total applications across all art units
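The headline numbers are simple ratios over resolved cases. Below is a minimal sketch of the arithmetic in Python; only the 691/816 totals come from the page, while the with/without-interview split is an illustrative assumption (chosen so the split sums to the career totals and lands near the reported lift):

```python
# Reproduce the examiner-intelligence headline stats from resolved-case counts.
# Only granted/resolved (691/816) come from the page; the interview split
# below is assumed for illustration (it sums to the career totals).
granted, resolved = 691, 816

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # ~84.7%, displayed as 85%

# Interview lift = allowance among interviewed cases minus allowance
# among non-interviewed cases (assumed counts).
int_granted, int_resolved = 312, 320            # assumed
noint_granted, noint_resolved = 379, 496        # assumed

lift = int_granted / int_resolved - noint_granted / noint_resolved
print(f"Interview lift: {lift:+.1%}")           # ~+21.1%, page reports +21.2%
```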

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 816 resolved cases
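Each "vs TC avg" delta is the examiner's per-statute rate minus a Tech Center baseline. Back-solving all four published deltas yields the same 40.0% baseline, so a single assumed constant reproduces the column:

```python
# Sketch of the "vs TC avg" column: delta = examiner rate - TC baseline.
# The 40.0% baseline is inferred by back-solving the published deltas,
# not stated on the page.
examiner_rate = {"101": 9.7, "102": 13.6, "103": 38.5, "112": 29.4}
TC_AVG_ESTIMATE = 40.0  # assumed; identical for all four statutes

for statute, rate in examiner_rate.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```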

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-14 have been examined.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference characters "72" and "74" have both been used to designate the headlands in Figure 13. The reference character 72 should be moved down into the field (see Figure 7). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are also objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character not mentioned in the description: in Figure 15, reference character S160 is not in the specification. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or an amendment to the specification to add the reference character to the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Objections

Claim 4 is objected to because of the following informalities: in line 6, the abbreviation/acronym RGB should include the full wording of the abbreviation/acronym to clearly identify its meaning. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6, 12 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 6 recites “a trigger” in line 4, while “a trigger” is also recited in line 8 of claim 1. Based on an analysis of the claim language, the examiner interprets the claim 6 “trigger” as a different “trigger” than in claim 1. The examiner suggests changing the trigger of claim 6 to “a second trigger,” “another trigger,” “a different trigger,” etc., to clearly distinguish it from the claim 1 trigger.

Claim 12 recites “an implement” in line 4, while “an implement” is also recited in lines 2 and 3 of claim 12. It is unclear whether this is a new implement or the same implement. The examiner assumes it is the same implement for continued examination.

Claim 13 recites “An agricultural machine” in line 1, while “an agricultural machine” is also recited in lines 1 and 2 of claim 1. It is unclear whether this is a new agricultural machine or the same agricultural machine. The examiner assumes it is the same agricultural machine for continued examination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 2, 4, 5, 8 and 12-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Subject Matter Eligibility Criteria - Step 1:

Claim 1 is directed to a system (i.e., a machine). Accordingly, claim 1 is within at least one of the four statutory categories. Claim 14 is directed to a method (i.e., a process). Accordingly, claim 14 is within at least one of the four statutory categories.

Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2A - Prong One:

Regarding Prong One of Step 2A of the Alice/Mayo test (which collectively includes the guidance in the January 7, 2019 Federal Register notice and the October 2019 update issued by the USPTO, as now incorporated into the MPEP and supported by relevant case law), the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they “recite” a judicial exception, or in other words whether a judicial exception is “set forth” or “described” in the claims. MPEP 2106.04(II)(A)(1). An “abstract idea” judicial exception is subject matter that falls within at least one of the following groupings: a) certain methods of organizing human activity, b) mental processes, and/or c) mathematical concepts. MPEP 2106.04(a).

Independent claim 1 includes limitations that recite at least one abstract idea. Specifically, independent claim 1 recites:

An obstacle detection system for an agricultural machine to perform self-driving while sensing a surrounding environment with a LiDAR sensor and a camera, the obstacle detection system comprising: a controller configured or programmed to: cause the camera, upon detecting an obstacle candidate based on data that is output from the LiDAR sensor, as a trigger, to acquire an image of the obstacle candidate; and determine whether or not to change a traveling status of the agricultural machine based on the image of the obstacle candidate acquired with the camera.
The above underlined limitation constitutes “a mental process” because it is an observation/evaluation/judgment/analysis that can, at the currently claimed high level of generality, be practically performed in the human mind (e.g., with pen and paper). For instance, a person could evaluate the gathered data from a camera and decide whether or not to adjust the travel of the vehicle. There is no actual claimed control of the vehicle travel in claim 1. Accordingly, the claim recites at least one abstract idea.

Independent claim 14 recites:

An obstacle detection method for an agricultural machine to perform self-driving while sensing a surrounding environment with a LiDAR sensor and a camera, the obstacle detection method comprising: detecting an obstacle candidate based on data that is output from the LiDAR sensor; acquiring an image of the obstacle candidate with the camera upon detecting the obstacle candidate as a trigger; and determining whether or not to change a traveling status of the agricultural machine based on the image of the obstacle candidate acquired with the camera.

Therefore, claim 14 also recites at least one abstract idea.

Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2A - Prong Two:

Regarding Prong Two of Step 2A of the Alice/Mayo test, it must be determined whether the claim as a whole integrates the abstract idea into a practical application. As noted at MPEP §2106.04(II)(A)(2), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” MPEP §2106.05(I)(A).

In the present case, the additional limitations beyond the above-noted at least one abstract idea recited in the claim are as follows (where the bolded portions are the “additional limitations” while the underlined portions continue to represent the at least one “abstract idea”). Claim 1 recites:

An obstacle detection system for an agricultural machine to perform self-driving while sensing a surrounding environment with a LiDAR sensor and a camera (extra-solution activity (data gathering) as noted below, see MPEP § 2106.05(g)), the obstacle detection system comprising: a controller configured or programmed (using computers or machinery as mere tools to perform the abstract idea as noted below, see MPEP § 2106.05(f)) to: cause the camera, upon detecting an obstacle candidate based on data that is output from the LiDAR sensor, as a trigger, to acquire an image of the obstacle candidate (extra-solution activity (data gathering) as noted below, see MPEP § 2106.05(g)); and determine whether or not to change a traveling status of the agricultural machine based on the image of the obstacle candidate acquired with the camera.

For the following reasons, the above-identified additional limitations, when considered as a whole with the limitations reciting the at least one abstract idea, do not integrate the above-noted at least one abstract idea into a practical application.
Regarding the additional limitation of a controller configured or programmed, this limitation amounts to merely using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).

Regarding the additional limitations of sensing a surrounding environment with a LiDAR sensor and a camera, and causing the camera to acquire an image of the obstacle candidate, these additional limitations merely add insignificant extra-solution activity (data gathering) to the at least one abstract idea in a manner that does not meaningfully limit the at least one abstract idea (see MPEP § 2106.05(g)).

Thus, taken alone, the additional elements do not integrate the at least one abstract idea into a practical application. Looking at the additional limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. MPEP §2106.05(I)(A) and §2106.04(II)(A)(2). For these reasons, claim 1 does not recite additional elements that integrate the judicial exception into a practical application. Accordingly, claim 1 is directed to at least one abstract idea.

Similarly, claim 14 recites:

An obstacle detection method for an agricultural machine to perform self-driving while sensing a surrounding environment with a LiDAR sensor and a camera (extra-solution activity (data gathering) as noted below, see MPEP § 2106.05(g)), the obstacle detection method comprising: detecting an obstacle candidate based on data that is output from the LiDAR sensor (extra-solution activity (data gathering) as noted below, see MPEP § 2106.05(g)); acquiring an image of the obstacle candidate with the camera upon detecting the obstacle candidate as a trigger (extra-solution activity (data gathering) as noted below, see MPEP § 2106.05(g)); and determining whether or not to change a traveling status of the agricultural machine based on the image of the obstacle candidate acquired with the camera.

Regarding the additional limitations of sensing a surrounding environment with a LiDAR sensor and a camera; detecting an obstacle candidate based on data that is output from the LiDAR sensor; and acquiring an image of the obstacle candidate with the camera, these additional limitations merely add insignificant extra-solution activity (data gathering) to the at least one abstract idea in a manner that does not meaningfully limit the at least one abstract idea (see MPEP § 2106.05(g)). Thus, taken alone, the additional elements do not integrate the at least one abstract idea into a practical application. Looking at the additional limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. MPEP §2106.05(I)(A) and §2106.04(II)(A)(2). For these reasons, claim 14 does not recite additional elements that integrate the judicial exception into a practical application. Accordingly, claim 14 is directed to at least one abstract idea.

Subject Matter Eligibility Criteria - Alice/Mayo Test: Step 2B:

Regarding Step 2B of the Alice/Mayo test, claims 1 and 14 do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons the same as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
Regarding claim 1, the additional limitation of a controller amounts to merely using a computer or other machinery as tools performing their typical functionality in conjunction with performing the above-noted at least one abstract idea (see MPEP § 2106.05(f)).

Regarding the additional limitations (claims 1 and 14) of sensing a surrounding environment with a LiDAR sensor and a camera; causing the camera to acquire an image of the obstacle candidate; detecting an obstacle candidate based on data that is output from the LiDAR sensor; and acquiring an image of the obstacle candidate with the camera, these additional limitations have been reevaluated, and it has been determined that such limitations are not unconventional, as they merely consist of data gathering recited at a high level of generality. See OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). Further, adding a preliminary step of gathering data to a process that only recites determining whether or not to change the traveling of a vehicle (a mental process) does not add a meaningful limitation to the process of obstacle detection. See MPEP 2106.05(d)(II) and 2106.05(g).

The dependent claims 2, 4, 5, 8, 12 and 13 do not provide additional elements or a practical application to become eligible under 35 U.S.C. 101.

Dependent claim 2 is directed to: detect an object within the surrounding environment based on data that is output from the LiDAR sensor (extra-solution activity (data gathering), see MPEP § 2106.05(g)); and when the detected object is the obstacle candidate, cause the camera to acquire the image of the obstacle candidate (extra-solution activity (data gathering), see MPEP § 2106.05(g)) and determine whether or not to change the traveling status of the agricultural machine based on the image.

Dependent claim 4 is directed to: when the obstacle candidate is detected, cause the camera to acquire a color image of the obstacle candidate (extra-solution activity (data gathering), see MPEP § 2106.05(g)); and identify, based on RGB values of the color image of the obstacle candidate, a type of the object that is the obstacle candidate. A person can see red, green and blue to identify an object.

Dependent claim 5 is directed to: transmit the image of the obstacle candidate acquired with the camera to an external device (extra-solution activity (data outputting), see MPEP § 2106.05(g)); and based on a signal received from the external device indicating whether or not to change the traveling status of the agricultural machine, determine whether or not to change the traveling status of the agricultural machine.

Dependent claim 8 is directed to: detect an object that is higher than a predetermined height as the obstacle candidate based on data that is output from the LiDAR sensor while the agricultural machine is traveling in a field or on an agricultural road (extra-solution activity (data gathering), see MPEP § 2106.05(g)).
Dependent claim 12 is directed to: the agricultural machine is a tractor to which an implement is attachable (the machine is merely an object on which the method operates, see MPEP § 2106.05(b)); and when the tractor has an implement attached thereto, the controller (using computers or machinery as mere tools to perform the abstract idea, see MPEP § 2106.05(f)) is configured or programmed to set, based on a type of the implement, a region to be scanned by the LiDAR sensor while the agricultural machine is traveling (extra-solution activity (data outputting), see MPEP § 2106.05(g)).

Dependent claim 13 is directed to: an agricultural machine comprising the obstacle detection system, the LiDAR sensor, and the camera (the machine is merely an object on which the method operates, see MPEP § 2106.05(b)).

Claims 3, 6, 7, and 9-11 provide a practical application of the abstract idea (stopping vehicle travel, decelerating the vehicle, changing the magnification to detect an obstacle, changing the camera orientation, changing the illumination) and are not subject to the above § 101 rejection.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 6, 7, 10, 13 and 14 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Zhu et al, Patent Number 9,234,618 B1.

Regarding claim 1, Zhu et al disclose the claimed obstacle detection system, “The computer vision system 140 can process and analyze images captured by camera 130 to identify objects and/or features in the environment surrounding vehicle 100.
The detected features/objects can include traffic signals, roadway boundaries, other vehicles, pedestrians, and/or obstacles, etc.” (column 9, lines 4-8), for the claimed agricultural machine, “an example system may also be implemented in or take the form of other vehicles, such as cars, trucks, motorcycles, buses, boats, airplanes, helicopters, lawn mowers, earth movers, boats, snowmobiles, aircraft, recreational vehicles, amusement park vehicles, farm equipment, construction equipment, trams, golf carts, trains, and trolleys” (column 6, lines 39-44), to perform the claimed self-driving, “Example embodiments relate to an autonomous vehicle, such as a driverless automobile” (column 4, lines 12 and 13), while the claimed sensing a surrounding environment with a LIDAR sensor and a camera, “that includes a light detection and ranging (LIDAR) sensor for actively detecting reflective features in the environment surrounding the vehicle” (column 4, lines 13-15), “the laser rangefinder or LIDAR unit 128 can be any sensor configured to sense objects in the environment in which the vehicle 100 is located” (column 8, lines 14-16), and “The camera 130 can include one or more devices configured to capture a plurality of images of the environment surrounding the vehicle 100.” (column 7, lines 22-24), the obstacle detection system comprising: the claimed controller, “a computer system 112 can control the vehicle 100 while in an autonomous mode via control instructions to a control system 106 for the vehicle 100” (column 6, lines 50-52), configured or programmed to: the claimed cause the camera to acquire an image of an obstacle candidate upon detecting the obstacle candidate based on data output from the LIDAR sensor, “FIG. 7 is a flowchart 700 of a process for navigating an autonomous vehicle according to real time environmental feedback information from both the LIDAR device 302 and the hyperspectral sensor 620. The LIDAR device 302 is scanned through a scanning zone surrounding the vehicle (702). Information from reflected light signals provided by the LIDAR device 302 and/or associated optical detectors is analyzed to generate a 3-D point cloud of positions of reflective features in the scanning zone (704). The 3-D point cloud is analyzed via the controller 610, the sensor fusion algorithm 138, the computer vision system 140, and/or the object detection module described above, etc. to identify regions of the scanning zone for spectral analysis (706). The region identified for spectral analysis (706) can be a region including a LIDAR-indicated reflective feature/object.” (column 20, lines 48-62, and Figure 7) (claimed data output from the LIDAR sensor), “The identified region is identified with the hyperspectral sensor 620 to characterize the region according to its spectral properties (708). The spectral information is used to determine whether the region includes a solid material (710).” (column 21, lines 6-9, and Figure 7) (claimed camera to acquire an image of an obstacle candidate), and “the hyperspectral sensor 620 can include both imaging optics and a spectral selectivity module.
The imaging optics can be one or more lenses, mirrors, shutters, and/or apertures arranged to focus received radiation on an imaging plane that includes a photo sensitive detector, such as a charge coupled device array, or a similar detector for generating electrical signals related to an intensity pattern in the imaging plane.” (column 18, lines 53-60), the hyperspectral sensor equates to the claimed camera, and the solid material equates to the claimed obstacle candidate; and the claimed determine whether or not to change a traveling status of the agricultural machine based on the image of the obstacle candidate acquired with the camera, “The 3-D point cloud information from the LIDAR device can be combined with the indications of whether reflective features are solid or not solid to map solid objects in the scanning zone (712).” (column 21, lines 30-34, and Figure 7), and “The autonomous vehicle is navigated to avoid interference with objects in the map of solid objects 630 (714). In some embodiments, non-solid LIDAR-indicated features, such as water spray patterns and/or exhaust plumes are substantially ignored by the navigation control systems operating the autonomous vehicle. For example, the object avoidance system 144 and/or navigation/pathing system 142 can be provided with the map of solid objects 630 to automatically determine a path for the autonomous vehicle that avoids solid objects without avoiding splashes of water or exhaust plumes even if optically opaque. Thus, in some embodiments of the present disclosure, information from both the LIDAR device 302 and the hyperspectral sensor 620 is combined generate the map of solid objects 630.” (column 21, lines 40-53, and Figure 7).

Regarding claim 2, Zhu et al disclose the claimed system of claim 1 (see above), wherein the controller is further configured or programmed to: the claimed detect an object within the surrounding environment based on data that is output from the LIDAR sensor, “The LIDAR device 302 is scanned through a scanning zone surrounding the vehicle (702). Information from reflected light signals provided by the LIDAR device 302 and/or associated optical detectors is analyzed to generate a 3-D point cloud of positions of reflective features in the scanning zone (704). The 3-D point cloud is analyzed via the controller 610, the sensor fusion algorithm 138, the computer vision system 140, and/or the object detection module described above, etc. to identify regions of the scanning zone for spectral analysis (706). The region identified for spectral analysis (706) can be a region including a LIDAR-indicated reflective feature/object.” (column 20, lines 51-62, and Figure 7); and the claimed when the detected object is the obstacle candidate, cause the camera to acquire the image of the obstacle candidate and determine whether or not to change the traveling status of the agricultural machine based on the image, “The identified region is identified with the hyperspectral sensor 620 to characterize the region according to its spectral properties (708).
The spectral information is used to determine whether the region includes a solid material (710).” (column 21, lines 6-9, and Figure 7), and “The 3-D point cloud information from the LIDAR device can be combined with the indications of whether reflective features are solid or not solid to map solid objects in the scanning zone (712).” (column 21, lines 30-34, and Figure 7), and “The autonomous vehicle is navigated to avoid interference with objects in the map of solid objects 630 (714). In some embodiments, non-solid LIDAR-indicated features, such as water spray patterns and/or exhaust plumes are substantially ignored by the navigation control systems operating the autonomous vehicle. For example, the object avoidance system 144 and/or navigation/pathing system 142 can be provided with the map of solid objects 630 to automatically determine a path for the autonomous vehicle that avoids solid objects without avoiding splashes of water or exhaust plumes even if optically opaque. Thus, in some embodiments of the present disclosure, information from both the LIDAR device 302 and the hyperspectral sensor 620 is combined generate the map of solid objects 630.” (column 21, lines 40-53, and Figure 7).

Regarding claim 6, Zhu et al disclose the claimed system of claim 1 (see above), wherein the controller is further configured or programmed to: the claimed cause the agricultural machine to stop traveling or to decelerate upon detecting the obstacle candidate as the trigger, the obstacle avoidance system 144 can effect changes in the navigation of the vehicle by operating one or more subsystems in the control system 106 to undertake braking maneuvers (column 9, lines 36-40).

Regarding claim 7, Zhu et al disclose the claimed system of claim 1 (see above), wherein the controller is further configured or programmed to: the claimed cause the agricultural machine to resume travel or accelerate when determining not to change the traveling status, “the obstacle avoidance system 144 can be configured such that a swerving maneuver is not undertaken when other sensor systems detect vehicles, construction barriers, other obstacles, etc.” (column 9, lines 44-48), and “Non-solid features to not avoid (i.e., to ignore) can be, for example, LIDAR-indicated reflective features with corresponding spectral information associated with a non-solid material.” (column 20, lines 41-44).

Regarding claim 10, Zhu et al disclose the claimed system of claim 1 (see above), wherein, the claimed if a field of view of the camera does not include the obstacle candidate when the obstacle candidate is detected, the controller changes an orientation of the camera so that the field of view of the camera includes the obstacle candidate, “sensor unit 202 can include any combination of cameras, RADARs, LIDARs, range finders, and acoustic sensors. The sensor unit 202 can include one or more movable mounts that could be operable to adjust the orientation of one or more sensors in the sensor unit 202. In one embodiment, the movable mount could include a rotating platform that could scan sensors so as to obtain information from each direction around the vehicle 200. In another embodiment, the movable mount of the sensor unit 202 could be moveable in a scanning fashion within a particular range of angles and/or azimuths. The sensor unit 202 could be mounted atop the roof of a car, for instance, however other mounting locations are possible.
Additionally, the sensors of sensor unit 202 could be distributed in different locations and need not be collocated in a single location. Some possible sensor types and mounting locations include LIDAR unit 206 and laser rangefinder unit 208. Furthermore, each sensor of sensor unit 202 could be configured to be moved or scanned independently of other sensors of sensor unit 202.” (column 12, lines 14-33), and “The camera 210 can have associated optics operable to provide an adjustable field of view. Further, the camera 210 can be mounted to vehicle 200 with a movable mount to vary a pointing angle of the camera 210” (column 13, lines 13-16).

Regarding claim 13, Zhu et al disclose the claimed agricultural machine, “an example system may also be implemented in or take the form of other vehicles, such as cars, trucks, motorcycles, buses, boats, airplanes, helicopters, lawn mowers, earth movers, boats, snowmobiles, aircraft, recreational vehicles, amusement park vehicles, farm equipment, construction equipment, trams, golf carts, trains, and trolleys” (column 6, lines 39-44), comprising the claimed obstacle detection system (see above rejection of claim 1), the claimed LIDAR sensor, the sensor system includes a laser rangefinder/LIDAR unit 128 (Figure 1), and the claimed camera, the sensor system includes a camera 130 (Figure 1), and the system 600 employs a hyperspectral sensor 620 (Figure 6A).

Regarding claim 14, Zhu et al disclose the claimed obstacle detection method (Figures 6B and 7), for the claimed agricultural machine, “an example system may also be implemented in or take the form of other vehicles, such as cars, trucks, motorcycles, buses, boats, airplanes, helicopters, lawn mowers, earth movers, boats, snowmobiles, aircraft, recreational vehicles, amusement park vehicles, farm equipment, construction equipment, trams, golf carts, trains, and trolleys” (column 6, lines 39-44), to perform the claimed self-driving, “Example embodiments relate to an autonomous vehicle, such as a driverless automobile” (column 4, lines 12 and 13), while the claimed sensing a surrounding environment with a LIDAR sensor and a camera, “that includes a light detection and ranging (LIDAR) sensor for actively detecting reflective features in the environment surrounding the vehicle” (column 4, lines 13-15), “the laser rangefinder or LIDAR unit 128 can be any sensor configured to sense objects in the environment in which the vehicle 100 is located” (column 8, lines 14-16), and “The camera 130 can include one or more devices configured to capture a plurality of images of the environment surrounding the vehicle 100.” (column 7, lines 22-24), the obstacle detection method comprising: the claimed detecting an obstacle candidate based on data that is output from the LIDAR sensor, “FIG. 7 is a flowchart 700 of a process for navigating an autonomous vehicle according to real time environmental feedback information from both the LIDAR device 302 and the hyperspectral sensor 620. The LIDAR device 302 is scanned through a scanning zone surrounding the vehicle (702). Information from reflected light signals provided by the LIDAR device 302 and/or associated optical detectors is analyzed to generate a 3-D point cloud of positions of reflective features in the scanning zone (704). The 3-D point cloud is analyzed via the controller 610, the sensor fusion algorithm 138, the computer vision system 140, and/or the object detection module described above, etc.
to identify regions of the scanning zone for spectral analysis (706). The region identified for spectral analysis (706) can be a region including a LIDAR-indicated reflective feature/object.” (column 20, lines 48-62, and Figure 7); the claimed acquiring an image of the obstacle candidate with the camera upon detecting the obstacle candidate as a trigger, “The identified region is identified with the hyperspectral sensor 620 to characterize the region according to its spectral properties (708). The spectral information is used to determine whether the region includes a solid material (710).” (column 21, lines 6-9, and Figure 7) (claimed camera acquiring an image of an obstacle candidate), and “the hyperspectral sensor 620 can include both imaging optics and a spectral selectivity module. The imaging optics can be one or more lenses, mirrors, shutters, and/or apertures arranged to focus received radiation on an imaging plane that includes a photo sensitive detector, such as a charge coupled device array, or a similar detector for generating electrical signals related to an intensity pattern in the imaging plane.” (column 18, lines 53-60), the hyperspectral sensor equates to the claimed camera, and the solid material equates to the claimed obstacle candidate; and the claimed determining whether or not to change a traveling status of the agricultural machine based on the image of the obstacle candidate acquired with the camera, “The 3-D point cloud information from the LIDAR device can be combined with the indications of whether reflective features are solid or not solid to map solid objects in the scanning zone (712).” (column 21, lines 30-34, and Figure 7), and “The autonomous vehicle is navigated to avoid interference with objects in the map of solid objects 630 (714). In some embodiments, non-solid LIDAR-indicated features, such as water spray patterns and/or exhaust plumes are substantially ignored by the navigation control systems operating the autonomous vehicle. For example, the object avoidance system 144 and/or navigation/pathing system 142 can be provided with the map of solid objects 630 to automatically determine a path for the autonomous vehicle that avoids solid objects without avoiding splashes of water or exhaust plumes even if optically opaque. Thus, in some embodiments of the present disclosure, information from both the LIDAR device 302 and the hyperspectral sensor 620 is combined generate the map of solid objects 630.” (column 21, lines 40-53, and Figure 7).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Zhu et al, Patent Number 9,234,618 B1, in view of Kuroda, Patent Application Publication Number 2018/0267170 A1.
Regarding claim 3, Zhu et al teach the claimed system of claims 1 and 2 (see above), wherein the controller is further configured or programmed to: the claimed identify a type of object based on the image acquired by the camera, “The spectral information is used to determine whether the region includes a solid material (710).” (column 21, lines 8 and 9, and Figure 7), the material being solid or not equates to the claimed type of object.

Zhu et al do not teach the claimed when the obstacle candidate is a person or animal, cause the agricultural machine to stop traveling or change a traveling path to avoid the obstacle candidate. Zhu et al adjust the travel path to avoid solid material (obstacles) (Figure 7, step 714). A person is a solid object and would be avoided by Zhu et al. Zhu et al lack the teaching of detecting that the obstacle is a person or animal.

Kuroda teaches, “The information processing unit 340 characteristically includes a person determination unit 43 in addition to a recognition processing unit 41 and an obstacle determination unit 42. The person determination unit 43 determines whether an obstacle is a person” P[0090], and “If a person is detected within the detection regions for the ultrasonic sensor 30 and the LIDAR sensor 31 at the start of operation of the traveling apparatus 301, it is determined that the person is in the vicinity of the traveling apparatus 301, and the operation of the traveling apparatus 301 is stopped until the person exits from the detection regions.” P[0093]. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the computer vision system of Zhu et al with the person determination unit of Kuroda in order to, with a reasonable expectation of success, inhibit erroneous sensing and implement stable sensing (Kuroda P[0008]).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zhu et al, Patent Number 9,234,618 B1, in view of Neitemeier et al, Patent Application Publication Number 2019/0098825 A1.

Regarding claim 4, Zhu et al teach the claimed system of claims 1 and 2 (see above). Zhu et al do not teach the claimed when the obstacle candidate is detected, cause the camera to acquire a color image of the obstacle candidate, and the claimed identify a type of object based on the RGB values of the color image of the obstacle candidate. The use of color imaging is common and well known in the art and in the general use of camera images.
Neitemeier et al teach the claimed when the obstacle candidate is detected, cause the camera to acquire a color image of the obstacle candidate, “The camera-based sensor system 12 preferably comprises at least one camera, in particular at least one color image camera, for generating the starting camera images 14.” P[0027], and “the segmentation can be carried out based on the color distribution in the particular starting camera image” P[0028]; and the claimed identify a type of object based on the RGB values of the color image of the obstacle candidate, “The color scheme of the image segments and the selective display of the image segments of predetermined classes form the basis for a particularly intuitive and clear display of the characteristics of the relevant surroundings area.” P[0015], and “Within the scope of the subsequent classification, it is inferred, from factors such as the shape, the volume, or the color of the segment 27 in combination with the factor of the piece of height information “3”, that the image segment 27 is to be allocated to the class of an obstacle.” P[0037]. The color imaging and class identification of Neitemeier et al would be included as color imaging of Zhu et al. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the computer vision system of Zhu et al with the color imaging classification of Neitemeier et al in order to, with a reasonable expectation of success, reduce the volume of data to be processed (Neitemeier et al P[0011]).

Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu et al, Patent Number 9,234,618 B1, in view of Isawe et al, Patent Application Publication Number 2023/0221728 A1.

Regarding claim 5, Zhu et al teach the claimed system of claims 1 and 2 (see above). Zhu et al do not teach the claimed transmit the image of the obstacle candidate acquired with the camera to an external device, and the claimed based on a signal received from the external device indicating whether or not to change the traveling status of the agricultural machine, determine whether or not to change the traveling status.
Isawe et al teach the claimed transmit the image of the obstacle candidate acquired with the camera to an external device, “the tractor 1 and the mobile communication terminal 5 are provided with communication modules 28 and 52, respectively, that enable wireless communication of information including positioning information between the in-vehicle control unit 23 and the terminal control unit 3B [51]” (P[0049] and Figure 6), and “the image processing device 85 performs image transmission processing to transmit the generated all-around image and the images from the cameras 81 to 84 to the display control section 23E on the tractor side and the display control section 51A on a mobile communication terminal side (step #2)” (P[0083] and Figure 11); and the claimed based on a signal received from the external device indicating whether or not to change the traveling status of the agricultural machine, determine whether or not to change the traveling status, “If the obstacle is detected to be located in the deceleration control range Rdc of the first detection range Rd1 in the sixth determination processing, the automatic travel control section 23F performs second notification command processing to issue a notification command for notifying about the obstacle being located in the deceleration control range Rdc on the liquid crystal monitor 27 of the tractor 1 or the display device 50 of the mobile communication terminal 5 to the display control section 23E of the in-vehicle control unit 23 and the display control section 51A of the terminal control unit 51 (step #25). In addition, the automatic travel control section 23F performs deceleration command processing to issue a deceleration command for decreasing the vehicle speed of the tractor 1 as the obstacle located in the deceleration control range Rdc approaches the tractor 1, to the vehicle speed control section 23B (step #26). In this way, it is possible to notify the user, such as the occupant in the driving unit 12 or the administrator on the outside of the vehicle, of the presence of the obstacle in the deceleration control range Rdc of the first detection range Rd1 for the tractor 1. In addition, by the control actuation of the vehicle speed control section 23B, the vehicle speed of the tractor 1 can be appropriately reduced as the tractor 1 approaches the obstacle.” (P[0105] and Figure 22). The mobile communication of Isawe et al would be combined with Zhu et al as a further control means for the vehicle. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the computer vision system of Zhu et al with the mobile communications for remote control of the vehicle of Isawe et al in order to, with a reasonable expectation of success, avoid performing unnecessary collision avoidance operations (Isawe et al P[0006]).

Regarding claim 12, Zhu et al teach the claimed system of claim 1 (see above). Zhu et al do not explicitly teach the claimed agricultural machine is a tractor with an attachable implement, and the claimed when the implement is attached to the tractor, the controller sets a region to be scanned by the LIDAR sensor when traveling based on the type of implement.
Isawe et al teach the claimed agricultural machine is a tractor with an attachable implement, tractor 1 may include rotary tiller 3, a plow, a disc harrow, a cultivator, a subsoiler, a seeder, a spraying device, and a mowing device coupled to the rear portion of the tractor 1 (Figure 1 and P[0034]); and the claimed when the implement is attached to the tractor, the controller sets a region to be scanned by the LIDAR sensor when traveling based on the type of implement, “the LiDAR control sections 86B and 87B perform cut processing and masking processing, which are based on the vehicle body information and the like, for the measurement ranges Rm1 and Rm2 of the measuring sections 86A and 87A, and thereby set a first detection range Rd1 and a second detection range Rd2 for the above-described obstacle candidate as a detection target, respectively. In the cut processing, the LiDAR control sections 86B and 87B acquire a maximum left-right width of the vehicle body including the rotary tiller 3 (a left-right width of the rotary tiller 3 in the present embodiment) by the communication with the in-vehicle control unit 23, add a predetermined safety range to this maximum left-right width of the vehicle body, and thereby set a detection target width Wd of the obstacle candidate.” P[0077]. The rotary tiller information of Isawe et al would be combined with Zhu et al as vehicle information to assist in automatic control of the vehicle (i.e., the tiller would be part of the vehicle system). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the computer vision system of Zhu et al with the LIDAR processing of the vehicle operation to include the rotary tiller information of Isawe et al in order to, with a reasonable expectation of success, avoid performing unnecessary collision avoidance operations (Isawe et al P[0006]).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Zhu et al, Patent Number 9,234,618 B1, in view of Nishi et al, Patent Application Publication Number 2019/382005 A1.

Regarding claim 8, Zhu et al teach the claimed system of claim 1 (see above). Zhu et al do not teach the claimed detect an object that is higher than a predetermined height as the obstacle candidate based on output data from the LIDAR sensor while the agricultural machine is traveling. Nishi et al teach, “the areas indicated by thin long dash-dotted lines in FIG. 24 are areas in which an obstacle that is present at a position lower than the predetermined height cannot be detected by the obstacle detectors 65” (P[0386] and Figure 24), and each obstacle detector 65 employs a laser scanner P[0261]. The obstacle detectors not detecting obstacles lower than a predetermined height equates to the claimed detect an object that is higher than a predetermined height, and would be combined with Zhu et al by limiting the objects detected. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the computer vision system of Zhu et al with the obstacle at a position lower than the predetermined height not detected by the obstacle detectors of Nishi et al in order to, with a reasonable expectation of success, avoid a reduction in work efficiency based on misdetection (Nishi et al P[0043]).

Claim 9 is rejected under 35 U.S.C.
103 as being unpatentable over Zhu et al, Patent Number 9,234,618 B1, in view of Omoto et al, Japanese Patent Application Publication Number JP-2017050829-A (provided translation cited in rejection).

Regarding claim 9, Zhu et al teach the claimed system of claim 1 (see above). Zhu et al teach the claimed cause the camera to acquire an image of the surrounding environment before the obstacle candidate is detected, “The camera 130 can include one or more devices configured to capture a plurality of images of the environment surrounding the vehicle 100.” (column 8, lines 22-24), and “the camera 130 can capture a plurality of images that represent information about an environment of the vehicle 100 while operating in an autonomous mode. The environment may include other vehicles, traffic lights, traffic signs, road markers, pedestrians, etc.” (column 11, lines 41-45). Zhu et al do not teach the claimed when the obstacle candidate is detected, cause the camera to capture the obstacle candidate at a higher magnification than the image of the surrounding environment, but the zooming of a camera (claimed magnification) is a common and well known feature of cameras.

Omoto et al teach, “A camera control system comprises: an object detection part for detecting a position of an object by a laser sensor at a fixed time interval; a positional information measuring part for measuring positional information of the object; a camera angle-of-view/magnification discrimination part which calculates an angle-of-view width and a magnification of the object on imaging by a camera and discriminates a trigger of camera control in accordance with the condition; and a camera operation instruction part for performing panning/tilting control and zooming magnification control on the camera.” (abstract), and “The camera 300 is a device that acquires an image of the target object 10 in the target area 110 by rotating a lens of the camera or adjusting a shooting magnification by a zoom function based on a command from the control device 200.” (translation page 2, paragraph 9). The control of the zoom function of Omoto et al would be used in the system of Zhu et al for the identif
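For orientation only, the control flow recited in independent claims 1 and 14 (LiDAR detection acting as the trigger for camera capture, followed by a travel-status decision) can be sketched in a few lines of Python. Every name below is hypothetical; this is an illustrative reading of the claim language, not the applicant's or Zhu's implementation:

```python
# Illustrative sketch of the control flow recited in claims 1 and 14.
# All names are hypothetical; this reads the claim language, nothing more.
from dataclasses import dataclass

@dataclass
class ObstacleCandidate:
    position: tuple  # (x, y) in the machine frame, from the LiDAR point cloud

def detect_candidate(lidar_points):
    """Detect an obstacle candidate from LiDAR output, e.g. any cluster
    taller than a height threshold (cf. dependent claim 8). Returns an
    ObstacleCandidate or None."""
    ...

def should_change_travel(image):
    """Decide from the acquired image whether the traveling status should
    change, e.g. when the candidate is classified as a person or animal
    (cf. dependent claim 3). Returns a bool."""
    ...

def control_step(lidar_points, camera, machine):
    # Claim 1: LiDAR detection acts as the trigger; the camera is only
    # invoked once a candidate exists.
    candidate = detect_candidate(lidar_points)
    if candidate is None:
        return
    # Trigger fired: acquire an image of the candidate with the camera.
    image = camera.acquire(candidate.position)
    # Decide whether to change the traveling status based on that image,
    # e.g. stop or decelerate (cf. dependent claim 6).
    if should_change_travel(image):
        machine.stop()
```

Read against this sketch, the examiner's §101 position is that the final `should_change_travel` decision is a mental process, while the LiDAR and camera steps are data gathering; the claims found eligible (3, 6, 7, 9-11) are those that add a concrete act such as the `machine.stop()` branch.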

Prosecution Timeline

Jun 18, 2024
Application Filed
Nov 22, 2025
Non-Final Rejection — §101, §102, §103
Mar 25, 2026
Response Filed
Apr 09, 2026
Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578734: COOPERATIVE MANAGEMENT STRATEGIES FOR UNSAFE DRIVING
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12567331: Systems and Methods to Manage Tracking of Objects Through Occluded Regions
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12555482: TRAVELING CONTROL APPARATUS
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12555475: VEHICLE TRAVEL CONTROL ASSISTANCE SYSTEM, SERVER APPARATUS, AND VEHICLE
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12530381: DETERMINING AUTONOMOUS VEHICLE STATUS BASED ON MAPPING OF CROWDSOURCED OBJECT DATA
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed in each of these to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 85%
With Interview: 99% (+21.2%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 816 resolved cases by this examiner. Grant probability is derived from the career allow rate.
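A sketch of how the projection numbers plausibly fit together; the additive combination and the 99% cap are assumptions, since the page states only that grant probability is derived from the career allow rate:

```python
# Sketch of the projection panel's arithmetic. The additive lift and the
# 99% cap are assumptions; only the inputs come from the page.
base = 691 / 816                 # career allow rate -> ~85% grant probability
interview_lift = 0.212           # +21.2 points reported for interviewed cases
with_interview = min(base + interview_lift, 0.99)
print(f"Base: {base:.0%}  With interview: {with_interview:.0%}")  # 85% / 99%
```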
