Prosecution Insights
Last updated: April 19, 2026
Application No. 18/131,692

COMPUTER VISION AND DEEP LEARNING ROBOTIC LAWN EDGER AND MOWER

Final Rejection (§103)

Filed: Apr 06, 2023
Examiner: PARK, KYLE S
Art Unit: 3666
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Tysons Computer Vision, LLC
OA Round: 2 (Final)

Grant Probability: 66% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 9m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allow Rate: 66% (92 granted / 140 resolved; +13.7% vs TC avg, above average)
Interview Lift: +31.6% higher allow rate with an interview than without, measured over resolved cases with an interview
Typical Timeline: 2y 9m avg prosecution; 30 applications currently pending
Career History: 170 total applications across all art units
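The headline figures above (a 66% career allow rate from 92 grants over 140 resolved cases, plus an interview lift) can be reproduced from per-case records. A minimal sketch, assuming a simple record layout that is not the analytics tool's actual schema:

```python
# Sketch of how the examiner metrics above could be computed from resolved-case
# records. The Case fields and toy data are illustrative assumptions, not the
# analytics tool's real schema.
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data shaped like the report: 92 grants out of 140 resolved cases.
cases = [Case(granted=i < 92, had_interview=i % 3 == 0) for i in range(140)]
career_rate = allow_rate(cases)  # 92 / 140, reported as 66%
```

Formatting `career_rate` with `f"{career_rate:.0%}"` rounds 92/140 to the "66%" shown in the header.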

Statute-Specific Performance

§101: 25.7% (-14.3% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 25.1% (-14.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 140 resolved cases.
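Each per-statute delta above is consistent with a Tech Center baseline of about 40%. A hedged sketch of the comparison; the 0.400 baseline is inferred from the reported deltas, not a figure the tool publishes:

```python
# Sketch of the statute-specific comparison above. The 0.400 Tech Center
# baseline is an inference from the reported deltas, not a published figure.
EXAMINER_RATE = {"101": 0.257, "103": 0.385, "102": 0.084, "112": 0.251}
TC_AVG_RATE = {statute: 0.400 for statute in EXAMINER_RATE}

def deltas_vs_tc(examiner, tc_avg):
    # Percentage-point difference, rounded to one decimal as in the report.
    return {s: round((examiner[s] - tc_avg[s]) * 100, 1) for s in examiner}

statute_deltas = deltas_vs_tc(EXAMINER_RATE, TC_AVG_RATE)
```

With the inferred baseline, the computed deltas match the report exactly (-14.3, -1.5, -31.6, -14.9).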

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

This Final action is in response to the applicant's amendment/response of November 26, 2025. Claim 21 has been newly added. Claims 1-21 are pending and have been considered as follows.

Response to Arguments

Applicant's arguments/amendments with respect to the rejection of claims under 35 USC § 102 have been fully considered and are persuasive. Therefore, the rejection of claims under 35 USC § 102 has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Herrera and KIYOOKA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 19, 20, and 21 are rejected under 35 U.S.C.
103 as being unpatentable over Herrera, US 2021/0360853 A1, hereinafter referred to as Herrera, in view of KIYOOKA et al., US 2021/0076561 A1, hereinafter referred to as KIYOOKA, respectively. As to claim 1, Herrera teaches an autonomous vehicle for performing gardening tasks (see at least Abstract regarding an autonomous robot, Herrera), comprising: a motorized wheeled chassis including a plurality of movement wheels and at least one motor providing power to one or more of the plurality of movement wheels (see at least FIGS. 1-5 and Abstract regarding a robot body having at least one motor arranged to propel the robot body via a wheel arrangement and drive a cutting blade or trimmer line, wherein the robot body also has predefined areas for including at least one more motor to drive a sprayer, trimmer or edging blade. See also at least paragraph 118 regarding the autonomous gardener includes a frame or housing which supports the operating components of the gardener. These operating components may include, but not limited to, at least one motor, such as an electric motor, which is arranged to drive the blades of the gardener so as to cut the grass of a lawn to which the gardener is mowing. The at least one motor may also be used to drive the gardener itself via the means of transmission systems such as gearing mechanisms or gear boxes which transmit a driving force to its wheel arrangements, although preferably, as is the case of this embodiment, separate motors are used to drive the gardener along its operating surface with each rear wheel having its own individual motor and gearbox, Herrera); at least one rotating wheel attached to the motorized wheeled chassis (see at least FIGS. 
1-5 and Abstract regarding a robot body having at least one motor arranged to propel the robot body via a wheel arrangement and drive a cutting blade or trimmer line, wherein the robot body also has predefined areas for including at least one more motor to drive a sprayer, trimmer or edging blade. See also at least paragraphs 117-118 regarding the autonomous gardener includes a frame or housing which supports the operating components of the gardener. These operating components may include, but not limited to, at least one motor, such as an electric motor, which is arranged to drive the blades of the gardener so as to cut the grass of a lawn to which the gardener is mowing. The at least one motor may also be used to drive the gardener itself via the means of transmission systems such as gearing mechanisms or gear boxes which transmit a driving force to its wheel arrangements, although preferably, as is the case of this embodiment, separate motors are used to drive the gardener along its operating surface with each rear wheel having its own individual motor and gearbox, Herrera); at least one line or blade extending from the at least one rotating wheel, the at least one line or blade configured to perform a selected gardening task (see at least FIGS. 1-5 and Abstract regarding a robot body having at least one motor arranged to propel the robot body via a wheel arrangement and drive a cutting blade or trimmer line, wherein the robot body also has predefined areas for including at least one more motor to drive a sprayer, trimmer or edging blade. See also at least paragraphs 117-120 regarding the autonomous gardener includes a frame or housing which supports the operating components of the gardener. These operating components may include, but not limited to, at least one motor, such as an electric motor, which is arranged to drive the blades of the gardener so as to cut the grass of a lawn to which the gardener is mowing. 
The motor 106-driven blade disk 104 controls the cutting mechanism for the gardener, using a disk 104 to hold and rotate the blade, a blade adapter 103 for attaching the motor 106 to the blade disk 104. The hinges 105 provide a mechanism for holding the spraying mechanism when installed, Herrera); at least one of a downward-facing camera attached to the motorized wheeled chassis or an outward-facing camera attached to the motorized wheeled chassis (see at least FIGS. 1-5 and paragraph 142 regarding the navigation system includes the optical surveying module 504 (such as an LIDAR unit 505), the IMU unit 511, the sonic obstacle detection module 506, which may include Sonar sensors 507 or LIDAR 508 although other sound or light wave based obstacle detections methods 514 are possible. Each of these modules are arranged to provide a specific function and return individual navigation information either detected, calculated, gathered or surveyed, as in the case of the LIDAR or camera unit 505 which is arranged to generate a virtual map representative of the obstacles or placement of specific objections proximate to the gardener, Herrera); and a processor configured to control processing related to (see at least FIG. 5 and paragraph 140 regarding a controller/processor 500, Herrera) determining a position of the motorized wheeled chassis (see at least paragraphs 129 regarding GPS sensors which can be used to obtain a GPS coordinate of the gardener. 
In some examples, the gardener may be implemented to use “RTK GPS” or Real Time Kinematic GPS which includes two GPS modules, one fixed and one in the gardener in addition to advanced GPS information to determine the precise position of the gardener within the mowing area and world, Herrera), driving the at least one motor to move the one or more of the plurality of movement wheels to follow a path from the determined position of the motorized wheeled chassis (see at least paragraphs 140-143 regarding the controller is also arranged to control the motor drivers 512 and motors 513 to drive the gardener along a work surface within a work area. Preferably, as is the case in this embodiment, the gardener is driven by having a motor placed adjacent to each of the rear wheels with each motor being arranged to drive each rear wheel. In turn, the controller 500 can direct electric current from a power source, such as a battery 502, to the motors drivers 512 so as to perform a controlled operation of one or both motors 513. This can allow for forward, reverse and turning actions of the gardener by turning one or more wheels at different speeds or directions, Herrera), rotating the at least one rotating wheel when the selected gardening task is performed (see at least Abstract regarding consisting of a robot body having at least one motor arranged to propel the robot body via a wheel arrangement and drive a cutting blade or trimmer line. See also at least paragraphs 140-145 regarding the controller 500 can also command the blade and spray motors 512 to operate so as to operate the blades to cut the grass and the sprayer to water or fertilize the grass of a work surface. To perform these functions, the controller 500 will execute a control routine or process which determines the conditions for and when the gardener is to be operated. 
These commands at least include instructions to command the direction of travel of the gardener and the operation of the blades and sprayers, Herrera), and correcting the path of the motorized wheeled chassis based on one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera (see at least paragraphs 140-142 regarding each of these modules are arranged to provide a specific function and return individual navigation information either detected, calculated, gathered or surveyed, as in the case of the LIDAR or camera unit 505 which is arranged to generate a virtual map representative of the obstacles or placement of specific objections proximate to the gardener. See also at least paragraphs 156-158 regarding the controller builds a plan for the mowing path, incorporating the mowing pattern (900, 901, 902) given by the user, between the start and farthest point 828 by finding sequential GPS coordinates between the points. The controller will continue to try and build the path (827 & 815) for some allotted time or attempts until an error message is eventually sent out. Once the path is built (827 & 835), the gardener will then go through its process of cutting the grass 826 by mowing each row 821 of an area, wherein a row is defined as a path created by the controller that has a start and farthest point from start (end point) and sequential points in between the start and end point. This process includes detecting objects with the help of sensors 822. Once an object is detected 813, we use the sensors to move around the object 824 and go to the next point to cut 820. The gardener will then check to see if it has reached the current point in the built mowing path plan by using navigation data, Herrera). 
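Herrera's path-building step, as characterized above, finds sequential GPS coordinates between a start point and the farthest point of the work area. A minimal sketch of one such row; straight-line interpolation and the (lat, lon) tuple representation are assumptions, since the cited passages do not specify the interpolation method:

```python
# Illustrative sketch of building one mowing row as sequential coordinates
# between a start point and the farthest point. Linear interpolation and the
# (lat, lon) tuple representation are assumptions for illustration only.
def build_row(start, farthest, n_points):
    """Return n_points coordinates from start to farthest, inclusive."""
    (lat0, lon0), (lat1, lon1) = start, farthest
    return [
        (lat0 + (lat1 - lat0) * i / (n_points - 1),
         lon0 + (lon1 - lon0) * i / (n_points - 1))
        for i in range(n_points)
    ]

row = build_row((38.9200, -77.2200), (38.9210, -77.2190), 5)
```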
Herrera does not explicitly teach wherein the one or more images used to correct the path of the motorized wheeled chassis include an image of an area where the selected gardening task has been performed by rotating the at least one rotating wheel at one or more previous positions of the motorized wheeled chassis. However, such matter is taught by KIYOOKA (see at least FIG. 1 and paragraph 24 regarding a lawn mower 10 with an autonomous traveling function. See also at least FIG. 5 and paragraphs 50-51 regarding a condition of a lawn ahead in the traveling direction of the lawn mower in the image photographed by the camera 54 in the lawn mower 10. The white arrow in FIG. 5 is added in order to facilitate understanding. In FIG. 5, there is a long portion of lawn 102a before mowing on the right side of the lawn 102 ahead in the traveling direction of the lawn mower, and a short portion of lawn 102b after mowing on the left side. As can be seen from this, when there is the portion of lawn 102a before mowing and the portion of lawn 102b after mowing, the portion of lawn 102a before mowing has a darker green color than the portion of lawn 102b after mowing, for example. The image processor 81 (FIG. 4) can distinguish the portion of lawn 102a before mowing and the portion of lawn 102b after mowing from the photographed image in accordance with such color difference. In addition, the image processor 81 may be configured to fetch and store a photographed image of the portion of lawn 102b in a state where the portion of lawn 102b after mowing has been present ahead in the traveling direction of the lawn mower in advance, and to compare the color of the stored photographed image with the color of the current photographed image photographed by the camera 54, thereby determining whether the portion of lawn ahead in the traveling direction of the lawn mower is after or before being mowed. See also at least paragraphs 57-60 regarding in the case of FIG. 
5, the portion of lawn on the right side needs to be mowed, and thus the traveling instruction generator 84 generates the travel instruction in such a manner that the lawn mower is turned toward the right so as to preferentially travel on the right side of FIG. 5, and then the lawn mower is moved straight ahead. The generated traveling instruction is transmitted from the second controller 80 to the first controller 70 as traveling instruction data). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of KIYOOKA which teaches wherein the one or more images used to correct the path of the motorized wheeled chassis include an image of an area where the selected gardening task has been performed by rotating the at least one rotating wheel at one or more previous positions of the motorized wheeled chassis with the system of Herrera as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having an image of an area where the selected gardening task has been performed by rotating the at least one rotating wheel at one or more previous positions of the motorized wheeled chassis and would have predictably applied it to improve the system of Herrera.

As to claim 2, Herrera teaches wherein the position of the motorized wheeled chassis is initially determined using information obtained from at least one of an inertial measurement unit (IMU) or a global positioning system (GPS) receiver (see at least paragraph 126 regarding an inertial measurement unit (IMU) module 511 arranged to measure the force of movement of the gardener by detecting and recording various forces which are subjected on the robot, including the direction of movement, force of movement, magnetic bearing of movement, acceleration and gyroscopic movements.
See also at least paragraph 130 regarding GPS sensors which can be used to obtain a GPS coordinate of the gardener. In some examples, the gardener may be implemented to use “RTK GPS” or Real Time Kinematic GPS which includes two GPS modules, one fixed and one in the gardener in addition to advanced GPS information to determine the precise position of the gardener within the mowing area and world, Herrera).

As to claim 3, Herrera teaches wherein the at least one of the IMU or the GPS receiver is used to determine the path of the motorized wheeled chassis before the path of the motorized wheeled chassis is corrected (see at least paragraphs 156-158 regarding the controller then finds the farthest point from the current starting point 829, wherein the farthest point is defined as a point within the working area that has the greatest distance possible between two points in the same working area. After the farthest point is calculated, the controller builds a plan for the mowing path, incorporating the mowing pattern (900, 901, 902) given by the user, between the start and farthest point 828 by finding sequential GPS coordinates between the points. The controller will continue to try and build the path (827 & 815) for some allotted time or attempts until an error message is eventually sent out. Once the path is built (827 & 835), the gardener will then go through its process of cutting the grass 826 by mowing each row 821 of an area, wherein a row is defined as a path created by the controller that has a start and farthest point from start (end point) and sequential points in between the start and end point. This process includes detecting objects with the help of sensors 822. Once an object is detected 813, we use the sensors to move around the object 824 and go to the next point to cut 820, Herrera).

As to claim 19, Examiner notes claim 19 recites similar limitations to claim 1 and is rejected under the same rationale.

As to claim 20, Examiner notes claim 20 recites similar limitations to claim 1 and is rejected under the same rationale.

As to claim 21, Herrera does not explicitly teach wherein the one or more images used to correct the path of the motorized wheeled chassis further include an image of an area where the selected gardening task is to be performed by rotating the at least one rotating wheel at one or more future positions along the path of the motorized wheeled chassis. However, such matter is taught by KIYOOKA (see at least FIG. 1 and paragraph 24 regarding a lawn mower 10 with an autonomous traveling function. See also at least FIG. 5 and paragraphs 50-51 regarding a condition of a lawn ahead in the traveling direction of the lawn mower in the image photographed by the camera 54 in the lawn mower 10. The white arrow in FIG. 5 is added in order to facilitate understanding. In FIG. 5, there is a long portion of lawn 102a before mowing on the right side of the lawn 102 ahead in the traveling direction of the lawn mower, and a short portion of lawn 102b after mowing on the left side. As can be seen from this, when there is the portion of lawn 102a before mowing and the portion of lawn 102b after mowing, the portion of lawn 102a before mowing has a darker green color than the portion of lawn 102b after mowing, for example. The image processor 81 (FIG. 4) can distinguish the portion of lawn 102a before mowing and the portion of lawn 102b after mowing from the photographed image in accordance with such color difference.
In addition, the image processor 81 may be configured to fetch and store a photographed image of the portion of lawn 102b in a state where the portion of lawn 102b after mowing has been present ahead in the traveling direction of the lawn mower in advance, and to compare the color of the stored photographed image with the color of the current photographed image photographed by the camera 54, thereby determining whether the portion of lawn ahead in the traveling direction of the lawn mower is after or before being mowed. See also at least paragraphs 57-60 regarding in the case of FIG. 5, the portion of lawn on the right side needs to be mowed, and thus the traveling instruction generator 84 generates the travel instruction in such a manner that the lawn mower is turned toward the right so as to preferentially travel on the right side of FIG. 5, and then the lawn mower is moved straight ahead. The generated traveling instruction is transmitted from the second controller 80 to the first controller 70 as traveling instruction data). 
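KIYOOKA's color-difference test, as quoted, turns on unmowed lawn reading as darker green than a stored after-mowing reference image. A hedged sketch of that comparison, with image regions as lists of RGB tuples and a margin chosen arbitrarily for illustration:

```python
# Sketch of KIYOOKA's mowed/unmowed distinction by color difference. The RGB
# representation, mean-green metric, and margin are illustrative assumptions.
def mean_green(region):
    return sum(g for _, g, _ in region) / len(region)

def before_mowing(region, after_mowing_reference, margin=10):
    # Darker green (lower mean green intensity) than the stored after-mowing
    # reference suggests this portion of lawn has not been mowed yet.
    return mean_green(region) < mean_green(after_mowing_reference) - margin

reference = [(60, 160, 70)] * 4   # stored image of lawn after mowing
right_side = [(30, 110, 40)] * 4  # darker green: still needs mowing
```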
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of KIYOOKA which teaches wherein the one or more images used to correct the path of the motorized wheeled chassis further include an image of an area where the selected gardening task is to be performed by rotating the at least one rotating wheel at one or more future positions along the path of the motorized wheeled chassis with the system of Herrera as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having an image of an area where the selected gardening task is to be performed by rotating the at least one rotating wheel at one or more future positions along the path of the motorized wheeled chassis and would have predictably applied it to improve the system of Herrera.

Claim(s) 4, 5, 12, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Herrera, US 2021/0360853 A1, hereinafter referred to as Herrera, in view of KIYOOKA et al., US 2021/0076561 A1, hereinafter referred to as KIYOOKA, and further in view of Chen et al., US 2023/0259138 A1, hereinafter referred to as Chen, respectively.

As to claim 4, Herrera, as modified by KIYOOKA, does not explicitly teach wherein the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera are stored in a simultaneous location and mapping (SLAM) library.
However, such matter is taught by Chen (see at least Abstract regarding a camera for collecting image data of the environment around the smart mower; an inertial measurement unit (IMU) for detecting pose data of the smart mower; a memory at least used for storing an application program for controlling the smart mower to work or travel; and a processor for calling the application program, fusing the image data collected by the camera and the pose data acquired by the IMU, performing simultaneous localization and mapping (SLAM) of the smart mower. See also at least paragraphs 115-117. See also at least paragraph 142 regarding during the SLAM process, all the image data collected by the camera 132 and the angular velocity and acceleration data collected by the IMU 133 are uploaded to the cloud server 200 for fusion. Alternatively, data preprocessing such as feature point extraction of the image frame is performed locally at the mobile terminal 130, and then the preprocessed data is sent to the cloud server 200 for fusion, so as to reduce the dependence on a wireless communication rate. In addition to the SLAM, the cloud server 200 may also run other program logic. With the capabilities of cloud computing and cloud storage, the cloud server 200 can take advantage of functional applications such as obstacle recognition, boundary recognition, road recognition, and path planning). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Chen which teaches wherein the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera are stored in a simultaneous location and mapping (SLAM) library with the system of Herrera, as modified by KIYOOKA, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera are stored in a simultaneous location and mapping (SLAM) library and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA.

As to claim 5, Herrera, as modified by KIYOOKA, does not explicitly teach wherein the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera are stored in the SLAM library in association with positional information determined using the at least one of the IMU or the GPS receiver. However, such matter is taught by Chen (see at least Abstract regarding a camera for collecting image data of the environment around the smart mower; an inertial measurement unit (IMU) for detecting pose data of the smart mower; a memory at least used for storing an application program for controlling the smart mower to work or travel; and a processor for calling the application program, fusing the image data collected by the camera and the pose data acquired by the IMU, performing simultaneous localization and mapping (SLAM) of the smart mower. See also at least paragraphs 115-117.
See also at least paragraph 142 regarding during the SLAM process, all the image data collected by the camera 132 and the angular velocity and acceleration data collected by the IMU 133 are uploaded to the cloud server 200 for fusion. Alternatively, data preprocessing such as feature point extraction of the image frame is performed locally at the mobile terminal 130, and then the preprocessed data is sent to the cloud server 200 for fusion, so as to reduce the dependence on a wireless communication rate. In addition to the SLAM, the cloud server 200 may also run other program logic. With the capabilities of cloud computing and cloud storage, the cloud server 200 can take advantage of functional applications such as obstacle recognition, boundary recognition, road recognition, and path planning). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Chen which teaches wherein the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera are stored in the SLAM library in association with positional information determined using the at least one of the IMU or the GPS receiver with the system of Herrera, as modified by KIYOOKA, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera are stored in the SLAM library in association with positional information determined using the at least one of the IMU or the GPS receiver and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA. 
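Claim 5's arrangement, as the rejection characterizes it, stores camera images in a SLAM library in association with IMU/GPS positional information. A minimal sketch of such an association; the list-of-dicts store and field names are assumptions, not Chen's implementation:

```python
# Illustrative keyframe store associating images with pose estimates.
# The structure and field names are assumptions for illustration only.
slam_library = []

def store_keyframe(image_id, position):
    # position: e.g. an RTK-GPS (lat, lon) fix or an IMU/GPS fused pose
    slam_library.append({"image": image_id, "position": position})

def keyframes_near(position, radius):
    # Naive Euclidean scan over stored poses; a real SLAM backend would
    # index poses spatially rather than scan the whole library.
    lat, lon = position
    return [
        kf for kf in slam_library
        if ((kf["position"][0] - lat) ** 2
            + (kf["position"][1] - lon) ** 2) ** 0.5 <= radius
    ]

store_keyframe("frame_0001", (38.9200, -77.2200))
store_keyframe("frame_0002", (38.9300, -77.2100))
```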
As to claim 12, Herrera, as modified by KIYOOKA, does not explicitly teach wherein each image obtained from the at least one outward-facing camera includes depth data. However, such matter is taught by Chen (see at least paragraphs 100-101 regarding the camera assembly 120 may include a single camera or two (multiple) cameras. The camera assembly 120 may also include a depth camera, also known as an RGB-D camera. The biggest feature of the RGB-D camera is that the RGB-D camera can measure the distance between an object and the RGB-D camera by actively emitting light to the object and receiving the returned light like a laser sensor through the principle of infrared structured light or time-of-flight (ToF). The RGB-D camera obtains depth through a physical measurement manner, saving a lot of calculations compared to the binocular camera or the multiocular camera that performs calculations through software). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Chen which teaches wherein each image obtained from the at least one outward-facing camera includes depth data with the system of Herrera, as modified by KIYOOKA, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein each image obtained from the at least one outward-facing camera includes depth data and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA.

As to claim 13, Herrera, as modified by KIYOOKA, does not explicitly teach wherein the depth data included in each image obtained from the at least one outward-facing camera is stored in a SLAM library.
However, such matter is taught by Chen (see at least paragraphs 100-101 regarding the camera assembly 120 may include a single camera or two (multiple) cameras. The camera assembly 120 may also include a depth camera, also known as an RGB-D camera. The biggest feature of the RGB-D camera is that the RGB-D camera can measure the distance between an object and the RGB-D camera by actively emitting light to the object and receiving the returned light like a laser sensor through the principle of infrared structured light or time-of-flight (ToF). The RGB-D camera obtains depth through a physical measurement manner, saving a lot of calculations compared to the binocular camera or the multiocular camera that performs calculations through software. See also at least paragraphs 110 regarding mapping: through the obtained pose, the depth of the corresponding feature point is calculated by a trigonometric method, and the current environment map is reconstructed synchronously. In the SLAM model, a map refers to a set of all landmark points. Once the positions of the landmark points are determined, the mapping is completed. See also at least paragraphs 137-139). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Chen which teaches wherein the depth data included in each image obtained from the at least one outward-facing camera is stored in a SLAM library with the system of Herrera, as modified by KIYOOKA, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the depth data included in each image obtained from the at least one outward-facing camera is stored in a SLAM library and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA. 
As to claim 18, Herrera, as modified by KIYOOKA, does not explicitly teach wherein the determined position of the motorized wheeled chassis is confirmed using a loop closure algorithm; or wherein the loop closure algorithm determines whether the position of the motorized wheeled chassis coincides with a previously determined position of the motorized wheeled chassis. However, Chen teaches wherein the determined position of the motorized wheeled chassis is confirmed using a loop closure algorithm (see at least paragraphs 112 and 115 regarding the loopback detection is also referred to as closed-loop detection and is to save the previously detected image key frames, and when the smart mower 110 returns to the same place where the smart mower 110 originally passed, determine whether the smart mower 110 has passed this place through the matching relationship of feature points); and wherein the loop closure algorithm determines whether the position of the motorized wheeled chassis coincides with a previously determined position of the motorized wheeled chassis (see at least paragraphs 112 and 115 regarding the loopback detection is also referred to as closed-loop detection and is to save the previously detected image key frames, and when the smart mower 110 returns to the same place where the smart mower 110 originally passed, determine whether the smart mower 110 has passed this place through the matching relationship of feature points. Further, to solve the SLAM problem accurately, the smart mower 110 needs to repeatedly observe the same region to implement the closed-loop motion, so the system uncertainty is accumulated until the closed-loop motion occurs). 
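Chen's closed-loop (loopback) detection, as quoted, saves earlier image key frames and decides the mower has returned to a prior place by matching feature points. A simplified sketch using set overlap of named features in place of real descriptor matching; the feature sets and the overlap threshold are assumptions:

```python
# Sketch of closed-loop detection: match a new frame's features against saved
# key frames. Feature sets and the Jaccard-overlap threshold are simplifying
# assumptions; real systems match descriptors with geometric verification.
def detect_loop_closure(new_features, key_frames, threshold=0.6):
    for index, saved in enumerate(key_frames):
        overlap = len(new_features & saved) / len(new_features | saved)
        if overlap >= threshold:
            return index  # the mower has passed this place before
    return None

key_frames = [{"oak", "fence", "shed"}, {"rose", "path", "gate"}]
revisit = detect_loop_closure({"oak", "fence", "shed", "hose"}, key_frames)
```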
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Chen which teaches wherein the determined position of the motorized wheeled chassis is confirmed using a loop closure algorithm; and wherein the loop closure algorithm determines whether the position of the motorized wheeled chassis coincides with a previously determined position of the motorized wheeled chassis with the system of Herrera, as modified by KIYOOKA, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the determined position of the motorized wheeled chassis is confirmed using a loop closure algorithm; and wherein the loop closure algorithm determines whether the position of the motorized wheeled chassis coincides with a previously determined position of the motorized wheeled chassis and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA.

Claim(s) 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Herrera, US 2021/0360853 A1, hereinafter referred to as Herrera, in view of KIYOOKA et al., US 2021/0076561 A1, hereinafter referred to as KIYOOKA, and further in view of HAHN et al., US 2019/0258267 A1, hereinafter referred to as HAHN, respectively.

As to claim 6, Herrera teaches a first rotating wheel configured to perform a mowing task using at least one cutting line or blade (see at least paragraph 118 regarding the autonomous gardener includes a frame or housing which supports the operating components of the gardener.
These operating components may include, but not limited to, at least one motor, such as an electric motor, which is arranged to drive the blades of the gardener so as to cut the grass of a lawn to which the gardener is mowing, Herrera), however, Herrera, as modified by KIYOOKA, does not explicitly teach a second rotating wheel configured to perform an edging task using at least one edging line or blade. However, such matter is taught by HAHN (see at least FIGS. 14-16 and paragraphs 261-268 regarding a mower body 102 having at least one motor arranged to drive a cutting blade 212b and to propel the mower body 102 on an operating surface via a wheel arrangement, wherein the mower body 102 includes a navigation system 204 arranged to assist a controller 202 to control the operation of the mower body 102 within a predefined operating area; wherein the mower body 102 further includes a cutter module 1500 arranged to trim the edges of the predefined operating area. The autonomous lawn mower 100 includes a cutter module 1500 comprising a perimeter cutter 1510 for trimming the edges of a predefined operating area, and a locking mechanism 1520 for engaging the cutter module 1500 with the mower body 102). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of HAHN which teaches a second rotating wheel configured to perform an edging task using at least one edging line or blade with the system of Herrera, as modified by KIYOOKA, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having a second rotating wheel configured to perform an edging task using at least one edging line or blade and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA. 
As to claim 7, Herrera teaches wherein at least one of the first rotating wheel or the second rotating wheel is configured to perform a weeding task (see at least paragraph 118 regarding the autonomous gardener includes a frame or housing which supports the operating components of the gardener. These operating components may include, but not limited to, at least one motor, such as an electric motor, which is arranged to drive the blades of the gardener so as to cut the grass of a lawn to which the gardener is mowing, Herrera).

Claim(s) 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Herrera, US 2021/0360853 A1, hereinafter referred to as Herrera, in view of KIYOOKA et al., US 2021/0076561 A1, hereinafter referred to as KIYOOKA, in view of HAHN et al., US 2019/0258267 A1, hereinafter referred to as HAHN, and further in view of PETTERSSON, US 2015/0163993 A1, hereinafter referred to as PETTERSSON, respectively.

As to claim 8, Herrera, as modified by KIYOOKA and HAHN, does not explicitly teach wherein the weeding task is performed by the at least one of the first rotating wheel or the second rotating wheel based on identification of a weed using the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera. However, such matter is taught by PETTERSSON (see at least paragraphs 36-37 regarding at least one state parameter is extracted from the set of image data and/or from the terrain data which represents an actual state of at least one designated terrain section, the state particularly relating to a state of at least one plant and/or of ground, the state parameter is compared to a predetermined threshold for the respective state and gardener information is derived based on the comparison of the predetermined threshold and the state parameter.
The state parameter particularly provides at least one terrain factor of a group of terrain factors, the group of terrain factors comprises at least the following factors: plant height, particularly grass length, plant growth, particularly of bush or hedge, humidity of the terrain, density of plants, planarity of the terrain and brightness or colour of the terrain. See also at least paragraph 79). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of PETTERSSON which teaches wherein the weeding task is performed by the at least one of the first rotating wheel or the second rotating wheel based on identification of a weed using the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera with the system of Herrera, as modified by KIYOOKA and HAHN, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the weeding task is performed by the at least one of the first rotating wheel or the second rotating wheel based on identification of a weed using the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA and HAHN. As to claim 9, Herrera, as modified by KIYOOKA and HAHN, does not explicitly teach wherein the identification of the weed is performed by object recognition processing controlled by the processor. 
However, such matter is taught by PETTERSSON (see at least paragraphs 36-37 regarding at least one state parameter is extracted from the set of image data and/or from the terrain data which represents an actual state of at least one designated terrain section, the state particularly relating to a state of at least one plant and/or of ground, the state parameter is compared to a predetermined threshold for the respective state and gardener information is derived based on the comparison of the predetermined threshold and the state parameter. The state parameter particularly provides at least one terrain factor of a group of terrain factors, the group of terrain factors comprises at least the following factors: plant height, particularly grass length, plant growth, particularly of bush or hedge, humidity of the terrain, density of plants, planarity of the terrain and brightness or colour of the terrain. See also at least paragraphs 79 and 113-116. See also at least Claims 1-12). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of PETTERSSON which teaches wherein the identification of the weed is performed by object recognition processing controlled by the processor with the system of Herrera, as modified by KIYOOKA and HAHN, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the identification of the weed is performed by object recognition processing controlled by the processor and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA and HAHN. As to claim 10, Herrera, as modified by KIYOOKA and HAHN, does not explicitly teach wherein the identification of the weed is confirmed by communication with a user device. 
However, such matter is taught by PETTERSSON (see at least paragraphs 36-44 regarding the gardener information is provided to a user of the gardening vehicle, particularly together with a related recommendation concerning a suggested treatment of the respective at least one designated terrain section, and/or the gardening-tool is applied based on the gardener information, the gardening-tool particularly being designed as cutting-tool, particularly hedge-cutter, tree-branch cutter, grass-cutter or scissors, as fertilising unit, as pesticide unit, as watering unit or as lawn thatcher. See also at least paragraphs 79 and 113-126 regarding the gardener information--as well as any other information produced by the lawnmower 1 or received by the lawnmower 1--may be transmitted to the user via a wireless communication link, e.g. via radio, Bluetooth, Wi-Fi or mobile phone communication standard (e.g. GSM). For instance, the information is received and processed by a smart phone. The parameter to be checked if a respective terrain section exceeds a defined threshold, provides at least one terrain factor of a group of terrain factors, the group comprising the following factors plant height, particularly grass length, plant growth, particularly of a bush or a hedge, humidity of the terrain, density of plants, e.g. of grass stalks, planarity of the terrain and brightness or colour of the terrain. According to a specific embodiment of the lawnmower 1 of the invention, the mower 1 comprises additional tools for treating terrain, ground and/or plants. Such tools may be applied depending on the gardener information and/or on a command of the user. See also at least Claims 1-13). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of PETTERSSON which teaches wherein the identification of the weed is confirmed by communication with a user device with the system of Herrera, as modified by KIYOOKA and HAHN, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the identification of the weed is confirmed by communication with a user device and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA and HAHN.

As to claim 11, Herrera teaches the user interface including display of the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera (see at least paragraph 121 regarding a user interface for the user to interact with the gardener directly and send it commands. The user interface touch screen 203 allows the user to configure and set up the gardener just as if they were using the mobile software program. See also at least FIGS. 6-7 and paragraphs 148-154 regarding: if a camera is included, the user can see what the gardener visualizes using the camera attached to the gardener through the Manual/CAM tab 608, Herrera), however, Herrera, as modified by KIYOOKA and HAHN, does not explicitly teach wherein the identification of the weed is confirmed using a user interface displayed on the user device.
However, such matter is taught by PETTERSSON (see at least paragraphs 36-44 regarding the gardener information is provided to a user of the gardening vehicle, particularly together with a related recommendation concerning a suggested treatment of the respective at least one designated terrain section, and/or the gardening-tool is applied based on the gardener information, the gardening-tool particularly being designed as cutting-tool, particularly hedge-cutter, tree-branch cutter, grass-cutter or scissors, as fertilising unit, as pesticide unit, as watering unit or as lawn thatcher. See also at least paragraphs 79 and 113-126 regarding the gardener information--as well as any other information produced by the lawnmower 1 or received by the lawnmower 1--may be transmitted to the user via a wireless communication link, e.g. via radio, Bluetooth, Wi-Fi or mobile phone communication standard (e.g. GSM). For instance, the information is received and processed by a smart phone. The parameter to be checked if a respective terrain section exceeds a defined threshold, provides at least one terrain factor of a group of terrain factors, the group comprising the following factors plant height, particularly grass length, plant growth, particularly of a bush or a hedge, humidity of the terrain, density of plants, e.g. of grass stalks, planarity of the terrain and brightness or colour of the terrain. According to a specific embodiment of the lawnmower 1 of the invention, the mower 1 comprises additional tools for treating terrain, ground and/or plants. Such tools may be applied depending on the gardener information and/or on a command of the user. See also at least Claims 1-13). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of PETTERSSON which teaches wherein the identification of the weed is confirmed using a user interface displayed on the user device with the system of Herrera, as modified by KIYOOKA and HAHN, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the identification of the weed is confirmed using a user interface displayed on the user device and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA and HAHN.

Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over Herrera, US 2021/0360853 A1, hereinafter referred to as Herrera, in view of KIYOOKA et al., US 2021/0076561 A1, hereinafter referred to as KIYOOKA, and further in view of Simpson, US 2024/0315165 A1, hereinafter referred to as Simpson, respectively.

As to claim 14, Herrera, as modified by KIYOOKA, does not explicitly teach wherein the path of the motorized wheeled chassis is corrected using outputs of a neural network, the neural network being configured to use the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera as one or more inputs. However, such matter is taught by Simpson (see at least paragraphs 49-50 regarding the lawn maintenance machine may include a number of cameras facing in a variety of directions such as, for example, forward, backwards, and downwards, and a number of sensors.
The number of cameras may capture static image data (e.g., still photographs of visible and/or hyperspectral light) and/or dynamic image data (e.g., videos of visible and/or hyperspectral light), referred to herein as “image data.” The predictive model service may use models trained using machine learning techniques, and may be configured to navigate the mower along a path without requiring manual user control. In an example, real-time environmental data from the cameras and/or other sensors may be provided to the predictive model service, and the predictive model service may use the environmental data as inputs to machine-learning-based models that make real-time decisions about navigation, obstacle avoidance, throttle, and/or steering of the lawn maintenance machine. See also at least paragraphs 109-111 regarding the predictive model service 304 may be developed using the training service 312, which may include any hardware, software, or other circuit or processer or combination thereof configured to execute any suitable pattern recognition or classification algorithm, probabilistic model, artificial intelligence method, and untrained or trained learning models (e.g., supervised or unsupervised learning, reinforcement learning, feature learning, anomaly detection, and association rules). These learning models may utilize a single or any suitable combination of various models such as artificial neural networks, decision trees, support vector networks, Bayesian networks, genetic algorithms, generative adversarial networks, or training programs such as federated learning. See also at least Claim 1 regarding receiving image data from a camera system attached to the unmanned lawn mower; providing the image data as an input to a predictive model service; receiving an output from the predictive model service, the output at least partially defining a second path within the area to be mowed). 
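Simpson's predictive model service, which takes camera image data as input and emits real-time navigation decisions, can be caricatured with a minimal stand-in; the brightness heuristic below is a hypothetical placeholder, not Simpson's trained machine-learning model:

```python
# Illustrative stand-in only: image data in, real-time steering/throttle
# decision out. A real predictive model service would be a trained
# network; this placeholder just steers toward the brighter image half.

def predictive_model(image: list) -> dict:
    """Toy decision rule over a grayscale image (list of pixel rows)."""
    mid = len(image[0]) // 2
    left = sum(v for row in image for v in row[:mid])
    right = sum(v for row in image for v in row[mid:])
    steering = 0.1 if right > left else (-0.1 if left > right else 0.0)
    return {"steering": steering, "throttle": 0.5}

def correct_path(heading: float, image: list) -> float:
    """Apply the model's steering output as a correction to the current heading."""
    return heading + predictive_model(image)["steering"]
```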
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Simpson which teaches wherein the path of the motorized wheeled chassis is corrected using outputs of a neural network, the neural network being configured to use the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera as one or more inputs with the system of Herrera, as modified by KIYOOKA, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the path of the motorized wheeled chassis is corrected using outputs of a neural network, the neural network being configured to use the one or more images obtained from the at least one of the downward-facing camera or the outward-facing camera as one or more inputs and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA.

Claim(s) 15 is rejected under 35 U.S.C. 103 as being unpatentable over Herrera, US 2021/0360853 A1, hereinafter referred to as Herrera, in view of KIYOOKA et al., US 2021/0076561 A1, hereinafter referred to as KIYOOKA, in view of Simpson, US 2024/0315165 A1, hereinafter referred to as Simpson, and further in view of DALFRA, WO 2021139397 A1, hereinafter referred to as DALFRA, respectively.

As to claim 15, Herrera, as modified by KIYOOKA and Simpson, does not explicitly teach wherein the outputs of the neural network include an angular value indicating a degree of misalignment of the motorized wheeled chassis with respect to a boundary, and a scalar value indicating an amount of lateral offset of the motorized wheeled chassis with respect to the boundary.
However, such matter is taught by DALFRA (see at least paragraphs 46-58 regarding processing a digital image based on a trained neural network mainly includes performing image segmentation on the digital image to obtain an image to be analyzed. A set of points at the boundary between two classes may be approximated by a straight line, such as by linear regression, where such a line is characterized by an offset and an angular coefficient. If the self-moving device 1 is perfectly aligned with the boundary line L, such a straight line will be practically vertical (zero angular coefficient in the chosen coordinate system of the image). On the contrary, if the self-moving device 1 is misaligned with respect to the forementioned limit line L, such a straight line will be tilted (positive or negative angular coefficient in the chosen coordinate system of the image). According to the selected coordinate system, the offset of the straight line represents the degree of deviation of the mobile device 1 from the boundary 3. Based on the installation position and installation angle of the camera 151, the preset conditions may include the relative position of the boundary line between the working surface and the non-working surface in the image to be analyzed, specifically including the offset and angle relationship). 
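DALFRA's formulation, approximating the class-boundary points by a straight line via linear regression and reading off the angular coefficient (misalignment) and the offset (lateral deviation), corresponds to a short least-squares sketch; the coordinate convention and function name are assumptions for illustration:

```python
# Illustrative sketch only: least-squares fit of segmented boundary points
# to a straight line x = slope * y + offset in image coordinates. A slope
# near zero means the chassis is aligned with the boundary (the line is
# vertical in the image); the offset gives the line's lateral position.

def fit_boundary_line(points: list) -> tuple:
    """Return (slope, offset) of the regression line through (x, y) points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    den = sum((y - mean_y) ** 2 for _, y in points)
    if den == 0.0:          # degenerate: all points in a single image row
        return 0.0, mean_x
    num = sum((y - mean_y) * (x - mean_x) for x, y in points)
    slope = num / den
    offset = mean_x - slope * mean_y
    return slope, offset
```

A perfectly vertical set of boundary points yields slope 0 (aligned); a tilted line yields a positive or negative slope (misaligned), matching DALFRA's description.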
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of DALFRA which teaches wherein the outputs of the neural network include an angular value indicating a degree of misalignment of the motorized wheeled chassis with respect to a boundary, and a scalar value indicating an amount of lateral offset of the motorized wheeled chassis with respect to the boundary with the system of Herrera, as modified by KIYOOKA and Simpson, as both systems are directed to a system and method for controlling the autonomous vehicle within the boundary based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the outputs of the neural network include an angular value indicating a degree of misalignment of the motorized wheeled chassis with respect to a boundary, and a scalar value indicating an amount of lateral offset of the motorized wheeled chassis with respect to the boundary and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA and Simpson.

Claim(s) 16 is rejected under 35 U.S.C. 103 as being unpatentable over Herrera, US 2021/0360853 A1, hereinafter referred to as Herrera, in view of KIYOOKA et al., US 2021/0076561 A1, hereinafter referred to as KIYOOKA, in view of Simpson, US 2024/0315165 A1, hereinafter referred to as Simpson, in view of DALFRA, WO 2021139397 A1, hereinafter referred to as DALFRA, in view of Ebrahimi Afrouzi et al., US 2022/0026920 A1, hereinafter referred to as Ebrahimi Afrouzi, and further in view of NIU et al., US 2014/0175063 A1, hereinafter referred to as NIU, respectively.

As to claim 16, Herrera, as modified by KIYOOKA, Simpson, and DALFRA, does not explicitly teach wherein the outputs of the neural network further include a value indicating whether a corner is detected, a scalar value indicating a distance to the detected corner.
However, such matter is taught by Ebrahimi Afrouzi (see at least paragraphs 880-882 regarding the neural network 12800 receives input 12801 and determines probabilities of a distance of the robot from an object, wherein the distance measurement is most likely to be 10 cm. In embodiments, having multiple sources of information help increase resolution. See at least FIG. 14 and paragraphs 909-916 regarding an example of a corner 14400 that may be detected by a processor of a robot based on sensor data and used to localize the robot. For instance, a camera positioned on the robot 14401 captures a first image 14402 of the environment and detects a corner 14403 at a first time point t.sub.0. At a second time point t.sub.1, the camera captures a second image 14404 and detects a new position of the corner 14402. The difference in position 14405 between the position of corner 14402 in the first image 14403 and the second image 14404 may be used in determining an amount of movement of the robot and localization. See also at least paragraph 1062 regarding determining and track distances to corners, light spots, edges, etc.). 
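Ebrahimi Afrouzi's corner-based localization, comparing where the same corner appears in images captured at the first and second time points to estimate the robot's movement, amounts to the following sketch; the metres-per-pixel conversion factor is a hypothetical assumption:

```python
# Illustrative sketch only: the shift of the same detected corner between
# images captured at t0 and t1 yields a movement estimate used for
# localization. The metric conversion factor below is hypothetical.

def corner_displacement(corner_t0: tuple, corner_t1: tuple) -> tuple:
    """Pixel displacement of one corner between two consecutive frames."""
    return (corner_t1[0] - corner_t0[0], corner_t1[1] - corner_t0[1])

def estimate_motion(displacement: tuple, metres_per_pixel: float) -> tuple:
    """Apparent image motion is opposite in sign to the robot's own motion."""
    return (-displacement[0] * metres_per_pixel,
            -displacement[1] * metres_per_pixel)
```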
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Ebrahimi Afrouzi which teaches wherein the outputs of the neural network further include a value indicating whether a corner is detected, a scalar value indicating a distance to the detected corner with the system of Herrera, as modified by KIYOOKA, Simpson, and DALFRA, as both systems are directed to a system and method for controlling a navigation of the autonomous vehicle based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the outputs of the neural network further include a value indicating whether a corner is detected, a scalar value indicating a distance to the detected corner and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA, Simpson, and DALFRA. Herrera, as modified by KIYOOKA, Simpson, DALFRA, and Ebrahimi Afrouzi, does not explicitly teach a scalar value indicating an angle of the detected corner. However, such matter is taught by NIU (see at least paragraph 21 regarding a corner angle acquiring unit that calculates a corner angle of the corner portion. See also at least paragraphs 70-71. See also at least Claim 1 regarding a corner angle acquiring unit that calculates a corner angle of the corner portion). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of NIU which teaches a scalar value indicating an angle of the detected corner with the system of Herrera, as modified by KIYOOKA, Simpson, DALFRA, and Ebrahimi Afrouzi, as both systems are directed to a system and method for controlling a navigation of the vehicle automatically based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having a scalar value indicating an angle of the detected corner and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA, Simpson, DALFRA, and Ebrahimi Afrouzi.

Claim(s) 17 is rejected under 35 U.S.C. 103 as being unpatentable over Herrera, US 2021/0360853 A1, hereinafter referred to as Herrera, in view of KIYOOKA et al., US 2021/0076561 A1, hereinafter referred to as KIYOOKA, in view of Simpson, US 2024/0315165 A1, hereinafter referred to as Simpson, and further in view of Anderson, US 2011/0153172 A1, hereinafter referred to as Anderson, respectively.

As to claim 17, Herrera, as modified by KIYOOKA, does not explicitly teach wherein the neural network is configured to use a plurality of images obtained from the at least one downward-facing camera as inputs. However, such matter is taught by Simpson (see at least paragraphs 16-18 regarding the camera system includes a forward-facing camera configured to capture images in a forward direction of travel of the unmanned lawn mower and a downward-facing camera configured to capture images of turf in front of the unmanned lawn mower. The forward-facing camera and the downward-facing camera may provide the image data to the predictive model service. See also at least paragraphs 49-50.
See also at least paragraphs 109-111 regarding the predictive model service 304 may be developed using the training service 312, which may include any hardware, software, or other circuit or processer or combination thereof configured to execute any suitable pattern recognition or classification algorithm, probabilistic model, artificial intelligence method, and untrained or trained learning models (e.g., supervised or unsupervised learning, reinforcement learning, feature learning, anomaly detection, and association rules). These learning models may utilize a single or any suitable combination of various models such as artificial neural networks, decision trees, support vector networks, Bayesian networks, genetic algorithms, generative adversarial networks, or training programs such as federated learning. See also at least Claim 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Simpson which teaches wherein the neural network is configured to use a plurality of images obtained from the at least one downward-facing camera as inputs with the system of Herrera, as modified by KIYOOKA, as both systems are directed to a system and method for controlling a navigation and mowing action of the mower based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the neural network is configured to use a plurality of images obtained from the at least one downward-facing camera as inputs and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA. Herrera, as modified by KIYOOKA and Simpson, does not explicitly teach wherein the plurality of images obtained from the at least one downward-facing camera include portions of a boundary where an edging task has been performed at the one or more previous positions. 
However, such matter is taught by Anderson (see at least paragraphs 48-52 regarding designated area 118 is an area marked with a designator. In this illustrative embodiment, designated area 118 surrounds flower bed 110. Designated area 118 is an area where edging material is to be installed. The designator for designated area 118 marks designated area 118 for installation of edging material. For example, designated area 118 may be a path marked with spray paint. Area management vehicle 108 detects designated area 118 while traveling through lawn 132 using image information from a camera system. Area management vehicle 108 operates a digging system in designated area 118. In one illustrative embodiment, area management vehicle 108 operates the digging system to dig designated area to a predetermined depth. Area management vehicle 108 may move along designated area 118 while operating the digging system. Area management vehicle 108 may operate the digging system for a particular period of time or until area management vehicle 108 detects that designated area 118 no longer contains the designator using image information obtained from the camera system. For example, area management vehicle 108 may move along designated area 118, and operate the digging system within designated area 118 until the spray paint is no longer detectable using the image information). 
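Anderson's designator-driven operation, running the digging/edging tool along the marked path until the spray-paint designator is no longer detectable in the camera images, can be sketched as follows; the paint-fraction threshold is a hypothetical value:

```python
# Illustrative sketch only: keep operating the digging/edging tool while
# the spray-paint designator remains detectable in the camera images.
# The pixel-fraction threshold below is a hypothetical value.

PAINT_THRESHOLD = 0.05  # assumed minimum fraction of paint-coloured pixels

def designator_present(paint_pixel_fraction: float) -> bool:
    """Is the designator still detectable in the current image?"""
    return paint_pixel_fraction >= PAINT_THRESHOLD

def edge_along_designator(frame_paint_fractions: list) -> int:
    """Return how many frames the tool ran before the designator vanished."""
    steps = 0
    for fraction in frame_paint_fractions:
        if not designator_present(fraction):
            break  # paint no longer detectable: stop the tool
        steps += 1
    return steps
```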
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Anderson which teaches wherein the plurality of images obtained from the at least one downward-facing camera include portions of a boundary where an edging task has been performed at the one or more previous positions with the system of Herrera, as modified by KIYOOKA and Simpson, as both systems are directed to a system and method for controlling the autonomous vehicle within the boundary based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the plurality of images obtained from the at least one downward-facing camera include portions of a boundary where an edging task has been performed at the one or more previous positions and would have predictably applied it to improve the system of Herrera as modified by KIYOOKA and Simpson.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

HORST et al. (US 20180052471 A1) regarding a system for navigation of at least one self-driving floor treatment device.

FLEER et al. (DE 102012112036 A1) regarding a system for navigating in a self-propelled soil cultivation device.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE S. PARK, whose telephone number is (571) 272-3151. The examiner can normally be reached Mon-Thurs, 9:00 AM-5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anne M. ANTONUCCI, can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.S.P./
Examiner, Art Unit 3666

/ANNE MARIE ANTONUCCI/
Supervisory Patent Examiner, Art Unit 3666
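The reply-period rules quoted in the action reduce to simple calendar-month arithmetic: the shortened statutory period runs three months from the mailing date, and no extension may push the reply past six months (the absolute bar of 35 U.S.C. 133). A minimal sketch, using the Mar 10, 2026 mailing date shown in the timeline below (the function name is illustrative, not from the action):

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Advance a date by whole calendar months, clamping to month end
    (e.g. Nov 30 + 3 months lands on the last day of February)."""
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

mailed = date(2026, 3, 10)                 # mailing date of the final action
shortened_period = add_months(mailed, 3)   # reply due without extension fees
statutory_cutoff = add_months(mailed, 6)   # absolute six-month bar

print(shortened_period)  # 2026-06-10
print(statutory_cutoff)  # 2026-09-10
```

Note the two-month wrinkle in the action itself: if the first reply is filed within two months and the advisory action issues after the three-month period, extension fees are measured from the advisory action's mailing date instead, though the six-month cutoff never moves.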

Prosecution Timeline

Apr 06, 2023: Application Filed
Aug 21, 2025: Non-Final Rejection (§103)
Nov 26, 2025: Response Filed
Mar 10, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600384: MODEL HYPERPARAMETER ADJUSTMENT USING VEHICLE DRIVING CONTEXT CLASSIFICATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596367: METHOD FOR THE SEMI-AUTOMATED GUIDANCE OF A MOTOR VEHICLE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594886: Vehicle and Control Method Thereof (granted Apr 07, 2026; 2y 5m to grant)
Patent 12576874: DRIVER SCORING SYSTEM AND METHOD USING OPTIMUM PATH DEVIATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12565194: PARKING ASSISTANCE APPARATUS AND PARKING ASSISTANCE METHOD (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 97% (+31.6%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
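One plausible reading of how the "With Interview" figure is derived from the figures shown on this page (this is an assumption about the dashboard's arithmetic, not a documented formula): add the interview lift to the career allow rate in percentage points, cap at 100, and truncate for display.

```python
def projected_grant_probability(base_rate: float, lift_points: float) -> int:
    """Combine a career allow rate with an interview lift, both expressed
    in percentage points; cap at 100 and truncate to a whole percent."""
    return min(100, int(base_rate + lift_points))

# Figures shown on this page: 66% career allow rate, +31.6 pt interview lift.
print(projected_grant_probability(66.0, 31.6))  # 97
```

Under this reading, 66 + 31.6 = 97.6, which truncates to the displayed 97%.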
