Prosecution Insights
Last updated: April 19, 2026
Application No. 18/318,714

AUTOMATED MOVING VEHICLE AND CONTROL METHOD THEREOF

Non-Final OA (§103)
Filed: May 16, 2023
Examiner: CHANDRASIRI, UPUL PRIYADARSHAN
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Coretronic Intelligent Robotics Corporation
OA Round: 3 (Non-Final)

Grant Probability: 20% (At Risk)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: -9%

Examiner Intelligence

Career Allow Rate: 20% (2 granted / 10 resolved; -32.0% vs TC avg)
Interview Lift: -28.6% (minimal; resolved cases with interview)
Avg Prosecution: 2y 5m (typical timeline); 36 currently pending
Total Applications: 46 (career history, across all art units)

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 18.9% (-21.1% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)

Tech Center averages are estimates; based on career data from 10 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/23/2026 has been entered.

Response to Amendment

The amendment filed 01/23/2026 has been entered. Claims 1, 10, and 11 are amended. Claims 4 and 6 are canceled. Claims 1-3, 5, and 7-11 are pending and are rejected as detailed below.

35 U.S.C. 112(b) Claim Rejections

The amendment to claim 10 is entered. Applicant's arguments with respect to the rejection of claim 10 under 35 U.S.C. 112(b) have been fully considered and are persuasive. Therefore, the rejection of claim 10 has been withdrawn.

Response to Arguments: Claim Rejections under 35 U.S.C. §103

Applicant argues that the rationale of the Office constitutes hindsight bias: Kawashima did not disclose that the travelable region RE could be updated, while Vihonen merely disclosed a generic spatial map without providing any motivation to create an entirely new, dedicated "static point cloud" within Kawashima's framework for classifying temporary obstacles, followed by its subsequent updating. Furthermore, Applicant argues, Kawashima's technique involves recalculating to generate a new obstacle-avoidance path, which differs entirely in purpose and technical means from the technical feature "updating the static point cloud according to the current scan point in response to the shortest distance being less than a first threshold" recited in claim 1 of the present application.
Applicant's arguments with respect to hindsight bias have been fully considered but are not persuasive. More specifically, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to update the travelable region RE when necessary, similar to the initial generation of the travelable region RE. Furthermore, Kawashima's technique of recalculating to generate a new obstacle-avoidance path is similar to the vehicle moving along a path to avoid objects based on the current scan point of the claimed invention. Furthermore, the combination of Kawashima and Vihonen is obvious because the updating of the spatial map is able to save time during the operation process of the vehicle (Vihonen; 0017).

Applicant also argues that none of the cited prior art discloses steps for explicitly classifying a real-time scan point as either "dynamic" or "static" based on its distance from a temporary static object map, and that the technique of the claimed invention includes explicit steps for classifying obstacles, whereas Kawashima merely identifies a single type of "obstacle (P)". Therefore, per Applicant, Kawashima, Wong, Vihonen, and Kabushiki do not disclose the feature "classifying the current scan point as a static obstacle in response to the shortest distance being less than a first threshold and updating the static point cloud according to the current scan point classified as the static obstacle; classifying the current scan point as a dynamic obstacle in response to the shortest distance being greater than or equal to the first threshold, and calculating a first distance between the current scan point classified as the dynamic obstacle" as claimed in the amended independent claim 1.
For at least the rationales set forth above, Applicant concludes, Kawashima, Wong, Vihonen, and Kabushiki do not disclose, teach, or reasonably suggest each and every feature claimed in the amended independent claim 1; even with all the prior art of record taken into consideration, alone or in combination, each and every feature of independent claim 1 is not disclosed, taught, or reasonably suggested. As such, independent claim 1, as amended, should stand novel and non-obvious over the prior art of record and be patentable. Applicant's arguments with respect to the rejections of claim 1, as amended, under 35 U.S.C. §103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection for claim 1 under 35 U.S.C. §103 is made in view of the previously applied references Kawashima, Wong, and Kabushiki, and further in view of the newly found reference "Mapless Online Detection of Dynamic Objects in 3D Lidar," 2019 16th Conference on Computer and Robot Vision (CRV), Kingston, QC, Canada, 2019, pp. 113-120. In particular, the amendments to claim 1 are addressed in the instant office action.

Applicant also argues that, because dependent claims 2-3, 5, and 7-10 depend on independent claim 1, these dependent claims also stand non-obvious and novel over the art of record as a matter of law. Applicant's arguments with respect to the rejections of claims 2-3, 5, and 7-10 under 35 U.S.C. §103 have been fully considered but are not persuasive, because claim 1 is rejected based on the previously applied references Kawashima, Wong, and Kabushiki, and further in view of the newly found reference "Mapless Online Detection of Dynamic Objects in 3D Lidar" (CRV 2019, pp. 113-120).
In particular, claims 2-3, 5, and 7-10 are addressed in the instant office action. Applicant also argues that independent claim 11 recites features corresponding to the features claimed in the amended independent claim 1; for at least the same rationales set forth for independent claim 1, the prior art of record, taken alone or in combination, does not disclose, teach, or reasonably suggest each and every feature claimed in independent claim 11 either, and as such, independent claim 11 should also stand novel and non-obvious over the prior art of record and be patentable. Applicant's arguments with respect to the rejections of claim 11, as amended, under 35 U.S.C. §103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection for claim 11 under 35 U.S.C. §103 is made in view of the previously applied references Kawashima, Wong, and Kabushiki, and further in view of the newly found reference "Mapless Online Detection of Dynamic Objects in 3D Lidar" (CRV 2019, pp. 113-120). In particular, the amendments to claim 11 are addressed in the instant office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1-3 and 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over Kawashima (US 20220289538 A1), and further in view of Wong (US 20140074342 A1), Kabushiki (US 20220411246 A1) and Yoon et al (“Mapless Online Detection of Dynamic Objects in 3D Lidar,” 2019 16th Conference on Computer and Robot Vision (CRV), Kingston, QC, Canada, 2019, pp. 113-120; hereinafter “Yoon”). Regarding claim 1, Kawashima teaches, (Currently amended) An automated moving vehicle (Kawashima, at least one para. 0035; “FIG. 2 is a schematic view of a configuration of the movable body. The movable body 10 is a device capable of moving automatically. In the present embodiment, the movable body 10 is a forklift, and, in particular, a so-called automated guided forklift (AGF).”), comprising: a housing (Kawashima, at least one para. 0035; “a vehicle body 20”); a sensor, disposed on the housing (Kawashima, at least one para. 
0036; “the sensors 26 are provided on the mast 22 and four corners of the vehicle body 20”); a driving device, disposed in the housing (Kawashima, at least one para. 0035; “The vehicle body 20 has wheels 20A.”); and a processor, disposed in the housing and configured to couple the sensor and the driving device (Kawashima, at least one para. 0043; “The control device 28 controls the movable body 10. The control device 28 sets the avoidance path R2 based on the detection result by the sensor 26 of the movable body 10, and causes the movable body 10 to move along the avoidance path R2.”), wherein the processor is configured to execute: obtaining a static point cloud (Kawashima, at least one para. 0080; “In the present embodiment, a travelable region RE that is a region in which the movable body 10 can move is set in advance, and the movement path R1 is set such that the movable body 10 moving along the movement path R1 is located within the range of the travelable region RE.”) and a current point cloud (Kawashima, at least one para. 0053; “the detection control unit 74 keeps the sensor 26 performing detection on the travel direction side so as to cause the sensor 26 to detect a region including the front surface Pa of the obstacle P.”) in a work space through the sensor, wherein the current point cloud comprises a current scan point (Kawashima, at least one para. 0072; “In other words, the obstacle information acquisition unit 76 determines the inclination of the line segment Na1 as the attitude of the front surface Pa, and determines the end point PS1a and the end point PS2a as the end point on one side and the end point on the other side of the front surface Pa in the width direction, respectively.”, wherein PS2a is the current scan point); obtaining a preset point cloud corresponding to the work space (Kawashima, at least one para.
0080; “In the present embodiment, a travelable region RE that is a region in which the movable body 10 can move is set in advance”); calculating a shortest distance between the current scan point and the static point cloud (Kawashima, at least one para. 0100 and FIG. 19; “The avoidance path information acquisition unit 78 calculates a distance D1 between the first endpoint and a boundary line of the travelable region RE on a side in a direction heading from the second end point to the first end point”) in response to the current scan point not overlapping with the preset point cloud (Kawashima, at least one para. 0080; “the avoidance path information acquisition unit 78 sets the avoidance path R2 such that the movable body 10 moves within the range of the travelable region RE (meaning that the all the scan points are within the RE)”); (Kawashima, at least one para. 0104; “FIG. 20 is a schematic view in the case where the distance D1 is less than a threshold value. When the distance D1 is less than a threshold value, the avoidance path information acquisition unit 78 sets the estimated position of the rear end point on the second end point side such that a length D3 from the second end point to the rear end point on the second end point side is a second predetermined value.”) (Kawashima, at least one para. 0100; “determines whether the distance D1 is equal to or greater than a predetermined threshold value.”), and calculating a first distance between the current scan point classified as the dynamic obstacle and the automated moving vehicle (Kawashima, at least one para. 0058; “In calculating an integrated score value, the detection control unit 74 uses a vertical distance between each measuring point M constituting a point cloud M0 and a straight line candidate.”); and controlling a driving device to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold. 
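The classification and stop logic recited in amended claim 1 (shortest distance from the current scan point to the static point cloud versus a first threshold; distance from a dynamic point to the vehicle versus a second threshold) can be illustrated with a minimal sketch. All function names, data layouts, and threshold values below are hypothetical illustrations, not taken from the application or the cited references:

```python
import math

def process_scan(current_points, static_cloud, preset_cloud, vehicle_pos,
                 first_threshold=0.3, second_threshold=1.5):
    """Illustrative sketch of the claimed classification/control loop.

    current_points: list of (x, y) scan points from the sensor.
    static_cloud / preset_cloud: lists of (x, y) map points.
    Returns the updated static cloud and whether the vehicle should stop.
    """
    static_cloud = list(static_cloud)
    stop = False
    for p in current_points:
        # Points already accounted for by the preset (a-priori) map are skipped.
        if any(math.dist(p, q) < 1e-6 for q in preset_cloud):
            continue
        # Shortest distance from the current scan point to the static point cloud.
        shortest = min(math.dist(p, q) for q in static_cloud)
        if shortest < first_threshold:
            # Static obstacle: fold the point into the static point cloud.
            static_cloud.append(p)
        else:
            # Dynamic obstacle: compute the "first distance" to the vehicle.
            first_distance = math.dist(p, vehicle_pos)
            if first_distance < second_threshold:
                stop = True  # command the driving device to halt
    return static_cloud, stop
```

On this sketch, a scan point near the existing static map is absorbed into the map, while a far point is treated as dynamic and triggers a stop command only once it comes within the second threshold of the vehicle.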
Even though the static point cloud and the preset point cloud are inherent within the teaching of Kawashima, Kawashima does not explicitly mention the terms "the static point cloud" and "the preset point cloud", nor does it explicitly recite: classifying the current scan point as a static obstacle in response to the shortest distance being less than a first threshold; updating the static point cloud according to the current scan point classified as the static obstacle; classifying the current scan point as a dynamic obstacle in response to the shortest distance being greater than or equal to the first threshold; and controlling a driving device to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold. However, Wong in the same field of endeavor (Wong, at least one para. 0003; "Embodiments of the present invention generally relate to industrial vehicle navigation systems and, more particularly, to a method and apparatus for using pre-positioned objects to localize an industrial vehicle.") teaches "the static point cloud" (Wong, at least one para. 0061; "dynamic features 422 (features that may change on the map, such as features created by placing pallets or pre-positioned objects on the map, and the like).") and "the preset point cloud" (Wong, at least one para.
0061; "The map module 404 may include various data, such as static features 424 (such as features that do not change on the map, such as features created by walls and fixed racking, and the like)"). Kawashima and Wong are both considered to be analogous to the claimed invention because both are in the same field of automated vehicles and control thereof within a designated area, as is the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the collected sensor data of Kawashima with the teaching of Wong to differentiate the static point cloud and the preset point cloud. One of ordinary skill in the art would have been motivated to make this modification so that the industrial vehicle can be provided with accurate localization and map data with respect to dynamically placed and pre-positioned object landmarks (Wong; 0061).
The combination of Kawashima and Wong does not explicitly teach: classifying the current scan point as a static obstacle in response to the shortest distance being less than a first threshold; updating the static point cloud according to the current scan point classified as the static obstacle; classifying the current scan point as a dynamic obstacle in response to the shortest distance being greater than or equal to the first threshold; and controlling a driving device to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold. However, Yoon in the same field of endeavor (Yoon, p. 113, column 1, para. 2; "An autonomous system must be aware of dynamic elements in its environment. Our focus is on detection using lidar (light detection and ranging), specifically spinning lidars, which operate by sweeping multiple lasers about an axis for a 360◦ field of view (FOV).") teaches classifying the current scan point as a static obstacle in response to the shortest distance being less than a first threshold, updating the static point cloud according to the current scan point classified as the static obstacle, and classifying the current scan point as a dynamic obstacle in response to the shortest distance being greater than or equal to the first threshold (Yoon, p. 116, column 1, para. 2; "We compute the error metric for all query points…Those greater than [an] error threshold are labelled dynamic, the rest are static.").
The combination of Kawashima, Wong, and Yoon is considered to be analogous to the claimed invention because all of them are in the same field of automated vehicles and control thereof within a designated area, as is the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the shortest distance of Kawashima with the teaching of Yoon to classify the static obstacles and the dynamic obstacles. One of ordinary skill in the art would have been motivated to make this modification so that updating of the static map is able to filter out points that are not part of the static environment (a moving person) or that have become part of the environment (a pre-positioned pallet). The combination of Kawashima, Wong, and Yoon does not explicitly teach controlling a driving device to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold. However, Kabushiki in the same field of endeavor (Kabushiki, at least one para. 0006; "An industrial vehicle for solving the above-described problem includes: a drive device; a drive controller configured to control the drive device; and a main controller configured to transmit a command to the drive controller, the drive controller controlling the drive device in response to the command of the main controller to cause the industrial vehicle to travel.") teaches controlling a driving device to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold (Kabushiki, at least one para. 0085; "Whether or not the condition A3 is satisfied can be determined by the vehicle speed calculated by the main controller 31. The main controller 31 determines that the forklift truck 10 stops when the vehicle speed is equal to or less than a stop determination threshold [km/h].").
The combination of Kawashima, Wong, Yoon, and Kabushiki is considered to be analogous to the claimed invention because all of them are in the same field of automated vehicles and control thereof within a designated area, as is the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the movement control of the automated vehicle of Kabushiki with the calculation of the first distance of Kawashima. One of ordinary skill in the art would have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results.

Regarding claim 2, Kawashima teaches, (Original) The automated moving vehicle according to claim 1, wherein the processor is further configured to execute: updating a moving path of the automated moving vehicle according to the current scan point in response to the shortest distance being less than the first threshold (Kawashima, at least one para. 0106 and FIG. 20; "When the distance D1 between the end point PS2a and the boundary line RE1 is less than the threshold value as in FIG. 20, there is a possibility that the movable body 10 cannot pass between the end point PS2a and the boundary line RE1. Meanwhile, when the distance D1 is less than the threshold value, the distance D2 from the rear end point PS4a to the boundary line RE2 is set to a value equal to or greater than the threshold value. Accordingly, the avoidance path information acquisition unit 78 generates the avoidance path R2 using the direction Y2 as the first direction (the avoidance direction), instead of generating the avoidance path R2 using the direction Y1").
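The path-updating feature of claim 2, refined in dependent claim 3 by a minimum clearance (the "third threshold"), can be sketched as a waypoint adjustment that pushes the moving path away from a newly classified static obstacle. The geometry and names below are hypothetical, offered only to illustrate the claimed relationship between the path, the scan point, and the clearance threshold:

```python
import math

def enforce_clearance(path, obstacle, third_threshold):
    """Hypothetical path update: push each waypoint radially away from a
    static obstacle until its distance is at least third_threshold."""
    new_path = []
    for (x, y) in path:
        d = math.dist((x, y), obstacle)
        if 0 < d < third_threshold:
            # Scale the waypoint's offset from the obstacle out to the threshold.
            scale = third_threshold / d
            ox, oy = obstacle
            x, y = ox + (x - ox) * scale, oy + (y - oy) * scale
        new_path.append((x, y))
    return new_path
```

Waypoints already at or beyond the threshold are left untouched, so only the portion of the path near the obstacle is regenerated.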
Regarding claim 3, Kabushiki teaches, (Original) The automated moving vehicle according to claim 2, wherein the processor is further configured to execute: updating the moving path such that a second distance between the moving path and the current scan point is greater than or equal to a third threshold (Kabushiki, at least one para. 0175; "As with the travel limitation control, the main controller 31 may keep the trajectory derivation threshold YT when detecting a person being present in the vehicle speed limitation area.").

Regarding claim 7, Kawashima teaches, (Original) The automated moving vehicle according to claim 6, wherein the processor is further configured to execute: determining a moving path of the automated moving vehicle according to the preset point cloud (Kawashima, at least one para. 0080; "the movement path R1 is set such that the movable body 10 moving along the movement path R1 is located within the range of the travelable region RE", wherein path R1 is based on the preset point cloud of the travelable region RE) and the static point cloud (Kawashima, at least one para. 0080; "the avoidance path information acquisition unit 78 sets the avoidance path R2 such that the movable body 10 moves within the range of the travelable region RE", wherein path R2 is based on the static point cloud).

Regarding claim 8, Kawashima teaches, (Original) The automated moving vehicle according to claim 1, wherein the processor is further configured to execute: controlling the driving device to move the automated moving vehicle (Kawashima, at least one para. 0038; "The control device 28 controls movement of the movable body 10.") along a moving path before obtaining the current scan point (Kawashima, at least one para.
0034; "In the present embodiment, the movable body 10 moves along a movement path R1"); and controlling the driving device to move the automated moving vehicle along the same movement path in response to the shortest distance being greater than or equal to the first threshold (Kawashima, FIG. 1; as shown below, the vehicle 10 inherently moves along the same movement path in response to the shortest distance being greater than the first threshold under the vehicle 10's normal and usual operation. FIG. 1 of Kawashima is modified to show the abovementioned inherency: if the vehicle 10 were to take the alternative path (R3, shown with a red arrow), the vehicle 10 would be able to move along path R3 without changing direction, since the shortest distance is greater than the first threshold and provides more than enough space for the vehicle 10 to pass freely between P and RE1).

[Annotated FIG. 1 of Kawashima]

Regarding claim 9, Kabushiki teaches, (Original) The automated moving vehicle according to claim 1, further comprising: an inertial measurement unit, coupled to the processor and measuring an acceleration of the automated moving vehicle (Kabushiki, at least one para. 0042; "The accelerator sensor 34 detects an operation amount of the accelerator 16, that is, an accelerator opening degree. The accelerator sensor 34 outputs an electric signal depending on the accelerator opening degree to the main controller 31. The main controller 31 can recognize the accelerator opening degree based on the electric signal from the accelerator sensor 34."), wherein the processor determines the second threshold based on the acceleration (Kabushiki, at least one para. 0076; "the main controller 31 calculates a target vehicle speed from an accelerator opening degree detected by the accelerator sensor 34.").
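Claim 9 ties the second (stop) threshold to motion measured by the inertial measurement unit. One plausible reading, sketched below with a hypothetical braking-distance formula that is not taken from the application, is that the stop threshold scales with how far the vehicle needs to brake:

```python
def second_threshold_from_motion(speed, deceleration, margin=0.5):
    """Hypothetical second threshold: worst-case braking distance v^2 / (2a)
    plus a safety margin, with deceleration derived from IMU readings.

    speed: current speed [m/s]; deceleration: achievable braking rate [m/s^2].
    """
    if deceleration <= 0:
        raise ValueError("deceleration must be positive")
    braking_distance = speed ** 2 / (2.0 * deceleration)
    return braking_distance + margin
```

A faster-moving (or more weakly braking) vehicle would then stop for dynamic obstacles at a greater first distance, which is one way the claimed dependence on acceleration could behave.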
Regarding claim 10, Kawashima teaches, (Currently amended) The automated moving vehicle according to claim 1, wherein the current point cloud comprises multiple current scan points (Kawashima, at least one para. 0053; “The detection control unit 74 acquires a point cloud M0 which is a set of measuring points M, based on the detection result of the reflected light received by the sensor 26.”), the processor is further configured to execute: dividing the multiple current scan points in the current point cloud into a plurality of clusters, wherein the clusters comprise a first cluster (Kawashima, FIG. 6 and FIG. 14; FIG. 6 shows the first cluster with respect to Pa and FIG. 14 shows the second cluster with respect to Pb.), wherein a distance between any current scan point of the first cluster and the static point cloud is greater than or equal to the first threshold (Kawashima, at least one para. 0100; “determines whether the distance D1 is equal to or greater than a predetermined threshold value.”); executing a clustering algorithm on the clusters to update the clusters, and determining whether the current scan point is located in the first cluster that is updated (Kawashima, at least one para. 0055; “the detection control unit 74 may be configured to acquire, as a straight line candidate, a straight line connecting two measuring points M selected by the random sample consensus (RANSAC) algorithm from the measuring points M acquired by the single detection by the sensor 26.”); and calculating the first distance in response to the current scan point being located in the first cluster that is updated (Kawashima, at least one para. 0059; “FIG. 8 is a graph showing an example of a score. 
The horizontal axis of the graph corresponds to the X direction (depth) and indicates the distance between a measuring point M and a straight line candidate, meaning more to the right side in the horizontal axis corresponds to being more to the far side (the side far from the movable body 10 at the time of detection).", in other words, distance in the depth direction is measured based on the measuring point M of the first cluster).

Regarding claim 11, Kawashima teaches, (Currently amended) A control method of an automated moving vehicle, comprising (Kawashima, at least one para. 0002; "The disclosure relates to a method of controlling a movable body, a movable body and a program."): obtaining a static point cloud (Kawashima, at least one para. 0080; "In the present embodiment, a travelable region RE that is a region in which the movable body 10 can move is set in advance, and the movement path R1 is set such that the movable body 10 moving along the movement path R1 is located within the range of the travelable region RE.") and a current point cloud (Kawashima, at least one para. 0053; "the detection control unit 74 keeps the sensor 26 performing detection on the travel direction side so as to cause the sensor 26 to detect a region including the front surface Pa of the obstacle P.") on a work space through a sensor, wherein the current point cloud comprises a current scan point (Kawashima, at least one para. 0072; "In other words, the obstacle information acquisition unit 76 determines the inclination of the line segment Na1 as the attitude of the front surface Pa, and determines the end point PS1a and the end point PS2a as the end point on one side and the end point on the other side of the front surface Pa in the width direction, respectively.", wherein PS2a is the current scan point); obtaining a preset point cloud corresponding to the work space (Kawashima, at least one para.
0080; “In the present embodiment, a travelable region RE that is a region in which the movable body 10 can move is set in advance”); calculating a shortest distance between the current scan point and the static point cloud (Kawashima, at least one para. 0100 and FIG. 19; “The avoidance path information acquisition unit 78 calculates a distance D1 between the first endpoint and a boundary line of the travelable region RE on a side in a direction heading from the second end point to the first end point”) in response to the current scan point not overlapping with the preset point cloud (Kawashima, at least one para. 0080; “the avoidance path information acquisition unit 78 sets the avoidance path R2 such that the movable body 10 moves within the range of the travelable region RE (meaning that the all the scan points are within the RE)”); (Kawashima, at least one para. 0104; “FIG. 20 is a schematic view in the case where the distance D1 is less than a threshold value. When the distance D1 is less than a threshold value, the avoidance path information acquisition unit 78 sets the estimated position of the rear end point on the second end point side such that a length D3 from the second end point to the rear end point on the second end point side is a second predetermined value.”) ; (Kawashima, at least one para. 0100; “determines whether the distance D1 is equal to or greater than a predetermined threshold value.”), and calculating a first distance between the current scan point classified as the dynamic obstacle and the automated moving vehicle (Kawashima, at least one para. 0058; “In calculating an integrated score value, the detection control unit 74 uses a vertical distance between each measuring point M constituting a point cloud M0 and a straight line candidate.”); and controlling a driving device of the automated moving vehicle to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold. 
Even though the static point cloud and the preset point cloud are inherent within the teaching of Kawashima, Kawashima does not explicitly mention the terms "the static point cloud" and "the preset point cloud", nor does it explicitly recite: classifying the current scan point as a static obstacle in response to the shortest distance being less than a first threshold; updating the static point cloud according to the current scan point classified as the static obstacle; classifying the current scan point as a dynamic obstacle in response to the shortest distance being greater than or equal to the first threshold; and controlling a driving device of the automated moving vehicle to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold. However, Wong in the same field of endeavor (Wong, at least one para. 0003; "Embodiments of the present invention generally relate to industrial vehicle navigation systems and, more particularly, to a method and apparatus for using pre-positioned objects to localize an industrial vehicle.") teaches "the static point cloud" (Wong, at least one para. 0061; "dynamic features 422 (features that may change on the map, such as features created by placing pallets or pre-positioned objects on the map, and the like).") and "the preset point cloud" (Wong, at least one para.
0061; “The map module 404 may include various data, such as static features 424 (such as features that do not change on the map, such as features created by walls and fixed racking, and the like)”) classifying the current scan point as a static obstacle in response to the shortest distance being less than a first threshold updating the static point cloud according to the current scan point classified as the static obstacle classifying the current scan point as a dynamic obstacle in response to the shortest distance being greater than or equal to the first threshold controlling a driving device of the automated moving vehicle to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold. Kawashima and Wong are both considered to be analogous to the claimed invention because both of them are in the same field as automated vehicle and control thereof within a designated area as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to have modified the collected sensor data of Kawashima with the teaching of Wong to differentiate the static point cloud and the preset point cloud. One of the ordinary skill in the art would have been motivated to make this modification so that industrial vehicle can be provided with an accurate localization and map data with respect to dynamically placed and pre-positioned object landmarks (Wong; 0061). 
The combination of Kawashima and Wong does not explicitly teach: classifying the current scan point as a static obstacle in response to the shortest distance being less than a first threshold; updating the static point cloud according to the current scan point classified as the static obstacle; classifying the current scan point as a dynamic obstacle in response to the shortest distance being greater than or equal to the first threshold; and controlling a driving device of the automated moving vehicle to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold. However, Yoon, in the same field of endeavor (Yoon, p. 113, column 1, para. 2; “An autonomous system must be aware of dynamic elements in its environment. Our focus is on detection using lidar (light detection and ranging), specifically spinning lidars, which operate by sweeping multiple lasers about an axis for a 360◦ field of view (FOV).”), teaches classifying the current scan point as a static obstacle in response to the shortest distance being less than a first threshold (Yoon, p. 116, column 1, para. 2; “We compute the error metric for all query points…Those greater than [an] error threshold are labelled dynamic, the rest are static.”); updating the static point cloud according to the current scan point classified as the static obstacle; and classifying the current scan point as a dynamic obstacle in response to the shortest distance being greater than or equal to the first threshold (Yoon, p. 113, column 1, para. 2; “We compute the error metric for all query points…Those greater than [an] error threshold are labelled dynamic, the rest are static.”). Yoon does not teach controlling a driving device of the automated moving vehicle to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold.
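The classify/update/stop chain that the rejection maps across Kawashima, Wong, and Yoon can be sketched compactly. The sketch below is only a minimal illustration of the claim 1 language; the names (classify_scan_point, should_stop, first_threshold, second_threshold) and the list-of-tuples point-cloud representation are assumptions for illustration, not drawn from the application or any cited reference.

```python
import math

def classify_scan_point(current_point, static_cloud, first_threshold):
    """Claim 1 sketch: classify a current scan point by its shortest
    distance to the static point cloud, updating the cloud when static."""
    shortest = min(math.dist(current_point, p) for p in static_cloud)
    if shortest < first_threshold:
        # Static obstacle: update the static point cloud with this point
        return "static", static_cloud + [current_point]
    # Dynamic obstacle: leave the static point cloud unchanged
    return "dynamic", static_cloud

def should_stop(dynamic_point, vehicle_position, second_threshold):
    """Stop the driving device when the first distance (dynamic obstacle
    to vehicle) is less than the second threshold."""
    return math.dist(dynamic_point, vehicle_position) < second_threshold
```

Note the structural difference the applicant argues: Kawashima's distance D1 runs to a travelable-region boundary, while the claim's shortest distance runs to a point cloud that is itself updated by the classification result.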
The combination of Kawashima, Wong, and Yoon is considered to be analogous to the claimed invention because all of the references are in the same field as the claimed invention: automated vehicles and control thereof within a designated area. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the shortest distance of Kawashima with the teaching of Yoon to classify the static obstacles and the dynamic obstacles. One of ordinary skill in the art would have been motivated to make this modification so that updating of the static map can filter out points that are not part of the static environment (e.g., a moving person) while retaining points that are part of the environment (e.g., a pre-positioned pallet). The combination of Kawashima, Wong, and Yoon does not explicitly teach controlling a driving device of the automated moving vehicle to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold. However, Kabushiki, in the same field of endeavor (Kabushiki, at least one para. 0006; “An industrial vehicle for solving the above-described problem includes: a drive device; a drive controller configured to control the drive device; and a main controller configured to transmit a command to the drive controller, the drive controller controlling the drive device in response to the command of the main controller to cause the industrial vehicle to travel.”), teaches controlling a driving device of the automated moving vehicle to stop a movement of the automated moving vehicle in response to the first distance being less than a second threshold (Kabushiki, at least one para. 0085; “Whether or not the condition A3 is satisfied can be determined by the vehicle speed calculated by the main controller 31.
The main controller 31 determines that the forklift truck 10 stops when the vehicle speed is equal to or less than a stop determination threshold [km/h].”). The combination of Kawashima, Wong, Yoon, and Kabushiki is considered to be analogous to the claimed invention because all of the references are in the same field as the claimed invention: automated vehicles and control thereof within a designated area. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the movement of the automated vehicle of Kabushiki with the calculation of the first distance of Kawashima. One of ordinary skill in the art would have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results.

Claim(s) 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kawashima (US 20220289538 A1), Wong (US 20140074342 A1), Kabushiki (US 20220411246 A1) and Yoon, and further in view of High (US 20230374746 A1). Regarding claim 5, Wong teaches (Original) The automated moving vehicle according to claim 1, wherein the processor is further configured to execute: removing a scan point of the static point cloud from the static point cloud to update the static point cloud (Wong, at least one para. 0041; “an object used as a landmark may be uniquely identifiable through the use of barcodes, RFID, specific shape, or any other unique feature that can be sensed by the sensors of an industrial vehicle and used as a reference point to remove ambiguity from the surrounding observable features being compared with known previously mapped features.”) in response to a difference between an acquisition time of the scan point of the static point cloud and a current time being greater than or equal to a time threshold.
Wong does not teach in response to a difference between an acquisition time of the scan point of the static point cloud and a current time being greater than or equal to a time threshold. However, High, in the same field of endeavor (High, at least one para. 0023; “Movement and operation of such devices may be controlled by a central computer system or may be autonomously controlled by the motorized transport units themselves.”), teaches in response to a difference between an acquisition time of the scan point of the static point cloud and a current time being greater than or equal to a time threshold (High, at least one para. 0139; “The central computer system 106 can, in some embodiments, determine whether the obstacle is short term obstacle (i.e., expected to be an obstacle for less than a threshold amount of time) or a long term obstacle (i.e., expected to be an obstacle for more than the threshold amount of time). For example, if the central computer system identifies or predicts that the obstacle is likely not to move for less than five minutes, the obstacle may be classified as a short term obstacle, and longer than five minutes as a long term obstacle.”). Wong and High are both considered to be analogous to the claimed invention because both are in the same field as the claimed invention: automated vehicles and control thereof within a designated area. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the reference point removal of Wong with the different acquisition times of reference points of High. One of ordinary skill in the art would have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results. Furthermore, the combination would have identified dynamically placed objects within the environment (Wong; 0070).
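The claim 5 limitation, read onto High's time-threshold obstacle classification, amounts to an age filter over the static point cloud. The sketch below is a minimal illustration under assumed names (prune_static_cloud, time_threshold) and an assumed (point, acquisition_time) record layout; none of these come from the application or the cited art.

```python
def prune_static_cloud(static_cloud, current_time, time_threshold):
    """Claim 5 sketch: remove a scan point from the static point cloud
    when the difference between the current time and the point's
    acquisition time is greater than or equal to the time threshold."""
    return [
        (point, acquired)
        for point, acquired in static_cloud
        if current_time - acquired < time_threshold
    ]
```

With High's five-minute example as the threshold, a point last observed six minutes ago would be pruned, while one observed two minutes ago would be retained.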
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UPUL P CHANDRASIRI whose telephone number is (703) 756-5823. The examiner can normally be reached M-F, 8:30 am to 5 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace, can be reached at 571-272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/U.P.C./Examiner, Art Unit 3665
/CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665

Prosecution Timeline

May 16, 2023
Application Filed
Feb 20, 2025
Non-Final Rejection — §103
Jul 17, 2025
Response Filed
Sep 26, 2025
Final Rejection — §103
Dec 19, 2025
Interview Requested
Jan 23, 2026
Request for Continued Examination
Feb 19, 2026
Response after Non-Final Action
Feb 26, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12391240
VEHICLE DRIVING ASSIST DEVICE
2y 5m to grant · Granted Aug 19, 2025
Patent 12325421
Method for Holding a Two-Track Motor Vehicle
2y 5m to grant · Granted Jun 10, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
20%
Grant Probability
-9%
With Interview (-28.6%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
