Prosecution Insights
Last updated: April 19, 2026
Application No. 18/629,017

Apparatus For Recognizing Object And Method Thereof

Non-Final OA: §101, §103, §112
Filed: Apr 08, 2024
Examiner: DING, XIAOMAO
Art Unit: 2676
Tech Center: 2600 (Communications)
Assignee: Kia Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (resolved cases with interview; minimal lift)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 11 total applications across all art units; 11 currently pending

Statute-Specific Performance

§101: 24.1% (-15.9% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)

Tech Center averages are estimates; based on career data from 0 resolved cases.

Office Action

§101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) were submitted on 04/08/2024 and 08/21/2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The abstract of the disclosure is objected to because an implied phrase, "the present disclosure," is used in line 1. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Objections

Claims 5 and 15 are objected to because of the following informalities:

Claim 5 recites the limitation "the first position of the object" in line 5 and "the second position of the object" in line 6. There is insufficient antecedent basis for these limitations in the claim. Examiner will interpret "the first" and "the second" positions as "a first" and "a second" position.

Claim 15 recites the limitation "the first position of the object" in line 5 and "the second position of the object" in line 6. There is insufficient antecedent basis for these limitations in the claim. Examiner will interpret "the first" and "the second" positions as "a first" and "a second" position.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 3, the claim recites "a third position of the object in the first frame and a fourth position of the object in the at least one frame before the first frame" in lines 5-6. As claim 1 recites a first position representing the object in the first frame and a second position representing the object in the preceding frame, it is unclear to the Examiner how the same object could have two different positions in the same frame. Examiner interprets the third and fourth positions to be positions or points within the body of the object, in their respective frames.

Regarding claim 13, the claim is directed towards the method of claim 11 but recites the same limitations as claim 3. Therefore, the arguments presented above for claim 3 are equally applicable to claim 13. Examiner will similarly interpret the third and fourth positions to be positions or points within the body of the object, in their respective frames.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1 and 11 recite the following, with claim 1 being exemplary:

"An object recognition apparatus comprising: a sensor associated with a vehicle; and a processor configured to: (a) determine, via the sensor, an object in a first frame; (b) determine a first moving direction of the object in the first frame; (c) determine a reliability value of the first moving direction; (d) determine a point representing the object in the first frame based on a representative moving direction of the object, wherein the representative moving direction is one of: the first moving direction based on the reliability value being greater than a threshold value, or a second moving direction of the object in a second frame preceding the first frame based on the reliability value being less than or equal to the threshold value; and (e) output, based on a first position of the point in the first frame and a second position of the point in at least one frame before the first frame, a signal indicating whether the object is a moving object or a movable stationary object." [Emphasis added].

According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter), or

STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:

STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
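For technical orientation only, the threshold-based selection recited in limitations (d)-(e) can be sketched in Python. Everything below is a hypothetical illustration: the function names, the threshold value, and the displacement test are not taken from the application or the Office Action.

```python
def representative_direction(first_dir, first_reliability, second_dir, threshold=0.5):
    """Limitation (d), sketched: use the current frame's moving direction when
    its reliability exceeds the threshold; otherwise fall back to the direction
    from the preceding frame. (Hypothetical names and threshold.)"""
    if first_reliability > threshold:
        return first_dir
    return second_dir


def classify(first_pos, second_pos, min_displacement=0.1):
    """Limitation (e), sketched: emit a signal based on how far the
    representative point moved between an earlier frame and the first frame.
    Positions are (x, y) tuples. (Hypothetical displacement criterion.)"""
    dx = first_pos[0] - second_pos[0]
    dy = first_pos[1] - second_pos[1]
    displacement = (dx * dx + dy * dy) ** 0.5
    return "moving" if displacement > min_displacement else "movable_stationary"
```

The branch on the reliability value mirrors the alternative recitation of limitation (d); the displacement comparison is only one of many ways the moving/movable-stationary signal of limitation (e) could be derived.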
Using the two-step inquiry, it is clear that independent claims 1 and 11 are directed to an abstract idea, as shown below:

STEP 1: Do the claims fall within one of the statutory categories? YES. Independent claims 1 and 11 are directed to an apparatus and a method, respectively.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES. Independent claims 1 and 11 are directed towards a mental process (i.e., an abstract idea). Regarding claims 1 and 11, limitations (a)-(d), as emphasized in claim 1 above, all fall under mental processes. Limitation (a) recites determining an object in a frame via a sensor. Under the broadest reasonable interpretation, this may mean determining the presence of an object in an image captured by a camera, which is a task the human mind is capable of performing. Similarly, regarding limitations (b)-(d), the human mind is capable of determining a direction of the object in an image, determining a likelihood value associated with the direction (for example, based on the resolution or clarity of the image), as well as determining a representative point after comparing the likelihood value to a fixed threshold value.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Independent claims 1 and 11 do not recite additional elements that integrate the judicial exception into a practical application. Regarding claims 1 and 11, limitation (e) is an additional element which, while not necessarily being an abstract idea, is insignificant extra-solution activity since it is merely data output (see MPEP §2106.05(g)). Claims 1 and 11 further recite additional elements such as "sensor" and "processor".
These additional elements are not sufficient to recite a practical application of the abstract ideas recited in claims 1 and 11, as they amount to mere generic computer elements and thus amount to no more than a recitation of the words "apply it" (or an equivalent), or are no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP §2106.05(f)).

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Independent claims 1 and 11 do not recite additional elements that amount to significantly more than the judicial exception. Regarding claims 1 and 11, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and in combination, the above-recited additional elements from claims 1 and 11 do not add significantly more (also known as an "inventive concept") to the exception. Rather, the additional elements disclosed above perform well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d). Therefore, independent claims 1 and 11 are directed towards an abstract idea without a practical application or significantly more.

Regarding claims 2 and 12, with claim 2 being exemplary: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The claim recites the following limitations: (a) determining an object box representing the object in the first frame; (b) determining, based on the representative moving direction, a line segment associated with the object box; and (c) determining the point further based on the line segment. Limitations (a)-(c) are all mental processes, as the human mind is capable of determining an object box, an associated line, and a representative point from an image.
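As a purely illustrative sketch of the geometric operation claims 2 and 12 recite (object box, direction-dependent line segment, representative point), one could imagine something like the following. The edge-selection rule and all names are hypothetical, not the applicant's disclosed method.

```python
def point_from_box(box, moving_direction):
    """Sketch: pick a representative point on an axis-aligned object box.

    box: (x_min, y_min, x_max, y_max); moving_direction: (dx, dy).
    Chooses the box edge facing the moving direction and returns its
    midpoint. (Illustrative convention only.)"""
    x_min, y_min, x_max, y_max = box
    dx, dy = moving_direction
    if abs(dx) >= abs(dy):  # predominantly lateral motion: use a vertical edge
        x = x_max if dx > 0 else x_min
        return (x, (y_min + y_max) / 2)
    # predominantly longitudinal motion: use a horizontal edge
    y = y_max if dy > 0 else y_min
    return ((x_min + x_max) / 2, y)
```

Here the chosen edge plays the role of the claimed "line segment associated with the object box", and its midpoint the role of "the point"; the actual mapping in the claims may differ.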
Regarding claims 3 and 13, with claim 3 being exemplary: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The claim recites the following limitations: (a) determining an object box representing the object in the first frame; (b) determining, based on a third position of the object in the first frame and a fourth position of the object in the at least one frame before the first frame, a first heading of the object box in the first frame; (c) determining, based on the first heading and a second heading of the object box in the at least one frame before the first frame, a track heading of the object box in the first frame; (d) determining a line segment associated with the object box based on the first heading being within a first range, the track heading being within a second range, and the representative moving direction being a specified direction; and (e) determining the point further based on the line segment. Limitations (a)-(e) are all mental processes, as the human mind is capable of determining an object box, headings, associated lines, and representative points.

Regarding claims 4 and 14, with claim 4 being exemplary: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The claim recites the following limitations: (a) determining an object box representing the object in the first frame; (b) assigning a different index, of a plurality of indices, to each of a plurality of vertices forming the object box; and (c) determining the point further based on the plurality of indices. Limitations (a)-(c) are all mental processes, as the human mind could determine an object box and a representative point, as well as assign indices to each of the vertices of the object box.
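The vertex-indexing step recited in claims 4 and 14 can likewise be sketched for illustration. The counter-clockwise indexing convention below is a hypothetical choice, not taken from the application.

```python
def index_vertices(box):
    """Sketch: assign a distinct index to each vertex of an axis-aligned box.

    box: (x_min, y_min, x_max, y_max). Returns {index: (x, y)} in a fixed
    counter-clockwise order starting at the lower-left corner.
    (Illustrative convention only.)"""
    x_min, y_min, x_max, y_max = box
    vertices = [(x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max)]
    return {i: v for i, v in enumerate(vertices)}


def point_from_index(box, index):
    """Sketch: determine the representative point as the vertex carrying
    the given index."""
    return index_vertices(box)[index]
```

A fixed indexing convention lets the representative point be selected by index regardless of where the box sits in the frame, which is one plausible reading of determining the point "further based on the plurality of indices".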
Regarding claims 5 and 15, with claim 5 being exemplary: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The claim recites the following limitations: (a) determining an object box representing the object in the first frame; (b) determining, based on the first position of the object in the first frame and the second position of the object in the at least one frame before the first frame, a first heading of the object box; (c) determining, based on the first heading and a second heading of the object box in the at least one frame before the first frame, a track heading of the object box in the first frame; (d) assigning a different index, of a plurality of indices, to each of a plurality of vertices forming the object box, based on the first heading being within a first range, the track heading being within a second range, and the representative moving direction being a specified direction; and (e) determining the point further based on the plurality of indices. Limitations (a)-(e) are all mental processes, as the human mind is capable of determining an object box and headings, assigning indices to vertices of the object box, and determining a representative point.

Regarding claims 6 and 16, with claim 6 being exemplary: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process.
The claim recites the following limitations: (a) determining an object box representing the object in the first frame; and (b) determining the point based on one of: a first line segment having, among line segments associated with the object box, a smallest longitudinal distance to the vehicle, based on the representative moving direction being parallel to a longitudinal axis of the vehicle, or a second line segment having, among the line segments associated with the object box, a smallest lateral distance to the vehicle, based on the representative moving direction being perpendicular to the longitudinal axis of the vehicle. Limitations (a) and (b) are mental processes, as the human mind could determine an object box as well as whether the object is moving in a parallel or perpendicular direction relative to the vehicle, and determine a representative point accordingly.

Regarding claims 7 and 17, with claim 7 being exemplary: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The claim recites the following limitations: (a) determining an object box representing the object in the first frame; and (b) determining the point based on a center point of one line segment among line segments constituting the object box. Limitations (a) and (b) are mental processes, as the human mind could determine an object box as well as a representative point based on a line segment.

Regarding claims 8 and 18, with claim 8 being exemplary: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process.
The limitation "determine the first moving direction of the object in the first frame based on a value obtained by adding a first distance moved by the vehicle to a second distance moved by the object with respect to the vehicle, wherein the second distance is determined based on a difference between the first position and the second position" is both a mental process, as the human mind could determine a moving direction based on such a value, and a mathematical concept, as addition is a mathematical operation.

Regarding claims 9 and 19, with claim 9 being exemplary: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation "wherein the processor is configured to determine whether the object is a moving object or a movable stationary object further based on a speed of the object, wherein the speed of the object is determined based on the first position and the second position" is a mental process, as the human mind could determine the speed based on different positions in two images, as well as whether the object is moving based on the speed.

Regarding claims 10 and 20, with claim 10 being exemplary: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation "to assign, to the object, one of: a first identifier indicating that the object is a moving object, or a second identifier indicating that the object is a movable stationary object" is a mental process, because the human mind could assign identifiers to an object, for example by labeling the image with a pen.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (2019, "A Framework for Tuning Behavior Classification at Intersections Using 3D LIDAR", IEEE Transactions on Vehicular Technology, Vol. 68, No. 8) (hereafter, "Zhang (IEEE)") in view of Zhang et al. (US 2024/0103518) (hereafter, "Zhang (518)").

Regarding claim 1, Zhang (IEEE) discloses an object recognition method performed by a processor (Page 8, §A. Data Collection, All the experiments was processed on an Intel i7-4700, 3.20 GHz core processor), the method comprising: determining, via a sensor associated with a vehicle (Page 7, §A.
Data Collection, Our AV is equipped with a Velodyne HDL-64E LIDAR, 3 external cameras and two GPS receivers on the roof), an object in a first frame (Page 3, §II. Data Pre-Processing for 3D Point Cloud, … Measurement points in [the] current frame… If the area of the roughly rectangular bounding box of the moving cluster satisfies the preset threshold, moving cluster will be classified as a moving vehicle. Examiner considers the current frame as the “first frame” and the moving vehicle as the “object”); determining a first moving direction of the object in the first frame (Page 3, §III. Pose Estimation, Here we address the core problem of pose estimation, namely how to estimate the target-vehicle center position and heading angle. Examiner considers the heading angle “the first moving direction”); determining a reliability value of the first moving direction (Eqn. 30; Page 7, §B. The Bayesian Filtering Component, The probability that the driving behavior belongs to the turning class can be denoted by the parameter ξ. The output of the BF component is the expected value of ξ. Examiner considers the expected value of ξ “a reliability value”); determining a point representing the object in the first frame based on a representative moving direction of the object (Page 4, right column, last paragraph, §III. Pose Estimation, In this way, the approximate estimation of vehicle pose (x, y, θ) is determined based on the vehicle point cloud and the vehicle motion. Examiner considers (x,y) to be a point representing the object), wherein the representative moving direction is one of: the first moving direction based on the reliability value being greater than a threshold value (Page 7, paragraph after eqn. 32, §B. The Bayesian Filtering Component, Given E (ξ | y ) and time t, the HMM–BF algorithm outputs the final behavior classification based on the specified threshold τbf . The driving behavior at time t is classified as turning if E (ξ | y ) > τbf. 
Since the limitation is recited in the alternative, Examiner considers this citation to disclose the entire limitation), or a second moving direction of the object in a second frame preceding the first frame based on the reliability value being less than or equal to the threshold value; [and outputting, based on a first position of the point in the first frame and a second position of the point in at least one frame before the first frame, a signal indicating whether the object is a moving object or a movable stationary object].

However, Zhang (IEEE) fails to disclose outputting, based on a first position of the point in the first frame and a second position of the point in at least one frame before the first frame, a signal indicating whether the object is a moving object or a movable stationary object.

Zhang (518) teaches outputting, based on a first position of the point in the first frame (Fig. 6B. Examiner considers any of the ends of the line segments on the ball to be “the first position of the point”) and a second position of the point in at least one frame before the first frame (Fig. 6A. Examiner considers the point corresponding to the first point in Fig. 6B as “the second position of the point”), a signal indicating whether the object is a moving object or a movable stationary object (¶0047, To determine whether the object is a movable object in motion or not…Specifically, the processor 120 obtains distances between the object (ball) and a plurality of feature points in the image from the image shown in FIG. 6A. … Then, the processor 120 obtains distances between the object (ball) and the same feature points from the image shown in FIG. 6B, and determines whether the distances between the object (ball) and the plurality of feature points have changed at the two time points; if so, it is determined that the object (ball) is a movable object in motion; and otherwise, it is determined that the object (ball) is a static object.
¶0052, After the processor 120 determines whether the object is a movable object according to the above-mentioned embodiments, if the object is on an advancing path of the autonomous mobile device 10, the autonomous mobile device selectively performs obstacle avoidance by different obstacle avoidance distances according to whether the object is a movable object. Since the robot makes a decision based on the determination, a signal indicating whether the object is moving must be output).

Both Zhang (IEEE) and Zhang (518) are analogous to the claimed invention because Zhang (IEEE) is in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the output signal of Zhang (518) into the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to prevent potential damage caused by a collision, as suggested by Zhang (518) at ¶0035, to avoid damage to people or machines caused by an excessively small obstacle avoidance distance. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) with the teachings of Zhang (518) to obtain the invention as specified in claim 1.

Regarding claim 11, Zhang (IEEE) discloses an object recognition apparatus comprising: a sensor associated with a vehicle (Page 7, §A. Data Collection, Our AV is equipped with a Velodyne HDL-64E LIDAR, 3 external cameras and two GPS receivers on the roof); and a processor (Page 8, §A.
Data Collection, All the experiments was processed on an Intel i7-4700, 3.20 GHz core processor) configured to: determine, via the sensor, an object in a first frame (Page 3, §II. Data Pre-Processing for 3D Point Cloud, … Measurement points in [the] current frame… If the area of the roughly rectangular bounding box of the moving cluster satisfies the preset threshold, moving cluster will be classified as a moving vehicle. Examiner considers the current frame as the “first frame” and the moving vehicle as the “object”); determine a first moving direction of the object in the first frame (Page 3, §III. Pose Estimation, Here we address the core problem of pose estimation, namely how to estimate the target-vehicle center position and heading angle. Examiner considers the heading angle “the first moving direction”); determine a reliability value of the first moving direction (Eqn. 30; Page 7, §B. The Bayesian Filtering Component, The probability that the driving behavior belongs to the turning class can be denoted by the parameter ξ. The output of the BF component is the expected value of ξ. Examiner considers the expected value of ξ “a reliability value”); determine a point representing the object in the first frame based on a representative moving direction of the object (Page 4, right column, last paragraph, §III. Pose Estimation, In this way, the approximate estimation of vehicle pose (x, y, θ) is determined based on the vehicle point cloud and the vehicle motion. Examiner considers (x,y) to be a point representing the object), wherein the representative moving direction is one of: the first moving direction based on the reliability value being greater than a threshold value (Page 7, paragraph after eqn. 32, §B. The Bayesian Filtering Component, Given E (ξ | y ) and time t, the HMM–BF algorithm outputs the final behavior classification based on the specified threshold τbf . The driving behavior at time t is classified as turning if E (ξ | y ) > τbf.
Since the limitation is recited in the alternative, Examiner considers this citation to disclose the entire limitation), or a second moving direction of the object in a second frame preceding the first frame based on the reliability value being less than or equal to the threshold value; [and output, based on a first position of the point in the first frame and a second position of the point in at least one frame before the first frame, a signal indicating whether the object is a moving object or a movable stationary object].

However, Zhang (IEEE) fails to disclose output, based on a first position of the point in the first frame and a second position of the point in at least one frame before the first frame, a signal indicating whether the object is a moving object or a movable stationary object.

Zhang (518) teaches output, based on a first position of the point in the first frame (Fig. 6B. Examiner considers any of the ends of the line segments on the ball to be “the first position of the point”) and a second position of the point in at least one frame before the first frame (Fig. 6A. Examiner considers the point corresponding to the first point in Fig. 6B as “the second position of the point”), a signal indicating whether the object is a moving object or a movable stationary object (¶0047, To determine whether the object is a movable object in motion or not…Specifically, the processor 120 obtains distances between the object (ball) and a plurality of feature points in the image from the image shown in FIG. 6A. … Then, the processor 120 obtains distances between the object (ball) and the same feature points from the image shown in FIG. 6B, and determines whether the distances between the object (ball) and the plurality of feature points have changed at the two time points; if so, it is determined that the object (ball) is a movable object in motion; and otherwise, it is determined that the object (ball) is a static object.
¶0052, After the processor 120 determines whether the object is a movable object according to the above-mentioned embodiments, if the object is on an advancing path of the autonomous mobile device 10, the autonomous mobile device selectively performs obstacle avoidance by different obstacle avoidance distances according to whether the object is a movable object. Since the robot makes a decision based on the determination, a signal indicating whether the object is moving must be output).

Both Zhang (IEEE) and Zhang (518) are analogous to the claimed invention because Zhang (IEEE) is in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the output signal of Zhang (518) into the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to prevent potential damage caused by a collision, as suggested by Zhang (518) at ¶0035, to avoid damage to people or machines caused by an excessively small obstacle avoidance distance. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) with the teachings of Zhang (518) to obtain the invention as specified in claim 11.

Claims 2, 4, 7, 12, 14, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (2019, "A Framework for Tuning Behavior Classification at Intersections Using 3D LIDAR", IEEE Transactions on Vehicular Technology, Vol. 68, No. 8) (hereafter, "Zhang (IEEE)") in view of Zhang et al.
(US 2024/0103518) (hereafter, "Zhang (518)") as applied to claims 1 and 11 above, and further in view of Liu et al. (2019, "Fast Dynamic Vehicle Detection in Road Scenarios Based on Pose Estimation with Convex-Hull Model", Sensors, 19, 3136) (hereafter, "Liu").

Regarding claim 2, in which claim 1 is incorporated, Zhang (IEEE) discloses wherein the processor is configured to [determine the point by]: determining an object box representing the object in the first frame (Fig. 3, top-left panel, box; Page 4, right column, §III. Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); [determining, based on the representative moving direction, a line segment associated with the object box; and determining the point further based on the line segment].

However, Zhang (IEEE) in view of Zhang (518) fails to disclose determine the point and determining, based on the representative moving direction, a line segment associated with the object box; and determining the point further based on the line segment.

Liu teaches determine the point (Page 7, Eqn. 9, 10. Examiner considers C as “the point”) and determining, based on the representative moving direction, a line segment associated with the object box (Fig. 3, 4; Page 6, §2.2 Position Inference, yv points to the moving direction of target vehicle…the vectors that point from the center of model to the center of each side constitute a normal vector set, and the vectors that point from the center of each side to the origin of the Lidar constitute the orientation vector set. For each edge, the angle between the orientation vector and normal vector show whether the edge can be seen under the perspective of the Lidar; Page 7, §2.2 Position Inference, Lines p1p2 and p2p3 are the two visible edges.
As yv contributes to the normal vector set and line p1p2 is determined via the normal vector and orientation vector set, Examiner considers the determination as “based on the representative moving direction” and p1p2 as “line segment associated with the object box”); and determining the point further based on the line segment (Page 7, Eqn. 9, 10. The coordinates of the point C are based on p1p2). Zhang (IEEE), Zhang (518), and Liu are analogous to the claimed invention because Zhang (IEEE) and Liu are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Liu into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to provide a more efficient and robust algorithm as suggested by Liu at Page 3, §1.2 Model-Based Method, The proposed pose-estimation method requires low computation and is robust to the environment. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Liu. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Liu to obtain the invention as specified in claim 2. Regarding claim 4, in which claim 1 is incorporated, Zhang (IEEE) discloses wherein the processor is configured to [determine the point] by: determining an object box representing the object in the first frame (Fig. 3, top-left panel, red box; Page 4, right column, §III.
Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); [assigning a different index, of a plurality of indices, to each of a plurality of vertices forming the object box; and determining the point, further based on the plurality of indices]. However, Zhang (IEEE) in view of Zhang (518) fails to disclose determine the point and assigning a different index, of a plurality of indices, to each of a plurality of vertices forming the object box; and determining the point, further based on the plurality of indices. Liu teaches determine the point (Page 7, Eqn. 9, 10. Examiner considers C as “the point”) and assigning a different index, of a plurality of indices, to each of a plurality of vertices forming the object box (Fig. 4. Examiner considers p1-p4 as indices); and determining the point, further based on the plurality of indices (Page 7, Eqn. 9, 10. The coordinates of the point C are based on p1p2, which is defined by a “plurality of indices”). Zhang (IEEE), Zhang (518), and Liu are analogous to the claimed invention because Zhang (IEEE) and Liu are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Liu into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to provide a more efficient and robust algorithm as suggested by Liu at Page 3, §1.2 Model-Based Method, The proposed pose-estimation method requires low computation and is robust to the environment. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Liu.
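For context, the edge-visibility reasoning the Examiner draws from Liu §2.2 (an edge of the rectangular model is visible when its outward normal vector, from the box center to the edge midpoint, makes an acute angle with the orientation vector from the edge midpoint to the LiDAR origin) can be sketched as follows. This is an illustrative reconstruction, not code from Liu; Liu's Eqns. 9 and 10 defining the point C are not reproduced, and the function only returns the visible edges.

```python
# Illustrative back-face-culling test of the kind Liu (Sec. 2.2) describes:
# an edge of the rectangular vehicle model is treated as visible from the
# LiDAR when its outward normal (box center -> edge midpoint) makes an
# acute angle with the orientation vector (edge midpoint -> LiDAR origin).

def visible_edges(corners, lidar_origin=(0.0, 0.0)):
    """corners: four (x, y) vertices of the box, in order p1..p4."""
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    visible = []
    for i in range(4):
        p, q = corners[i], corners[(i + 1) % 4]
        mx, my = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0   # edge midpoint
        nx, ny = mx - cx, my - cy                           # outward normal
        ox, oy = lidar_origin[0] - mx, lidar_origin[1] - my # orientation vector
        if nx * ox + ny * oy > 0:                           # acute angle: visible
            visible.append((p, q))
    return visible
```

For a box to the upper right of the sensor, the two edges facing the origin are returned, matching Liu's observation that lines p1p2 and p2p3 are the two visible edges.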
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Liu to obtain the invention as specified in claim 4. Regarding claim 7, in which claim 1 is incorporated, Zhang (IEEE) discloses wherein the processor is configured to [determine the point] by: determining an object box representing the object in the first frame (Fig. 3, top-left panel, red box; Page 4, right column, §III. Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); and [determining the point based on a center point of one line segment among line segments constituting the object box]. However, Zhang (IEEE) in view of Zhang (518) fails to disclose determine the point and determining the point based on a center point of one line segment among line segments constituting the object box. Liu teaches determine the point (Page 7, Eqn. 9, 10. Examiner considers C as “the point”) and determining the point based on a center point of one line segment among line segments constituting the object box (Fig. 3; Page 7, Eqn. 9, 10; Page 6, §2.2 Position Inference, yv points to the moving direction of target vehicle…the vectors that point from the center of model to the center of each side constitute a normal vector set, and the vectors that point from the center of each side to the origin of the Lidar constitute the orientation vector set. Examiner considers the coordinates of C to be based on the “center point” of one of the sides of the bounding box in Fig. 3 as C depends on p1p2, which in turn is derived from vectors based on the center point). Zhang (IEEE), Zhang (518), and Liu are analogous to the claimed invention because Zhang (IEEE) and Liu are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems.
It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Liu into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to provide a more efficient and robust algorithm as suggested by Liu at Page 3, §1.2 Model-Based Method, the proposed pose-estimation method requires low computation and is robust to the environment. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Liu. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Liu to obtain the invention as specified in claim 7. Regarding claim 12, in which claim 11 is incorporated, Zhang (IEEE) discloses wherein the determining of [the point] comprises: determining an object box representing the object in the first frame (Fig. 3, top-left panel, red box; Page 4, right column, §III. Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); [determining, based on the representative moving direction, a line segment associated with the object box; and determining the point further based on the line segment]. However, Zhang (IEEE) in view of Zhang (518) fails to disclose the point and determining, based on the representative moving direction, a line segment associated with the object box; and determining the point further based on the line segment. Liu teaches the point (Page 7, Eqn. 9, 10. Examiner considers C as “the point”) and determining, based on the representative moving direction, a line segment associated with the object box (Fig.
3, 4; Page 6, §2.2 Position Inference, yv points to the moving direction of target vehicle…the vectors that point from the center of model to the center of each side constitute a normal vector set, and the vectors that point from the center of each side to the origin of the Lidar constitute the orientation vector set. For each edge, the angle between the orientation vector and normal vector show whether the edge can be seen under the perspective of the Lidar; Page 7, §2.2 Position Inference, Lines p1p2 and p2p3 are the two visible edges. As yv contributes to the normal vector set and line p1p2 is determined via the normal vector and orientation vector set, Examiner considers the determination as “based on the representative moving direction” and p1p2 as “line segment associated with the object box”); and determining the point further based on the line segment (Page 7, Eqn. 9, 10. The coordinates of the point C are based on p1p2). Zhang (IEEE), Zhang (518), and Liu are analogous to the claimed invention because Zhang (IEEE) and Liu are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Liu into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to provide a more efficient and robust algorithm as suggested by Liu at Page 3, §1.2 Model-Based Method, the proposed pose-estimation method requires low computation and is robust to the environment. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Liu.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Liu to obtain the invention as specified in claim 12. Regarding claim 14, in which claim 11 is incorporated, Zhang (IEEE) discloses wherein the determining of [the point] comprises: determining an object box representing the object in the first frame (Fig. 3, top-left panel, red box; Page 4, right column, §III. Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); [assigning a different index, of a plurality of indices, to each of a plurality of vertices forming the object box; and determining the point, further based on the plurality of indices]. However, Zhang (IEEE) in view of Zhang (518) fails to disclose the point and assigning a different index, of a plurality of indices, to each of a plurality of vertices forming the object box; and determining the point, further based on the plurality of indices. Liu teaches the point (Page 7, Eqn. 9, 10. Examiner considers C as “the point”) and assigning a different index, of a plurality of indices, to each of a plurality of vertices forming the object box (Fig. 4. Examiner considers p1-p4 as indices); and determining the point, further based on the plurality of indices (Page 7, Eqn. 9, 10. The coordinates of the point C are based on p1p2, which is defined by a “plurality of indices”). Zhang (IEEE), Zhang (518), and Liu are analogous to the claimed invention because Zhang (IEEE) and Liu are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems.
It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Liu into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to provide a more efficient and robust algorithm as suggested by Liu at Page 3, §1.2 Model-Based Method, the proposed pose-estimation method requires low computation and is robust to the environment. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Liu. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Liu to obtain the invention as specified in claim 14. Regarding claim 17, in which claim 11 is incorporated, Zhang (IEEE) discloses wherein the determining of [the point] comprises: determining an object box representing the object in the first frame (Fig. 3, top-left panel, red box; Page 4, right column, §III. Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); and [determining the point based on a center point of one line segment among line segments constituting the object box]. However, Zhang (IEEE) in view of Zhang (518) fails to disclose the point and determining the point based on a center point of one line segment among line segments constituting the object box. Liu teaches the point (Page 7, Eqn. 9, 10. Examiner considers C as “the point”) and determining the point based on a center point of one line segment among line segments constituting the object box (Fig. 3; Page 7, Eqn.
9, 10; Page 6, §2.2 Position Inference, yv points to the moving direction of target vehicle…the vectors that point from the center of model to the center of each side constitute a normal vector set, and the vectors that point from the center of each side to the origin of the Lidar constitute the orientation vector set. Examiner considers the coordinates of C to be based on the “center point” of one of the sides of the bounding box in Fig. 3 as C depends on p1p2, which in turn is derived from vectors based on the center point). Zhang (IEEE), Zhang (518), and Liu are analogous to the claimed invention because Zhang (IEEE) and Liu are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Liu into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to provide a more efficient and robust algorithm as suggested by Liu at Page 3, §1.2 Model-Based Method, the proposed pose-estimation method requires low computation and is robust to the environment. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Liu. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Liu to obtain the invention as specified in claim 17. Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (2019, “A Framework for Tuning Behavior Classification at Intersections Using 3D LIDAR”, IEEE Transactions on Vehicular Technology, Vol. 68, No.
8) (hereafter, “Zhang (IEEE)”) in view of Zhang et al. (US 2024/0103518) (hereafter, “Zhang (518)”) as applied to claims 1 and 11 above, and further in view of Tsaregorodtsev et al. (2023, “Automated Static Camera Calibration with Intelligent Vehicles”, IEEE Intelligent Vehicles Symposium (IV)) (hereafter, “Tsaregorodtsev”). Regarding claim 3, in which claim 1 is incorporated, Zhang (IEEE) discloses wherein the processor is configured to [determine the point] by: determining an object box representing the object in the first frame (Fig. 3, top-left panel, red box; Page 4, right column, §III. Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); and determining, based on a third position of the object in the first frame and a fourth position of the object in the at least one frame before the first frame, a first heading of the object box in the first frame (Fig. 5; Eqn. 6; Page 4, §III. Pose estimation, To address this we propose a multi frame fitting (MFF) model that fits point clouds from three consecutive frames to estimate the middle frame… In this way, the approximate estimation of vehicle pose. Examiner notes that in Eqn. 6, positions from multiple time points (tx and ty) are used to determine the pose which Examiner considers as “heading”); [determining, based on the first heading and a second heading of the object box in the at least one frame before the first frame, a track heading of the object box in the first frame; determining a line segment associated with the object box based on the first heading being within a first range, the track heading being within a second range, and the representative moving direction being a specified direction; and determining the point further based on the line segment]. 
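For context, the Examiner's reading of Zhang (IEEE) Eqn. 6, under which positions at multiple time points determine the pose, i.e., "heading", can be illustrated with a minimal sketch. The atan2 formulation and the circular-mean track smoothing below are assumptions for illustration only, not a reproduction of Eqn. 6.

```python
import math

# Illustrative heading computation from object positions in consecutive
# frames, in the spirit of the Examiner's reading of Zhang (IEEE) Eqn. 6.

def frame_heading(pos_prev, pos_curr):
    """Heading (radians) implied by moving from pos_prev to pos_curr."""
    return math.atan2(pos_curr[1] - pos_prev[1], pos_curr[0] - pos_prev[0])

def track_heading(headings):
    """Smoothed track heading: circular mean of per-frame headings
    (an assumed smoothing choice, not taken from Zhang (IEEE))."""
    s = sum(math.sin(h) for h in headings)
    c = sum(math.cos(h) for h in headings)
    return math.atan2(s, c)
```

A circular mean is used rather than an arithmetic mean so that headings near the ±π wrap-around average sensibly.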
However, Zhang (IEEE) in view of Zhang (518) fails to disclose determine the point and determining, based on the first heading and a second heading of the object box in the at least one frame before the first frame, a track heading of the object box in the first frame; determining a line segment associated with the object box based on the first heading being within a first range, the track heading being within a second range, and the representative moving direction being a specified direction; and determining the point further based on the line segment. Tsaregorodtsev teaches determine the point (Page 4, §F. Result Refinement, The vehicle corner point with the lowest distance to the projected bottom line of the bounding box rectangle is chosen as the new vehicle reference point) and determining, based on the first heading and a second heading of the object box in the at least one frame before the first frame, a track heading of the object box in the first frame (Fig. 1, top left panel; Page 3, §B. Data Acquisition and Pre-processing, Therefore, we use the Hungarian Algorithm [25] to find the most cost-effective box-to-box association between two frames…the track can be associated with a newly detected box…it is possible to reconstruct the path of each vehicle that drove. Fig. 1 illustrates heading direction associated with the tracks); determining a line segment associated with the object box (Page 4, §F. Result Refinement, By using the orientation information contained in xt and the known vehicle dimensions, we can calculate the footprint of the vehicle, i.e., the world coordinates of its corners in the ground plane. Due to geometrical constraints, one of these ground points must lie on the bounding box’s bottom edge. For refinement, the bounding box of the vehicle is projected onto the ground plane resulting in a quadrilateral. The projection is made with one of the previously obtained hypothetical transformations.
The required ground plane is estimated by performing a principal component analysis (PCA) [29] of the recorded localization track. Examiner considers the bottom edge a line segment) based on the first heading being within a first range (Fig. 1, middle panel. Examiner considers the headings at the locations where bounding boxes are measured to be the “first range”), the track heading being within a second range (Fig. 1, top left panel. Examiner considers the different headings associated with the entire track to be the “second range”), and the representative moving direction being a specified direction (Page 4, §F. Result Refinement, By using the orientation information contained in xt); and determining the point further based on the line segment (Page 4, §F. Result Refinement, The vehicle corner point with the lowest distance to the projected bottom line of the bounding box rectangle is chosen as the new vehicle reference point). Zhang (IEEE), Zhang (518), and Tsaregorodtsev are analogous to the claimed invention because Zhang (IEEE) and Tsaregorodtsev are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Tsaregorodtsev into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to allow for road access by other users during calibration as suggested by Tsaregorodtsev at Abstract, furthermore, we do not limit road access for other road users during calibration. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Tsaregorodtsev.
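For context, the refinement step quoted from Tsaregorodtsev §F selects the vehicle corner with the lowest distance to the projected bottom line of the bounding box. A minimal sketch of that selection, assuming the footprint corners and the projected bottom edge are already available (the footprint calculation and ground-plane projection are not reproduced):

```python
import math

# Illustrative corner selection in the spirit of Tsaregorodtsev Sec. F:
# among the vehicle's footprint corners, pick the one with the lowest
# distance to the projected bottom edge of the 2-D bounding box.

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter so the nearest point stays on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def reference_corner(corners, bottom_a, bottom_b):
    """Corner with the lowest distance to the projected bottom line."""
    return min(corners, key=lambda c: point_segment_dist(c, bottom_a, bottom_b))
```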
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Tsaregorodtsev to obtain the invention as specified in claim 3. Regarding claim 13, in which claim 11 is incorporated, Zhang (IEEE) discloses wherein the determining of [the point] comprises: determining an object box representing the object in the first frame (Fig. 3, top-left panel, red box; Page 4, right column, §III. Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); and determining, based on a third position of the object in the first frame and a fourth position of the object in the at least one frame before the first frame, a first heading of the object box in the first frame (Fig. 5; Eqn. 6; Page 4, §III. Pose estimation, To address this we propose a multi frame fitting (MFF) model that fits point clouds from three consecutive frames to estimate the middle frame… In this way, the approximate estimation of vehicle pose. Examiner notes that in Eqn. 6, positions from multiple time points (tx and ty) are used to determine the pose which Examiner considers as “heading”); [determining, based on the first heading and a second heading of the object box in the at least one frame before the first frame, a track heading of the object box in the first frame; determining a line segment associated with the object box based on the first heading being within a first range, the track heading being within a second range, and the representative moving direction being a specified direction; and determining the point further based on the line segment]. 
However, Zhang (IEEE) in view of Zhang (518) fails to disclose the point and determining, based on the first heading and a second heading of the object box in the at least one frame before the first frame, a track heading of the object box in the first frame; determining a line segment associated with the object box based on the first heading being within a first range, the track heading being within a second range, and the representative moving direction being a specified direction; and determining the point further based on the line segment. Tsaregorodtsev teaches the point (Page 4, §F. Result Refinement, The vehicle corner point with the lowest distance to the projected bottom line of the bounding box rectangle is chosen as the new vehicle reference point) and determining, based on the first heading and a second heading of the object box in the at least one frame before the first frame, a track heading of the object box in the first frame (Fig. 1, top left panel; Page 3, §B. Data Acquisition and Pre-processing, Therefore, we use the Hungarian Algorithm [25] to find the most cost-effective box-to-box association between two frames…the track can be associated with a newly detected box…it is possible to reconstruct the path of each vehicle that drove. Fig. 1 illustrates heading direction associated with the tracks); determining a line segment associated with the object box (Page 4, §F. Result Refinement, By using the orientation information contained in xt and the known vehicle dimensions, we can calculate the footprint of the vehicle, i.e., the world coordinates of its corners in the ground plane. Due to geometrical constraints, one of these ground points must lie on the bounding box’s bottom edge. For refinement, the bounding box of the vehicle is projected onto the ground plane resulting in a quadrilateral. The projection is made with one of the previously obtained hypothetical transformations.
The required ground plane is estimated by performing a principal component analysis (PCA) [29] of the recorded localization track. Examiner considers the bottom edge a line segment) based on the first heading being within a first range (Fig. 1, middle panel. Examiner considers the headings at the locations where bounding boxes are measured to be the “first range”), the track heading being within a second range (Fig. 1, top left panel. Examiner considers the different headings associated with the entire track to be the “second range”), and the representative moving direction being a specified direction (Page 4, §F. Result Refinement, By using the orientation information contained in xt); and determining the point further based on the line segment (Page 4, §F. Result Refinement, The vehicle corner point with the lowest distance to the projected bottom line of the bounding box rectangle is chosen as the new vehicle reference point). Zhang (IEEE), Zhang (518), and Tsaregorodtsev are analogous to the claimed invention because Zhang (IEEE) and Tsaregorodtsev are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Tsaregorodtsev into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to allow for road access by other users during calibration as suggested by Tsaregorodtsev at Abstract, furthermore, we do not limit road access for other road users during calibration. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Tsaregorodtsev.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Tsaregorodtsev to obtain the invention as specified in claim 13. Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (2019, “A Framework for Tuning Behavior Classification at Intersections Using 3D LIDAR”, IEEE Transactions on Vehicular Technology, Vol. 68, No. 8) (hereafter, “Zhang (IEEE)”) in view of Zhang et al. (US 2024/0103518) (hereafter, “Zhang (518)”) as applied to claims 1 and 11 above, and further in view of Chen et al. (US 2024/0265710) (hereafter, “Chen”). Regarding claim 6, in which claim 1 is incorporated, Zhang (IEEE) discloses wherein the processor is configured to [determine the point] by: determining an object box representing the object in the first frame (Fig. 3, top-left panel, red box; Page 4, right column, §III. Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); and [determining the point based on one of: a first line segment having, among line segments associated with the object box, a smallest longitudinal distance to the vehicle, based on the representative moving direction being parallel to a longitudinal axis of the vehicle, or a second line segment having, among the line segments associated with the object box, a smallest lateral distance to the vehicle, based on the representative moving direction being perpendicular to the longitudinal axis of the vehicle]. 
However, Zhang (IEEE) in view of Zhang (518) fails to disclose determine the point and determining the point based on one of: a first line segment having, among line segments associated with the object box, a smallest longitudinal distance to the vehicle, based on the representative moving direction being parallel to a longitudinal axis of the vehicle, or a second line segment having, among the line segments associated with the object box, a smallest lateral distance to the vehicle, based on the representative moving direction being perpendicular to the longitudinal axis of the vehicle. Chen teaches determining the point (¶0056, For example, the reference point is a midpoint of the bottom boundary of the bounding box) based on one of: a first line segment having, among line segments associated with the object box, a smallest longitudinal distance to the vehicle, based on the representative moving direction being parallel to a longitudinal axis of the vehicle (Fig. 5A; ¶0042, The autonomous vehicle may track objects moving along a same direction; ¶0056, For example, the reference point is a midpoint of the bottom boundary of the bounding box. As illustrated in Fig. 5A of Chen, the vehicle pictured ahead (520) is in a lane parallel to the observer or “the vehicle” in the instant claim. As the bounding box is in a plane perpendicular to the observer, all four sides are equidistant longitudinally to the observer. Therefore, any line of the bounding box (540) is considered to be “a smallest longitudinal distance”. Since the limitation is recited in the alternative, Examiner considers Chen to teach the limitation in its entirety), or a second line segment having, among the line segments associated with the object box, a smallest lateral distance to the vehicle, based on the representative moving direction being perpendicular to the longitudinal axis of the vehicle. 
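For context, the alternative limitation mapped to Chen can be illustrated by selecting the box edge nearest the ego vehicle along the longitudinal axis when the object moves parallel to that axis (or along the lateral axis when perpendicular), then taking the midpoint of the chosen edge as the reference point (cf. Chen ¶0056, the midpoint of the bottom boundary). The coordinate convention (ego vehicle at the origin, x longitudinal, y lateral) and the endpoint-average distance metric below are assumptions for illustration only:

```python
# Illustrative edge selection for the alternative limitation mapped to Chen:
# assumed frame with the ego vehicle at the origin, x longitudinal, y lateral.
# "parallel" motion picks the edge nearest longitudinally; "perpendicular"
# motion picks the edge nearest laterally; the reference point is the
# midpoint of the chosen edge.

def reference_point(corners, moving_dir):
    """corners: four (x, y) vertices in order; moving_dir: 'parallel' or 'perpendicular'."""
    edges = [(corners[i], corners[(i + 1) % 4]) for i in range(4)]
    axis = 0 if moving_dir == "parallel" else 1  # 0 = longitudinal, 1 = lateral

    def edge_dist(edge):
        # Assumed metric: mean absolute coordinate of the endpoints along the axis.
        p, q = edge
        return (abs(p[axis]) + abs(q[axis])) / 2.0

    a, b = min(edges, key=edge_dist)
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
```

For a box directly ahead and moving parallel to the ego vehicle, this returns the midpoint of the near face; for a box to the side moving perpendicular, the midpoint of the laterally nearest face.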
Zhang (IEEE), Zhang (518), and Chen are analogous to the claimed invention because Zhang (IEEE) and Chen are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Chen into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to enable the operation of the vehicle as suggested by Chen at ¶0043, so as to allow the operation of the autonomous vehicle to proceed. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Chen. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Chen to obtain the invention as specified in claim 6. Regarding claim 16, in which claim 11 is incorporated, Zhang (IEEE) discloses wherein the determining of [the point] comprises: determining an object box representing the object in the first frame (Fig. 3, top-left panel, red box; Page 4, right column, §III.
Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori); and [determining the point based on one of: a first line segment having, among line segments associated with the object box, a smallest longitudinal distance to the vehicle, based on the representative moving direction being parallel to a longitudinal axis of the vehicle, or a second line segment having, among the line segments associated with the object box, a smallest lateral distance to the vehicle, based on the representative moving direction being perpendicular to the longitudinal axis of the vehicle]. However, Zhang (IEEE) in view of Zhang (518) fails to disclose the point and determining the point based on one of: a first line segment having, among line segments associated with the object box, a smallest longitudinal distance to the vehicle, based on the representative moving direction being parallel to a longitudinal axis of the vehicle, or a second line segment having, among the line segments associated with the object box, a smallest lateral distance to the vehicle, based on the representative moving direction being perpendicular to the longitudinal axis of the vehicle. Chen teaches the point and determining the point (¶0056, For example, the reference point is a midpoint of the bottom boundary of the bounding box) based on one of: a first line segment having, among line segments associated with the object box, a smallest longitudinal distance to the vehicle, based on the representative moving direction being parallel to a longitudinal axis of the vehicle (Fig. 5A; ¶0042, The autonomous vehicle may track objects moving along a same direction; ¶0056, For example, the reference point is a midpoint of the bottom boundary of the bounding box. From Examiner’s interpretation of Fig. 5A, the vehicle pictured ahead is in a lane parallel to the observer or “the vehicle” in the instant claim. 
As the bounding box is in a plane perpendicular to the observer, all four sides are equidistant longitudinally to the observer. Therefore, any line of the bounding box is considered to be “a smallest longitudinal distance”. Since the limitation is recited in the alternative, Examiner considers Chen to teach the limitation in its entirety), or a second line segment having, among the line segments associated with the object box, a smallest lateral distance to the vehicle, based on the representative moving direction being perpendicular to the longitudinal axis of the vehicle. Zhang (IEEE), Zhang (518), and Chen are analogous to the claimed invention because Zhang (IEEE) and Chen are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the reference point determination of Chen into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to enable the operation of the vehicle as suggested by Chen at ¶0043, so as to allow the operation of the autonomous vehicle to proceed. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Chen. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Chen to obtain the invention as specified in claim 16. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (2019, “A Framework for Turning Behavior Classification at Intersections Using 3D LIDAR”, IEEE Transactions on Vehicular Technology, Vol. 68, No. 
8) (hereafter, “Zhang (IEEE)”) in view of Zhang et al. (US 2024/0103518) (hereafter, “Zhang (518)”) as applied to claims 1 and 11 above, and further in view of Watanabe et al. (US 2023/0152811) (hereafter, “Watanabe”). Regarding claim 8, Zhang (IEEE) in view of Zhang (518) discloses the object recognition apparatus of claim 1. However, Zhang (IEEE) in view of Zhang (518) fails to disclose wherein the processor is configured to determine the first moving direction of the object in the first frame based on a value obtained by adding a first distance moved by the vehicle to a second distance moved by the object with respect to the vehicle, wherein the second distance is determined based on a difference between the first position and the second position. Watanabe teaches wherein the processor is configured to determine the first moving direction of the object in the first frame based on a value obtained by adding a first distance moved by the vehicle to a second distance moved by the object with respect to the vehicle (¶0120, The movement vector is information including a moving speed and a moving direction. The object detection unit 219 estimates the movement vector of the nearby object based on the change in the distance from the mobile robot 20 to the nearby object… according to the change of the position of the nearby object over time; ¶0124, The mobile robot 20 is moving upward. Examiner considers the calculation of the change in the distance from the robot to the object to indicate an addition of the robot’s movement and the object’s movement), wherein the second distance is determined based on a difference between the first position and the second position (¶0120, According to the change of the position of the nearby object over time. Examiner considers the change in the object’s position over time to indicate a difference between a first and a second position). 
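A minimal sketch of the computation recited in claim 8 may help orient the reader: the object's absolute motion is recovered by adding the vehicle's own displacement to the object's displacement relative to the vehicle, the latter being the difference between the first and second positions. The function name, the planar (x, y) coordinate convention, and all values are illustrative assumptions, not drawn from the claims or any cited reference.

```python
import math

def absolute_moving_direction(vehicle_delta, obj_pos_first, obj_pos_second):
    """Return the object's heading (radians) in the world frame.

    vehicle_delta  -- (dx, dy) distance moved by the vehicle between frames
    obj_pos_first  -- (x, y) first position of the object, vehicle frame
    obj_pos_second -- (x, y) second position of the object, vehicle frame
    """
    # "Second distance": difference between the first and second positions
    rel_dx = obj_pos_second[0] - obj_pos_first[0]
    rel_dy = obj_pos_second[1] - obj_pos_first[1]
    # "First distance" added to the relative movement of the object
    abs_dx = vehicle_delta[0] + rel_dx
    abs_dy = vehicle_delta[1] + rel_dy
    return math.atan2(abs_dy, abs_dx)
```

For example, under this sketch a vehicle that moves forward while the object's relative position is unchanged yields a moving direction parallel to the vehicle's own motion.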
Zhang (IEEE), Zhang (518), and Watanabe are analogous to the claimed invention because Zhang (IEEE) is in the field of vehicle systems for detecting external objects and motion, and Zhang (518) and Watanabe are in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the moving direction determination of Watanabe into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to improve the motion estimation as suggested by Watanabe at ¶0122, it is possible to improve the accuracy of the estimation of the moving speed and the moving direction. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Watanabe. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Watanabe to obtain the invention as specified in claim 8. Regarding claim 18, Zhang (IEEE) in view of Zhang (518) discloses the object recognition method of claim 11. However, Zhang (IEEE) in view of Zhang (518) fails to disclose wherein the determining of the first moving direction of the object in the first frame comprises determining the first moving direction of the object in the first frame based on a value obtained by adding a first distance moved by the vehicle to a second distance moved by the object with respect to the vehicle, wherein the second distance is determined based on a difference between the first position and the second position. 
Watanabe teaches wherein the determining of the first moving direction of the object in the first frame comprises determining the first moving direction of the object in the first frame based on a value obtained by adding a first distance moved by the vehicle to a second distance moved by the object with respect to the vehicle (¶0120, The movement vector is information including a moving speed and a moving direction. The object detection unit 219 estimates the movement vector of the nearby object based on the change in the distance from the mobile robot 20 to the nearby object… according to the change of the position of the nearby object over time; ¶0124, The mobile robot 20 is moving upward. Examiner considers the calculation of the change in the distance from the robot to the object to indicate an addition of the robot’s movement and the object’s movement), wherein the second distance is determined based on a difference between the first position and the second position (¶0120, According to the change of the position of the nearby object over time. Examiner considers the change in the object’s position over time to indicate a difference between a first and a second position). Zhang (IEEE), Zhang (518), and Watanabe are analogous to the claimed invention because Zhang (IEEE) is in the field of vehicle systems for detecting external objects and motion, and Zhang (518) and Watanabe are in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the moving direction determination of Watanabe into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). 
The suggestion/motivation for doing so would have been to improve the motion estimation as suggested by Watanabe at ¶0122, it is possible to improve the accuracy of the estimation of the moving speed and the moving direction. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Watanabe. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Watanabe to obtain the invention as specified in claim 18. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (2019, “A Framework for Turning Behavior Classification at Intersections Using 3D LIDAR”, IEEE Transactions on Vehicular Technology, Vol. 68, No. 8) (hereafter, “Zhang (IEEE)”) in view of Zhang et al. (US 2024/0103518) (hereafter, “Zhang (518)”) as applied to claims 1 and 11 above, and further in view of Iketani (US 2008/0166024). Regarding claim 9, Zhang (IEEE) in view of Zhang (518) discloses the object recognition apparatus of claim 1. However, Zhang (IEEE) in view of Zhang (518) fails to disclose wherein the processor is configured to determine whether the object is a moving object or a movable stationary object further based on a speed of the object, wherein the speed of the object is determined based on the first position and the second position. Iketani teaches wherein the processor is configured to determine whether the object is a moving object or a movable stationary object further based on a speed of the object (Fig. 22; ¶0085, As will be described with reference to FIG. 22 or the like, the vector classifying portion 262 detects the type of the movement vector detected at each feature point based on … the speed of the automotive vehicle detected by the vehicle speed sensor 113. Examiner notes that the method in Fig. 
22 ultimately determines whether an object is moving or stationary), wherein the speed of the object is determined based on the first position and the second position (Eqn. 1, 2; ¶0060, X(t-k) and Z(t-k) represent the x- and z-axis directional positions of the object calculated k times before. Examiner considers the two positions at two separate time points used in the equations as the first and second positions). Zhang (IEEE), Zhang (518), and Iketani are analogous to the claimed invention because Zhang (IEEE) and Iketani are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the motion determination of Iketani into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to improve detection performance as suggested by Iketani at ¶0105, it is possible to decrease the processing load and to thus improve the detection performance. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Iketani. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Iketani to obtain the invention as specified in claim 9. Regarding claim 19, Zhang (IEEE) in view of Zhang (518) discloses the object recognition method of claim 11. However, Zhang (IEEE) in view of Zhang (518) fails to disclose determining whether the object is a moving object or a movable stationary object further based on a speed of the object, wherein the speed of the object is determined based on the first position and the second position. 
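A minimal sketch of the limitation at issue in claims 9 and 19, assuming planar (x, y) positions and a speed threshold: the object's speed is derived from the first and second positions, then used to decide whether the object is moving or a movable stationary object. The threshold value, function name, and labels are hypothetical illustrations, not taken from the claims or the cited references.

```python
import math

# Illustrative threshold only; the claims recite no particular value.
SPEED_THRESHOLD = 0.5  # m/s

def classify_object(first_pos, second_pos, dt):
    """Classify an object from two (x, y) positions observed dt seconds apart."""
    # Speed determined from the first and second positions
    dist = math.hypot(second_pos[0] - first_pos[0],
                      second_pos[1] - first_pos[1])
    speed = dist / dt
    return "moving" if speed >= SPEED_THRESHOLD else "movable_stationary"
```

Under this sketch, an object displaced 2 m in one second is classified as moving, while one displaced only 0.1 m is classified as a movable stationary object.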
Iketani teaches determining whether the object is a moving object or a movable stationary object further based on a speed of the object (Fig. 22; ¶0085, As will be described with reference to FIG. 22 or the like, the vector classifying portion 262 detects the type of the movement vector detected at each feature point based on … the speed of the automotive vehicle detected by the vehicle speed sensor 113. Examiner notes that the method in Fig. 22 ultimately determines whether an object is moving or stationary), wherein the speed of the object is determined based on the first position and the second position (Eqn. 1, 2; ¶0060, X(t-k) and Z(t-k) represent the x- and z-axis directional positions of the object calculated k times before. Examiner considers the two positions at two separate time points used in the equations as the first and second positions). Zhang (IEEE), Zhang (518), and Iketani are analogous to the claimed invention because Zhang (IEEE) and Iketani are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the motion determination of Iketani into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to improve detection performance as suggested by Iketani at ¶0105, it is possible to decrease the processing load and to thus improve the detection performance. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Iketani. 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Iketani to obtain the invention as specified in claim 19. Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (2019, “A Framework for Turning Behavior Classification at Intersections Using 3D LIDAR”, IEEE Transactions on Vehicular Technology, Vol. 68, No. 8) (hereafter, “Zhang (IEEE)”) in view of Zhang et al. (US 2024/0103518) (hereafter, “Zhang (518)”) as applied to claims 1 and 11 above, and further in view of Chavez-Garcia et al. (2016, “Multiple Sensor Fusion and Classification for Moving Object Detection and Tracking”, IEEE Transactions on Intelligent Transportation Systems, Vol. 17, No. 2) (hereafter, “Chavez-Garcia”). Regarding claim 10, Zhang (IEEE) in view of Zhang (518) discloses the object recognition apparatus of claim 1. However, Zhang (IEEE) in view of Zhang (518) fails to disclose wherein the processor is further configured to assign, to the object, one of: a first identifier indicating that the object is a moving object, or a second identifier indicating that the object is a movable stationary object. Chavez-Garcia teaches wherein the processor is further configured to assign, to the object, one of: a first identifier indicating that the object is a moving object (Fig. 8, Right panels. Examiner considers the speed labels as identifiers that the object is moving), or a second identifier indicating that the object is a movable stationary object (Fig. 8, Right-most panel. The label ped_61 is not accompanied by a speed label. Examiner considers this an identifier indicating an object is a movable stationary object (as pedestrians are movable)). 
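The assignment recited in claims 10 and 20 can be sketched in a few lines: one of two identifiers is attached to the tracked object depending on whether it is a moving object or a movable stationary object. The identifier values and the dictionary-based object record are hypothetical choices made for illustration only.

```python
# Illustrative identifier values; the claims recite no particular encoding.
ID_MOVING = "moving"                           # first identifier
ID_MOVABLE_STATIONARY = "movable_stationary"   # second identifier

def assign_identifier(obj: dict, is_moving: bool) -> dict:
    """Attach one of the two identifiers to a tracked-object record."""
    obj["identifier"] = ID_MOVING if is_moving else ID_MOVABLE_STATIONARY
    return obj
```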
Zhang (IEEE), Zhang (518), and Chavez-Garcia are analogous to the claimed invention because Zhang (IEEE) and Chavez-Garcia are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the motion identifiers of Chavez-Garcia into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to improve the tracking process as suggested by Chavez-Garcia at page 533, §X. Conclusion and Perspectives, the tracking stage benefits from the reduction of mis-detections and from the more accurate classification information to accelerate the tracking process. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Chavez-Garcia. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Chavez-Garcia to obtain the invention as specified in claim 10. Regarding claim 20, Zhang (IEEE) in view of Zhang (518) discloses the object recognition method of claim 11. However, Zhang (IEEE) in view of Zhang (518) fails to disclose assigning, to the object, one of: a first identifier indicating that the object is a moving object, or a second identifier indicating that the object is a movable stationary object. Chavez-Garcia teaches assigning, to the object, one of: a first identifier indicating that the object is a moving object (Fig. 8, Right panels. Examiner considers the speed labels as identifiers that the object is moving), or a second identifier indicating that the object is a movable stationary object (Fig. 
8, Right-most panel. The label ped_61 is not accompanied by a speed label. Examiner considers this an identifier indicating an object is a movable stationary object (as pedestrians are movable)). Zhang (IEEE), Zhang (518), and Chavez-Garcia are analogous to the claimed invention because Zhang (IEEE) and Chavez-Garcia are in the field of vehicle systems for detecting external objects and motion, and Zhang (518) is in the field of detecting external objects and motion using sensor systems. It would have been obvious to a person of ordinary skill before the effective filing date of the claimed invention to incorporate the motion identifiers of Chavez-Garcia into the output signal of Zhang (518) and the vehicle object and motion direction detection system of Zhang (IEEE). The suggestion/motivation for doing so would have been to improve the tracking process as suggested by Chavez-Garcia at page 533, §X. Conclusion and Perspectives, the tracking stage benefits from the reduction of mis-detections and from the more accurate classification information to accelerate the tracking process. This method of improving Zhang (IEEE) was within the ordinary ability of one of ordinary skill in the art based on the teachings of Zhang (518) and Chavez-Garcia. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Zhang (IEEE) and the teachings of Zhang (518) with the teachings of Chavez-Garcia to obtain the invention as specified in claim 20. Allowable Subject Matter Claims 5 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and to overcome the 35 U.S.C. 101 rejection above. The following is a statement of reasons for the indication of allowable subject matter: Regarding claims 5 and 15, Zhang (IEEE) discloses the determining of an object box (Fig. 
3, top-left panel, red box; Page 4, right column, §III. Pose Estimation, The length and width of rectangular bounding box of vehicle measurement model modeled a priori), Tsaregorodtsev teaches the determining of a track heading (Fig. 1, top left panel; Page 3, §B. Data Acquisition and Pre-processing, Therefore, we use the Hungarian Algorithm [25] to find the most cost-effective box-to-box association between two frames…the track can be associated with a newly detected box…it is possible to reconstruct the path of each vehicle that drove), and Liu teaches assigning indices to the object box (Fig. 4. Examiner considers p1-p4 as indices). However, none of the cited references of record, individually or in combination, teaches “assigning a different index, of a plurality of indices, to each of a plurality of vertices forming the object box, based on the first heading being within a first range, the track heading being within a second range, and the representative moving direction being a specified direction”, in combination with other limitations, in claims 5 and 15 as a whole. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bialer et al. (US Patent Application Publication No. 2024/0116503) teaches determining the heading of a pedestrian (¶0004, The orientation angle is between the heading of the pedestrian and a normal vector perpendicular to the direction of the road). Yang et al. (US Patent Application Publication No. 2021/0158399) teaches a confidence score when estimating pose/heading (¶0057, A confidence score is determined for the estimated poses and moving directions). Ohki (US Patent Application Publication No. 
2015/0043786) teaches determining the moving direction of an object (¶0011, The moving object acquisition unit may further acquire the region of the moving object in a reference image which is the immediately preceding image with respect to the target image among the plurality of images, and the moving direction acquisition unit may detect a direction from specific coordinates within the region of the moving object in the reference image to specific coordinates within the region of the moving object in the target image). Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMAO DING whose telephone number is (571)272-7237. The examiner can normally be reached Mon-Fri 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/X.D./ Examiner, Art Unit 2676 /Henok Shiferaw/ Supervisory Patent Examiner, Art Unit 2676

Prosecution Timeline

Apr 08, 2024: Application Filed
Feb 05, 2026: Non-Final Rejection under §101, §103, §112 (current)
