Prosecution Insights
Last updated: April 19, 2026
Application No. 18/431,194

PREDICTION-BASED SYSTEM AND METHOD FOR TRAJECTORY PLANNING OF AUTONOMOUS VEHICLES

Status: Non-Final OA (§103)
Filed: Feb 02, 2024
Examiner: ABD EL LATIF, HOSSAM M
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: TuSimple, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 79% (203 granted / 256 resolved; +27.3% vs TC avg) — above average
Interview Lift: strong, +19.0% on resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 48 applications currently pending
Career History: 304 total applications across all art units

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 18.7% (-21.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 256 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 17, 2026 has been entered.

Response to Arguments

Applicant's amendments and remarks filed on 02/17/2026 with respect to the previous claim rejections under 35 U.S.C. 103 have been fully considered and are moot. With respect to the newly amended subject matter and applicant's arguments, the Examiner relies upon newly cited references Peng et al. (US 2019/0339384 A1) and Nishiwaki et al. (US 6,265,991 B1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 8-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shashua et al. (US 2017/0010618 A1) (hereinafter Shashua) in view of Fukumoto et al. (US 2019/0272750 A1) (hereinafter Fukumoto), in further view of Peng et al. (US 2019/0339384 A1).
Regarding claim 1, Shashua discloses a system comprising: a data processor (see Shashua para "0018"); and a prediction-based trajectory planning module, executable by the data processor, configured to (see Shashua para "0332" "velocity and acceleration module 406 may store software configured to analyze data received from one or more computing and electromechanical devices in vehicle 200 that are configured to cause a change in velocity and/or acceleration of vehicle 200"): receive a proposed trajectory for a host vehicle (see Shashua para "0332" "processing unit 110 may execute instructions associated with velocity and acceleration module 406 to calculate a target speed for vehicle 200 based on data derived from execution of monocular image analysis module 402… processing unit 110 may calculate a target speed for vehicle 200 based on sensory input (e.g., information from radar) and input from other systems of vehicle 200, such as throttling system 220, braking system…"); and cause modification of the proposed host vehicle trajectory if the proposed host vehicle trajectory will conflict with any of the predicted trajectories of the one or more proximate dynamic objects (see Shashua para "0606" "When processing unit 110 determines that the determined autonomous steering does not comply with one or more constraints imposed by the additional considerations, processing unit 110 may modify the autonomous steering action to help ensure that all the constraints may be satisfied").

But Shashua fails to explicitly teach: generate predicted trajectories for each of one or more dynamic objects in proximity to the host vehicle, the predicted trajectories for each of the one or more proximate dynamic objects corresponding to likely actions or reactions by each of the one or more proximate dynamic objects based on human driving behavior context data if the host vehicle follows the proposed host vehicle trajectory.

However, Fukumoto teaches generating predicted trajectories for each of one or more dynamic objects in proximity to the host vehicle, the predicted trajectories for each of the one or more proximate dynamic objects corresponding to likely actions or reactions by each of the one or more proximate dynamic objects based on human driving behavior context data if the host vehicle follows the proposed host vehicle trajectory (see Fukumoto paragraphs "0015", "0027", "0047", "0052", "0064-0071" and "0073": "in order to make traffic smooth in a situation where an autonomous driving vehicle and a conventional vehicle coexist, it is required that the autonomous driving vehicle predict an action of the driver of the other conventional vehicle and perform travel control based on the prediction"; "Travel state determination section 1e generates the travel command signal for allowing or restricting entry into intersection LQ in accordance with the action of object vehicle B predicted based on the detection signal of action monitor 5. Travel state determination section 1e operates, for example, when state determination section 1d detects object vehicle B approaching intersection LQ that is ahead of subject vehicle A in the travel direction. Then, the travel command signal generated by travel state determination section 1e is transmitted to, for example, vehicle ECU"; "when travel lane L1 on which subject vehicle A is traveling is a priority road (step S4: YES), travel state determination section 1e generates the travel command signal for allowing entry into intersection LQ to cause subject vehicle A to enter intersection LQ without a stop because it is predicted that object vehicle B will stop before intersection LQ (step S9)…", regarding predicting the future action of another vehicle and monitoring a motion of a driver of object vehicle B to predict an action of object vehicle B and act accordingly).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shashua for a self-aware system for adaptive navigation "to predict an action of another vehicle trajectory" as taught by Fukumoto (paras. [0015] – [0068-0072]) in order to determine whether to allow or restrict the own vehicle to pass or not, to avoid any collision that can occur with another vehicle.

But modified Shashua fails to explicitly disclose wherein the system is configured to use regression to predict acceleration of a dynamic object.

However, Peng teaches wherein the system is configured to use regression to predict acceleration of a dynamic object (see Peng para "0055": "Specifically, based on the predicted object information, the radar data processing unit 104-4 can determine a real-time motion model of the movable platform 100, which may include at least one of a uniform motion model corresponding to a zero acceleration, a uniformly accelerated motion model corresponding to a uniform acceleration, or a nonuniformly accelerated motion model corresponding to a nonuniform acceleration. The different motion models can be pre-built and the radar data processing unit 104-4 can choose one or more appropriate models for the purpose of tracking the matched object. Then, based on the real-time motion model of the movable platform 100, the radar data processing unit 104-4 can apply a predetermined filtering algorithm to the object information of the matched object, to predict future object information of the matched object").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shashua for a self-aware system for adaptive navigation "to predict an acceleration of an object nearby" as taught by Peng (para. [0055]) in order to enhance predictive accuracy and improve real-time tracking performance.
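As a reading aid, the planning behavior attributed to claim 1 (and the iterative modification of claim 6) can be sketched in code: predict a trajectory for each proximate dynamic object, test the proposed host trajectory for conflicts, and adjust it until no conflict remains. This is a minimal illustrative sketch, not the application's actual implementation; the constant-velocity predictor, the 2 m conflict gap, the lateral-nudge strategy, and all function names are assumptions.

```python
# Hypothetical sketch of a prediction-based trajectory planning loop.
# All names, the constant-velocity predictor, and the thresholds are
# illustrative assumptions, not the claimed system's actual design.

def predict_trajectory(obj, horizon=5, dt=1.0):
    """Constant-velocity prediction: a list of future (x, y) points."""
    x, y = obj["pos"]
    vx, vy = obj["vel"]
    return [(x + vx * t * dt, y + vy * t * dt) for t in range(1, horizon + 1)]

def conflicts(host_traj, obj_traj, min_gap=2.0):
    """A conflict exists if host and object are ever closer than min_gap."""
    return any(
        ((hx - ox) ** 2 + (hy - oy) ** 2) ** 0.5 < min_gap
        for (hx, hy), (ox, oy) in zip(host_traj, obj_traj)
    )

def plan(host_traj, dynamic_objects):
    """Modify the proposed host trajectory until all conflicts are eliminated
    (here: a simple repeated lateral nudge of 0.5 m)."""
    predicted = [predict_trajectory(o) for o in dynamic_objects]
    while any(conflicts(host_traj, p) for p in predicted):
        host_traj = [(x, y + 0.5) for x, y in host_traj]
    return host_traj
```

For example, a host trajectory along a lane that coincides with a lead object's predicted path would be nudged laterally until the minimum gap is respected, while a trajectory with no predicted conflict is passed through unchanged.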
Regarding claim 2, Shashua discloses configured to receive, filter, and smooth perception data (see Shashua paras "0340" and "0346" "kalman filter" and "filter the identified objects").

Regarding claim 3, Shashua discloses configured to generate coordinate transformations of perception data relative to the one or more proximate dynamic objects (see Shashua para "0423" "the road model may be detected in the image coordinate frame and transformed to a three dimensional space that may be virtually attached to the camera").

Regarding claim 4, Shashua discloses configured to use training data comprising labeling data that includes context information defining directionality and rate behaviors of dynamic objects represented in the training data (see Shashua paras "0503-0506" "Feature points (FPs) extracted from frames of different drives at a similar GPS position and heading may be potentially matched within a GPS uncertainty radius"; the limitation "rate behaviors of dynamic objects represented in the training data" is comprised within the citation by virtue of known image capture rate and distances of features in the images).

Regarding claim 5, Shashua discloses configured to use training data comprising labeling data that includes context information defining directionality and rate behaviors of dynamic objects represented in the training data (see Shashua paras "0503-0506" "Feature points (FPs) extracted from frames of different drives at a similar GPS position and heading may be potentially matched within a GPS uncertainty radius"; the limitation "rate behaviors of dynamic objects represented in the training data" is comprised within the citation by virtue of known image capture rate and distances of features in the images); the context information defining directionality behaviors further defining a left turn, no turn, or a right turn (see Shashua paras "0354-0359" "At step 630, processing unit 110 may execute navigational response module 408 to cause one or more navigational responses in vehicle 200 based on the analysis performed at step 620 and the techniques as described above in connection with FIG. 4. Navigational responses may include, for example, a turn"); and the context information defining rate behaviors further defining accelerating or decelerating (see Shashua paras "0123" and "0353" "determine…acceleration of the leading vehicle" and "receive from each of the plurality of autonomous vehicles").

Regarding claim 6, Shashua discloses wherein causing modification of the proposed host vehicle trajectory comprises: determining if the proposed host vehicle trajectory will conflict with any of the predicted trajectories of the one or more proximate dynamic objects; and modifying the proposed host vehicle trajectory based on the determined conflicts until the conflicts are eliminated (see Shashua para "0606" "When processing unit 110 determines that the determined autonomous steering does not comply with one or more constraints imposed by the additional considerations, processing unit 110 may modify the autonomous steering action to help ensure that all the constraints may be satisfied").

Regarding claim 8, Shashua discloses configured to determine if any of the predicted trajectories for the one or more proximate dynamic objects may cause the host vehicle to violate a pre-defined goal based on a related score being below a minimum acceptable threshold (see Shashua paras "0563-0565", "score" being related to determining if any of the predicted trajectories for the one or more proximate dynamic objects may cause the host vehicle to violate a pre-defined goal by virtue of said determination being modeled upon a scene that relies on accurate landmark scores).

Regarding claim 9, Shashua discloses wherein the proposed host vehicle trajectory is output to a vehicle control subsystem causing the host vehicle to follow the output proposed trajectory (see Shashua para "0037" "processor may be further programmed to adjust the steering system of the vehicle to move the vehicle from a current position of the vehicle to a position on the predetermined road model trajectory").
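The coordinate transformations of claim 3, and the lane-aligned (l, d) frame later recited in claim 22 (one axis parallel to the lane markers, the other perpendicular to it), amount to a planar rotation of perception data into a lane frame. The sketch below assumes a straight lane through a known origin with heading theta; the function name and the straight-lane simplification are illustrative assumptions, not the application's disclosed method.

```python
import math

# Hypothetical sketch of a lane-aligned coordinate transformation:
# l runs along the lane markers, d runs perpendicular to them.
# Assumes a straight lane through `origin` with heading `theta` (radians).

def to_lane_frame(point, origin, theta):
    """Map a world-frame (x, y) point into lane-frame (l, d) coordinates."""
    dx = point[0] - origin[0]
    dy = point[1] - origin[1]
    l = dx * math.cos(theta) + dy * math.sin(theta)   # distance along the lane
    d = -dx * math.sin(theta) + dy * math.cos(theta)  # lateral offset from the lane
    return l, d
```

For a lane heading due east (theta = 0), a point 5 m ahead and 2 m to the side maps to (l, d) = (5, 2); for a lane heading due north, a point 5 m north maps to (5, 0).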
Regarding claim 10, Shashua discloses a method comprising: receiving a proposed trajectory for a host vehicle (see Shashua para "0332" "processing unit 110 may execute instructions associated with velocity and acceleration module 406 to calculate a target speed for vehicle 200 based on data derived from execution of monocular image analysis module 402… processing unit 110 may calculate a target speed for vehicle 200 based on sensory input (e.g., information from radar) and input from other systems of vehicle 200, such as throttling system 220, braking system…"); and causing modification of the proposed host vehicle trajectory if the proposed host vehicle trajectory will conflict with any of the predicted trajectories of the one or more proximate dynamic objects (see Shashua para "0606" "When processing unit 110 determines that the determined autonomous steering does not comply with one or more constraints imposed by the additional considerations, processing unit 110 may modify the autonomous steering action to help ensure that all the constraints may be satisfied").

But Shashua fails to explicitly teach: generating predicted trajectories for each of one or more dynamic objects in proximity to the host vehicle, the predicted trajectories for each of the one or more proximate dynamic objects corresponding to likely actions or reactions by each of the one or more proximate dynamic objects based on human driving behavior context data if the host vehicle follows the proposed host vehicle trajectory.

However, Fukumoto teaches generating predicted trajectories for each of one or more dynamic objects in proximity to the host vehicle, the predicted trajectories for each of the one or more proximate dynamic objects corresponding to likely actions or reactions by each of the one or more proximate dynamic objects based on human driving behavior context data if the host vehicle follows the proposed host vehicle trajectory (see Fukumoto paragraphs "0015", "0027", "0047", "0052", "0064-0071" and "0073": "in order to make traffic smooth in a situation where an autonomous driving vehicle and a conventional vehicle coexist, it is required that the autonomous driving vehicle predict an action of the driver of the other conventional vehicle and perform travel control based on the prediction"; "Travel state determination section 1e generates the travel command signal for allowing or restricting entry into intersection LQ in accordance with the action of object vehicle B predicted based on the detection signal of action monitor 5. Travel state determination section 1e operates, for example, when state determination section 1d detects object vehicle B approaching intersection LQ that is ahead of subject vehicle A in the travel direction. Then, the travel command signal generated by travel state determination section 1e is transmitted to, for example, vehicle ECU"; "when travel lane L1 on which subject vehicle A is traveling is a priority road (step S4: YES), travel state determination section 1e generates the travel command signal for allowing entry into intersection LQ to cause subject vehicle A to enter intersection LQ without a stop because it is predicted that object vehicle B will stop before intersection LQ (step S9)…", regarding predicting the future action of another vehicle and monitoring a motion of a driver of object vehicle B to predict an action of object vehicle B and act accordingly).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shashua for a self-aware system for adaptive navigation "to predict an action of another vehicle trajectory" as taught by Fukumoto (paras. [0015] – [0068-0072]) in order to determine whether to allow or restrict the own vehicle to pass or not, to avoid any collision that can occur with another vehicle.

But modified Shashua fails to explicitly disclose wherein the method further comprises: using regression to predict acceleration of a dynamic object.

However, Peng teaches wherein the method further comprises using regression to predict acceleration of a dynamic object (see Peng para "0055": "Specifically, based on the predicted object information, the radar data processing unit 104-4 can determine a real-time motion model of the movable platform 100, which may include at least one of a uniform motion model corresponding to a zero acceleration, a uniformly accelerated motion model corresponding to a uniform acceleration, or a nonuniformly accelerated motion model corresponding to a nonuniform acceleration. The different motion models can be pre-built and the radar data processing unit 104-4 can choose one or more appropriate models for the purpose of tracking the matched object. Then, based on the real-time motion model of the movable platform 100, the radar data processing unit 104-4 can apply a predetermined filtering algorithm to the object information of the matched object, to predict future object information of the matched object").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shashua for a self-aware system for adaptive navigation "to predict an acceleration of an object nearby" as taught by Peng (para. [0055]) in order to enhance predictive accuracy and improve real-time tracking performance.
Regarding claim 11, Shashua discloses obtaining proximate dynamic object position and proximate dynamic object velocity (see Shashua para "0353" "For example, processing unit 110 may determine the position, velocity (e.g., direction and speed), and/or acceleration of the leading vehicle, using the techniques described in connection with FIGS. 5A and 5B, above").

Regarding claim 12, Shashua discloses determining a position of each proximate dynamic object relative to the host vehicle (see Shashua para "0123" "receive from each of the plurality of autonomous vehicles").

Regarding claim 13, Shashua discloses obtaining perception data from an array of perception information gathering devices or sensors (see Shashua para "0288" "image capture devices 122, 124, and 126").

Regarding claim 14, Shashua discloses obtaining training data including labeling data obtained from human labelers or automated labeling processes (see Shashua paras "0503-0506" "FP pairs extracted from the clip database described in the first scheme may also be labeled by a human responsible for annotating FP matches between clips").

Regarding claim 15, Shashua discloses obtaining perception data from a sensor from the group consisting of: a camera or image capture device, a Global Positioning System (GPS) transceiver, and a laser range finder/LIDAR unit (see Shashua para "0288" "image capture devices 122, 124, and 126").

Regarding claim 17, Shashua discloses determining if any of the predicted trajectories for the one or more proximate dynamic objects may cause the host vehicle to violate a pre-defined goal (see Shashua paras "0563-0565", "score" being related to determining if any of the predicted trajectories for the one or more proximate dynamic objects may cause the host vehicle to violate a pre-defined goal by virtue of said determination being modeled upon a scene that relies on accurate landmark scores).

Regarding claim 18, Shashua discloses causing the host vehicle to follow the proposed trajectory (see Shashua para "0037" "processor may be further programmed to adjust the steering system of the vehicle to move the vehicle from a current position of the vehicle to a position on the predetermined road model trajectory").

Regarding claim 19, Shashua discloses a non-transitory machine-readable storage medium embodying instructions which, when executed by a machine, cause the machine to: receive a proposed trajectory for a host vehicle (see Shashua paras "0106" and "0332" "processing unit 110 may execute instructions associated with velocity and acceleration module 406 to calculate a target speed for vehicle 200 based on data derived from execution of monocular image analysis module 402… processing unit 110 may calculate a target speed for vehicle 200 based on sensory input (e.g., information from radar) and input from other systems of vehicle 200, such as throttling system 220, braking system…"); and cause modification of the proposed host vehicle trajectory if the proposed host vehicle trajectory will conflict with any of the predicted trajectories of the one or more proximate dynamic objects (see Shashua para "0606" "When processing unit 110 determines that the determined autonomous steering does not comply with one or more constraints imposed by the additional considerations, processing unit 110 may modify the autonomous steering action to help ensure that all the constraints may be satisfied").

But Shashua fails to explicitly teach: generate predicted trajectories for each of one or more dynamic objects in proximity to the host vehicle, the predicted trajectories for each of the one or more proximate dynamic objects corresponding to likely actions or reactions by each of the one or more proximate dynamic objects based on human driving behavior context data if the host vehicle follows the proposed host vehicle trajectory.

However, Fukumoto teaches generating predicted trajectories for each of one or more dynamic objects in proximity to the host vehicle, the predicted trajectories for each of the one or more proximate dynamic objects corresponding to likely actions or reactions by each of the one or more proximate dynamic objects based on human driving behavior context data if the host vehicle follows the proposed host vehicle trajectory (see Fukumoto paragraphs "0015", "0027", "0047", "0052", "0064-0071" and "0073": "in order to make traffic smooth in a situation where an autonomous driving vehicle and a conventional vehicle coexist, it is required that the autonomous driving vehicle predict an action of the driver of the other conventional vehicle and perform travel control based on the prediction"; "Travel state determination section 1e generates the travel command signal for allowing or restricting entry into intersection LQ in accordance with the action of object vehicle B predicted based on the detection signal of action monitor 5. Travel state determination section 1e operates, for example, when state determination section 1d detects object vehicle B approaching intersection LQ that is ahead of subject vehicle A in the travel direction. Then, the travel command signal generated by travel state determination section 1e is transmitted to, for example, vehicle ECU"; "when travel lane L1 on which subject vehicle A is traveling is a priority road (step S4: YES), travel state determination section 1e generates the travel command signal for allowing entry into intersection LQ to cause subject vehicle A to enter intersection LQ without a stop because it is predicted that object vehicle B will stop before intersection LQ (step S9)…", regarding predicting the future action of another vehicle and monitoring a motion of a driver of object vehicle B to predict an action of object vehicle B and act accordingly).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shashua for a self-aware system for adaptive navigation "to predict an action of another vehicle trajectory" as taught by Fukumoto (paras. [0015] – [0068-0072]) in order to determine whether to allow or restrict the own vehicle to pass or not, to avoid any collision that can occur with another vehicle.

But modified Shashua fails to explicitly disclose wherein the instructions, when executed by a machine, further cause the machine to: use regression to predict acceleration of a dynamic object.

However, Peng teaches using regression to predict acceleration of a dynamic object (see Peng para "0055": "Specifically, based on the predicted object information, the radar data processing unit 104-4 can determine a real-time motion model of the movable platform 100, which may include at least one of a uniform motion model corresponding to a zero acceleration, a uniformly accelerated motion model corresponding to a uniform acceleration, or a nonuniformly accelerated motion model corresponding to a nonuniform acceleration. The different motion models can be pre-built and the radar data processing unit 104-4 can choose one or more appropriate models for the purpose of tracking the matched object. Then, based on the real-time motion model of the movable platform 100, the radar data processing unit 104-4 can apply a predetermined filtering algorithm to the object information of the matched object, to predict future object information of the matched object").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shashua for a self-aware system for adaptive navigation "to predict an acceleration of an object nearby" as taught by Peng (para. [0055]) in order to enhance predictive accuracy and improve real-time tracking performance.
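The "regression to predict acceleration" limitation that the examiner maps onto Peng's motion models can be illustrated with an ordinary least-squares fit: given recent (time, speed) samples for a tracked object, the fitted slope is an acceleration estimate, which can in turn select among Peng-style motion models (zero, uniform, or nonuniform acceleration). This is a sketch only; the function names and the 0.05 m/s² threshold are assumptions, not anything disclosed by the application or the cited references.

```python
# Hypothetical sketch: least-squares regression of speed vs. time to
# estimate a dynamic object's acceleration. Names and the model-selection
# threshold are illustrative assumptions.

def fit_acceleration(times, speeds):
    """Ordinary least-squares slope of speed (m/s) vs. time (s) = acceleration (m/s^2)."""
    n = len(times)
    t_bar = sum(times) / n
    v_bar = sum(speeds) / n
    num = sum((t - t_bar) * (v - v_bar) for t, v in zip(times, speeds))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den

def motion_model(accel, eps=0.05):
    """Pick a Peng-style motion model from the estimated acceleration."""
    return "uniform motion" if abs(accel) < eps else "uniformly accelerated"
```

For instance, speeds of 10, 12, 14, 16 m/s at t = 0..3 s regress to 2.0 m/s², selecting the uniformly accelerated model; a constant speed regresses to 0 and selects uniform motion.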
Regarding claim 20, Shashua discloses configured to generate predicted accelerations for each of the one or more proximate dynamic objects near the host vehicle (see Shashua paras "0123" and "0353" "determine…acceleration of the leading vehicle" and "receive from each of the plurality of autonomous vehicles").

Claims 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Shashua et al. (US 2017/0010618 A1) (hereinafter Shashua) in view of Fukumoto et al. (US 2019/0272750 A1) (hereinafter Fukumoto), in further view of Peng et al. (US 2019/0339384 A1) as applied to claim 3 above, and in further view of Nishiwaki et al. (US 6,265,991 B1).

Regarding claim 21, Shashua fails to explicitly teach wherein different numbers of proximate dynamic object positions are equivalently used to define context information for the host vehicle. However, Nishiwaki teaches this limitation (see Nishiwaki col 2, lines 29-60 and col 7, lines 42-50: "detecting distances from an own-vehicle to a plurality of objects around the own-vehicle and lateral positions of the objects relative to a running direction of the own-vehicle, thereby measuring object positional data on coordinates defined by the running direction and a transverse direction of the own-vehicle… and a vehicle's own lane determining unit for determining whether other vehicles running ahead are in the same lane as the own-vehicle or not based on the curvature of the road estimated by the road curvature estimating unit" and "the output means 4 outputs the number of the detected objects, the distances to the respective detected objects, the positions of the detected objects relative to the own-vehicle in the lateral direction").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shashua for a self-aware system for adaptive navigation "to provide a vehicular front monitoring apparatus which can reliably recognize a road configuration ahead of an own-vehicle through simpler computation using only a radar system" as taught by Nishiwaki (col 2, lines 29-60 and col 7, lines 42-50) in order to improve accuracy of defining the surrounding traffic context of the host vehicle.

Regarding claim 22, Shashua fails to explicitly teach wherein a coordinate system (l, d) is used to define a location of the host vehicle relative to locations of proximate dynamic objects, wherein the l axis is aligned parallel with lane markers of a roadway, and the d axis is oriented perpendicularly to the l axis and the lane markers of the roadway. However, Nishiwaki teaches this limitation (see Nishiwaki col 2, lines 29-60 and col 7, lines 42-50: "detecting distances from an own-vehicle to a plurality of objects around the own-vehicle and lateral positions of the objects relative to a running direction of the own-vehicle, thereby measuring object positional data on coordinates defined by the running direction and a transverse direction of the own-vehicle… and a vehicle's own lane determining unit for determining whether other vehicles running ahead are in the same lane as the own-vehicle or not based on the curvature of the road estimated by the road curvature estimating unit"; "thereby measuring object positional data (xi, yi) on coordinates defined by the running direction (yi) and a lateral direction (xi) to the running direction"; and "the output means 4 outputs the number of the detected objects, the distances to the respective detected objects, the positions of the detected objects relative to the own-vehicle in the lateral direction").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Shashua for a self-aware system for adaptive navigation "to provide a vehicular front monitoring apparatus which can reliably recognize a road configuration ahead of an own-vehicle through simpler computation using only a radar system" as taught by Nishiwaki (col 2, lines 29-60 and col 7, lines 42-50) in order to improve accuracy of defining the surrounding traffic context of the host vehicle.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HOSSAM M ABD EL LATIF whose telephone number is (571) 272-5869. The examiner can normally be reached M-F 8 am-5 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rachid Bendidi, can be reached on (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HOSSAM M ABD EL LATIF/
Examiner, Art Unit 3664

Prosecution Timeline

Feb 02, 2024: Application Filed
Jul 21, 2025: Non-Final Rejection (§103)
Nov 13, 2025: Response Filed
Nov 24, 2025: Final Rejection (§103)
Feb 17, 2026: Request for Continued Examination
Feb 26, 2026: Response after Non-Final Action
Mar 03, 2026: Non-Final Rejection (§103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595024 — BICYCLE ELECTRIC COMPONENT SETTING SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12583457 — Method for Assisting a Vehicle User During a Lane Change Maneuver Taking into Account Different Areas in the Surroundings of the Vehicle, and Driver Assistance System for a Vehicle (granted Mar 24, 2026; 2y 5m to grant)
Patent 12552563 — MOTOR CONTROL OPTIMIZATIONS FOR UNMANNED AERIAL VEHICLES (granted Feb 17, 2026; 2y 5m to grant)
Patent 12530621 — ARTIFICIAL INTELLIGENCE ENABLED VEHICLE OPERATING SYSTEM (granted Jan 20, 2026; 2y 5m to grant)
Patent 12528493 — CONTROL DEVICE, CONTROL METHOD, AND STORAGE MEDIUM (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 98% (+19.0%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 256 resolved cases by this examiner. Grant probability derived from career allow rate.
