Prosecution Insights
Last updated: April 19, 2026
Application No. 18/855,659

CONFLICT CONTROL METHOD FOR SHARED DRIVING, AND STORAGE MEDIUM AND ELECTRONIC DEVICE

Non-Final OA (§102, §103)
Filed: Oct 10, 2024
Examiner: ZALESKAS, JOHN M
Art Unit: 3747
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: Jingdong Kunpeng (Jiangsu) Technology Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 62% (386 granted / 623 resolved; -8.0% vs TC avg)
Interview Lift: +19.7% on resolved cases with an interview (strong)
Avg Prosecution: 2y 6m (32 applications currently pending)
Total Applications: 655 across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 32.7% (-7.3% vs TC avg)
§102: 28.6% (-11.4% vs TC avg)
§112: 31.6% (-8.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 623 resolved cases.
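The headline examiner figures are internally consistent; a quick arithmetic check (counts taken from the examiner card above; the 82% with-interview rate is the reported rounded figure):

```python
# Sanity check of the reported examiner statistics (counts from the card above).
granted, resolved = 386, 623
career_rate = 100 * granted / resolved           # career allowance rate, percent

with_interview_rate = 82.0                       # reported (rounded) figure
lift = with_interview_rate - career_rate         # implied interview lift, points

print(f"career: {career_rate:.1f}%  lift: {lift:.1f} pts")
```

The implied lift of about 20 points lines up with the reported +19.7% once rounding of the 82% figure is accounted for.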

Office Action (§102, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 4, 5, 9-11, 13, 14, 17, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Non-Patent Literature (NPL) “Shared Steering Torque Control for Lane Change Assistance: A Stochastic Game-Theoretic Approach” to Ji et al. (hereinafter: “Ji”).
With respect to claim 1, Ji teaches a conflict control method for shared driving, performed by a shared driving system, the method comprising: establishing, based on a driver's deterministic steering torque and a driver's stochastic steering torque, a game model for human-machine path tracking control corresponding to a human-machine interaction action [for example, as discussed by at least part A of section III on pages 3095-3096, equations (7) and (8) (e.g., “a game model for human-machine path tracking control corresponding to a human-machine interaction action”) are LQ cost functions for evaluating path-tracking performance for a driver (e.g., “human”) and an intelligent electric power steering (IEPS) system (e.g., “machine”) which are established based, in part, on a deterministic driver steering torque input (e.g., “driver's deterministic steering torque,” as defined in the nomenclature section of page 3093) and an uncertain (random) driver steering torque input (e.g., “driver's stochastic steering torque,” as defined in the nomenclature section of page 3093)]; obtaining human-machine torque conflict information by solving the game model for human-machine path tracking control [for example, as discussed by at least parts B & C of section II on pages 3096-3099 and section IV on pages 3099-3100, each of a feedback (FB) Nash equilibrium solution, an open-loop (OL) Nash equilibrium solution, a FB Stackelberg equilibrium solution, and an OL Stackelberg equilibrium solution (e.g., “human-machine torque conflict information”) is obtained based, in part, on solving the equations (7) and (8) under each of a FB Nash strategy, an OL Nash strategy, a FB Stackelberg strategy, and an OL Stackelberg strategy, respectively]; determining a shared control strategy based on the human-machine torque conflict information [for example, as discussed by at least sections IV & V on pages 3099-3100, a first strategy (e.g., the FB Stackelberg strategy) (e.g., “shared control strategy”) 
of the FB Nash strategy, the OL Nash strategy, the FB Stackelberg strategy, and the OL Stackelberg strategy is adopted (e.g., “determining”) based, in part, on the feedback (FB) Nash equilibrium solution, the open-loop (OL) Nash equilibrium solution, the FB Stackelberg equilibrium solution, and the OL Stackelberg equilibrium solution]; and controlling a shared driving vehicle based on the shared control strategy [for example, as discussed by at least sections IV-VII on pages 3099-3104, a vehicle (e.g., “shared driving vehicle”) is controlled based, in part, on the first strategy].

With respect to claim 2, Ji teaches the conflict control method for shared driving according to claim 1, wherein the game model for human-machine path tracking control comprises a closed-loop game model [as discussed above with respect to claim 1, the FB Nash strategy and/or the FB Stackelberg strategy is/are useable with respect to the equations (7) and (8), such that the equations (7) and (8) are definably inclusive of a “closed-loop game model”], and establishing the game model for human-machine path tracking control corresponding to the human-machine interaction action, comprises: establishing, based on the driver's deterministic steering torque and the driver's stochastic steering torque, a first discrete state update equation for a dynamics system of the shared driving vehicle in a closed-loop information mode [for example, as discussed by at least part A of section III on pages 3095-3096, equations (2) and (3) (e.g., “first discrete state update equation”) are established based, in part, on the deterministic driver steering torque input and the uncertain (random) driver steering torque input, including in case(s) where the FB Nash strategy and/or the FB Stackelberg strategy is/are to be used with respect to the equations (7) and (8) (e.g., “in a closed-loop information mode”)]; obtaining a path tracking augmentation system that comprises a human-machine preview state by augmenting
the first discrete state update equation through a human-machine preview dynamic process [for example, as discussed by at least part A of section III on pages 3095-3096, equation (6) (e.g., “path tracking augmentation system”), which is a driver-IEPS-road global shared control system, is obtained based on augmenting an equation (5) to account for preview scope, where the equation (5) is derived by substituting an equation (4) into the equation (2), where the equation (4) represents dynamics of the driver/IEPS’s target path preview (e.g., “human-machine preview dynamic process”), such that the equation (6) includes a definable “human-machine preview state” by virtue of inclusion of the equation (4)]; and establishing, based on the path tracking augmentation system, a driver trajectory cost function and a driving system trajectory cost function, to obtain the game model for human-machine path tracking control [for example, as discussed by at least part A of section III on pages 3095-3096, the equations (7) and (8) (e.g., “a driver trajectory cost function and a driving system trajectory cost function”) are established based on the equation (6)]. 
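For readers unfamiliar with the LQ game framing that the rejection maps onto Ji's equations (7) and (8), the general shape is a pair of quadratic trajectory costs, one per agent. The sketch below is a generic illustration with made-up weights and signals, not Ji's actual model or values:

```python
import numpy as np

# Generic two-player LQ path-tracking costs of the kind the rejection maps to
# Ji's equations (7)-(8): each agent (driver, machine) penalizes the tracking
# error state x_k and its own steering torque u_k over a horizon.
# All weights and signals below are illustrative placeholders, not Ji's values.

def lq_cost(xs, us, Q, R):
    """Finite-horizon quadratic cost: sum_k x_k' Q x_k + u_k' R u_k."""
    return sum(float(x @ Q @ x + u @ R @ u) for x, u in zip(xs, us))

rng = np.random.default_rng(0)
horizon = 10
xs = [rng.standard_normal(2) for _ in range(horizon)]        # tracking errors
u_driver = [rng.standard_normal(1) for _ in range(horizon)]  # driver torques
u_machine = [rng.standard_normal(1) for _ in range(horizon)] # machine torques

Q_d, R_d = np.diag([1.0, 0.1]), np.array([[0.5]])  # driver weights (made up)
Q_m, R_m = np.diag([2.0, 0.2]), np.array([[1.0]])  # machine weights (made up)

J_driver = lq_cost(xs, u_driver, Q_d, R_d)
J_machine = lq_cost(xs, u_machine, Q_m, R_m)
```

Because each cost depends on the shared state trajectory but only that agent's own input, minimizing them jointly is a game rather than a single optimization, which is what the Nash/Stackelberg solution concepts address.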
With respect to claim 4, Ji teaches the conflict control method for shared driving according to claim 2, wherein obtaining the human-machine torque conflict information by solving the game model for human-machine path tracking control, comprises: determining, based on a stochastic dynamic programming algorithm, recursive relationships for steering control value functions respectively corresponding to a driver and a driving system under a Nash equilibrium condition (for example, as discussed by at least section B on page 3096); and calculating, based on the first discrete state update equation and the recursive relationships, closed-loop Nash equilibrium solutions respectively corresponding to the driver and the driving system, as the human-machine torque conflict information (for example, as discussed by at least section B on pages 3096-3098).

With respect to claim 5, Ji teaches the conflict control method for shared driving according to claim 2, wherein obtaining the human-machine torque conflict information by solving the game model for human-machine path tracking control, comprises: determining, based on a stochastic dynamic programming algorithm, recursive relationships for steering control value functions respectively corresponding to a driver and a driving system under a Stackelberg equilibrium condition (for example, as discussed by at least section C on page 3098); determining a driver reaction function based on the recursive relationship for the steering control value function corresponding to the driving system (for example, as discussed by at least section C on page 3098); calculating, based on the first discrete state update equation, the driver reaction function, and the recursive relationship for the steering control value function corresponding to the driving system, an open-loop Stackelberg equilibrium solution corresponding to the driving system (for example, as discussed by at least section C on page 3098); and calculating, based on the open-loop
Stackelberg equilibrium solution corresponding to the driving system, an open-loop Stackelberg equilibrium solution corresponding to the driver, as the human-machine torque conflict information (for example, as discussed by at least section C on pages 3098-3099).

With respect to claim 9, Ji teaches a non-transitory computer-readable storage medium having a computer program stored thereon, which when executed by a processor, causes a conflict control method for shared driving to be implemented, wherein the conflict control method for shared driving comprises: establishing, based on a driver's deterministic steering torque and a driver's stochastic steering torque, a game model for human-machine path tracking control corresponding to a human-machine interaction action; obtaining human-machine torque conflict information by solving the game model for human-machine path tracking control; determining a shared control strategy based on the human-machine torque conflict information; and controlling a shared driving vehicle based on the shared control strategy [as discussed in detail above with respect to claim 1, in view of at least the Abstract and sections IV-VII on pages 3093 & 3099-3104, which disclose implementation via computer and programming (hardware and software)].
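The "stochastic dynamic programming" and "closed-loop Nash equilibrium" limitations of claims 4 and 13 correspond, in general form, to the standard backward recursion for a two-player discrete-time LQ game. The following is a textbook-style sketch of that recursion with illustrative matrices and a fixed-point solution of the coupled gain equations; it is not Ji's actual derivation:

```python
import numpy as np

# Feedback-Nash backward recursion for a two-player discrete-time LQ game
# (the generic construction behind "closed-loop Nash equilibrium solutions").
# Dynamics: x+ = A x + B1 u1 + B2 u2; all matrices here are illustrative.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B1 = np.array([[0.0], [0.1]])   # driver torque channel (made up)
B2 = np.array([[0.0], [0.2]])   # machine torque channel (made up)
Q1, R1 = np.diag([1.0, 0.1]), np.array([[0.5]])
Q2, R2 = np.diag([2.0, 0.2]), np.array([[1.0]])

P1, P2 = Q1.copy(), Q2.copy()   # terminal cost-to-go matrices
for _ in range(20):             # backward in time over the horizon
    K1, K2 = np.zeros((1, 2)), np.zeros((1, 2))
    for _ in range(50):         # fixed point of the coupled gain equations
        K1 = np.linalg.solve(R1 + B1.T @ P1 @ B1, B1.T @ P1 @ (A - B2 @ K2))
        K2 = np.linalg.solve(R2 + B2.T @ P2 @ B2, B2.T @ P2 @ (A - B1 @ K1))
    Acl = A - B1 @ K1 - B2 @ K2  # closed loop under both feedback policies
    P1 = Q1 + K1.T @ R1 @ K1 + Acl.T @ P1 @ Acl
    P2 = Q2 + K2.T @ R2 @ K2 + Acl.T @ P2 @ Acl
```

At each backward step, each player's gain is a best response to the other's, so the pair (K1, K2) at the fixed point is a stage-wise Nash pair and P1, P2 are the corresponding quadratic value functions.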
With respect to claim 10, Ji teaches an electronic device, comprising: one or more processors; and a storage unit for storing one or more programs, which when executed by the one or more processors, cause the one or more processors to be configured to: establish, based on a driver's deterministic steering torque and a driver's stochastic steering torque, a game model for human-machine path tracking control corresponding to a human-machine interaction action; obtain human-machine torque conflict information by solving the game model for human-machine path tracking control; determine a shared control strategy based on the human-machine torque conflict information; and control a shared driving vehicle based on the shared control strategy (as discussed in detail above with respect to claims 1 and 9).

With respect to claim 11, Ji teaches the electronic device according to claim 10, wherein the one or more processors are further configured to: establish, based on the driver's deterministic steering torque and the driver's stochastic steering torque, a first discrete state update equation for a dynamics system of the shared driving vehicle in a closed-loop information mode; obtain a path tracking augmentation system that comprises a human-machine preview state by augmenting the first discrete state update equation through a human-machine preview dynamic process; and establish, based on the path tracking augmentation system, a driver trajectory cost function and a driving system trajectory cost function, to obtain the game model for human-machine path tracking control (as discussed in detail above with respect to claims 2 and 10).
With respect to claim 13, Ji teaches the electronic device according to claim 11, wherein the one or more processors are further configured to: determine, based on a stochastic dynamic programming algorithm, recursive relationships for steering control value functions respectively corresponding to a driver and a driving system under a Nash equilibrium condition; and calculate, based on the first discrete state update equation and the recursive relationships, closed-loop Nash equilibrium solutions respectively corresponding to the driver and the driving system, as the human-machine torque conflict information (as discussed in detail above with respect to claims 4 and 11).

With respect to claim 14, Ji teaches the electronic device according to claim 11, wherein the one or more processors are further configured to: determine, based on a stochastic dynamic programming algorithm, recursive relationships for steering control value functions respectively corresponding to a driver and a driving system under a Stackelberg equilibrium condition; determine a driver reaction function based on the recursive relationship for the steering control value function corresponding to the driving system; calculate, based on the first discrete state update equation, the driver reaction function, and the recursive relationship for the steering control value function corresponding to the driving system, an open-loop Stackelberg equilibrium solution corresponding to the driving system; and calculate, based on the open-loop Stackelberg equilibrium solution corresponding to the driving system, an open-loop Stackelberg equilibrium solution corresponding to the driver, as the human-machine torque conflict information (as discussed in detail above with respect to claims 5 and 11).
With respect to claim 17, Ji teaches the non-transitory computer-readable storage medium according to claim 9, wherein the game model for human-machine path tracking control comprises a closed-loop game model, and establishing the game model for human-machine path tracking control corresponding to the human-machine interaction action, comprises: establishing, based on the driver's deterministic steering torque and the driver's stochastic steering torque, a first discrete state update equation for a dynamics system of the shared driving vehicle in a closed-loop information mode; obtaining a path tracking augmentation system that comprises a human-machine preview state by augmenting the first discrete state update equation through a human-machine preview dynamic process; and establishing, based on the path tracking augmentation system, a driver trajectory cost function and a driving system trajectory cost function, to obtain the game model for human-machine path tracking control (as discussed in detail above with respect to claims 2 and 9).

With respect to claim 19, Ji teaches the non-transitory computer-readable storage medium according to claim 17, wherein obtaining the human-machine torque conflict information by solving the game model for human-machine path tracking control, comprises: determining, based on a stochastic dynamic programming algorithm, recursive relationships for steering control value functions respectively corresponding to a driver and a driving system under a Nash equilibrium condition; and calculating, based on the first discrete state update equation and the recursive relationships, closed-loop Nash equilibrium solutions respectively corresponding to the driver and the driving system, as the human-machine torque conflict information; or determining, based on a stochastic dynamic programming algorithm, recursive relationships for steering control value functions respectively corresponding to a driver and a driving system under a Stackelberg equilibrium condition; determining a driver reaction function based on the recursive
relationship for the steering control value function corresponding to the driving system; calculating, based on the first discrete state update equation, the driver reaction function, and the recursive relationship for the steering control value function corresponding to the driving system, an open-loop Stackelberg equilibrium solution corresponding to the driving system; and calculating, based on the open-loop Stackelberg equilibrium solution corresponding to the driving system, an open-loop Stackelberg equilibrium solution corresponding to the driver, as the human-machine torque conflict information (as discussed in detail above with respect to claims 4 and 17; alternatively, as discussed in detail above with respect to claims 5 and 17; because “determining, based on a stochastic dynamic programming algorithm, recursive relationships for steering control value functions respectively corresponding to a driver and a driving system under a Nash equilibrium condition; and calculating, based on the first discrete state update equation and the recursive relationships, closed-loop Nash equilibrium solutions respectively corresponding to the driver and the driving system, as the human-machine torque conflict information” and “determining, based on a stochastic dynamic programming algorithm, recursive relationships for steering control value functions respectively corresponding to a driver and a driving system under a Stackelberg equilibrium condition; determining a driver reaction function based on the recursive relationship for the steering control value function corresponding to the driving system; calculating, based on the first discrete state update equation, the driver reaction function, and the recursive relationship for the steering control value function corresponding to the driving system, an open-loop Stackelberg equilibrium solution corresponding to the driving system; and calculating, based on the open-loop Stackelberg equilibrium solution corresponding to the
driving system, an open-loop Stackelberg equilibrium solution corresponding to the driver, as the human-machine torque conflict information” are recited in the alternative, it is sufficient to address one of the claimed alternatives).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3, 6, 7, 12, 15, 16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ji in view of NPL “Modelling of a Human Driver’s Interaction with Vehicle Automated Steering Using Cooperative Game Theory” to Na et al. (hereinafter: “Na”), and in view of NPL “Application of Open-Loop Stackelberg Equilibrium to Modeling a Driver’s Interaction with Vehicle Active Steering Control in Obstacle Avoidance” to Cole et al. (hereinafter: “Cole”).

With respect to claim 3, Ji teaches the conflict control method for shared driving according to claim 1, wherein the game model for human-machine path tracking control comprises an open-loop game model [as discussed above with respect to claim 1, the OL Nash strategy and/or the OL Stackelberg strategy is/are useable with respect to the equations (7) and (8), such that the equations (7) and (8) are definably inclusive of an “open-loop game model”], and establishing the game model for human-machine path tracking control corresponding to the human-machine interaction action, comprises: establishing, based on the driver's deterministic steering torque and the driver's stochastic steering torque, a second discrete state update equation for a dynamics system of the shared driving vehicle in an open-loop information mode [for example, as discussed by at least part A of section III on pages 3095-3096, equations (2) and (3) (e.g., “second discrete state update equation”) are established based, in part, on the deterministic driver steering torque input and the uncertain (random) driver steering torque input, including in case(s) where the OL Nash strategy and/or the OL Stackelberg strategy is/are to be used with respect to the equations (7) and (8) (e.g., “in an open-loop information mode”)].
Ji appears to lack a clear teaching as to whether establishing the game model for human-machine path tracking control corresponding to the human-machine interaction action further includes determining a prediction output vector in a prediction time domain based on the second discrete state update equation, and determining a driver reference trajectory vector and a driving system reference trajectory vector; and establishing, based on the prediction output vector, the driver reference trajectory vector, and the driving system reference trajectory vector, a driver trajectory cost function and a driving system trajectory cost function, to obtain the game model for human-machine path tracking control.

Na teaches determining a prediction output vector in a prediction time domain based on a discrete state update equation, and determining a driver reference trajectory vector and a driving system reference trajectory vector; and establishing, based on the prediction output vector, the driver reference trajectory vector, and the driving system reference trajectory vector, a driver trajectory cost function and a driving system trajectory cost function, to obtain a game model for human-machine path tracking control (for example, as discussed by at least section III on pages 1097-1100).

Cole also teaches determining a prediction output vector in a prediction time domain based on a discrete state update equation, and determining a driver reference trajectory vector and a driving system reference trajectory vector; and establishing, based on the prediction output vector, the driver reference trajectory vector, and the driving system reference trajectory vector, a driver trajectory cost function and a driving system trajectory cost function, to obtain a game model for human-machine path tracking control (for example, as discussed by at least section III on pages 675-681).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the conflict control method of Ji with the teachings of Na and/or Cole such that establishing the game model for human-machine path tracking control corresponding to the human-machine interaction action further includes determining a prediction output vector in a prediction time domain based on the second discrete state update equation, and determining a driver reference trajectory vector and a driving system reference trajectory vector; and establishing, based on the prediction output vector, the driver reference trajectory vector, and the driving system reference trajectory vector, a driver trajectory cost function and a driving system trajectory cost function, to obtain the game model for human-machine path tracking control, to beneficially implement a particular technique for obtaining open-loop Nash equilibrium solutions (e.g., via techniques disclosed by Na) and/or for obtaining open-loop Stackelberg equilibrium solutions (e.g., via techniques disclosed by Cole) in place of the generic techniques for the same disclosed by Ji.
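The "prediction output vector in a prediction time domain" that the rejection attributes to Na and Cole is, in its usual form, the lifted prediction Y = F x0 + G U over a horizon. A generic sketch with hypothetical matrices (not Na's or Cole's exact formulation):

```python
import numpy as np

# Standard lifted prediction over a horizon Np: stack future outputs as
# Y = F x0 + G U, with F = [CA; CA^2; ...] and G block lower-triangular in
# C A^j B. Generic construction; the A, B, C below are made-up placeholders.
def lift(A, B, C, Np):
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(Np)])
    G = np.zeros((Np * p, Np * m))
    for i in range(Np):
        for j in range(i + 1):
            G[i*p:(i+1)*p, j*m:(j+1)*m] = C @ np.linalg.matrix_power(A, i - j) @ B
    return F, G

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
F, G = lift(A, B, C, Np=5)

# Cross-check the lifted form against a step-by-step simulation
x0 = np.array([1.0, -0.5])
U = np.ones(5)
x, ys = x0, []
for k in range(5):
    x = A @ x + B.flatten() * U[k]
    ys.append((C @ x)[0])
Y = F @ x0 + G @ U
```

Once outputs over the horizon are collected into Y, each agent's trajectory cost against its own reference vector becomes a finite-dimensional quadratic in the stacked input sequences, which is what makes the open-loop game solvable in closed form.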
With respect to claim 6, Ji modified supra teaches the conflict control method for shared driving according to claim 3, wherein obtaining the human-machine torque conflict information by solving the game model for human-machine path tracking control, comprises: obtaining a closed form solution corresponding to the game model by solving the game model for human-machine path tracking control (as discussed in detail above with respect to claims 2 and 4); obtaining, based on the closed form solution corresponding to the game model, a relationship expression between human-machine steering control and a target trajectory (as discussed in detail above with respect to claims 2 and 4); and obtaining open-loop Nash equilibrium solutions respectively corresponding to a driver and a driving system by solving the relationship expression based on a convex iterative algorithm, as the human-machine torque conflict information (for example, as discussed by at least section III on pages 1097-1100 of Na). 
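One common realization of the claim 6 "convex iterative algorithm" for open-loop Nash solutions is best-response iteration: each agent repeatedly solves its own convex least-squares problem given the other's planned sequence, until the pair stops changing. A sketch on a lifted model with made-up data (one common scheme, not necessarily Na's exact algorithm):

```python
import numpy as np

# Best-response iteration toward an open-loop Nash pair on a lifted prediction
# model: each agent's best response is a convex least-squares problem given
# the other's planned torque sequence. All data below are illustrative.
rng = np.random.default_rng(1)
N = 8
Fx0 = rng.standard_normal(N)                    # free response F @ x0 (made up)
G1 = np.tril(rng.standard_normal((N, N))) * 0.2 # driver input channel
G2 = np.tril(rng.standard_normal((N, N))) * 0.2 # machine input channel
r1, r2 = np.zeros(N), 0.5 * np.ones(N)          # the two reference trajectories
rho1, rho2 = 1.0, 1.0                           # control-effort weights

def best_response(G, other_term, r, rho):
    """argmin_U ||Fx0 + G U + other_term - r||^2 + rho * ||U||^2."""
    H = G.T @ G + rho * np.eye(N)
    return np.linalg.solve(H, G.T @ (r - Fx0 - other_term))

U1, U2 = np.zeros(N), np.zeros(N)
for _ in range(200):                            # iterate to a fixed point
    U1 = best_response(G1, G2 @ U2, r1, rho1)
    U2 = best_response(G2, G1 @ U1, r2, rho2)
```

At the fixed point, neither sequence can be improved unilaterally, which is exactly the open-loop Nash condition on the lifted model.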
With respect to claim 7, Ji modified supra teaches the conflict control method for shared driving according to claim 3, wherein obtaining the human-machine torque conflict information by solving the game model for human-machine path tracking control, comprises: converting the driving system trajectory cost function into a driving system trajectory optimization function that considers a driver reaction function (as discussed in detail above with respect to claim 3); obtaining an open-loop Stackelberg equilibrium solution corresponding to a driving system by solving the driving system trajectory optimization function (as discussed in detail above with respect to claim 3); and calculating, based on the open-loop Stackelberg equilibrium solution corresponding to the driving system and the driver trajectory cost function, an open-loop Stackelberg equilibrium solution corresponding to a driver, as the human-machine torque conflict information (as discussed in detail above with respect to claim 3).

With respect to claim 12, Ji modified supra teaches the electronic device according to claim 10, wherein the one or more processors are further configured to: establish, based on the driver's deterministic steering torque and the driver's stochastic steering torque, a second discrete state update equation for a dynamics system of the shared driving vehicle in an open-loop information mode; determine a prediction output vector in a prediction time domain based on the second discrete state update equation, and determine a driver reference trajectory vector and a driving system reference trajectory vector; and establish, based on the prediction output vector, the driver reference trajectory vector, and the driving system reference trajectory vector, a driver trajectory cost function and a driving system trajectory cost function, to obtain the game model for human-machine path tracking control (as discussed in detail above with respect to claims 3 and 10).
With respect to claim 15, Ji modified supra teaches the electronic device according to claim 12, wherein the one or more processors are further configured to: obtain a closed form solution corresponding to the game model by solving the game model for human-machine path tracking control; obtain, based on the closed form solution corresponding to the game model, a relationship expression between human-machine steering control and a target trajectory; and obtain open-loop Nash equilibrium solutions respectively corresponding to a driver and a driving system by solving the relationship expression based on a convex iterative algorithm, as the human-machine torque conflict information (as discussed in detail above with respect to claims 6 and 12).

With respect to claim 16, Ji modified supra teaches the electronic device according to claim 12, wherein the one or more processors are further configured to: convert the driving system trajectory cost function into a driving system trajectory optimization function that considers a driver reaction function; obtain an open-loop Stackelberg equilibrium solution corresponding to a driving system by solving the driving system trajectory optimization function; and calculate, based on the open-loop Stackelberg equilibrium solution corresponding to the driving system and the driver trajectory cost function, an open-loop Stackelberg equilibrium solution corresponding to a driver, as the human-machine torque conflict information (as discussed in detail above with respect to claims 7 and 12).
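The open-loop Stackelberg construction of claims 7 and 16 (fold the driver's reaction function into the driving system's cost, then solve the resulting convex problem in closed form) can be sketched as follows; this is a generic lifted-model setup with illustrative data, not Ji's or Cole's exact derivation:

```python
import numpy as np

# Open-loop Stackelberg sketch on a lifted model: the machine (leader)
# substitutes the driver's (follower's) linear reaction function into its own
# trajectory cost and solves the resulting convex problem in closed form.
# All numbers below are illustrative placeholders.
rng = np.random.default_rng(2)
N = 8
Fx0 = rng.standard_normal(N)                    # free response F @ x0 (made up)
G1 = np.tril(rng.standard_normal((N, N))) * 0.2 # leader (machine) channel
G2 = np.tril(rng.standard_normal((N, N))) * 0.2 # follower (driver) channel
r1, r2 = np.zeros(N), 0.5 * np.ones(N)
rho1, rho2 = 1.0, 1.0
I = np.eye(N)

# Follower's reaction function: U2*(U1) = M @ U1 + c (from its least squares)
H2 = G2.T @ G2 + rho2 * I
M = -np.linalg.solve(H2, G2.T @ G1)
c = np.linalg.solve(H2, G2.T @ (r2 - Fx0))

# Leader's cost after substitution:
# ||Fx0 + (G1 + G2 M) U1 + G2 c - r1||^2 + rho1 * ||U1||^2
Geff = G1 + G2 @ M
d = Fx0 + G2 @ c - r1
U1 = np.linalg.solve(Geff.T @ Geff + rho1 * I, -Geff.T @ d)
U2 = M @ U1 + c     # follower's open-loop Stackelberg response
```

The leader's advantage comes entirely from the substitution step: it optimizes over the follower's anticipated response rather than treating it as fixed.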
With respect to claim 18, Ji modified supra teaches the non-transitory computer-readable storage medium according to claim 9, wherein the game model for human-machine path tracking control comprises an open-loop game model, and establishing the game model for human-machine path tracking control corresponding to the human-machine interaction action, comprises: establishing, based on the driver's deterministic steering torque and the driver's stochastic steering torque, a second discrete state update equation for a dynamics system of the shared driving vehicle in an open-loop information mode; determining a prediction output vector in a prediction time domain based on the second discrete state update equation, and determining a driver reference trajectory vector and a driving system reference trajectory vector; and establishing, based on the prediction output vector, the driver reference trajectory vector, and the driving system reference trajectory vector, a driver trajectory cost function and a driving system trajectory cost function, to obtain the game model for human-machine path tracking control (as discussed in detail above with respect to claims 3 and 9). 
With respect to claim 20, Ji modified supra teaches the non-transitory computer-readable storage medium according to claim 18, wherein obtaining the human-machine torque conflict information by solving the game model for human-machine path tracking control, comprises: obtaining a closed form solution corresponding to the game model by solving the game model for human-machine path tracking control; obtaining, based on the closed form solution corresponding to the game model, a relationship expression between human-machine steering control and a target trajectory; and obtaining open-loop Nash equilibrium solutions respectively corresponding to a driver and a driving system by solving the relationship expression based on a convex iterative algorithm, as the human-machine torque conflict information; or converting the driving system trajectory cost function into a driving system trajectory optimization function that considers a driver reaction function; obtaining an open-loop Stackelberg equilibrium solution corresponding to a driving system by solving the driving system trajectory optimization function; and calculating, based on the open-loop Stackelberg equilibrium solution corresponding to the driving system and the driver trajectory cost function, an open-loop Stackelberg equilibrium solution corresponding to a driver, as the human-machine torque conflict information (as discussed in detail above with respect to claims 6 and 18; alternatively, as discussed in detail above with respect to claims 7 and 18; because “obtaining a closed form solution corresponding to the game model by solving the game model for human-machine path tracking control; obtaining, based on the closed form solution corresponding to the game model, a relationship expression between human-machine steering control and a target trajectory; and obtaining open-loop Nash equilibrium solutions respectively corresponding to a driver and a driving system by solving the relationship expression based on a
convex iterative algorithm, as the human-machine torque conflict information” and “converting the driving system trajectory cost function into a driving system trajectory optimization function that considers a driver reaction function; obtaining an open-loop Stackelberg equilibrium solution corresponding to a driving system by solving the driving system trajectory optimization function; and calculating, based on the open-loop Stackelberg equilibrium solution corresponding to the driving system and the driver trajectory cost function, an open-loop Stackelberg equilibrium solution corresponding to a driver, as the human-machine torque conflict information” are recited in the alternative, it is sufficient to address one of the claimed alternatives). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is provided on the attached PTO-892 Notice of References Cited form. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN ZALESKAS whose telephone number is (571)272-5958. The examiner can normally be reached M-F 8:00 AM - 4:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Logan Kraft can be reached at 571-270-5065. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JOHN M ZALESKAS/
Primary Examiner, Art Unit 3747
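
Editor's note: the equilibrium machinery recited in claim 20 (open-loop Nash solutions found by an iterative scheme, versus an open-loop Stackelberg solution found by substituting the driver's reaction function into the system's cost) can be illustrated with a toy two-player quadratic steering game. This is a sketch only: the scalar error model, torque gains, and cost weights below are invented for illustration and are not taken from Ji, the application, or the claimed method.

```python
# Toy shared-steering game (all numbers invented for illustration).
# Lateral tracking error: e = e0 + b1*u1 + b2*u2, where u1 is driver
# torque and u2 is system torque. Each player minimizes its own cost
# J_i = q_i*e**2 + r_i*u_i**2 (tracking error vs. control effort).
e0, b1, b2 = 1.0, 0.8, 1.2   # initial error, torque-to-error gains
q1, r1 = 2.0, 0.5            # driver weights
q2, r2 = 3.0, 0.4            # system weights

def br1(u2):
    """Driver's best-response torque to a fixed system torque u2."""
    return -q1 * b1 * (e0 + b2 * u2) / (r1 + q1 * b1**2)

def br2(u1):
    """System's best-response torque to a fixed driver torque u1."""
    return -q2 * b2 * (e0 + b1 * u1) / (r2 + q2 * b2**2)

def cost(q, r, u_self, u1, u2):
    """Quadratic cost of one player given both torques."""
    e = e0 + b1 * u1 + b2 * u2
    return q * e**2 + r * u_self**2

# Open-loop Nash equilibrium via best-response (fixed-point) iteration:
# each player repeatedly best-responds until the torque pair stabilizes.
u1, u2 = 0.0, 0.0
for _ in range(200):
    u1, u2 = br1(u2), br2(u1)
u1_nash, u2_nash = u1, u2

# Open-loop Stackelberg equilibrium: the system (leader) substitutes the
# driver's linear reaction u1(u2) = a + c*u2 into its own cost, then
# minimizes the resulting scalar quadratic in closed form.
D1 = r1 + q1 * b1**2
a, c = -q1 * b1 * e0 / D1, -q1 * b1 * b2 / D1
E0, B = e0 + b1 * a, b1 * c + b2            # e = E0 + B*u2 on the reaction curve
u2_st = -q2 * B * E0 / (r2 + q2 * B**2)     # leader's closed-form optimum
u1_st = a + c * u2_st                        # follower reacts

print(f"Nash:        u1={u1_nash:+.4f}, u2={u2_nash:+.4f}")
print(f"Stackelberg: u1={u1_st:+.4f}, u2={u2_st:+.4f}")
```

Because the leader could always replay its Nash torque (to which the follower's reaction is exactly the Nash response), the Stackelberg leader's cost can never exceed its Nash cost in this setup; the two solution concepts generally yield different torque splits, which is why the claim recites them as alternatives.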

Prosecution Timeline

Oct 10, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600336
METHOD FOR CONTROLLING A BRAKING SYSTEM, BRAKING SYSTEM AND MOTOR VEHICLE
2y 5m to grant • Granted Apr 14, 2026
Patent 12570261
HYDRAULIC BRAKE SYSTEM FOR A VEHICLE, VEHICLE, METHOD FOR OPERATING A HYDRAULIC BRAKE SYSTEM FOR A VEHICLE
2y 5m to grant • Granted Mar 10, 2026
Patent 12565072
Active suspension vehicle and control method
2y 5m to grant • Granted Mar 03, 2026
Patent 12565182
AUTONOMOUS BRAKE WEAR ESTIMATION
2y 5m to grant • Granted Mar 03, 2026
Patent 12559073
SELF-CALIBRATING WHEEL SPEED SIGNALS FOR ADJUSTING BRAKE AND CHASSIS CONTROLS
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
82%
With Interview (+19.7%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 623 resolved cases by this examiner. Grant probability derived from career allow rate.
