Prosecution Insights
Last updated: April 18, 2026
Application No. 18/933,054

CLOSED-LOOP SIMULATOR FOR MULTIAGENT BEHAVIOR WITH CONTROLLABLE DIFFUSION

Non-Final Office Action — §101, §103
Filed: Oct 31, 2024
Examiner: SWEENEY, BRIAN P
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NEC Laboratories America Inc.
OA Round: 1 (Non-Final)
Grant Probability: 94% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 94% (716 granted / 766 resolved; +41.5% vs TC avg) — above average
Interview Lift: +7.5% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 2m average prosecution; 21 applications currently pending
Career History: 787 total applications across all art units
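The headline figures above can be reproduced from the raw counts. A minimal sanity-check sketch (the implied Tech Center average is derived here from the stated +41.5-point gap; it is not reported directly on the card):

```python
# Reproduce the examiner's headline statistics from the raw counts.
granted, resolved = 716, 766

allow_rate = 100 * granted / resolved   # ~93.5%, displayed as 94% on the card

# Implied Tech Center average, given the stated +41.5-point gap
implied_tc_avg = allow_rate - 41.5      # ~52%

print(f"allow rate {allow_rate:.1f}%, implied TC average {implied_tc_avg:.1f}%")
```

Note that 716/766 is closer to 93.5% than 94%, so the card appears to round up.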

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      19.6%    -20.4%
§103      19.0%    -21.0%
§102      22.7%    -17.3%
§112      32.8%    -7.2%

Tech Center average is an estimate. Based on career data from 766 resolved cases.
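The "vs TC avg" column is internally consistent with a single Tech Center baseline of 40.0% for every statute (inferred by subtracting each delta from its rate; the baseline itself is an estimate, not a stated figure):

```python
# Each per-statute delta implies the same ~40.0% Tech Center baseline:
# rate - delta = 40.0 for every row of the table.
rates = {"§101": 19.6, "§103": 19.0, "§102": 22.7, "§112": 32.8}
TC_AVG = 40.0  # inferred baseline

deltas = {s: round(r - TC_AVG, 1) for s, r in rates.items()}
# → {'§101': -20.4, '§103': -21.0, '§102': -17.3, '§112': -7.2}
```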

Office Action

Statutes addressed: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of the Claims

This action is in response to applicant's filing on October 31, 2024. Claims 1-20 are pending.

Claim Rejections - 35 USC § 101

The examiner has analyzed the claims to determine 35 U.S.C. 101 eligibility. The claim language "determine actions for a plurality of agents in a driving scenario using a diffusion model" is not an abstract idea. Therefore, the claims are not rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al., "DiffScene: Diffusion-Based Safety-Critical Scenario Generation for Autonomous Vehicles", AdvML-Frontiers 2023 (hereinafter Xu) in view of Nanjing University, CN 116150767 A (hereinafter Nanjing).

Regarding claim 1, Xu teaches a computer-implemented method, comprising: determining actions for a plurality of agents in a driving scenario using a diffusion model, based on individual controllable behavior patterns for the plurality of agents (Xu, see at least 3.1 Problem Statement); updating a state of the driving scenario based on the determined actions for the plurality of agents (Xu, see at least 3.1 Problem Statement); and repeating the determination of actions and the update of the state in a closed-loop fashion to generate simulated trajectories for the plurality of agents (Xu, see at least 3.2 Diffusion-based Scenario Generation). Xu does not specifically teach the following. However, Nanjing teaches training a planner model to select actions for an operating agent based on the simulated trajectories. (Nanjing, ¶ [0016]: "Further, in the step 2, the expected driving trajectory refers to the expected driving trajectory of the car under test after being attacked by confrontation, which is composed of a series of coordinate points, and the shape of the trajectory includes curves, S-curves, and the like.")

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Xu with those of Nanjing, as both utilize machine learning in the field of autonomous driving to make decisions. In addition, this would be combining prior art elements according to known methods to yield predictable results.

Regarding claim 2, Xu in view of Nanjing teaches the computer-implemented method, wherein the diffusion model denoises actions, further comprising applying a dynamics model to generate trajectories from the denoised actions. (Xu, see at least Fig. 1.) (See claim 1 above for the rationale supporting obviousness, motivation, and reason to combine.)

Regarding claim 3, Xu teaches a computer-implemented method but does not specifically teach the following. However, Nanjing teaches wherein the diffusion model uses a gradient of a guidance function to denoise actions, including a route-based objective function. (Nanjing, ¶ [0020]: "Step 5: Extract the sub-gradient of the test area in the gradient, and iteratively update the input of the test object using optimization technology based on the sub-gradient;") (See claim 1 above for the rationale supporting obviousness, motivation, and reason to combine.)

Regarding claim 4, Xu teaches a computer-implemented method but does not specifically teach the following. However, Nanjing teaches wherein the diffusion model uses a gradient of a guidance function to denoise actions, including a Gaussian-based objective function. (Nanjing, ¶ [0018]: "Further, in step 4, positioning the test area in the input and adding Gaussian disturbance noise to the test area to simulate the influence of environmental factors refers to mapping the range of the billboard from the three-dimensional space to the two-dimensional image plane by means of three-dimensional positioning In the coordinates, locate the area of the billboard in the input image, and add Gaussian noise and physical transformation to this area to simulate data loss and unknown noise disturbances during the process of sensor acquisition. Moreover, the Gaussian noise process is differentiable, which is convenient for subsequent calculation of gradients.") (See claim 1 above for the rationale supporting obviousness, motivation, and reason to combine.)

Regarding claim 5, Xu in view of Nanjing teaches the computer-implemented method, wherein the diffusion model controls the behavior patterns based on instructions generated by a large language model. (Xu, see at least Fig. 1.) (See claim 1 above for the rationale supporting obviousness, motivation, and reason to combine.)

Regarding claim 6, Xu teaches a computer-implemented method but does not specifically teach the following. However, Nanjing teaches wherein at least one of the plurality of agents is set to engage in adversarial behavior. (Nanjing, Fig. 1: "Fig. 1 has described the overall execution flowchart of the closed-loop testing method of the present invention, and the whole process is closed-loop. The sensor data of the car is transmitted to the end-to-end autopilot software and the adversarial sample generation module. The former calculates a new decision to control the car, causing the position of the car to change. The latter calculates a new adversarial sample based on the current state of the car and displays it in the virtual scene. On the billboard object in the car, the two together change the environment in which the car is located, thereby changing the data acquisition of the sensor at the next moment. In the above description, the decision-making of the automatic driving software is the control information, and the dynamic change of the scene is the feedback information. The whole process is closed-loop, and the test process of control-feedback-control is realized.") (See claim 1 above for the rationale supporting obviousness, motivation, and reason to combine.)

Regarding claim 7, Xu in view of Nanjing teaches the computer-implemented method, wherein updating the scenario includes moving the agents within the driving scenario in accordance with trajectories that are affected by the determined actions. (Xu, see at least Fig. 1.) (See claim 1 above for the rationale supporting obviousness, motivation, and reason to combine.)

Regarding claim 8, Xu teaches a computer-implemented method but does not specifically teach the following. However, Nanjing teaches further comprising generating a driving action using the planner model responsive to a new scenario and performing the driving action in an autonomous vehicle. (Nanjing, see at least ¶ [0017]: "Further, in the step 3, the current location of the test object is located, and based on the test target, the expected deflection angle of the test object is calculated using the pure tracking control algorithm, which means that in order for the test object to drive according to the test target, the pure tracking control algorithm according to The current position of the test object is found, and the nearest coordinate point in the test object is found, and the test object is modeled with a two-wheeled bicycle model, and the angle at which the test object needs to be deflected is calculated. The test object refers to a trajectory of the desired form.") (See claim 1 above for the rationale supporting obviousness, motivation, and reason to combine.)

Regarding claim 9, Xu teaches a computer-implemented method but does not specifically teach the following. However, Nanjing teaches wherein the new scenario is based on camera information collected by the autonomous vehicle. (Nanjing, see at least ¶ [0014]: "Further, in step 1, binding the third-party virtual camera to the texture pattern of the billboard object inside the simulator refers to using UE4 Editor to replace the path of the data source of the surface texture pattern of the billboard object in the virtual test scene with The path to the output file of the virtual camera.") (See claim 1 above for the rationale supporting obviousness, motivation, and reason to combine.)

Regarding claim 10, Xu teaches a computer-implemented method but does not specifically teach the following. However, Nanjing teaches wherein the driving action is selected from the group consisting of a steering action, a braking action, and an acceleration action. (Nanjing, see at least ¶ [0017], quoted under claim 8 above.) (See claim 1 above for the rationale supporting obviousness, motivation, and reason to combine.)

Independent claim 11 is rejected using substantially the same rationale as claim 1 above. Claims 12-20 are rejected using substantially the same rationale as claims 2-10, respectively.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN P SWEENEY, whose telephone number is (313) 446-4906. The examiner can normally be reached Monday-Thursday from 7:30 AM to 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, James J. Lee, can be reached at 571-270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to the USPTO patent electronic filing system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/InterviewPractice.

/BRIAN P SWEENEY/
Primary Examiner, Art Unit 3668
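The structure of claim 1 (a diffusion model determines agent actions, the scenario state is updated, and the two steps repeat in a closed loop) can be illustrated with a toy sketch. Everything here is an assumption for illustration only — the single-integrator dynamics, the linear denoising schedule, and the goal-seeking guidance gradient are placeholders, not the application's actual models:

```python
import numpy as np

def denoise_actions(noisy, guidance_grad, steps=10, scale=0.1):
    """Toy reverse-diffusion loop: each step shrinks the noise and nudges
    the actions along the gradient of a guidance function (cf. the
    route-based / Gaussian objectives of claims 3-4)."""
    a = noisy.copy()
    for _ in range(steps):
        a = 0.9 * a + scale * guidance_grad(a)
    return a

def dynamics(state, actions, dt=0.1):
    """Toy single-integrator dynamics model (cf. claim 2): integrate actions."""
    return state + dt * actions

def closed_loop_rollout(state, goals, horizon=20, seed=0):
    """Closed loop of claim 1: determine actions with the (toy) diffusion
    model, update the scenario state, and repeat to build trajectories."""
    rng = np.random.default_rng(seed)
    traj = [state.copy()]
    for _ in range(horizon):
        guidance = lambda a, s=state: goals - s  # pull each agent to its goal
        noisy = rng.normal(size=state.shape)     # start each step from noise
        actions = denoise_actions(noisy, guidance)
        state = dynamics(state, actions)
        traj.append(state.copy())
    return np.stack(traj)

# Three agents in 2D, each steered toward its own goal point.
start = np.zeros((3, 2))
goals = np.array([[10.0, 0.0], [0.0, 10.0], [7.0, 7.0]])
traj = closed_loop_rollout(start, goals)
```

Running the rollout produces a (horizon + 1, agents, 2) trajectory array in which each agent drifts toward its goal, since the guidance gradient dominates the residual noise after denoising.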

Prosecution Timeline

Oct 31, 2024
Application Filed
Mar 30, 2026
Non-Final Rejection — §103 (claims found eligible under §101) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600345 — EXHAUST GAS PURIFICATION UTILIZING A CLUTCH TO SWITCH BETWEEN DRIVING FORCES IN A HYBRID VEHICLE
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12600342 — METHOD FOR CONTROLLING A HYBRID POWERTRAIN AND HYBRID POWERTRAIN OPERATING ACCORDING TO SUCH A METHOD
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12594926 — SYSTEM AND METHOD FOR POWER ALLOCATION TO HIGH-VOLTAGE THERMAL LOADS FROM MULTIPLE ENERGY SOURCES IN A HYBRID POWERTRAIN DURING COLD CONDITIONS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12594927 — SYSTEM FOR AN INTERNAL COMBUSTION ENGINE WITH AN ELECTRIC TORQUE ASSIST
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12588578 — ROW DETECTION SYSTEM, AGRICULTURAL MACHINE HAVING A ROW DETECTION SYSTEM, AND METHOD OF ROW DETECTION
Granted Mar 31, 2026 (2y 5m to grant)
Precedent list based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 94%
With Interview: 99% (+7.5%)
Median Time to Grant: 2y 2m
PTA Risk: Low

Based on 766 resolved cases by this examiner. Grant probability derived from career allow rate.
