Prosecution Insights
Last updated: April 19, 2026
Application No. 17/926,598

GENERATING SIMULATED EDGE-CASE DRIVING SCENARIOS

Non-Final OA (§102, §103)
Filed: Nov 20, 2022
Examiner: RUDY, ANDREW J
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Cognata Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 83%, above average (637 granted / 768 resolved; +30.9% vs TC avg)
Interview Lift: strong, +15.4% across resolved cases with interview
Avg Prosecution: 2y 11m (typical timeline)
Currently Pending: 10
Total Applications: 778 across all art units (career history)
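These headline figures are simple ratios over the examiner's docket. A quick sketch in plain Python reproduces them from the counts shown on this page; treating the interview lift as additive is our reading of the dashboard, not a documented formula.

```python
# Reproduce the headline examiner statistics from the counts on this page.
granted = 637    # applications granted by this examiner
resolved = 768   # total resolved applications (granted + abandoned)
pending = 10     # applications currently pending

allow_rate = granted / resolved                     # 0.829 -> displayed as 83%
print(f"Career allow rate: {allow_rate:.1%}")       # 82.9%
print(f"Total applications: {resolved + pending}")  # 778, matching the page

# Assumption: the "with interview" probability is the base allow rate plus
# the +15.4-point interview lift, capped at 100%.
with_interview = min(allow_rate + 0.154, 1.0)
print(f"Grant probability with interview: {with_interview:.0%}")  # ~98%
```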

Statute-Specific Performance

§101: 26.4% (-13.6% vs TC avg)
§103: 39.7% (-0.3% vs TC avg)
§102: 4.3% (-35.7% vs TC avg)
§112: 26.0% (-14.0% vs TC avg)
Deltas are measured against a Tech Center average estimate; based on career data from 768 resolved cases.
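The Tech Center averages behind these deltas are not printed directly, but each can be backed out as the examiner's rate minus the reported delta. A small sketch in plain Python; the derived averages are inferences from this page, not published figures.

```python
# Back out the implied Tech Center average for each statute as
# (examiner rate - reported delta). Values are percentages from this page.
rates  = {"§101": 26.4,  "§103": 39.7, "§102": 4.3,   "§112": 26.0}
deltas = {"§101": -13.6, "§103": -0.3, "§102": -35.7, "§112": -14.0}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]
    print(f"{statute}: {rate:.1f}% (implied TC avg {tc_avg:.1f}%)")
# All four statutes imply a TC average of 40.0%, consistent with a single
# "Tech Center average estimate" line drawn on the chart.
```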

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Applicant cancelled claims 8, 16, 19, 21, 24, 25 and 27-30. Claims 1-7, 9-15, 17-18, 20, 22-23 and 26 are pending.

Priority

3. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Drawings

4. The drawings filed on November 20, 2022 are accepted.

Claim Rejections - 35 USC § 102

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

6. Claims 1, 3, 4, 7-9, 12-15, 17 and 26 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Atsmon, US 2020/0098172.

Regarding claim 1, Atsmon discloses, e.g. in Figures 1-4 and related text, a system for generating simulated driving scenarios comprising at least one hardware processor ([0002], which relates to a model of a geographical area for training an autonomous driving system; [0012]) and a simulated virtual realistic model (Fig. 1; [0031], [0071], [0114]). A simulator (e.g. 210) may insert one or more simulated dynamic objects into the simulated virtual realistic model together with the static objects detected in the imagery data. The simulator 210 may use one or more computer vision classifiers, e.g. a Convolutional Neural Network (CNN), an SVM and/or the like, to classify the static object(s) detected in the visual imagery data into predefined labels as known in the art. In particular, the classifier(s) may identify and label target static objects depicted in the visual imagery data, for example a road infrastructure object, a building, a monument, a structure, a natural object, a terrain surface and/or the like ([0125]). The simulated dynamic objects may include, e.g., a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and/or the like, and may further include a dynamically changing road infrastructure object, e.g. a light-changing traffic light or an opened/closed railroad gate; both static and dynamic objects may be used by the model as inputs to the driving scenario ([0134]). The machine learning model is trained using another machine learning model (Fig. 2; [0114]): a cGAN, as known in the art, may be trained to apply a plurality of visual data transformations, for example pixel to pixel, label to pixel and/or the like. The cGAN may therefore generate visual appearance imagery, e.g. one or more images, for each of the labels which classify the static object(s) in the labeled model. The cGAN may be trained to perform the reverse operation of the classifier(s), e.g. a classification function, such that the cGAN may generate corresponding visual imagery for label(s) assigned by the classifier(s) to one or more of the static objects in the labeled model; the training of the machine learning model is thus run in a reverse operation through classifiers that are themselves created using another machine learning model, such as the cGAN. Atsmon further discloses an interesting driving scenario ([0059]) and a plurality of driver behavior patterns ([0102]) detected for each driver ([0081]). The simulated virtual realistic model is created by obtaining visual imagery data of the geographical area, which may be processed by one or more trained classifiers to identify one or more objects in the visual imagery data; the virtual realistic model is adjusted according to one or more lighting and/or environmental conditions to emulate various real-world lighting effects, weather conditions, ride scenarios and/or the like ([0134]).

Regarding claims 3-4, Atsmon discloses that the at least one hardware processor is further adapted for providing at least some of the plurality of simulated driving scenarios to at least one autonomous driving model ([0002]), where training, evaluation and/or validation of the autonomous driving systems may be done automatically by an automated system executing the simulated virtual realistic model ([0071]), and for validating the at least one autonomous driving model ([0089]), including an autonomous driving system (ADS) and an advanced driver-assistance system (ADAS) ([0005]).

Regarding claim 7, Atsmon discloses that the plurality of input driving objects comprises at least one of a moving object of a simulated driving environment and a static object of a simulated driving environment (Fig. 1, steps 106 and 112; [0114]). As shown at 106, the simulator 210 labels the static objects detected in the imagery data.

Regarding claim 8, Atsmon discloses that the moving object is selected from a group of moving objects consisting of a vehicle and a person (Fig. 1, step 112; [0125]).

Regarding claim 9, Atsmon discloses generating at least one of the plurality of simulated driving scenarios with the machine learning model further provided with a map, e.g. a 2D map or a 3D map, describing a topography of a simulated driving environment ([0091], [0112]).

Regarding claim 12, Atsmon discloses that at least one of the plurality of simulated driving scenarios comprises a plurality of movement vectors of a plurality of simulated objects of a simulated driving environment ([0085], [0086]), where the dynamic objects may be controlled according to movement patterns predefined and/or learned for the certain geographical area.

Regarding claim 13, Atsmon discloses applying at least one environment-characteristic adjustment to the at least one generated scenario ([0035], [0102]), wherein the synthetic imaging data is adjusted according to one or more environmental characteristics, for example a lighting condition, a weather condition attribute and a timing attribute. This may further increase the ability to adjust the virtual realistic model.

Regarding claim 14, Atsmon discloses that the machine learning model is a generator network of a Generative Adversarial Neural Network (GAN) or of a Conditional Generative Adversarial Neural Network (cGAN) ([0081]), wherein the classifier(s) may classify the identified static objects to class labels based on a training sample set adjusted for classifying objects of the same type as the target objects.

Regarding claim 15, Atsmon discloses a neural network (Fig. 2; [0114], [0118]).

Regarding claim 17, Atsmon discloses that the machine learning model is further provided with a plurality of simulation parameters ([0012]) characteristic of at least one interesting driving scenario and each of the plurality of driver motion behavior patterns ([0059], [0065], [0134]).

Regarding claim 26, Atsmon discloses generating simulated driving scenarios, e.g. creating a simulator (e.g. 210) and a simulated virtual realistic model of a geographical area ([0002], [0012], [0031], [0059], [0071], [0081], [0091], [0114], [0125]).

Claim Rejections - 35 USC § 103

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

8. Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Atsmon, US 2020/0098172, in view of Avidan et al., US 2019/0205667.

Atsmon's disclosed invention is noted above. Atsmon does not disclose that the plurality of input driving objects comprises at least one object generated by a random object generator. Avidan teaches, e.g. in Figs. 1-13 and related text, a vehicle (e.g. 101) and machine learning for simulated driving ([0036]), including a plurality of input driving objects where at least one object is generated by a random object generator ([0059], [0060], [0071]): the synthetic image generator (e.g. 203) uses random scenery objects such as buildings, trees, road furniture (signs, lights, etc.), parked vehicles and advertisement posters from a geographic database (e.g. 109). In addition, variable 3D moving objects (or objects designed to illustrate the action of interest), such as pedestrians, cyclists, vehicles, animals and debris, can be generated to follow realistic (but still random) trajectories/scenarios. It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Atsmon with the random object generator of Avidan ([0059]-[0060]) for the purpose of providing random objects, thereby filling the driving simulation with randomized real-world objects in order to train various models.
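As characterized above, Avidan's random object generator amounts to sampling scenery objects and moving objects with randomized placements and trajectories. A minimal Python sketch of that idea follows; the object types echo the Office Action's examples, but every class, function and parameter name here is hypothetical and not drawn from Avidan's disclosure.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of a random scenario-object generator in the spirit of
# what the Office Action attributes to Avidan: random scenery objects plus
# dynamic objects that follow random (but bounded) trajectories.
STATIC_TYPES = ["building", "tree", "road_sign", "parked_vehicle", "poster"]
DYNAMIC_TYPES = ["pedestrian", "cyclist", "vehicle", "animal", "debris"]

@dataclass
class ScenarioObject:
    kind: str
    position: tuple                # (x, y) in metres, scene-local frame
    velocity: tuple = (0.0, 0.0)   # movement vector; zero for static objects

def random_scenario(n_static=8, n_dynamic=4, extent=100.0, max_speed=15.0):
    objects = [ScenarioObject(random.choice(STATIC_TYPES),
                              (random.uniform(0, extent), random.uniform(0, extent)))
               for _ in range(n_static)]
    objects += [ScenarioObject(random.choice(DYNAMIC_TYPES),
                               (random.uniform(0, extent), random.uniform(0, extent)),
                               (random.uniform(-max_speed, max_speed),
                                random.uniform(-max_speed, max_speed)))
                for _ in range(n_dynamic)]
    return objects

for obj in random_scenario():
    print(obj)
```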
Regarding claim 11, Atsmon does not disclose that the machine learning model is further provided with a plurality of constraints, and that the machine learning model generates at least one of the plurality of simulated driving scenarios according to the plurality of constraints. Avidan discloses a simulated driving model and a machine learning model provided with a plurality of constraints ([0031], [0033], [0034], [0036], [0042], [0047]): "For example, labeled datasets for training CNNs or equivalent in the automotive scenario, e.g., for achieving crash detection prediction, are relatively scarce. As used herein, a labeled dataset is image data, e.g., image sequences, that has been annotated with one or more labels that represent ground truth classes for the depicted action or dynamic movement." It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Atsmon in view of Avidan for the purpose of providing various scenarios using the constraints, thereby creating training data for the variety of scenarios. The motivation for doing so is to provide alternative scenarios.

9. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Atsmon, US 2020/0098172, in view of Kazemi et al., US 2018/0292824.

Atsmon's disclosed invention is noted in paragraph 6 above. Atsmon does not disclose that the neural network is trained using an imitation learning method. Kazemi discloses, e.g. in Figs. 1-15 and related text, an autonomous vehicle motion planning system and a neural network using an imitation learning method ([0050]): the automatic tuning system can employ the autonomous vehicle motion planning system to generate autonomous motion plans based on the humanly-controlled driving session logs ([0111], [0135]), with scenario controller(s) (e.g. 206) that may make discrete-type decisions designed to classify the current state of one or more corresponding scenarios ([0135]); also, the automatic tuning computing system (e.g. 402) can obtain one or more humanly-executed motion plans (Fig. 4). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Atsmon with the imitation learning method of Kazemi for the purpose of providing imitation learning to the training of the model, thereby inputting high-quality data that exhibits good driving behavior ([0128]). The motivation for doing so is to produce the highest-quality imitation learning method for user interaction.

Allowable Subject Matter

10. Claims 2, 5, 6, 18, 20, 22 and 23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claim 2, the prior art of record, individually or in combination, does not teach or fairly suggest the system of claim 1, wherein training the other machine learning model comprises using a plurality of recorded data sets, each recorded while a vehicle traverses a physical scene and comprising a recorded driving scenario and a plurality of recorded driving commands, the training being according to a difference between the plurality of recorded driving commands and a plurality of computed driving commands computed by the other machine learning model in response to the recorded driving scenario. Claims 5 and 6 depend from claim 2.
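In machine-learning terms, the allowable limitation in claim 2 (training according to the difference between recorded and computed driving commands) reads like a behavioral-cloning objective. The following minimal PyTorch sketch illustrates that general training pattern only; the network, shapes and data are hypothetical stand-ins, not the applicant's implementation.

```python
import torch
import torch.nn as nn

# Behavioral-cloning sketch in the spirit of claim 2: train a model on the
# difference between recorded driving commands (logged while a vehicle
# traverses a physical scene) and the commands the model computes for the
# same recorded scenarios. All dimensions and data here are hypothetical.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # the "difference" between the two command sets

scenarios = torch.randn(256, 64)         # stand-in recorded scenario features
recorded_commands = torch.randn(256, 2)  # stand-in (steering, throttle) logs

for epoch in range(10):
    computed_commands = model(scenarios)                  # model's commands
    loss = loss_fn(computed_commands, recorded_commands)  # training signal
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final imitation loss: {loss.item():.4f}")
```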
Regarding claim 18, the prior art of record, individually or in combination, does not teach or fairly suggest the system where the plurality of simulation parameters comprises a plurality of time-space-matrix distance values describing a plurality of distances, during an identified time interval, between a vehicle simulated by an autonomous driver and one or more objects of the plurality of input objects. Claims 20 and 22 depend from claim 18; claim 23 depends from claim 22.

11. Further pertinent references of interest are noted on the attached PTO-892.

12. Applicant's Information Disclosure Statements (IDSs) submitted January 8, 2023 and June 9, 2024 have been reviewed. Note the attached IDSs.

13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW JOSEPH RUDY, whose telephone number is 571-272-6789. The examiner can generally be reached Monday through Friday from about 10am-6pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fadey Jabr, can be reached at 571-272-1516. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW JOSEPH RUDY/
Primary Examiner, Art Unit 3668
571-272-6789
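Stepping outside the Office Action text for a moment: the §102 mapping above leans heavily on Atsmon's pipeline, in which classifiers assign labels to static objects and a cGAN performs the "reverse operation" of generating imagery for those labels. The PyTorch sketch below shows the generic shape of a label-conditioned generator; the dimensions and names are hypothetical, and this illustrates the cGAN technique in general, not code from the cited reference.

```python
import torch
import torch.nn as nn

# Generic label-conditioned (cGAN-style) generator: given a class label
# (e.g. "building", "road sign") and a noise vector, emit an image patch
# for that label. Dimensions are arbitrary illustrative choices.
NUM_LABELS, NOISE_DIM, IMG_PIXELS = 10, 64, 3 * 32 * 32

class LabelConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_embed = nn.Embedding(NUM_LABELS, 32)  # label conditioning
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + 32, 256), nn.ReLU(),
            nn.Linear(256, IMG_PIXELS), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, noise, labels):
        conditioned = torch.cat([noise, self.label_embed(labels)], dim=1)
        return self.net(conditioned).view(-1, 3, 32, 32)

generator = LabelConditionedGenerator()
images = generator(torch.randn(4, NOISE_DIM), torch.tensor([0, 3, 3, 7]))
print(images.shape)  # torch.Size([4, 3, 32, 32])
```

In full adversarial training this generator would be paired with a label-aware discriminator; the sketch shows only the generation path, which is the part the Office Action relies on.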

Prosecution Timeline

Nov 20, 2022
Application Filed
Jan 24, 2026
Examiner Interview (Telephonic)
Jan 28, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600369
DRIVING ABILITY DETERMINING SYSTEM AND DRIVING ABILITY DETERMINING METHOD
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12588577
Ground Following Optimization with Position Control Systems and Methods
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12583468
SIGNAL DISTRIBUTION TO AND/OR FROM CONTROLLERS IN VEHICLE
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12583548
CONTROLLER AND CONTROL METHOD FOR RIDER ASSISTANCE SYSTEM
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12576885
AUTONOMOUS VEHICLE CONTROL
Granted Mar 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 98% (+15.4%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 768 resolved cases by this examiner; grant probability is derived from the career allow rate.
