Prosecution Insights
Last updated: April 18, 2026
Application No. 18/008,140

GENERATING SIMULATION ENVIRONMENTS FOR TESTING AV BEHAVIOUR

Non-Final OA: §101, §102, §103
Filed: Dec 02, 2022
Examiner: MONTES, NARCISO EDUARDO
Art Unit: 2189
Tech Center: 2100 — Computer Architecture & Software
Assignee: Five AI Limited
OA Round: 1 (Non-Final)

Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100%, above average (1 granted / 1 resolved; +45.0% vs TC avg)
Interview Lift: -100.0%, minimal lift (based on resolved cases with interview)
Avg Prosecution (typical timeline): 3y 3m
Total Applications: 15 across all art units (career history); 14 currently pending

Statute-Specific Performance

§101: 30.0% (-10.0% vs TC avg)
§103: 42.9% (+2.9% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 1 resolved case

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17, 20-21, and 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.

Claim 1. STEP 1: Yes. The claim recites a “method,” which is a process.

STEP 2A, PRONG ONE: The claim recites multiple mental processes.

“marking multiple locations to create at least one path for an agent vehicle in the rendered image of the environment”: This describes an observation, evaluation, judgment, or opinion that can be performed in the mind or with the aid of pen and paper; in this case, evaluating the path while marking it.

“generating at least one path which passes through the multiple locations”: This describes an observation, evaluation, judgment, or opinion that can be performed in the mind or with the aid of pen and paper; in this case, evaluating that the path passes through the multiple locations.

“defining at least one behavioural parameter for controlling behaviour of the agent vehicle associated with the at least one path when the scenario is run in a simulation environment”: This describes an observation, evaluation, judgment, or opinion that can be performed in the mind or with the aid of pen and paper; in this case, a judgment determining the parameter of the agent vehicle.

STEP 2A, PRONG TWO: The claim does not integrate the exception into a practical application.

STEP 2B: The claim does not recite an inventive concept or significantly more than the exception.
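For orientation, the claim 1 steps the rejection characterizes as mental processes (marking locations, generating a path through them, defining a behavioural parameter, and recording the scenario) can be sketched in a few lines of Python. Every name below is an illustrative assumption; nothing here is drawn from the application's actual implementation.

```python
def generate_path(locations):
    """Return a path (here, simply the ordered points) through every marked location."""
    return list(locations)

def record_scenario(environment, path, behaviour):
    """Bundle the environment, the marked path, and the behavioural parameter."""
    return {"environment": environment, "path": path, "behaviour": behaviour}

# A user marks locations in the rendered image, a path is generated through
# them, a behavioural parameter is attached, and the scenario is recorded.
marked = [(0.0, 0.0), (10.0, 5.0), (20.0, 0.0)]
scenario = record_scenario("two_lane_road", generate_path(marked),
                           {"target_speed_mps": 13.0})
```

This is only a structural sketch of the recited steps, not the applicant's method; a real editor would generate a continuous path rather than reuse the raw marked points.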
The “rendering of the display” is generic computer use as a tool to apply the abstract idea. MPEP 2106.05(f). The “recording” is extra-solution activity of saving. MPEP 2106.05(g).

Conclusion: Claim 1 is directed to a mental process, is not integrated into a practical application, and lacks an inventive concept. Therefore, it is ineligible under 35 U.S.C. 101.

Regarding Claim 2: The method of claim 1 comprising detecting that a user has selected one of the marked locations and has repositioned it in the image, and generating at least one new path which passes through the existing multiple locations and the repositioned location. Here, detecting and generating a path that passes through multiple locations is a mental process that can be performed with the aid of pen and paper.

Regarding Claim 3: The method of claim 1 comprising detecting that a user has highlighted a set of adjacent locations of the multiple locations and repositioned that set, and generating at least one new path which passes through the existing multiple locations and the repositioned set of multiple locations. Here, detecting is a mental process of evaluation and observation used in generating the path, which can be performed with the aid of pen and paper.

Regarding Claim 4: The method of claim 1 in which the at least one path comprises at least one curved section. This claim merely defines the shape of the path and does not remedy the issues of the claim from which it depends.

Regarding Claim 5: The method of claim 1 wherein the step of generating the at least one path comprises interpolating between the marked multiple locations to generate a continuous path which passes through the marked multiple locations. This claim merely specifies that interpolation is performed when executing the mental process and does not remedy the issues of the claim from which it depends.
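Claim 5's interpolation step can be illustrated with a minimal sketch. The claim does not specify the interpolation scheme, so this example assumes simple linear interpolation between marked waypoints; a real editor might use splines. All function and variable names are assumptions of this sketch.

```python
def interpolate_path(waypoints, samples_per_segment=10):
    """Densify a waypoint list by linear interpolation between neighbours,
    producing a continuous path that still passes through every marked location."""
    path = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        for i in range(samples_per_segment):
            t = i / samples_per_segment
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    path.append(waypoints[-1])  # close with the final marked location
    return path

pts = interpolate_path([(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)])
```

Every marked waypoint appears on the densified path, which is the property the claim recites.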
Regarding Claim 6: The method of claim 1 comprising detecting that a user has selected one of the marked locations and displaying at the selected location a path parameter at that marked location. This is extra-solution activity of displaying and does not remedy the issues of the claim from which it depends.

Regarding Claim 7: The method of claim 6, wherein the path parameter is a default speed for an agent vehicle on the path when the scenario is run in an execution environment. This merely defines the path parameter and does not remedy the issues of the claim from which it depends.

Regarding Claim 8: The method of claim 1 wherein the step of rendering the image of the environment on the display comprises accessing an existing scenario from a scenario database, and displaying that existing scenario on the display. This merely defines accessing the data, which is data-gathering activity, and does not remedy the issues of the claim from which it depends.

Regarding Claim 9: The method of claim 8, wherein the existing scenario comprises a static layer for rendering static objects in the environment and a dynamic layer for controlling motion of moving agents in the environment. This merely defines what is in the “scenario” and does not remedy the issues of the claim from which it depends.

Regarding Claim 10: The method of claim 1 comprising detecting that a user has selected a playback mode at the editing interface and simulating motion of the agent vehicle according to the at least one path and the at least one behavioural parameter in the scenario in the playback mode. This claim provides more detail of the mental process of evaluating the path of the agent vehicle, which can be performed in the mind or with the aid of pen and paper.
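Claim 10's playback mode, simulating motion along the path according to a behavioural parameter, can be sketched as sampling positions along the polyline at constant speed. The function name, the fixed time step, and the constant-speed model are assumptions of this sketch, not details from the application.

```python
import math

def playback_positions(path, speed_mps, dt=0.5):
    """Sample the agent's position every dt seconds while it traverses path."""
    total = sum(math.hypot(b[0] - a[0], b[1] - a[1])
                for a, b in zip(path, path[1:]))
    out = []
    t = 0.0
    while speed_mps * t <= total:
        s = speed_mps * t  # arc length travelled so far
        for a, b in zip(path, path[1:]):  # locate the point at arc length s
            seg = math.hypot(b[0] - a[0], b[1] - a[1])
            if s <= seg:
                f = s / seg if seg else 0.0
                out.append((a[0] + f * (b[0] - a[0]),
                            a[1] + f * (b[1] - a[1])))
                break
            s -= seg
        t += dt
    return out

ticks = playback_positions([(0.0, 0.0), (10.0, 0.0)], speed_mps=5.0)
```

The sampled positions start at the first waypoint and end at the last, which is what a playback view would render tick by tick.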
Regarding Claim 11: The method of claim 1 comprising receiving at the editing interface user input defining a target region, at least one trigger agent, and at least one triggered action, the presence of the target agent in the target region being detected in a simulation when the scenario is run in a simulation environment and causing the triggered action to be effected. This claim recites a mental process of evaluating when a trigger occurs. It does not remedy the issues of the claim from which it depends.

Regarding Claim 12: The method of claim 1 wherein the at least one path does not conform to any driveable track of the road layout in the scene. This claim merely further describes an aspect of the path and does not remedy the issues of the claim from which it depends.

Regarding Claim 13: The method of claim 1 wherein the road layout comprises at least one traffic junction, and wherein the at least one path for the agent vehicle traverses the junction in a manner likely to create a possible collision event with an ego vehicle on the road layout when the scenario is run in a simulation environment. This merely defines the scenario’s purpose and is a field-of-use limitation. MPEP 2106.05(h). It does not remedy the issues of the claim from which it depends.

Claim 14. STEP 1: Yes. The claim recites a “system,” which is a machine.

STEP 2A, PRONG ONE: The claim recites multiple mental processes.

“an editing interface configured to receive user input for marking multiple locations to create at least one path for an agent vehicle in the image of the environment and for defining at least one behavioural parameter for controlling behaviour of the agent vehicle associated with the at least one path when the scenario is run in a simulation environment;”: This is a mental process of observation to define a parameter.
“a path generation module configured to generate at least one path which passes through the multiple locations; and”: This is a mental process of generating a path that passes through multiple locations with the aid of pen and paper.

STEP 2A, PRONG TWO: The claim does not integrate the exception into a practical application.

STEP 2B: The claim does not recite an inventive concept or significantly more than the exception. The limitations “a display configured to present an image of an environment comprising a road layout”, “one or more processors; and”, and “computer memory comprising instructions executable by the one or more processors to implement:” are generic computer components used as a tool to apply the abstract idea. MPEP 2106.05(f).

Conclusion: Claim 14 is directed to a mental process, is not integrated into a practical application, and lacks an inventive concept. Therefore, it is ineligible under 35 U.S.C. 101.

Regarding Claim 15: The system of claim 14 wherein the path generation module is configured to detect that a user has selected one of the marked locations and has repositioned it in the image, and to generate at least one new path which passes through the existing multiple locations and the repositioned location. This claim recites a mental process of observation of locations and paths. It does not remedy the issues of the claim from which it depends.

Regarding Claim 16: The system of claim 14 wherein the path generation module is configured to detect that a user has highlighted a set of adjacent locations of the multiple locations and repositioned that set, and to generate at least one new path which passes through the existing multiple locations and the repositioned set of multiple locations. This is a mental process of observation that can be performed with the aid of pen and paper to create a path. It does not remedy the issues of the claim from which it depends.
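The repositioning behaviour recited in claims 2 and 15, selecting a marked location, moving it, and regenerating the path through the updated locations, can be sketched as follows. All names are hypothetical, and the path generator is a deliberate placeholder for whatever generation scheme an editor would actually use.

```python
def reposition(locations, index, new_point):
    """Return a copy of the marked locations with one location moved."""
    updated = list(locations)
    updated[index] = new_point
    return updated

def regenerate_path(locations):
    """Regenerate the path through the updated locations (placeholder generator)."""
    return list(locations)

# The user drags the middle marked location upward; the path is regenerated
# through the existing locations plus the repositioned one.
locs = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
new_path = regenerate_path(reposition(locs, 1, (5.0, 8.0)))
```

Note that the original marked locations are left untouched, so an editor could support undo by keeping the prior list.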
Regarding Claim 17: The system of claim 14, wherein the path generation module is configured to generate the at least one path by interpolating between the marked multiple locations to generate a continuous path which passes through the marked multiple locations. This is a mental process of making a path that passes through multiple locations with the aid of pen and paper. It does not remedy the issues of the claim from which it depends.

Regarding Claim 20: The system of claim 14, the computer memory comprising a scenario database which stores existing scenarios accessible for display, each existing scenario comprising a static layer for rendering static objects in the environment and a dynamic layer for controlling motion of moving agents in the environment. This is a mental process of making a path that passes through multiple locations with the aid of pen and paper. It does not remedy the issues of the claim from which it depends.

Regarding Claim 21: The system of claim 14, configured to detect that a user has selected a playback mode at the editing interface and to simulate motion of the agent vehicle according to the at least one path and the at least one behavioural parameter in the scenario in the playback mode. This is a mental process of observation in which the motion is traced moving along a path with a parameter such as velocity, which can be performed with the aid of pen and paper. It does not remedy the issues of the claim from which it depends.

Claim 24. STEP 1: Yes. The claim recites a “computer program product … executed by a computer,” which is a manufacture.

STEP 2A, PRONG ONE: The claim recites multiple mental processes.

“marking multiple locations to create at least one path for an agent vehicle in the rendered image of the environment”: This describes an observation, evaluation, judgment, or opinion that can be performed in the mind or with the aid of pen and paper; in this case, evaluating the path while marking it.
“generating at least one path which passes through the multiple locations”: This describes an observation, evaluation, judgment, or opinion that can be performed in the mind or with the aid of pen and paper; in this case, evaluating that the path passes through the multiple locations.

“defining at least one behavioural parameter for controlling behaviour of the agent vehicle associated with the at least one path when the scenario is run in a simulation environment”: This describes an observation, evaluation, judgment, or opinion that can be performed in the mind or with the aid of pen and paper; in this case, a judgment determining the parameter of the agent vehicle.

STEP 2A, PRONG TWO: The claim does not integrate the exception into a practical application.

STEP 2B: The claim does not recite an inventive concept or significantly more than the exception. The “rendering of the display” is generic computer use as a tool to apply the abstract idea. MPEP 2106.05(f). The “recording” is extra-solution activity of saving. MPEP 2106.05(g).

Conclusion: Claim 24 is directed to a mental process, is not integrated into a practical application, and lacks an inventive concept. Therefore, it is ineligible under 35 U.S.C. 101.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4, and 24 are rejected under 35 U.S.C. 102 as being unpatentable over CORLESS et al., “What’s New in Automated Driving with MATLAB and Simulink” (2019) [herein “CORLESS”].

Regarding Claim 1, CORLESS teaches A computer implemented method of generating a scenario to be run in a simulation environment for testing the behaviour of an autonomous vehicle, the method comprising:

rendering on a display of a computer device an image of an environment comprising a road layout: [Image: media_image1.png] This slide shows a display rendering an environment of a road layout. (Pg. 5).

receiving at an editing interface user input for marking multiple locations to create at least one path for an agent vehicle in the rendered image of the environment: [Image: media_image1.png] Slide 5 shows an interface for users to edit in vehicles and their paths. (Pg. 5).

generating at least one path which passes through the multiple locations, and rendering the at least one path in the image: [Image: media_image2.png] Slide 7 shows a generated path through multiple locations, rendered on the display. (Pg. 7).

receiving at the editing interface user input defining at least one behavioural parameter for controlling behaviour of the agent vehicle associated with the at least one path when the scenario is run in a simulation environment: [Image: media_image2.png] Slide 7 shows using trajectory parameters for the ego agent associated with the path followed. (Pg. 7).

recording the scenario comprising the environment, the marked path and the at least one behavioural parameter: [Image: media_image3.png] Slide 6 shows saving the scenario of the environment, path, vehicles, and their parameters. (Pg. 6).

Regarding Claim 4, CORLESS teaches The method of claim 1, in which the at least one path comprises at least one curved section.
[Image: media_image4.png] (Pg. 7). This shows the path comprising a curved section.

Claim 24 recites substantially the same limitations as claim 1, except that this claim is directed to “a non-transitory computer readable medium”. Therefore, this claim is rejected under the same rationale as addressed above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2, 5, and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over CORLESS et al., “What’s New in Automated Driving with MATLAB and Simulink” (2019) [herein “CORLESS”], and “Driving Scenario Designer” (2019) [herein “WAYBACK MACHINE”].

Regarding Claim 2, CORLESS does not explicitly teach but WAYBACK MACHINE teaches The method of claim 1 comprising detecting that a user has selected one of the marked locations and has repositioned it in the image, and generating at least one new path which passes through the existing multiple locations and the repositioned location. [Image: media_image5.png] (Pg. 4). [Image: media_image6.png] (Pg. 15). This shows selecting a waypoint to be repositioned in the image and then recalculating the path to pass through the new set of waypoints.

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the teachings of WAYBACK MACHINE’s method of generating paths with CORLESS’s simulation system in order to make it more robust. The motivation for doing so would have been to create a more robust simulation system, as stated by WAYBACK MACHINE: “…enables you to design synthetic driving scenarios for testing your autonomous driving systems.” (Pg. 1).

Regarding Claim 5, CORLESS does not explicitly teach but WAYBACK MACHINE teaches The method of claim 1, wherein the step of generating the at least one path comprises interpolating between the marked multiple locations to generate a continuous path which passes through the marked multiple locations.
[Image: media_image7.png] (Pg. 3). [Image: media_image8.png] (Pg. 4). This shows the path being interpolated when the three points are selected to create a road or path for the agent.

Regarding Claim 8, CORLESS does not explicitly teach but WAYBACK MACHINE teaches The method of claim 1 wherein the step of rendering the image of the environment on the display comprises accessing an existing scenario from a scenario database, and displaying that existing scenario on the display. [Image: media_image9.png] (Pg. 4). “Generate vision sensor detections from a prebuilt driving scenario of a Euro NCAP test protocol. For more details on prebuilt scenarios available from the app, see Prebuilt Driving Scenarios in Driving Scenario Designer. For more details on available Euro NCAP scenarios, see Euro NCAP Driving Scenarios in Driving Scenario Designer. Load a Euro NCAP autonomous emergency braking (AEB) scenario of a collision with a pedestrian child. At collision time, the point of impact occurs 50% of the way across the width of the car.” (Pg. 4). “Run the scenario, and adjust settings as needed. Then click Save > Roads & Actors to save the road and car models to a MAT-file.” (Pg. 4). This shows accessing a preexisting scenario from a storage medium for display in simulation.

Regarding Claim 9, CORLESS does not explicitly teach but WAYBACK MACHINE teaches The method of claim 8, wherein the existing scenario comprises a static layer for rendering static objects in the environment and a dynamic layer for controlling motion of moving agents in the environment. [Image: media_image10.png] (Pg. 2). This shows, as described by the specification, what constitutes a static layer of roadways and a dynamic layer of ego vehicles, as shown above.
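The two-layer scenario structure discussed for claim 9 can be sketched as a simple container with a static layer for fixed objects and a dynamic layer for moving agents. The class name, field names, and dictionary contents below are illustrative assumptions, not details from the application or the cited references.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    static_layer: list = field(default_factory=list)   # fixed objects: roads, signs
    dynamic_layer: list = field(default_factory=list)  # moving agents and their motion

scenario = Scenario()
scenario.static_layer.append({"type": "road", "lanes": 2})
scenario.dynamic_layer.append({"type": "car",
                               "path": [(0.0, 0.0), (50.0, 0.0)],
                               "speed_mps": 13.0})
```

Splitting the scenario this way lets a renderer draw the static layer once while the simulator updates only the dynamic layer each tick.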
Regarding Claim 10, CORLESS does not explicitly teach but WAYBACK MACHINE teaches The method of claim 1 comprising detecting that a user has selected a playback mode at the editing interface and simulating motion of the agent vehicle according to the at least one path and the at least one behavioural parameter in the scenario in the playback mode. [Image: media_image10.png] (Pg. 2). [Image: media_image11.png] (Pg. 6). The above images show detecting when the user selects a playback mode, in this case continue/run, to play the scenario simulation for a vehicle along a path with a parameter, in this case speed.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over CORLESS et al., “What’s New in Automated Driving with MATLAB and Simulink” (2019) [herein “CORLESS”], and “Blender 2.79 Modeling » Curves » Editing » Introduction” (2019) [herein “BLENDER”].

Regarding Claim 3, CORLESS does not explicitly teach but BLENDER teaches The method of claim 1 comprising detecting that a user has highlighted a set of adjacent locations of the multiple locations and repositioned that set, and generating at least one new path which passes through the existing multiple locations and the repositioned set of multiple locations. “…curve control points and handles can be grabbed/moved G, rotated R or scaled S as described in the Basic Transformations section.” (Pg. 1). “When more than one vertex is selected, the median values are edited and ‘Median’ is added in front of the labels.” (Pg. 1). “For Bézier curves, this smoothing operation reduces the distance between the selected control point/s and their neighbors, while keeping the neighbors anchored.” (Pg. 8). “Deletes the selected control points, while the remaining segment is fitted to the deleted curve by adjusting its handles.” (Pg. 6). “The first controls (X, Y, Z) show the coordinates of the selected point or handle (vertex).” (Pg. 1).
This shows the user selecting multiple points to reposition them. It also shows altering points, which regenerates the path on the coordinate plane. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the teachings of BLENDER’s method of editing paths with CORLESS’s simulation system in order to make it more robust. The motivation for doing so would have been to create a more robust, editable simulation system, as stated by BLENDER: “Like other elements in Blender, curve control points and handles can be grabbed/moved G, rotated R or scaled S as described in the Basic Transformations section. When in Edit Mode, proportional editing is also available for transformation actions.” (Pg. 1).

Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over CORLESS et al., “What’s New in Automated Driving with MATLAB and Simulink” (2019) [herein “CORLESS”], and “Driving Scenario Designer” (2018) [herein “DRIVING SCENARIO DESIGNER”].

Regarding Claim 6, CORLESS does not explicitly teach but DRIVING SCENARIO DESIGNER teaches The method of claim 1, comprising detecting that a user has selected one of the marked locations and displaying at the selected location a path parameter at that marked location. [Image: media_image12.png] (2:04). This shows the user selecting a location for the ego vehicle and displaying a parameter; in this case the parameter is location. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine CORLESS and DRIVING SCENARIO DESIGNER because DRIVING SCENARIO DESIGNER is art recognized for the intended purpose of visually displaying, at a user-selected marked location, a path parameter and a default speed parameter. (See MPEP 2144.07).

Regarding Claim 7, CORLESS does not explicitly teach but DRIVING SCENARIO DESIGNER teaches The method of claim 6,
wherein the path parameter is a default speed for an agent vehicle on the path when the scenario is run in an execution environment. [Image: media_image12.png] (2:04). This shows the default path parameter being the speed; in this case it is set to 30 m/s.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over CORLESS et al., “What’s New in Automated Driving with MATLAB and Simulink” (2019) [herein “CORLESS”], and US20180107770A1 by CAHOON et al. (2019) [herein “CAHOON”].

Regarding Claim 11, CORLESS does not explicitly teach but CAHOON teaches The method of claim 1 comprising receiving at the editing interface user input defining a target region, at least one trigger agent, and at least one triggered action, the presence of the target agent in the target region being detected in a simulation when the scenario is run in a simulation environment and causing the triggered action to be effected. “In at least one example, a condition primitive can cause the performance of some action or evaluation of some other condition to be delayed until a condition associated with the condition primitive is satisfied. Condition primitives can include, but are not limited to, a ‘wait’ condition, a ‘wait for’ condition, a ‘distance between or near’ condition, a ‘speed’ condition, an ‘in region’ condition, etc.” (0027). “An ‘in region’ condition can instruct the simulator to delay the performance of some action or evaluation of some other condition until an entity is within a specified region. That is, an ‘in region’ condition can be satisfied based on at least a portion of an entity being within a volume of space in a scene (e.g., environment) corresponding to a specified region. In at least one example, based on at least a portion of an entity being within a volume of space in a scene (e.g., environment) corresponding to a specified region, a Boolean signaling can be relayed to the simulator indicating that an entity is within the specified region.” (0031). “…actions corresponding to action primitives can be performed by the simulator when they are encountered during instantiation of a sequence.” (0032). This shows a simulator having regions with triggers that are detected and then activated to cause an action.

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the teachings of CAHOON’s method of setting triggers with CORLESS’s simulation system in order to make it more robust. The motivation for doing so would have been to create a more robust simulation system with detectable triggered events for testing, as stated by CAHOON: “In at least one example, actions corresponding to action primitives can be performed by the simulator when they are encountered during instantiation of a sequence. Actions can include turning a trigger on or off. Furthermore, actions can include identifying success or failure, for instance, at the end of a test.” (0032).

Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over CORLESS et al., “What’s New in Automated Driving with MATLAB and Simulink” (2019) [herein “CORLESS”], and “ASAM OpenSCENARIO: User Guide” (2020) [herein “ASAM”].

Regarding Claim 12, CORLESS does not explicitly teach but ASAM teaches The method of claim 1 wherein the at least one path does not conform to any driveable track of the road layout in the scene. “OpenSCENARIO defines a data model and a derived file format for the description of scenarios used in driving and traffic simulators, as well as in automotive virtual development, testing and validation. The primary use-case of OpenSCENARIO is to describe complex, synchronized Maneuvers that involve multiple instances of Entity, like Vehicles, Pedestrians and other traffic participants. The description of a scenario may be based on driver Actions (e.g. performing a lane change) or on instances of Trajectory (e.g. derived from a recorded driving Maneuver). The standard provides the description methodology for scenarios by defining hierarchical elements, from which scenarios, their attributes and relations are constructed.” (Pg. 5). “Instantiation of instances of Entity, such as Vehicles, or Pedestrians, acting on and off the road.” (Pg. 5). “Routes are used to navigate instances of Entity through the road network based on a list of Waypoints on the road which are linked in order, resulting in directional Routes. An Entity’s movement between the Waypoints is left to the simulator using the RouteStrategy as constraint.” (Pg. 13). “Instances of Trajectory can be specified using just the three positional dimensions (along the X, Y, and Z axes, see Section 3.1.7 for coordinate system definitions).” (Pg. 14). This shows that the vehicle path need not conform to a road layout, meaning the vehicle could drive off the roadway.

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the teachings of ASAM’s method of tracking and vehicle trajectory with CORLESS’s simulation system in order to make it more robust. The motivation for doing so would have been to create a more robust simulation system with greater trajectory control and pathing for testing, as stated by ASAM: “The primary use-case of OpenSCENARIO is to describe complex, synchronized Maneuvers that involve multiple instances of Entity, like Vehicles, Pedestrians and other traffic participants. The description of a scenario may be based on driver Actions (e.g. performing a lane change) or on instances of Trajectory (e.g. derived from a recorded driving Maneuver). The standard provides the description methodology for scenarios by defining hierarchical elements, from which scenarios, their attributes and relations are constructed.” (Section 1.1.2).

Regarding Claim 13, CORLESS does not explicitly teach but ASAM teaches The method of claim 1 wherein the road layout comprises at least one traffic junction, and wherein the at least one path for the agent vehicle traverses the junction in a manner likely to create a possible collision event with an ego vehicle on the road layout when the scenario is run in a simulation environment. “This scenario recreates a critical situation at an intersection where the Ego vehicle and another vehicle are on a collision course. The Ego vehicle is instantiated with an initial speed of 10 m/s and is assigned a route guiding it straight through an intersection (south to north). A second vehicle is instantiated with no speed and a route which will guide it straight through the same intersection (west to east).” (Pg. 45). “A publicly developed and vendor-independent standard, such as OpenSCENARIO, supports this endeavor by enabling the exchange and usability of scenarios in various simulation applications.” (Pg. 5). This shows a road layout with a traffic junction, in which an agent vehicle on a collision course with another vehicle is run in a simulation.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over CORLESS et al., “What’s New in Automated Driving with MATLAB and Simulink” (2019) [herein “CORLESS”], and US20110118927A1 by CIMA et al. (2011) [herein “CIMA”].
Regarding Claim 14, CORLESS teaches: A computer system for generating a scenario to be run in a simulation environment for testing the behaviour of an autonomous vehicle, the computer system comprising: a display configured to present an image of an environment comprising a road layout; [Image: media_image1.png] This slide shows a display rendering an environment with a road layout. (Pg. 5).

an editing interface configured to receive user input for marking multiple locations to create at least one path for an agent vehicle in the image of the environment and for defining at least one behavioural parameter for controlling behaviour of the agent vehicle associated with the at least one path when the scenario is run in a simulation environment; [Image: media_image1.png] (Pg. 5). [Image: media_image2.png] (Pg. 7). This shows a user interface for marking locations for a path. It also shows defining parameters, such as position and speed, to be run in a simulation environment.

a path generation module configured to generate at least one path which passes through the multiple locations; and [Image: media_image2.png] (Pg. 7). This shows generating a path which passes through multiple locations.

a rendering module configured to render the at least one path in the image, the computer memory configured to record the scenario comprising the environment, the marked path and the at least one behavioural parameter. [Image: media_image3.png] This shows a rendering system whose storage medium saves a scenario comprising the environment and the actors' paths and parameters.
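The recorded scenario in claim 14 (environment, marked path, behavioural parameters) can be pictured as a small data structure. This is a minimal hypothetical sketch; the class names, field names, and example values are illustrative and do not come from CORLESS, CIMA, or the application:

```python
# Hypothetical scenario record: the stored scenario bundles the
# environment (road layout), each agent's marked path, and the
# behavioural parameters associated with that path.
from dataclasses import dataclass, field

@dataclass
class AgentPath:
    waypoints: list   # marked (x, y) locations defining the path
    behaviour: dict   # e.g. {"target_speed_mps": 10.0}

@dataclass
class Scenario:
    road_layout: str                    # identifier of the static environment
    agents: list = field(default_factory=list)

    def add_agent(self, waypoints, **behaviour):
        """Record one agent's marked path and behavioural parameters."""
        self.agents.append(AgentPath(list(waypoints), dict(behaviour)))

scenario = Scenario(road_layout="four_way_junction")
scenario.add_agent([(0, 0), (50, 0)], target_speed_mps=10.0)
print(len(scenario.agents))  # 1
```

Saving and reloading such a record is what lets the same scenario be replayed in a simulation environment and re-edited later.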
CORLESS does not explicitly teach but CIMA teaches: one or more processors; and computer memory comprising instructions executable by the one or more processors to implement: "The present invention also relates to a system for producing vehicle control commands for an autonomous vehicle that includes an input device, a display device, and a computer system. The computer system receives and stores data from the input device, displays data on the display device, stores program steps for program control, and processes data. The computer system, through the input device, receiving data relating to a plurality of proposed vehicle locations." (0005). This shows a computer system with memory and a processor.

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the teachings of CIMA's hardware and vehicle-simulation software with CORLESS's simulation system in order to make it more robust. The motivation for doing so would have been to create a more robust simulation system with software for testing, as stated by CIMA: "The computer system presents at least the simulated vehicle orientation on the display device. In addition, the computer system, through the input device, receives a user verification of the simulated vehicle orientation for at least one point on the simulated vehicle path, and produces vehicle control commands for the autonomous vehicle from the simulated vehicle path and simulated vehicle orientation, the vehicle control commands controlling the autonomous vehicle to follow the simulated vehicle path and the simulated vehicle orientation." (0005).

Claims 15, 17, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over CORLESS et al., "What's New in Automated Driving with MATLAB and Simulink" (2019) [herein "CORLESS"], US20110118927A1 by CIMA et al. (2011) [herein "CIMA"], and "Driving Scenario Designer" (2019) [herein "WAYBACK MACHINE"].
Regarding Claim 15, CORLESS and CIMA do not explicitly teach but WAYBACK MACHINE teaches: The system of claim 14 wherein the path generation module is configured to detect that a user has selected one of the marked locations and has repositioned it in the image, and to generate at least one new path which passes through the existing multiple locations and the repositioned location. [Image: media_image5.png] (Pg. 4). [Image: media_image6.png] (Pg. 15). This shows selecting a waypoint to be repositioned in the image and then recalculating the path to pass through the new set of waypoints.

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the teachings of WAYBACK MACHINE's method of generating paths with the combination of CORLESS-CIMA's simulation system in order to make it more robust. The motivation for doing so would have been to create a more robust simulation system, as stated by WAYBACK MACHINE: "…enables you to design synthetic driving scenarios for testing your autonomous driving systems." (Pg. 1).

Regarding Claim 17, CORLESS and CIMA do not explicitly teach but WAYBACK MACHINE teaches: The system of claim 14, wherein the path generation module is configured to generate the at least one path by interpolating between the marked multiple locations to generate a continuous path which passes through the marked multiple locations. [Image: media_image7.png] (Pg. 3). [Image: media_image8.png] (Pg. 4). This shows the path being interpolated when three points are selected to create a road or path for the agent.
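The interpolation recited in claim 17 can be illustrated with a piecewise-linear sketch. This is only an assumed, simplified stand-in for whatever curve fitting the cited tools actually perform; the function name and `spacing` parameter are hypothetical:

```python
# Hypothetical sketch of "interpolating between the marked multiple
# locations to generate a continuous path": sample points along the
# polyline through the marked waypoints at a fixed spacing.
from math import hypot

def interpolate_path(waypoints, spacing=1.0):
    """Return points sampled roughly every `spacing` metres along the
    polyline through `waypoints`; the result passes through every
    marked location."""
    path = [waypoints[0]]
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        seg_len = hypot(x1 - x0, y1 - y0)
        steps = max(1, int(seg_len // spacing))
        for i in range(1, steps + 1):
            t = i / steps
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return path

marked = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)]
path = interpolate_path(marked, spacing=2.5)
print(all(p in path for p in marked))  # True: path hits every marked point
```

Repositioning a waypoint (claim 15) would then amount to editing one entry of `marked` and regenerating the path the same way; a real tool would likely use spline rather than linear interpolation for smoothness.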
Regarding Claim 20, CORLESS and CIMA do not explicitly teach but WAYBACK MACHINE teaches: The system of claim 14, the computer memory comprising a scenario database which stores existing scenarios accessible for display, each existing scenario comprising a static layer for rendering static objects in the environment and a dynamic layer for controlling motion of moving agents in the environment. [Image: media_image10.png] (Pg. 2). This shows, as described by the specification, what constitutes a static layer of roadways and a dynamic layer of ego vehicles.

Regarding Claim 21, CORLESS and CIMA do not explicitly teach but WAYBACK MACHINE teaches: The system of claim 14, configured to detect that a user has selected a playback mode at the editing interface and to simulate motion of the agent vehicle according to the at least one path and the at least one behavioural parameter in the scenario in the playback mode. [Image: media_image10.png] (Pg. 2). [Image: media_image11.png] (Pg. 6). The above images show detecting when the user selects a playback mode (in this case, continue/run) to play the scenario simulation for a vehicle along a path with a parameter (in this case, speed).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over CORLESS et al., "What's New in Automated Driving with MATLAB and Simulink" (2019) [herein "CORLESS"], US20110118927A1 by CIMA et al. (2011) [herein "CIMA"], and "Blender 2.79 Modeling » Curves » Editing » Introduction" (2019) [herein "BLENDER"].
Regarding Claim 16, CORLESS and CIMA do not explicitly teach but BLENDER teaches: The system of claim 14 wherein the path generation module is configured to detect that a user has highlighted a set of adjacent locations of the multiple locations and repositioned that set, and to generate at least one new path which passes through the existing multiple locations and the repositioned set of multiple locations. "…curve control points and handles can be grabbed/moved G, rotated R or scaled S as described in the Basic Transformations section." (Pg. 1). "When more than one vertex is selected, the median values are edited and "Median" is added in front of the labels." (Pg. 1). "For Bézier curves, this smoothing operation reduces the distance between the selected control point/s and their neighbors, while keeping the neighbors anchored." (Pg. 8). "Deletes the selected control points, while the remaining segment is fitted to the deleted curve by adjusting its handles." (Pg. 6). "The first controls (X, Y, Z) show the coordinates of the selected point or handle (vertex)." (Pg. 1). This shows the user selecting multiple points to reposition them. It also shows that altering points regenerates the path on the coordinate plane.

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the teachings of BLENDER's method of editing paths with CORLESS-CIMA's simulation system in order to make it more robust.
The motivation for doing so would have been to create a more robust, editable simulation system, as stated by BLENDER: "Like other elements in Blender, curve control points and handles can be grabbed/moved G, rotated R or scaled S as described in the Basic Transformations section. When in Edit Mode, proportional editing is also available for transformation actions." (Pg. 1).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 10254759 B1 by FAUST et al. (2019), US20190354643A1 by SHUM et al. (2019), and US20190163181A1 by LIU et al. (2019).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NARCISO EDUARDO MONTES, whose telephone number is (571) 272-5773. The examiner can normally be reached Mon-Fri 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, REHANA PERVEEN, can be reached at (571) 272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.E.M./
Examiner, Art Unit 2189

/REHANA PERVEEN/
Supervisory Patent Examiner, Art Unit 2189

Prosecution Timeline

Dec 02, 2022
Application Filed
Apr 02, 2026
Non-Final Rejection — §101, §102, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
0%
With Interview (-100.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
