Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. The amendment filed 01/30/2026 has been received and considered. Claims 1-2 and 4-20 are presented for examination.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
2. Claims 1-2 and 4-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
As per Claims 1, 8, and 15, they recite the limitation “a plurality of unsuccessful simulation scenarios for analysis to determine root causes due to modes of failure,” which is vague and indefinite since “unsuccessful … scenarios” does not set ranges. Also, what are the metes and bounds of the “modes of failure,” and how are they determined in terms of the “unsuccessful … scenarios”?
As per Claims 1, 10, and 17, they recite the limitation “such that a subsequent analysis is more likely to include distinct failure modes,” which is vague and indefinite since “more likely to include” does not set ranges. The limitation is interpreted as an intended use, such as “for a subsequent analysis.”
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
3. Claims 1-2 and 4-20 are rejected under 35 U.S.C. 103 as being unpatentable over Englard et al. (US 20200074230 A1), in view of Shen et al. (CN 109544993 A), and further in view of Muehlenstaedt et al. (US 20230222268 A1).
As per Claims 1, 8, and 15, Englard et al. teaches: (Claim 1) a method (Fig. 1) of automatically classifying interactions in a simulation scenario ([0034], Fig. 9), the method comprising, by a processor (Fig. 1):
(Claim 8) a data store containing a plurality of simulation scenarios ([0047]-[0049]); and
a non-transient memory that stores programming instructions that are configured to cause the processor (Fig. 1, [0047]-[0049]) to:
(Claim 15) a memory that stores programming instructions that are configured to cause a processor to classify interactions in a simulation scenario (Fig. 1, [0047]-[0049]) by:
(Claims 1, 8, and 15) executing a simulation scenario that includes features of a scene through which a vehicle may travel (Fig. 2A & 4A, [0060]-[0062] “The virtual environment, as referred to herein, may include a computer rendered environment including streets, roads, intersections, overpasses, vehicles, pedestrians, buildings or other structures, traffic lights or signs, or any other object or surface capable of being rendered in a virtual environment, such as a 2D or 3D environment.”, “an example photo-realistic scene 200 of a virtual environment in the direction of travel of an autonomous vehicle within the virtual environment of scene 200”, [0064] “a virtual vehicle (e.g., any of vehicles 401, 451, 700, and/or 760) may be simulated in a virtual environment such that the virtual vehicle travels along a standard route. In such embodiments, data outputs of actions taken by the virtual vehicle may be observed and/or recorded as feature data for purposes of training machine learning models as described herein.”; [0073] “photo-realistic scene 400 of a virtual environment in the direction of travel of an autonomous vehicle (e.g., vehicle 401) within the virtual environment, ”; Fig. 9 element 904 ->908), the simulation scenario including one or more actors (Fig. 2A & 4A, [0061]-[0062] “photo-realistic scene 200 includes renderings of objects and surfaces, including vehicles 210, 212, 214 moving within each of the lanes divided lane markings 204 and 206, and also renderings of vehicle 230 moving in an opposite direction that the autonomous vehicle within the virtual environment.”, [0073] “additional surfaces with which virtual vehicle 401 may interact, including, but not limited to, pedestrian 409, vehicles 412 and 414…”; Fig. 9 element 904 ->908: Examiner note – generating a plurality of imaging scenes defining a virtual environment corresponds to “executing…”);
identifying an intersection between a first road and a second road in the simulation scenario ([0043] “correspondences between actions (e.g., driving forward in a certain setting) and safety-related outcomes (e.g., avoiding collision) can be expressed as a ground truth and used in generating training dataset(s)… The ground truth data may be used for various types of training.”, [0073] “Photo-realistic scene 400 further includes additional surfaces with which virtual vehicle 401 may interact, …Other objects or surfaces with which virtual vehicle 401 may interact include intersection 407 and pothole 408.”; Fig. 9 element 904 ->908), wherein the intersection is in a planned path of the vehicle ([0073] “Photo-realistic scene 400 further includes surfaces with which virtual vehicle 401 may interact, including, but not limited to, roads 402 and 403, sidewalks 404 and 405 and crosswalk 406. Photo-realistic scene 400 further includes additional surfaces with which virtual vehicle 401 may interact, including, but not limited to, pedestrian 409, vehicles 412 and 414, traffic sign 415, traffic light 416, tree 417, and building 418. Other objects or surfaces with which virtual vehicle 401 may interact include intersection 407 and pothole 408.”, Fig. 4B [0087]-[0092] “descriptor data (e.g., descriptors 451-482) may be included in training data/datasets to train machine learning models and/or self-driving control architectures for controlling autonomous vehicles ”, “such descriptors include the vehicle 451 itself and other objects and surfaces that vehicle 451 may interact with. These include surfaces or objects such as lane markings 452 and 454, center lane marking 456, pothole 458, sidewalk 460, intersection 462,”; Fig. 9 element 904 ->908: Examiner note – defining how objects or surfaces interact with each other in the virtual environment and controlling an autonomous vehicle interacting with other vehicles on other roads and intersections within the virtual environment corresponds to “identifying…”);
….
executing a plurality of additional simulation scenarios from a data store ([0049] “The data stored in memory 152 may include all or part of any of the data or information described herein, including, for example, the photo-realistic scenes, the depth-map-realistic scenes, the environment-object data, feature training dataset(s), or other information or scenes as described herein.”, [0095] “a scenario simulator (not shown) may be configured to generate one or more simulated environment scenarios, wherein each of the simulated environment scenario(s) corresponds to a variation of a particular object, surface, or situation within the virtual environment. A particular object, surface, or situation may include, for example, a road (e.g., 402 or 403), an intersection (e.g., 407), a stop sign, or a traffic light (e.g., 416)”).
Englard et al. fails to teach explicitly in response to one of the actors occupying a lane of either the first road or the second road, classifying the interaction between the vehicle and the actor into one or more classifications based on the intersection, the path of the vehicle, and the lane occupied by the actor;
for each simulation scenario, classifying one or more interactions between the vehicle and one or more actors;
flagging a plurality of unsuccessful simulation scenarios for analysis to determine root causes due to modes of failure; and
(Claim 1) ranking the unsuccessful simulation scenarios based on the one or more classifications such that a subsequent analysis is more likely to include distinct failure modes.
Shen et al. teaches in response to one of the actors occupying a lane of either the first road or the second road, classifying the interaction between the vehicle and the actor into one or more classifications based on the intersection, the path of the vehicle, and the lane occupied by the actor (“four basic right turning condition, the it is classified as the following four situations, vehicle turning together into the same lane and the left side when incorporated into the lane of the vehicle collision (condition a), when the vehicle turns right to be incorporated in a lane crossing the road of pedestrian collision (condition II), crossing the road pedestrian collides to the front when the vehicle turns right (three), right turning lane with the lane the vehicle collision (four) and four kinds of conditions.” on Pg 6);
for each simulation scenario, classifying one or more interactions between the vehicle and one or more actors (“four basic right turning condition, the it is classified as the following four situations, vehicle turning together into the same lane and the left side when incorporated into the lane of the vehicle collision (condition a), when the vehicle turns right to be incorporated in a lane crossing the road of pedestrian collision (condition II), crossing the road pedestrian collides to the front when the vehicle turns right (three), right turning lane with the lane the vehicle collision (four) and four kinds of conditions.” on Pg 6).
Englard et al. and Shen et al. are analogous art because they are both related to a vehicle behavior simulation.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Shen et al. into Englard et al.’s invention to provide an improved simulation system that can accurately and efficiently calculate the collision possibility for a vehicle interaction, such as turning right at an intersection (Shen et al.: Pg 5).
Englard et al. as modified by Shen et al. fails to teach explicitly flagging a plurality of unsuccessful simulation scenarios for analysis to determine root causes due to modes of failure; and (Claim 8) ranking the unsuccessful simulation scenarios based on the one or more classifications such that a subsequent analysis is more likely to include distinct failure modes.
Muehlenstaedt et al. teaches flagging a plurality of unsuccessful simulation scenarios for analysis to determine root causes due to modes of failure (Fig. 7 and the description, [0082]-[0089], “The simulator 510 may analyze the data logs whether an execution of the scenario is successful or unsuccessful. … a simulation execution is classified as successful or unsuccessful, e.g., based on whether the AV executes an expected action in the scenario within expected tolerance limits. .. . The scenario may be classified as unsuccessful if the AV completes a planned trajectory 102 in the scenario within a threshold period of time, but fails to maintain the secondary success criterion (e.g., fails to maintain a minimum distance to other objects, actors, or lane boundaries, violates a traffic rule or regulation, etc.).”,“unsuccessful scenario variations may need to be analyzed and/or triaged by experts to try to understand how to improve the AV’s motion planning system adding a human cost to each unsuccessful scenario variations.”);
(Claim 1) ranking the unsuccessful simulation scenarios based on the one or more classifications such that a subsequent analysis is more likely to include distinct failure modes (Figures 9-10, [0090]-[0091], [0096]-[0097] “to collect simulation outcomes (i.e., successful or unsuccessful). The training scenario variations and the corresponding outcomes may be used to train a machine learning model…”, “uncertainty estimate (e.g., a corresponding uncertainty score) per scenario variation”, “ranking of scenario variations according to their likelihood of changing successful and unsuccessful results.”, “the selection and prioritization of scenario variations for execution (and/or triaging) may also be based on failure modes of the scenario variations. … Doing clustering by failure mode enables the system to pick the highest ranked scenarios per failure mode, and thus ensures a diverse selection of scenario variations for triaging and/or execution. The regions 901, 902, 903 illustrated in FIG. 9 (discussed above) may be clustered based on failure modes and the associated boundaries are used for selecting scenario variations per failure mode…. Based on the clusters, the system may treat each cluster as though it represents a single failure mode. Therefore, the system may revise or refine priorities for training scenarios to emphasize each distinct failure mode”).
Englard et al., Shen et al., and Muehlenstaedt et al. are analogous art because they are all related to a vehicle behavior simulation.
The motivation to combine the teaching of Muehlenstaedt et al. is to provide a simulated operation of an autonomous vehicle (AV) that identifies and develops an effective set of relevant simulation scenarios by ranking each of the scenario variations (Muehlenstaedt et al.: [0004], [0014]).
As per Claims 2, 9, and 16, Englard et al. fails to teach explicitly wherein the one or more classifications comprise (i) moving in the same direction, (ii) moving in opposing directions, (iii) actor crossing from the left, or (iv) actor crossing from the right.
Shen et al. teaches wherein the one or more classifications comprise (i) moving in the same direction, (ii) moving in opposing directions, (iii) actor crossing from the left, or (iv) actor crossing from the right (“four basic right turning condition, the it is classified as the following four situations, vehicle turning together into the same lane and the left side when incorporated into the lane of the vehicle collision (condition a), when the vehicle turns right to be incorporated in a lane crossing the road of pedestrian collision (condition II), crossing the road pedestrian collides to the front when the vehicle turns right (three), right turning lane with the lane the vehicle collision (four) and four kinds of conditions.” on Pg 6).
As per Claims 4, 11, and 18, Englard et al. fails to teach explicitly wherein classifying the interaction comprises: in response to the vehicle entering the intersection from a first lane segment of the first road and one of the actors entering the intersection from a second lane segment of the second road, classifying the interaction based on the planned path of the vehicle and a direction of the second lane segment.
Shen et al. teaches in response to the vehicle entering the intersection from a first lane segment of the first road and one of the actors entering the intersection from a second lane segment of the second road, classifying the interaction based on the planned path of the vehicle and a direction of the second lane segment (“four basic right turning condition, the it is classified as the following four situations, vehicle turning together into the same lane and the left side when incorporated into the lane of the vehicle collision (condition a), when the vehicle turns right to be incorporated in a lane crossing the road of pedestrian collision (condition II), crossing the road pedestrian collides to the front when the vehicle turns right (three), right turning lane with the lane the vehicle collision (four) and four kinds of conditions.” on Pg 6).
As per Claims 5, 12, and 19, Englard et al. fails to teach explicitly wherein classifying the interaction comprises: in response to one of the actors leaving the intersection on an exit lane segment, classifying the interaction based on the planned path of the vehicle and a direction of the exit lane segment.
Shen et al. teaches in response to one of the actors leaving the intersection on an exit lane segment, classifying the interaction based on the planned path of the vehicle and a direction of the exit lane segment (“four basic right turning condition, the it is classified as the following four situations, vehicle turning together into the same lane and the left side when incorporated into the lane of the vehicle collision (condition a), when the vehicle turns right to be incorporated in a lane crossing the road of pedestrian collision (condition II), crossing the road pedestrian collides to the front when the vehicle turns right (three), right turning lane with the lane the vehicle collision (four) and four kinds of conditions.” on Pg 6).
As per Claims 6, 13, and 20, Englard et al. teaches storing the classifications and the corresponding timestamps in a data store associated with the simulation scenario ([0049] “The data stored in memory 152 may include all or part of any of the data or information described herein, including, for example, the photo-realistic scenes, the depth-map-realistic scenes, the environment-object data, feature training dataset(s), or other information or scenes as described herein.”, [0076]-[0077], “scene 400 may represent one such scene of hundreds or thousands of scenes generated over a particular time span.”, [0118], “one or more “future occupancy grids” that indicate predicted object positions, boundaries and/or orientations at one or more future times (e.g., one, two, and five seconds ahead). ”).
Englard et al. fails to teach explicitly further comprising: classifying a plurality of interactions between the vehicle and the one or more actors in the simulation scenario, wherein each classification comprises one or more corresponding timestamps.
Shen et al. teaches classifying a plurality of interactions between the vehicle and the one or more actors in the simulation scenario (“four basic right turning condition, the it is classified as the following four situations, vehicle turning together into the same lane and the left side when incorporated into the lane of the vehicle collision (condition a), when the vehicle turns right to be incorporated in a lane crossing the road of pedestrian collision (condition II), crossing the road pedestrian collides to the front when the vehicle turns right (three), right turning lane with the lane the vehicle collision (four) and four kinds of conditions.” on Pg 6), wherein each classification comprises one or more corresponding timestamps (“each simulation for calculating the collision probability in the sampling period T, one period is divided into a plurality of (T/pt) time, PT is a constant. the start of period (previous period) and ending defined as time 0 and time T, from time 0 to time T is divided into Zopt time, 2pt time, 3pt time. .. T time. Respectively calculating each time period T of whether collision occurs, if the collision according to his security profile condition, by cumulating the probability and the probability curve to be plotted by the software” on Pg 9).
As per Claims 7 and 14, Englard et al. teaches wherein: the simulation scenario comprises semantic information ([0040] “the virtual environment may be at least partially generated based on geo-spatial data… geo-spatial data (e.g., height maps or geo-spatial semantic data such as road versus terrain versus building data) as retrieved from remote sources (e.g., Mapbox images, Google Maps images, etc.). For example, the geo-spatial data may be used as a starting point to construct detailed representations of roads,”); and the classification is based on direction-change semantics of one or more lane segments occupied by the vehicle or the actor in the intersection ([0040], [0106], [0135] “occupancy grids may be used as input to a machine learning model (e.g., as training data) to determine decisions or predictions an autonomous vehicle makes during operation to turn, steer, avoid objects ”, Fig. 6B [0143] “the occupancy grid 650 includes (i.e., includes representations of) a number of objects or surfaces, and areas associated with objects or surfaces, including: a road 655, dynamic objects 656A-D (i.e., vehicles 656A-C and a pedestrian 656D), lane markings 660, 662, and traffic light areas 664. The example occupancy grid 650 may include data representing each of the object/area positions, as well as data representing the object/area types (e.g., including classification data that is generated by, or is derived from data generated by, the classification module 512).”).
As per Claims 10 and 17, Englard et al. fails to teach explicitly ranking the unsuccessful simulation scenarios based on the one or more classifications such that a subsequent analysis is more likely to include distinct failure modes.
Muehlenstaedt et al. teaches ranking the unsuccessful simulation scenarios based on the one or more classifications such that a subsequent analysis is more likely to include distinct failure modes (Figures 9-10, [0090]-[0091], [0096]-[0097] “to collect simulation outcomes (i.e., successful or unsuccessful). The training scenario variations and the corresponding outcomes may be used to train a machine learning model…”, “uncertainty estimate (e.g., a corresponding uncertainty score) per scenario variation”, “ranking of scenario variations according to their likelihood of changing successful and unsuccessful results.”, “the selection and prioritization of scenario variations for execution (and/or triaging) may also be based on failure modes of the scenario variations. … Doing clustering by failure mode enables the system to pick the highest ranked scenarios per failure mode, and thus ensures a diverse selection of scenario variations for triaging and/or execution. The regions 901, 902, 903 illustrated in FIG. 9 (discussed above) may be clustered based on failure modes and the associated boundaries are used for selecting scenario variations per failure mode…. Based on the clusters, the system may treat each cluster as though it represents a single failure mode. Therefore, the system may revise or refine priorities for training scenarios to emphasize each distinct failure mode”).
Response to Arguments
4. Applicant's arguments filed 01/30/2026 have been fully considered but they are not persuasive.
Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument; the new ground of rejection is made in view of Muehlenstaedt et al.
Furthermore, as rejected above, as per the limitation “executing a plurality of additional simulation scenarios from a data store; for each simulation scenario, classifying one or more interactions between the vehicle and one or more actors; flagging a plurality of unsuccessful simulation scenarios for analysis to determine root causes due to modes of failure; and ranking the unsuccessful simulation scenarios based on the one or more classifications such that a subsequent analysis is more likely to include distinct failure modes”, Examiner relies on the teaching in Englard et al. to teach the limitation of “executing a plurality of additional simulation scenarios from a data store”, while Shen et al. is relied upon for a teaching of “for each simulation scenario, classifying one or more interactions between the vehicle and one or more actors”, and Muehlenstaedt et al. is relied upon for a teaching of “flagging a plurality of unsuccessful simulation scenarios for analysis to determine root causes due to modes of failure; and ranking the unsuccessful simulation scenarios based on the one or more classifications such that a subsequent analysis is more likely to include distinct failure modes”.
Conclusion
5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sholinga et al. (US 11030364 B2)
6. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EUNHEE KIM whose telephone number is (571)272-2164. The examiner can normally be reached Monday-Friday 9am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571)272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EUNHEE KIM/Primary Examiner, Art Unit 2188