Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
2. This Office action is in response to application number 18/776,023 filed on 07/17/2024, in which claims 1-20 are presented for examination.
Information Disclosure Statement
3. The information disclosure statement (IDS) submitted on 07/17/2024 has been received
and considered.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
4. Claims 1, 2, 6-8, 12, 13, 16, 18, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Cao et al. (US 20240017745 A1) (hereinafter Cao).
Regarding claim 1, Cao discloses A trajectory generation system for controlling an autonomous vehicle, comprising: a behavior generator including a plurality of states of an object relative to the autonomous vehicle, each of the plurality of states of the object corresponding to a behavior of the object at a given time, the behavior generator configured to generate one of the plurality of states; (Cao Paragraph 0062: “agent (e.g., vehicle, pedestrian, bicyclist, animal)”) (Cao Paragraph 0065: “In at least one embodiment, framework 100 is used to model a future trajectory distribution of N agents (e.g., vehicles) conditioned on their history states (e.g., prior positions, heading, acceleration, speeds) and other environmental contexts such as maps. In at least one embodiment, a trajectory prediction model 228 takes a sequence of observed states for each agent at a fixed time interval Δt, and outputs a predicted future trajectory for each agent.”) (Cao Paragraph 0065: “In at least one embodiment, future trajectories of all N agents over T future time steps is denoted as Y=(Y.sup.1, . . . , Y.sup.T), where Y.sup.t=(y.sub.1.sup.t, . . . , y.sub.N.sup.t) denotes states of N agents at a future time step t (t>0).”) an artificial intelligence information generator including one or more first parameters for determining a trajectory of the object and in communication with the behavior generator to obtain the one of the plurality of states from the behavior generator and to provide the one or more first parameters based on the one of the plurality of states; (Cao Paragraph 0062: “trajectory prediction model 128 is one or more neural network models”) (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent (e.g., vehicle, pedestrian, bicyclist, animal) based on past trajectories of that agent as described further herein at least in conjunction with FIG. 2. 
In at least one embodiment, trajectory prediction model 128 predicts future trajectories of other agents to assist with control, navigation, and route planning for a primary agent's benefit, where a primary agent can be referred to as an ego vehicle.”) (Cao Paragraph 0065: “In at least one embodiment, framework 100 is used to model a future trajectory distribution of N agents (e.g., vehicles) conditioned on their history states (e.g., prior positions, heading, acceleration, speeds) and other environmental contexts such as maps. In at least one embodiment, a trajectory prediction model 228 takes a sequence of observed states for each agent at a fixed time interval Δt, and outputs a predicted future trajectory for each agent.”) (Cao Paragraph 0065: “In at least one embodiment, future trajectories of all N agents over T future time steps is denoted as Y=(Y.sup.1, . . . , Y.sup.T), where Y.sup.t=(y.sub.1.sup.t, . . . , y.sub.N.sup.t) denotes states of N agents at a future time step t (t>0).”) a user defined information generator including one or more second parameters for determining a variant trajectory of the object and in communication with the behavior generator to obtain the one of the plurality of states from the behavior generator and to provide the one or more second parameters based on the one of the plurality of states; (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used, at least in part, to generate adversarial trajectories followed by adversarial agents (e.g., adversary vehicle), where adversarial trajectories are those that have been generated to challenge the predictions of a neural network, such as trajectory prediction model 228.”) (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used in a context where an adversary vehicle attacks a prediction module of an ego vehicle by driving along an adversarial trajectory X.sub.adv(⋅)”) (Cao Paragraph 0065: “In at least one embodiment, future trajectories of all N agents over 
T future time steps is denoted as Y=(Y.sup.1, . . . , Y.sup.T), where Y.sup.t=(y.sub.1.sup.t, . . . , y.sub.N.sup.t) denotes states of N agents at a future time step t (t>0).”) (Cao Paragraph 0066: “In at least one embodiment, an attack goal is to mislead a neural network model's predictions at each time step and subsequently make an ego vehicle execute unsafe driving behaviors.”) (Cao Paragraph 0068: “In at least one embodiment, control actions 212 represent driving data recorded from actual driving behavior), where driving data includes acceleration values and curvature values. In at least one embodiment, control actions 212 represent driving data input by a user. In at least one embodiment, control actions 212 include values for acceleration and curvature for a vehicle, values which are used in adversarial trajectory generation”) and a simulator in communication with the artificial intelligence information generator and the user defined information generator and configured to generate a future position of the object by performing a computation operation on the one of the plurality of states using a combination of the one or more first parameters and the one or more second parameters. (Cao Paragraph 0097: “In at least one embodiment, adversarial trajectories are part of simulated (synthesized) traffic scenarios designed to pose safety risks to simulated vehicles.”) (Cao Paragraph 0098: “In at least one embodiment, a neural network model is a trajectory prediction model such as trajectory prediction model 128 as described further herein at least in conjunction with FIG. 1. 
In at least one embodiment, adversarial trajectories are input into a neural network model to evaluate that neural network model's vulnerability to adversarial attacks from adversarial agents following adversarial trajectories.”) (Cao Paragraph 0336: “In at least one embodiment, graphics core 1700 uses one or more neural networks to help control an autonomous vehicle (AV) based, at least in part, on one or more adversarial motions of one or more objects detected by the AV and one or more predictive models to predict one or more other motions of the one or more objects”)
Regarding claim 2, Cao discloses The system of claim 1, wherein the future position of the object is provided to an autonomous driving controller to control the autonomous vehicle based on the future position of the object. (Cao Paragraph 0062: “ In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent”) (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 can be installed on an autonomous vehicle, real or simulated, to assist it with navigation and route planning.”) (Cao Paragraph 0139: “In at least one embodiment, controller(s) 936 may include one or more onboard (e.g., integrated) computing devices that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving”) (Cao Paragraph 0144: “In at least one embodiment, vehicle 900 uses one or more neural networks to help control its actions based, at least in part, on one or more adversarial motions of one or more objects detected by the AV and one or more predictive models to predict one or more other motions of the one or more objects.”)
Regarding claim 6, Cao discloses The system of claim 5, wherein one of the plurality of behavior modules provides the one of the plurality of states to at least one of the artificial intelligence information generator or the user defined information generator. (Cao Paragraph 0065: “In at least one embodiment, a trajectory prediction model 228 takes a sequence of observed states for each agent at a fixed time interval Δt, and outputs a predicted future trajectory for each agent.”) (Cao Paragraph 0067: “In at least one embodiment, diagram 200 illustrates aspects of taking trajectory data recorded from real-world driving behaviors”) (Note: Cao uses “behavior” and “state” interchangeably.)
Regarding claim 7, Cao discloses The system of claim 1, wherein the one or more first parameters are determined based on data gathered from real life events. (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent (e.g., vehicle, pedestrian, bicyclist, animal) based on past trajectories of that agent as described further herein at least in conjunction with FIG. 2. In at least one embodiment, trajectory prediction model 128 predicts future trajectories of other agents to assist with control, navigation, and route planning for a primary agent's benefit, where a primary agent can be referred to as an ego vehicle.”) (Cao Paragraph 0070: “In at least one embodiment, history trajectory includes data points related to position, heading, and speed, captured from real-world driving”)
Regarding claim 8, Cao discloses The system of claim 7, wherein each of the one or more second parameters is determined by applying a variance value to a corresponding first parameter. (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used, at least in part, to generate adversarial trajectories followed by adversarial agents (e.g., adversary vehicle), where adversarial trajectories are those that have been generated to challenge the predictions of a neural network, such as trajectory prediction model 228.”) (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used in a context where an adversary vehicle attacks a prediction module of an ego vehicle by driving along an adversarial trajectory X.sub.adv(⋅)”)
Regarding claim 12, Cao discloses The system of claim 1, wherein the object is a vehicle or a pedestrian. (Cao Paragraph 0062: “an agent (e.g., vehicle, pedestrian, bicyclist, animal)”)
Regarding claim 13, Cao discloses A method of operating an autonomous vehicle, comprising: training, using a simulation platform, a first machine learning (ML) algorithm configured to control operation of an autonomous vehicle based on interaction with a simulated vehicle object and a simulated environmental condition in which the autonomous vehicle is operating; (Cao Paragraph 0062: “trajectory prediction model 128 is one or more neural network models”) (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent (e.g., vehicle, pedestrian, bicyclist, animal) based on past trajectories of that agent as described further herein at least in conjunction with FIG. 2. In at least one embodiment, trajectory prediction model 128 predicts future trajectories of other agents to assist with control, navigation, and route planning for a primary agent's benefit, where a primary agent can be referred to as an ego vehicle.”) (Cao Paragraph 0065: “In at least one embodiment, framework 100 is used to model a future trajectory distribution of N agents (e.g., vehicles) conditioned on their history states (e.g., prior positions, heading, acceleration, speeds) and other environmental contexts such as maps.”) wherein operation of the vehicle object is controlled by a combination of a second ML algorithm and an input from a human user and wherein the simulated environmental condition is partly user-controlled. 
(Cao Paragraph 0056: “framework 100 includes an adversarial dynamic optimization (AdvDO) module 102 configured to perform on one or more techniques described further herein”) (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used, at least in part, to generate adversarial trajectories followed by adversarial agents (e.g., adversary vehicle), where adversarial trajectories are those that have been generated to challenge the predictions of a neural network, such as trajectory prediction model 228.”) (Cao Paragraph 0068: “In at least one embodiment, control actions 212 represent driving data recorded from actual driving behavior), where driving data includes acceleration values and curvature values. In at least one embodiment, control actions 212 represent driving data input by a user. In at least one embodiment, control actions 212 include values for acceleration and curvature for a vehicle, values which are used in adversarial trajectory generation”)
Regarding claim 16, Cao discloses The method of claim 13, wherein the second ML algorithm is configured to control a trajectory of the vehicle object. (Cao Paragraph 0060: “In at least one embodiment, adversarial dynamic optimization module 102 is configured to modify control action data, which is data that includes values for acceleration and curvature of a vehicle. In at least one embodiment, vehicles include any type of vehicle, such as cars, motorcycles, trucks, buses, trains, boats, amphibious vehicles, aircraft, and spacecraft. In at least one embodiment, acceleration and curvature of a vehicle correspond with control of a vehicle's acceleration actuator (e.g., pedal) and steering. In at least one embodiment, adversarial dynamic optimization module 102 is configured to optimize control actions, which is described further herein at least in conjunction with FIG. 2. In at least one embodiment, adversarial dynamic optimization module 102 is configured to optimize trajectories including adversarial trajectories.”)
Regarding claim 18, Cao discloses A trajectory generation method for controlling an autonomous vehicle, comprising: generating a state of an object relative to the autonomous vehicle, out of a plurality of states, each of the plurality of states corresponding to a behavior of the object at a given time; (Cao Paragraph 0062: “agent (e.g., vehicle, pedestrian, bicyclist, animal)”) (Cao Paragraph 0065: “In at least one embodiment, framework 100 is used to model a future trajectory distribution of N agents (e.g., vehicles) conditioned on their history states (e.g., prior positions, heading, acceleration, speeds) and other environmental contexts such as maps. In at least one embodiment, a trajectory prediction model 228 takes a sequence of observed states for each agent at a fixed time interval Δt, and outputs a predicted future trajectory for each agent.”) (Cao Paragraph 0065: “In at least one embodiment, future trajectories of all N agents over T future time steps is denoted as Y=(Y.sup.1, . . . , Y.sup.T), where Y.sup.t=(y.sub.1.sup.t, . . . , y.sub.N.sup.t) denotes states of N agents at a future time step t (t>0).”) applying the state of the object to an artificial intelligence information generator including one or more first parameters for determining a trajectory of the object(Cao Paragraph 0062: “trajectory prediction model 128 is one or more neural network models”) (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent (e.g., vehicle, pedestrian, bicyclist, animal) based on past trajectories of that agent as described further herein at least in conjunction with FIG. 2. 
In at least one embodiment, trajectory prediction model 128 predicts future trajectories of other agents to assist with control, navigation, and route planning for a primary agent's benefit, where a primary agent can be referred to as an ego vehicle.”) (Cao Paragraph 0065: “In at least one embodiment, framework 100 is used to model a future trajectory distribution of N agents (e.g., vehicles) conditioned on their history states (e.g., prior positions, heading, acceleration, speeds) and other environmental contexts such as maps. In at least one embodiment, a trajectory prediction model 228 takes a sequence of observed states for each agent at a fixed time interval Δt, and outputs a predicted future trajectory for each agent.”) and a user defined information generator including one or more second parameters for determining a variant trajectory of the object; (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used, at least in part, to generate adversarial trajectories followed by adversarial agents (e.g., adversary vehicle), where adversarial trajectories are those that have been generated to challenge the predictions of a neural network, such as trajectory prediction model 228.”) (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used in a context where an adversary vehicle attacks a prediction module of an ego vehicle by driving along an adversarial trajectory X.sub.adv(⋅)”) (Cao Paragraph 0065: “In at least one embodiment, future trajectories of all N agents over T future time steps is denoted as Y=(Y.sup.1, . . . , Y.sup.T), where Y.sup.t=(y.sub.1.sup.t, . . . 
, y.sub.N.sup.t) denotes states of N agents at a future time step t (t>0).”) (Cao Paragraph 0066: “In at least one embodiment, an attack goal is to mislead a neural network model's predictions at each time step and subsequently make an ego vehicle execute unsafe driving behaviors.”) (Cao Paragraph 0068: “In at least one embodiment, control actions 212 represent driving data recorded from actual driving behavior), where driving data includes acceleration values and curvature values. In at least one embodiment, control actions 212 represent driving data input by a user. In at least one embodiment, control actions 212 include values for acceleration and curvature for a vehicle, values which are used in adversarial trajectory generation”) and generating a future position of the object relative to the autonomous vehicle by performing a computation operation on the state of the object using a combination of the one or more first parameters and the one or more second parameters, (Cao Paragraph 0097: “In at least one embodiment, adversarial trajectories are part of simulated (synthesized) traffic scenarios designed to pose safety risks to simulated vehicles.”) (Cao Paragraph 0098: “In at least one embodiment, a neural network model is a trajectory prediction model such as trajectory prediction model 128 as described further herein at least in conjunction with FIG. 1. 
In at least one embodiment, adversarial trajectories are input into a neural network model to evaluate that neural network model's vulnerability to adversarial attacks from adversarial agents following adversarial trajectories.”) (Cao Paragraph 0336: “In at least one embodiment, graphics core 1700 uses one or more neural networks to help control an autonomous vehicle (AV) based, at least in part, on one or more adversarial motions of one or more objects detected by the AV and one or more predictive models to predict one or more other motions of the one or more objects”) wherein the one or more first parameters are determined based on data gathered from real life events, (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent (e.g., vehicle, pedestrian, bicyclist, animal) based on past trajectories of that agent as described further herein at least in conjunction with FIG. 2. In at least one embodiment, trajectory prediction model 128 predicts future trajectories of other agents to assist with control, navigation, and route planning for a primary agent's benefit, where a primary agent can be referred to as an ego vehicle.”) (Cao Paragraph 0070: “In at least one embodiment, history trajectory includes data points related to position, heading, and speed, captured from real-world driving”) wherein each of the one or more second parameters is determined by applying a variance value to a corresponding first parameter. 
(Cao Paragraph 0066: “In at least one embodiment, framework 100 is used, at least in part, to generate adversarial trajectories followed by adversarial agents (e.g., adversary vehicle), where adversarial trajectories are those that have been generated to challenge the predictions of a neural network, such as trajectory prediction model 228.”) (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used in a context where an adversary vehicle attacks a prediction module of an ego vehicle by driving along an adversarial trajectory X.sub.adv(⋅)”)
Regarding claim 19, Cao discloses The method of claim 18, further comprising providing the future position of the object to an autonomous driving controller to control the autonomous vehicle based on the future position of the object. (Cao Paragraph 0062: “ In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent”) (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 can be installed on an autonomous vehicle, real or simulated, to assist it with navigation and route planning.”) (Cao Paragraph 0139: “In at least one embodiment, controller(s) 936 may include one or more onboard (e.g., integrated) computing devices that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving”) (Cao Paragraph 0144: “In at least one embodiment, vehicle 900 uses one or more neural networks to help control its actions based, at least in part, on one or more adversarial motions of one or more objects detected by the AV and one or more predictive models to predict one or more other motions of the one or more objects.”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
5. Claims 3 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cao (US 20240017745 A1) in view of Lepird et al. (US 20220164245 A1) (hereinafter Lepird).
Regarding claim 3, Cao discloses the limitations of claim 2; accordingly, the rejection of claim 2 is incorporated above.
Cao does not disclose The system of claim 2, wherein the autonomous driving controller provides feedback data associated with an actual behavior of the object to the behavior generator to update the plurality of states in the behavior generator.
However, Lepird does teach The system of claim 2, wherein the autonomous driving controller provides feedback data associated with an actual behavior of the object to the behavior generator to update the plurality of states in the behavior generator. (Lepird Paragraph 0020: “feedback assessment system described in this disclosure may be implemented using one or more on-board computing devices of an autonomous vehicle.”) (Lepird Paragraph 0024: “As illustrated in FIG. 1, a prediction feedback manager may subscribe to an inference message channel 106 and may receive 202 one or more inference messages from the channel. An inference message channel may broadcast one or more inference messages. An inference message refers to a message that includes information about the current state of an autonomous vehicle or a current state of one or more objects or actors in an environment of the autonomous vehicle.”) (Lepird Paragraph 0034: “The feedback source may apply 212 one or more of its processing operations to the message set to generate feedback.”) (Lepird Paragraph 0034: “feedback source may generate 212 a feedback message that includes information pertaining to the comparison. This information may include one or more forecast errors. A forecast error refers to a difference or discrepancy between a forecast or prediction and an inference or actual observation associated with the forecast or prediction. For example, if the prediction or forecast was not accurate, the feedback source may generate 212 a feedback message that includes an indication of the difference between the forecast or prediction and the actual observation as a forecast error.”) (Lepird Paragraph 0035: “In various embodiments, information pertaining to the comparison may include updated information about a prediction or a forecast. For example, the system may determine with high confidence that an object was parked at t=0. 
At t=0.1, the system may observe the object moving at 10 m/s.”) (Lepird Paragraph 0054: “Autonomous vehicle 301 may further include certain components (as illustrated, for example, in FIG. 4) included in vehicles, such as, an engine, wheels, steering wheel, transmission, etc., which may be controlled by the on-board computing device 312 using a variety of communication signals and/or commands, such as, for example, acceleration signals or commands, deceleration signals or commands, steering signals or commands, braking signals or commands, etc.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cao to include the autonomous driving controller providing feedback data associated with an actual behavior of the object to the behavior generator to update the plurality of states in the behavior generator, as taught by Lepird. This modification would have been beneficial because it enables the system to receive one or more forecast messages pertaining to a track by monitoring a forecast channel that is broadcasting the one or more forecast messages. (Lepird Paragraph 0004)
Regarding claim 20, Cao discloses the limitations of claim 19; accordingly, the rejection of claim 19 is incorporated above.
Cao does not disclose The method of claim 19, further comprising receiving, from the autonomous driving controller, feedback data associated with an actual behavior of the object to update the plurality of states.
However, Lepird does teach The method of claim 19, further comprising receiving, from the autonomous driving controller, feedback data associated with an actual behavior of the object to update the plurality of states. (Lepird Paragraph 0020: “feedback assessment system described in this disclosure may be implemented using one or more on-board computing devices of an autonomous vehicle.”) (Lepird Paragraph 0024: “As illustrated in FIG. 1, a prediction feedback manager may subscribe to an inference message channel 106 and may receive 202 one or more inference messages from the channel. An inference message channel may broadcast one or more inference messages. An inference message refers to a message that includes information about the current state of an autonomous vehicle or a current state of one or more objects or actors in an environment of the autonomous vehicle.”) (Lepird Paragraph 0034: “The feedback source may apply 212 one or more of its processing operations to the message set to generate feedback.”) (Lepird Paragraph 0034: “feedback source may generate 212 a feedback message that includes information pertaining to the comparison. This information may include one or more forecast errors. A forecast error refers to a difference or discrepancy between a forecast or prediction and an inference or actual observation associated with the forecast or prediction. For example, if the prediction or forecast was not accurate, the feedback source may generate 212 a feedback message that includes an indication of the difference between the forecast or prediction and the actual observation as a forecast error.”) (Lepird Paragraph 0035: “In various embodiments, information pertaining to the comparison may include updated information about a prediction or a forecast. For example, the system may determine with high confidence that an object was parked at t=0. 
At t=0.1, the system may observe the object moving at 10 m/s.”) (Lepird Paragraph 0054: “Autonomous vehicle 301 may further include certain components (as illustrated, for example, in FIG. 4) included in vehicles, such as, an engine, wheels, steering wheel, transmission, etc., which may be controlled by the on-board computing device 312 using a variety of communication signals and/or commands, such as, for example, acceleration signals or commands, deceleration signals or commands, steering signals or commands, braking signals or commands, etc.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cao to include receiving, from the autonomous driving controller, feedback data associated with an actual behavior of the object to update the plurality of states, as taught by Lepird. This modification would have been beneficial because it enables the system to receive one or more forecast messages pertaining to a track by monitoring a forecast channel that is broadcasting the one or more forecast messages. (Lepird Paragraph 0004)
6. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Cao (US 20240017745 A1) in view of Knittel (US 20240116544 A1).
Regarding claim 4, Cao discloses the limitations of claim 1; accordingly, the rejection of claim 1 is incorporated above.
Cao does not disclose The system of claim 1, wherein the behavior generator includes a finite state machine configured to be in one of the plurality of states at a given time.
However, Knittel does teach The system of claim 1, wherein the behavior generator includes a finite state machine configured to be in one of the plurality of states at a given time. (Knittel Paragraph 0053: “The range of possible adverse behaviours can be very broad and can range from relatively common behaviours such as failing to observe another agent to very unusual actions such as an agent's steering or accelerating to the position of the ego vehicle.”) (Knittel Paragraph 0074: “agent behaviour including adverse actions, such as agent failing to observe other agents or not reacting in an appropriate manner to avoid a collision. Such an expert system may be constructed by manually defining the ways that these mistakes may take place. Some adverse events may be recreated by restricting observed information, such as producing an agent plan without the observation of other agents. In other embodiments, the actions could be defined in different ways such as the finite state machine operating on a given agent state or planned trajectory, for example by encoding excessive acceleration or delayed braking, either randomly or based on certain circumstances.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cao such that the behavior generator includes a finite state machine configured to be in one of the plurality of states at a given time, as taught by Knittel. This modification would have been beneficial in providing a method of training a computer-implemented behaviour model for predicting actions of an actor vehicle agent in a vehicular scene, wherein the behaviour model is configured to recognise very low probability events occurring in the vehicular scene. [Knittel Paragraph 0028]
7. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Cao (US 20240017745 A1) in view of Knittel (US 20240116544 A1), further in view of (US 20250094670 A1) to Rubin et al. (hereinafter Rubin).
Regarding claim 5, Cao in view of Knittel teaches the limitations of claim 4; accordingly, the rejection of claim 4 is incorporated herein.
Cao in view of Knittel does not teach The system of claim 4, wherein the finite state machine includes a plurality of behavior modules corresponding to the plurality of states, respectively, to transition between different behavior modules upon occurrence of a predetermined event.
However, Rubin does teach The system of claim 4, wherein the finite state machine includes a plurality of behavior modules corresponding to the plurality of states, respectively, to transition between different behavior modules upon occurrence of a predetermined event. (Rubin Paragraph 0001: “The present disclosure relates to a supportive software-based “toolbox” for improving the design and implementation of finite state machine (FSM)-modeled systems having multiple states, state transitions/mode changes, and other potentially complex behavior.”) (Rubin Paragraph 0107: “Block B303 relates to generating a plurality of stateflow representations for the FSM-modeled system according to the domain alternatives. The stateflow representations may be used for reflecting behavior of the FSM-modeled system when operating according to one or more of the domain alternatives.”) (Rubin Paragraph 0162: “A state transition table module 718 may be configured for defining a state transition table, such as with the constructs suitable for portraying possible states which may occur upon a given event, transition to another state, etc.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cao in view of Knittel such that the finite state machine includes a plurality of behavior modules corresponding to the plurality of states, respectively, to transition between different behavior modules upon occurrence of a predetermined event, as taught by Rubin. This modification would have been beneficial in providing a modeling system configured to provide a supportive software-based “toolbox” for aiding in the design and implementation of finite state machine (FSM)-modeled systems, such as with processor or other computer-based automation and analysis tools capable of leveraging software to improve the limited capabilities of designers so as to meaningfully interface with testing scenarios, reports, and other information for an FSM-modeled system. [Rubin Paragraph 0003]
8. Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Cao (US 20240017745 A1) in view of (US 20240348663 A1) to Crabtree et al. (hereinafter Crabtree).
Regarding claim 9, Cao discloses […] uses the artificial intelligence information generator (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent (e.g., vehicle, pedestrian, bicyclist, animal) based on past trajectories of that agent as described further herein at least in conjunction with FIG. 2. In at least one embodiment, trajectory prediction model 128 predicts future trajectories of other agents to assist with control, navigation, and route planning for a primary agent's benefit, where a primary agent can be referred to as an ego vehicle.”) (Cao Paragraph 0065: “In at least one embodiment, framework 100 is used to model a future trajectory distribution of N agents (e.g., vehicles) conditioned on their history states (e.g., prior positions, heading, acceleration, speeds) and other environmental contexts such as maps. In at least one embodiment, a trajectory prediction model 228 takes a sequence of observed states for each agent at a fixed time interval Δt, and outputs a predicted future trajectory for each agent.”) (Cao Paragraph 0065: “In at least one embodiment, future trajectories of all N agents over T future time steps is denoted as Y=(Y.sup.1, . . . , Y.sup.T), where Y.sup.t=(y.sub.1.sup.t, . . . , y.sub.N.sup.t) denotes states of N agents at a future time step t (t>0).”) or the user defined information generator. (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used, at least in part, to generate adversarial trajectories followed by adversarial agents (e.g., adversary vehicle), where adversarial trajectories are those that have been generated to challenge the predictions of a neural network, such as trajectory prediction model 228.”) (Cao Paragraph 0068: “In at least one embodiment, control actions 212 represent driving data recorded from actual driving behavior), where driving data includes acceleration values and curvature values. 
In at least one embodiment, control actions 212 represent driving data input by a user. In at least one embodiment, control actions 212 include values for acceleration and curvature for a vehicle, values which are used in adversarial trajectory generation”)
Cao does not disclose The system of claim 1, wherein the one or more second parameters include a transition trigger parameter to determine whether the simulator […].
However, Crabtree does teach The system of claim 1, wherein the one or more second parameters include a transition trigger parameter to determine whether the simulator (Crabtree Paragraph 0148: “Furthermore, the simulation environment computing system can utilize knowledge graphs to model the relationships and transitions between different system states. The knowledge graph can represent states as nodes and the transitions between states as edges. Each edge can be labeled with the corresponding actions, events, or parameter changes that triggered the transition.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cao such that the one or more second parameters include a transition trigger parameter to determine whether the simulator […], as taught by Crabtree. This modification would have been beneficial in providing artificial intelligence techniques for improving simulation modeling of real-world systems, aiding in the systematic reduction of epistemic uncertainty across modeling, and supporting improved function as a control plane in complex systems. [Crabtree Paragraph 0052]
Regarding claim 10, Cao does disclose The system of claim 9, wherein the simulator generates the future position of the object using the user defined information generator (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used, at least in part, to generate adversarial trajectories followed by adversarial agents (e.g., adversary vehicle), where adversarial trajectories are those that have been generated to challenge the predictions of a neural network, such as trajectory prediction model 228.”) (Cao Paragraph 0068: “In at least one embodiment, control actions 212 represent driving data recorded from actual driving behavior), where driving data includes acceleration values and curvature values. In at least one embodiment, control actions 212 represent driving data input by a user. In at least one embodiment, control actions 212 include values for acceleration and curvature for a vehicle, values which are used in adversarial trajectory generation”) […] from the artificial intelligence information generator (Cao Paragraph 0062: “trajectory prediction model 128 is one or more neural network models”) (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent (e.g., vehicle, pedestrian, bicyclist, animal) based on past trajectories of that agent as described further herein at least in conjunction with FIG. 2. 
In at least one embodiment, trajectory prediction model 128 predicts future trajectories of other agents to assist with control, navigation, and route planning for a primary agent's benefit, where a primary agent can be referred to as an ego vehicle.”) (Cao Paragraph 0065: “In at least one embodiment, framework 100 is used to model a future trajectory distribution of N agents (e.g., vehicles) conditioned on their history states (e.g., prior positions, heading, acceleration, speeds)”) to the user defined information generator (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used, at least in part, to generate adversarial trajectories followed by adversarial agents (e.g., adversary vehicle), where adversarial trajectories are those that have been generated to challenge the predictions of a neural network, such as trajectory prediction model 228.”) (Cao Paragraph 0068: “In at least one embodiment, control actions 212 represent driving data recorded from actual driving behavior), where driving data includes acceleration values and curvature values. In at least one embodiment, control actions 212 represent driving data input by a user. In at least one embodiment, control actions 212 include values for acceleration and curvature for a vehicle, values which are used in adversarial trajectory generation”) […].
Cao does not disclose […] by switching […] based on the transition trigger parameter.
However, Crabtree does teach […] by switching […] based on the transition trigger parameter. (Crabtree Paragraph 0148: “Furthermore, the simulation environment computing system can utilize knowledge graphs to model the relationships and transitions between different system states. The knowledge graph can represent states as nodes and the transitions between states as edges. Each edge can be labeled with the corresponding actions, events, or parameter changes that triggered the transition.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cao to include […] by switching […] based on the transition trigger parameter, as taught by Crabtree. This modification would have been beneficial in providing artificial intelligence techniques for improving simulation modeling of real-world systems, aiding in the systematic reduction of epistemic uncertainty across modeling, and supporting improved function as a control plane in complex systems. [Crabtree Paragraph 0052]
Regarding claim 11, Cao discloses The system of claim 9, wherein the simulator generates the future position of the object using the artificial intelligence information generator (Cao Paragraph 0062: “trajectory prediction model 128 is one or more neural network models”) (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent (e.g., vehicle, pedestrian, bicyclist, animal) based on past trajectories of that agent as described further herein at least in conjunction with FIG. 2. In at least one embodiment, trajectory prediction model 128 predicts future trajectories of other agents to assist with control, navigation, and route planning for a primary agent's benefit, where a primary agent can be referred to as an ego vehicle.”) (Cao Paragraph 0065: “In at least one embodiment, framework 100 is used to model a future trajectory distribution of N agents (e.g., vehicles) conditioned on their history states (e.g., prior positions, heading, acceleration, speeds)”) […] from the user defined information generator (Cao Paragraph 0066: “In at least one embodiment, framework 100 is used, at least in part, to generate adversarial trajectories followed by adversarial agents (e.g., adversary vehicle), where adversarial trajectories are those that have been generated to challenge the predictions of a neural network, such as trajectory prediction model 228.”) (Cao Paragraph 0068: “In at least one embodiment, control actions 212 represent driving data recorded from actual driving behavior), where driving data includes acceleration values and curvature values. In at least one embodiment, control actions 212 represent driving data input by a user. 
In at least one embodiment, control actions 212 include values for acceleration and curvature for a vehicle, values which are used in adversarial trajectory generation”) to the artificial intelligence information generator (Cao Paragraph 0062: “trajectory prediction model 128 is one or more neural network models”) (Cao Paragraph 0062: “In at least one embodiment, trajectory prediction model 128 predicts (infers) future trajectories of an agent (e.g., vehicle, pedestrian, bicyclist, animal) based on past trajectories of that agent as described further herein at least in conjunction with FIG. 2. In at least one embodiment, trajectory prediction model 128 predicts future trajectories of other agents to assist with control, navigation, and route planning for a primary agent's benefit, where a primary agent can be referred to as an ego vehicle.”) (Cao Paragraph 0065: “In at least one embodiment, framework 100 is used to model a future trajectory distribution of N agents (e.g., vehicles) conditioned on their history states (e.g., prior positions, heading, acceleration, speeds)”) […].
Cao does not disclose […] by switching […] based on the transition trigger parameter.
However, Crabtree does teach […] by switching […] based on the transition trigger parameter. (Crabtree Paragraph 0148: “Furthermore, the simulation environment computing system can utilize knowledge graphs to model the relationships and transitions between different system states. The knowledge graph can represent states as nodes and the transitions between states as edges. Each edge can be labeled with the corresponding actions, events, or parameter changes that triggered the transition.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cao to include […] by switching […] based on the transition trigger parameter, as taught by Crabtree. This modification would have been beneficial in providing artificial intelligence techniques for improving simulation modeling of real-world systems, aiding in the systematic reduction of epistemic uncertainty across modeling, and supporting improved function as a control plane in complex systems. [Crabtree Paragraph 0052]
9. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Cao (US 20240017745 A1) in view of (US 20230206055 A1) to Tetelman et al. (hereinafter Tetelman).
Regarding claim 14, Cao discloses the limitations of claim 13; accordingly, the rejection of claim 13 is incorporated herein.
Cao does not disclose The method of claim 13, further including: operating the autonomous vehicle using the first ML algorithm for road driving; collecting driving data from the road driving; and using the driving data collected from the road driving to train the second ML algorithm.
However, Tetelman does teach The method of claim 13, further including: operating the autonomous vehicle using the first ML algorithm for road driving; collecting driving data from the road driving; and using the driving data collected from the road driving to train the second ML algorithm. (Tetelman Paragraph 0018: “The vehicles 140 may be commercial vehicles, test vehicles, and/or may be autonomous vehicles (AVs).”) (Tetelman Paragraph 0024: “The machine learning models may be deployed in the vehicles for testing and additional data may be collected. Other models (e.g., driver assistant models, semi-autonomous vehicle models, perception models, etc.), may also be deployed in the vehicles for testing. The additional data may be ingested by the data science system 110 and may be used to develop further machine learning models or update/improve existing machine learning models, restarting the development cycle.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cao to include operating the autonomous vehicle using the first ML algorithm for road driving; collecting driving data from the road driving; and using the driving data collected from the road driving to train the second ML algorithm, as taught by Tetelman. This modification would have been beneficial because a data science platform may help address these issues when training and/or developing machine learning models. In one embodiment, a data science system provides an end-to-end platform that supports ingesting the data, view/browsing the data, visualizing the data, selecting different sets of data, processing and/or augmenting the data, pro