Prosecution Insights
Last updated: April 19, 2026
Application No. 18/057,913

REAL-TIME ASSESSMENT OF RESPONSES TO EVENT DETECTION IN UNSUPERVISED SCENARIOS

Final Rejection (§103, §112)

Filed: Nov 22, 2022
Examiner: CAMPOS, ALFREDO
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (5 granted / 6 resolved; +28.3% vs TC avg, above average)
Interview Lift: +33.3% in resolved cases with interview
Avg Prosecution: 3y 9m (26 applications currently pending)
Total Applications: 32 (across all art units)

Statute-Specific Performance

§101: 33.3% (-6.7% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 3.9% (-36.1% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)

Tech Center average is an estimate. Based on career data from 6 resolved cases.
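The "vs TC avg" figures above are plain differences between the examiner's per-statute rate and the Tech Center average. A minimal sketch of that arithmetic follows; the 40.0% baseline is back-derived (each rate minus its reported delta comes out to 40.0%), which is an inference from the report's numbers, not a figure the report states directly:

```python
# Recompute the per-statute "vs TC avg" deltas shown above.
# The statute rates are taken from this report; the 40.0% TC baseline
# is back-derived (rate minus reported delta) and is an assumption.

examiner_rates = {"101": 33.3, "103": 42.8, "102": 3.9, "112": 20.0}
tc_average = 40.0  # inferred baseline; identical for all four statutes

for statute, rate in examiner_rates.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

Running this reproduces the four deltas listed above, which is what suggests a single TC-wide baseline behind the chart.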

Office Action

Rejections under §103 and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12-22-2025 have been fully considered but they are not persuasive. Regarding the §103 argument, applicant argues on pages 10 and 11: “As mentioned above, the claims have been amended to focus on a specific architectural structure of the distributed computing node network. No combination of the cited art teaches of a similar network architecture. Accordingly, for at least these reasons, as discussed above, the pending claims should be found to be distinguished from and patentable over the cited art and rejections of record. While the foregoing remarks have primarily focused on some of the differences between the independent claims and the art, this does not mean that these are the only patentable distinctions. For instance, each of the dependent claims presents additional distinctions over the cited art of record.” The applicant argues amended limitations, and the amended limitations have not been examined, rendering the argument moot.

Claim Objections

Claim 11 is objected to because of the following informalities: the limitation “the central node, wherein the central node is a type of node that trains the event detection model and that distributes instances of the event detection model to the multiple far edge nodes by way of the near edge node” is missing a “;” at the end. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 1 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim 1, the limitation “the central node, wherein the central node is a type of node that trains the event detection model and that distributes instances of the event detection model to the multiple far edge nodes by way of the near edge node;” lacks written description. The specification in paragraph [0076] teaches “Embodiment 7. The method as recited in any of embodiments 1-6, wherein the near edge node is an element of a provider site that is operable to train the model instance, and to provide the model instance as a service to a group of edge nodes that includes the edge node.” The limitation of claim 1 above recites that the central node trains the event detection model and distributes it to the far edge node by way of the near edge node. However, the near edge node is taught to train and distribute the model instance to the edge node. The specification in paragraph [0063] line 15-20, “the near edge node and a central node may be elements of a service provider site that may train and provide a model for use by clients, such as edge nodes. In general however, no particular allocation of the functions disclosed herein, including in Figure 7, is necessarily required. As such, the allocation disclosed in Figure 7 is presented only by way of example, and is not intended to limit the scope of the invention in any way,” does teach that the central node or the near edge node trains the model. Yet the specification does not teach that the central node distributes the model instance via the near edge node. The limitation is interpreted as training at a central node, with the near edge node training and distributing the updated model.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
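Before the claim-by-claim mapping below, it may help to see the three-tier architecture the §112 dispute turns on (central node trains the model; near edge node relays model instances and collects trajectory classes; far edge node runs the model on a vehicle). The following is a hypothetical sketch of that data flow; all class and method names are illustrative, not from the application or the cited art:

```python
# Hypothetical sketch of the claimed three-tier node architecture.
# CentralNode, NearEdgeNode, and FarEdgeNode are illustrative names only.

class CentralNode:
    """Trains the event detection model and hands out instances."""
    def train_model(self):
        return {"weights": [0.1, 0.2], "version": 1}  # stand-in for a trained model

class NearEdgeNode:
    """Relays model instances from the central node to far edge nodes,
    and collects trajectory-class data coming back from them."""
    def __init__(self, central):
        self.central = central
        self.collected_classes = []

    def distribute(self, far_edges):
        model = self.central.train_model()
        for node in far_edges:
            node.model = model  # central -> near edge -> far edge

    def collect(self, trajectory_class):
        self.collected_classes.append(trajectory_class)

class FarEdgeNode:
    """Runs a model instance at the network edge (e.g., on a vehicle)."""
    def __init__(self):
        self.model = None

    def classify_trajectory(self, trajectory):
        # Stand-in classifier: a real node would run self.model here.
        return "lane_change" if trajectory[-1] != trajectory[0] else "straight"

central = CentralNode()
near = NearEdgeNode(central)
far = FarEdgeNode()
near.distribute([far])
near.collect(far.classify_trajectory([(0, 0), (1, 1)]))
print(near.collected_classes)  # trajectory classes gathered at the near edge
```

Note that in this sketch only the central node trains; the examiner's §112 position above is that the specification instead places training at the near edge node, which is exactly the allocation in dispute.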
Claim(s) 1, 2, 10, 11, 12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (US12049221B2) (“Sun”) in view of Bandeira et al. (WO2022/122750) (“Bandeira”) and further in view of Kobilarov et al. (US10671076B1) (“Kobilarov”).

Regarding claim 1 and analogous claim 11, as best understood based on the 112(a) issue identified above, Sun teaches A method[,] comprising: obtaining access to a distributed computing node network comprising (Sun FIG. 1B, Col 3 line 29-42, FIG. 1B is a diagram of an example system 100. The system 100 includes an on-board system 110 and a training system 120. The on-board system 110 is located on-board a vehicle 102. The vehicle 102 in FIG. 1B is illustrated as an automobile, but the on-board system 102 can be located onboard any appropriate vehicle type. The vehicle 102 can be a fully autonomous vehicle that determines and executes fully-autonomous driving decisions in order to navigate through an environment. The vehicle 102 can also be a semi-autonomous vehicle that uses predictions to aid a human driver. For example, the vehicle 102 can autonomously apply the brakes if a prediction indicates that a human driver is about to collide with another vehicle [obtaining access to a distributed computing node network comprising]): far edge node, wherein the far edge node is a type of node that is logically disposed at an edge of the distributed computing node network, wherein the far edge node includes an instance of an event detection model that is structured to predict a specific event based on the trajectories of the far edge node, wherein the far edge node associates the trajectories to a trajectory class, and wherein the far edge node is disposed on or included as a part of a vehicle (Sun FIG. 1B [far edge node, wherein the far edge node is a type of node that is logically disposed at an edge of the distributed computing node network,] Col 3 line 43-55, The on-board system 110 includes one or more sensor subsystems 130. The sensor subsystems 130 include a combination of components that receive reflections of electromagnetic radiation, e.g., lidar systems that detect reflections of laser light, radar systems that detect reflections of radio waves, and camera systems that detect reflections of visible light. The sensor data generated by a given sensor generally indicates a distance, a direction, and an intensity of reflected radiation. For example, a sensor can transmit one or more pulses of electromagnetic radiation in a particular direction and can measure the intensity of any reflections as well as the time that the reflection was received. [wherein the far edge node includes a sensor that generates sensor data describing trajectories of the far edge node,] Fig. 1B Trajectory Prediction System, Col 4 line 40-46, The trajectory prediction system 150 processes the context data 142 to generate a respective trajectory prediction output 152, i.e., one or more predicted trajectories, for each of one or more of the surrounding agents. The trajectory prediction output 152, i.e., the predicted trajectory, for a given agent characterizes the predicted future trajectory of the agent after the current time point. [wherein the far edge node includes an instance of an event detection model that is structured to predict a specific event based on the trajectories of the far edge node,] Col 3, The on-board system 110 is located on-board a vehicle 102. The vehicle 102 in FIG. 1B is illustrated as an automobile, but the on-board system 102 can be located onboard any appropriate vehicle type.
Col 3 line 61-67, Col 4 line 1-10, The sensor subsystems 130 or other components of the vehicle 102 can also classify groups of one or more raw sensor measurements from one or more sensors as being measures of another agent. A group of sensor measurements can be represented in any of a variety of ways, depending on the kinds of sensor measurements that are being captured. For example, each group of raw laser sensor measurements can be represented as a three-dimensional point cloud, with each point having an intensity and a position in a particular two-dimensional or three-dimensional coordinate space. In some implementations, the position is represented as a range and elevation pair. Each group of camera sensor measurements can be represented as an image patch, e.g., an RGB image patch. Col 4 line 47-53, Once the sensor subsystems 130 classify one or more groups of raw sensor measurements as being measures of respective other agents, the sensor subsystems 130 can compile the raw sensor measurements into a set of raw data 132, and send the raw data 132 to a data representation system 140.
As a particular example, a predicted trajectory in the trajectory prediction output 152 for a given agent can include predicted trajectory states for the agent, i.e., locations and optionally other information such as headings, at each of multiple future time points that are after the current time point, e.g., for each future time point in a fixed size time window following the current time point [wherein the far edge node associates the trajectories to a trajectory class, and wherein the far edge node is disposed on or included as a part of a vehicle;]); the central node, wherein the central node is a type of node that trains the event detection model and that distributes instances of the event detection model to the multiple far edge nodes by way of the near edge node; (Sun Col 5 line 52-55, The training system 120 is typically hosted within a data center 124, which can be a distributed computing system having hundreds or thousands of computers in one or more locations [the central node]. Col 6 line 15-25, The training data store 170 provides training examples 175 to a training engine 180, also hosted in the training system 120. The training engine 180 uses the training examples 175 to update model parameters that will be used by the trajectory prediction system 150, and provides the updated model parameters 185 to the trajectory prediction model parameters store 190. 
Once the parameter values of the trajectory prediction system 150 have been fully trained, the training system 120 can send the trained parameter values 195 to the trajectory prediction system 150, e.g., through a wired or wireless connection [wherein the central node is a type of node that trains the event detection model and that distributes instances of the event detection model to the multiple far edge nodes]). Sun does not explicitly teach a near edge node, wherein the near edge node is a type of node that is configured to collect trajectory class data from multiple far edge nodes, which include said far edge node, and that is also configured to communicate with a central node; receiving, by a the near edge node from a the far edge node, a the trajectory class that is determined by the far edge node; selecting, by the near edge node, a distribution that corresponds to the trajectory class; receiving, by the near edge node, the distribution; and transmitting, by the near edge node to the far edge node, the distribution, wherein the distribution is usable by the far edge node to determine a label to be assigned to a prediction of interest the predicted specific event, which is generated by a model the instance of the event detection model that is running at the far edge node [[.]]; and modifying a current trajectory of the vehicle based on the predicted specific event, wherein modifying the current trajectory includes at least one of braking or accelerating the vehicle. However Bandeira teaches a near edge node, wherein the near edge node is a type of node that is configured to collect trajectory class data from multiple far edge nodes, which include said far edge node, and that is also configured to communicate with a central node (Bandeira Fig. 3 [which include said far edge node, and that is also configured to communicate with a central node;] Page 4 line 6-10, According to a preferred embodiment, the edge processed data comprises at least one value representative for an attribute associated to the event, said attribute characterizing a property of the event. In this way, more data content can be achieved for applications where classes and attributes may both be needed. In particular, the following sets of classes and attributes may be used for the following events/objects involved in events: Page 4 line 11-13, -for vehicles, classification may be by type of vehicle (car, truck, motorcycle, bicycle), size of vehicle (big, small, intermediate), model of vehicle, color of vehicle; and a corresponding set of attributes may be number plate, speed, direction, number of occupants; Page 22 line 11-19, Figure 3 illustrates a scenario where an event is observed by two edge devices 10 of a subset of n+1 edge devices 10, 10' connected to the same fog device 20. The first and the second edge devices numbered Edge device 1 and Edge device 2 may be adjacent geographically but the teachings of this embodiment should not be limited to this option. The fog device 20 may receive first edge processed data D1 about an event from Edge device 1 and second edge processed data D2 about an event from Edge device 2. The fog device 20 may process the first and second edge processed data, D1 and D2, to determine whether or not the first and second edge processed data D1, D2, relate to the same event, and to transmit fog processed data D1' to the central control system in accordance with the determined result [a near edge node, wherein the near edge node is a type of node that is configured to collect trajectory class data from multiple far edge nodes]); and transmitting, by the near edge node to the far edge node, the distribution, wherein the distribution is usable by the far edge node to determine a label to be assigned to the predicted specific event, which is generated by a model the instance of the event detection model that is running at the far edge node [[.]] (Bandeira Page 26 line 1-7, Figures 7, 8 and 9 illustrate alternative exemplary embodiments of classification at an edge device 10 for self-learning systems using models [by a model the instance of the event detection model that is running at the far edge node]. Figure 7 illustrates a principle of data fusion where data from a plurality of data sources 13 is first combined by data association and later classified using a model. A plurality of data sources 13a may be arranged at the same edge device 10a (not represented) and each provide sensed data to the edge processing means 12a of that edge device 10a. Page 27 line 5-16, Control means 17 arranged in an edge device or control means 27 arranged in a fog device may then control one of the first and/or second processing means 12a and/or 12b to use the processed data obtained with the best model to train the processing means having a worse model. In particular the processed data obtained with the best model may be used to generate control data to change the other one or more models [wherein the distribution is usable by the far edge node]. In case of classification, the finest classification may be used to train the processing means which had provided a coarser classification. It is further noted that although classification has been mainly discussed above, the self-learning systems of the invention using models may also be envisaged for prediction. Based on the results of the decision fusion, the models may thus be updated with a new set of rules for new classes and/or subclasses and/or attributes [determine a label to be assigned to a prediction of interest generated]. It is noted that decision fusion may use additional environmental data for decision if the results of different models are diverging); Sun and Bandeira are considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sun to incorporate the teachings of Bandeira to fuse data to allow classification at the edge node. Doing so would allow for distributed learning at different levels of the edge learning system by allowing data to be processed partially locally in the edge devices (Bandeira line 15-24, In that manner, the fog device is capable of generating fog processed data about the event which is more accurate and/or more compact and/or more complete than the sum of the edge processed data received from the subset. In other words, the solution fog/edge computing can bridge the gap between the central control system and the data sources of the edge devices by suitably organizing computing, storage, networking, and data management between the edge devices, the fog devices and the central control system.
The benefit of this solution is that the environmental data can be processed partially locally in the edge devices and partially regionally in the fog devices, reducing the amount of data that has to be transmitted to the central control system, the amount of processing in the central control system and thus the latency for processing the data.). Kobilarov teaches receiving, by a the near edge node from a the far edge node, a the trajectory class that is determined by the far edge node (Kobilarov Col 10 66-67 - Col 11 1-15 As described above, the vehicle control device 114 can be a separate and distinct computer system, which can include an execution module 116, a fallback determination module 118, and a data input module 120. In some examples, the vehicle control device 114 can access the data input module 110 and/or data store 112 associated with the computer system(s) 102 [by a the near edge node from a the far edge node]. The execution module 116 can receive [receiving,] the output trajectory from the trajectory module 108 and can compute commands for actuating steering and acceleration of the autonomous vehicle 122 to enable the autonomous vehicle 122 to follow the output trajectory. In at least one example, the execution module 116 can receive the output trajectory and can compute a steering angle and velocity to enable the autonomous vehicle 122 to follow the output trajectory. A non-limiting example of an algorithm that the execution module 116 can use is provided below. Col 25 line 3-10, FIG. 4 illustrates a detail of an example architecture 400 for predicting a trajectory of a third-party object proximate to an autonomous vehicle, as described herein. The example architecture 400 illustrates aspects of a prediction module 402 receiving inputs from the data input module 110, the policy(s) 128, the predictive data 134, and the map(s) 136 to predict one or more routes or trajectories associated with a third-party object such as a vehicle or person. 
Col 26 line 37-45, However the predictive trajectory module 410 can nevertheless determine when a behavior of a third-party object is outside a scope of "normal behavior" ( e.g., adhering to rules of the road, right-of-way, good driving etiquette, etc.) and can adjust a route and/or trajectory of the autonomous vehicle (e.g., to avoid an accident) or can perform an action (e.g., alerting emergency services such as the police) in response to determining behavior is out of the ordinary or in response to detecting a collision. [a the trajectory class that is determined by the far edge node;]); selecting, by the near edge node, a distribution that corresponds to the trajectory class; receiving, by the near edge node, the distribution (Col 13 line 30-46, Furthermore, as described above, the data input module 120 can receive sensor data from one or more sensors. The data input module 120 can process sensor data received from the one or more sensors to determine the state of the autonomous vehicle 122 locally. The execution module 116 can utilize the state of the autonomous vehicle 122 for computing a steering angle and velocity to enable the autonomous vehicle 122 to follow the output trajectory without having to communicate with the computer system(s) 102. That is, separating the vehicle control device 114, which is executing the execution module 116, from the computer system(s) 102, which are executing one or more other modules ( e.g., route planning module 104, decision module 106, trajectory module 108, etc.), can conserve computational resources expended by the vehicle control device 114 by enabling the vehicle control device 114 to execute trajectory(s) locally. Col 12 line 46-53, Such data (e.g., real-time processed sensor data) can be used by the fallback determination module 118 to determine when a fallback action is warranted and/or to generate a fallback trajectory. 
Additionally and/or alternatively, such data (e.g., real-time processed sensor data) can be used by the execution module 116 for computing a steering angle and velocity to enable the autonomous vehicle 122 to follow the output trajectory and/or fallback trajectory [selecting, by the near edge node,]. Col 26 line 65-67 and Col 27, For example, the predictive trajectory probability module 412 can extrapolate a state of an environment to a particular time into the future (e.g., one second, five seconds, or any other length of time) and evaluate the probability of that outcome in the output distribution of prediction. For example, the predictive trajectory probability module 412 can perform operations including extrapolating tracked motion of an object to determine possible trajectories of the object, and/or to determine probabilities with such trajectories [a distribution that corresponds to the trajectory class;] Col 30 line 53-52, In some instances, the prediction module 402 can determine the predictive trajectories 712, 716, and 720 based at least in part on the symbols 728 that are present in the environment 702 and that are relevant to the vehicle 710. For example, the operation can determine that the stop sign 724 is not relevant to the vehicle 710, and therefore, cannot present a predictive trajectory that utilizes a stop region associated with the stop sign 724. In some instances, each trajectory can include a probability that the vehicle 710 can execute the trajectories 712, 716, and 720. Col 31 line 11-17, the vehicle 710 can pursue the second predicted trajectory can increase (e.g., from 33% to 45%), and a probability that the vehicle 710 can pursue the first predicted trajectory can also increase (e.g., from 33% to 55%). Of course, the percentages above are merely exemplary, and any probabilities can be determined based upon specific scenarios and specific implementation [receiving, by the near edge node, the distribution;]); modifying a current trajectory of the vehicle based on the predicted specific event, wherein modifying the current trajectory includes at least one of braking or accelerating the vehicle (Kobilarov Col 25 line 3-10, FIG. 4 illustrates a detail of an example architecture 400 for predicting a trajectory of a third-party object proximate to an autonomous vehicle, as described herein. The example architecture 400 illustrates aspects of a prediction module 402 receiving inputs from the data input module 110, the policy(s) 128, the predictive data 134, and the map(s) 136 to predict one or more routes or trajectories associated with a third-party object such as a vehicle or person [modifying a current trajectory of the vehicle based on the predicted specific event,]. FIG. 5 depicts a top level view of a scenario 500 including a third-party vehicle and an autonomous vehicle navigating a stop sign. In this scenario 500, an environment 502 includes an autonomous vehicle 504 driving on a road towards a stop sign 506 associated with a stop region 508 and a stop line 510. Further, the environment 502 can include a third-party vehicle 512. Discussed below, the third-party vehicle 512 can be associated with a tailgate region 514 and a back off region 516. In one example, in order for the autonomous vehicle 504 to traverse the stop sign 506, the autonomous vehicle 504 must approach the stop sign 506, stop in a stop region 508 before the stop line 510, wait, and accelerate away from the stop sign 506 to continue towards another goal, all the while maintaining a respectful distance from the third-party vehicle 512 based on the tailgate region 514 and/or the back off region 516. [wherein modifying the current trajectory includes at least one of braking or accelerating the vehicle]). Sun and Kobilarov are considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sun to incorporate the teachings of Kobilarov to modify the current trajectory of a vehicle to brake or accelerate. Doing so would allow the vehicle to navigate without any accidents or damage (Kobilarov col line 10-34, In some instances, motion planning for an autonomous vehicle can include mission level planning (e.g., from point A to point B) as well as more granular planning (e.g., how the vehicle traverses a segment of a road surface, such as a lane change, or how the vehicle navigates through an intersection). The myriad obstacles and behaviors that are encountered in an environment, such as a city, presents many challenges. Failure to correctly navigate an environment can cause accidents or damage, for example. Thus, it becomes desired for an autonomous vehicle to predict the behavior of third-party objects to improve safety and comfort for occupant of an autonomous vehicle, as well as other actors (e.g., drivers, passengers, pedestrians, cyclists, animals, etc.) of an environment. As the number of third-party objects in an environment proximate to an autonomous vehicle increases, the number of possible trajectories for the third-party objects and for the autonomous vehicle increases rapidly, often exponentially, and often without an upper bound. Thus, an efficient prediction system is needed to improve accuracy and speed of predictions to reduce the overall computational load on a computing device, and to provide a solution (e.g., a trajectory) as the autonomous vehicle traverses an environment.
These and other advantages of the methods, apparatuses, and systems are discussed herein.).

Regarding claim 2 and analogous claim 12, Sun in view of Bandeira and Kobilarov teach the method as recited in claim 1. Sun further teaches wherein the trajectory class was determined using a trajectory classification module (Sun Col 4 line 8-10, Once the sensor subsystems 130 classify one or more groups of raw sensor measurements as being measures of respective other agents, the sensor subsystems 130 can compile the raw sensor measurements into a set of raw data 132, and send the raw data 132 to a data representation system 140. Col 4 line 40-43, The trajectory prediction system 150 processes the context data 142 to generate a respective trajectory prediction output 152, i.e., one or more predicted trajectories, for each of one or more of the surrounding agents [a trajectory classification module]. Col 13 line 25, The system trains the neural networks described above for stages 1, 2, and 3 together, i.e., jointly to minimize an overall loss function that includes a respective term for each of the three stages. For example, the overall loss can be a weighted sum of the respective loss terms for each of the three stages. Col 13 line 56-62, The loss term for stage 3 measures, for each of the one or more agents for which trajectories were predicted, an error between the predicted trajectory given the ground-truth temporal-spatial point and the ground truth future trajectory for the agent. For example, the loss can be the sum or the average of the Huber losses between the two trajectories for each of the one or more agents. Col 14 line 9-20, Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus (i.e. module)).

Regarding claim 10 and analogous claim 20, Sun in view of Bandeira and Kobilarov teach the method as recited in claim 1. Sun further teaches wherein the far edge node comprises a mobile device (Col 4 line 15-22, The data representation system 140, also on-board the vehicle 102 [a mobile device], receives the raw sensor data 132 from the sensor system 130 and other data characterizing the environment, e.g., map data that identifies map features in the vicinity of the vehicle, and generates context data 142. The context data 142 characterizes the current state of the environment surrounding the vehicle 102 as of the current time point).

Claim(s) 3, 4, 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bandeira in view of Sun and Kobilarov and further in view of Wada et al. (US20190132256A) (“Wada”).

Regarding claim 3 and analogous claim 13, Sun in view of Bandeira and Kobilarov teach the method as recited in claim 1. Sun, Bandeira and Kobilarov are combined in the same rationale as set forth above with respect to claim 1 and analogous claim 11. Sun does not explicitly teach wherein the distribution comprises a response time between the predicted specific event [[,]] and a time when an action corresponding to the predicted specific event was initiated.
However Wada teaches wherein the distribution comprises a response time between the predicted specific event [[,]] and a time when an action corresponding to the predicted specific event was initiated (Para 0176, FIG. 8 is a configuration diagram of the output table group 224. Para 0177, The output table group 224 includes a relation table 801, a predicted response performance table 805, a predicted metric table 808, and a predictive judgement result table 809. Para 0184, The predicted response performance table 805 stores a predicted response performance of each service 102. Specifically, for example, the predicted response performance table 805 includes the following information for each date time. Para 0185, A date time 851 (indicating a response time) [and a time when an action corresponding to the predicted specific event was initiated] Para 0186, A service ID 852 (indicating an ID of the service 102) Para 0187, A predicted I/O load 853 (indicating a predicted I/O load) Para 0188, A predicted response time 854 (indicating a predicted response time) [wherein the distribution comprises a response time between the predicted specific event]). Sun and Wada are considered to be analogous to the claim invention because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sun to incorporate the teachings of Wada to include information about action response time. Doing so optimizes a service by determining accurate predictions (Wada Para 0050, The optimization unit 114 performs the following processes for each service 102.
In other words, the optimization unit 114 performs a predictive judgement which is a determination of whether or not accurate prediction of a workload of the service 102 can be anticipated on the basis of at least one of the following (x) and (y), that is, (x) time-series measured workloads and time-series predicted workloads with respect to the service 102 and (y) at least one of a measured service level and a predicted service level in the designated prediction period with respect to the service 102.).

Regarding claim 4 and analogous claim 14, Sun in view of Bandeira and Kobilarov teach the method as recited in claim 1. Sun, Bandeira and Kobilarov are combined in the same rationale as set forth above with respect to claim 1 and analogous claim 11. Sun and Wada are combined in the same rationale as set forth above with respect to claim 3 and analogous claim 13. Sun does not explicitly teach wherein, when the label is 'true,' the label indicates that the predicted specific event was correct, and wherein when the label is 'false,' the label indicates that the predicted specific event was incorrect. However Wada teaches wherein, when the label is 'true,' the label indicates that the predicted specific event was correct, and wherein when the label is 'false,' the label indicates that the predicted specific event was incorrect (Wada Para 0207, The optimization unit 114 performs a predictive judgement which is a determination of whether or not an accurate prediction of an I/O load of the service 102 can be anticipated on the basis of the time-series measured response performances and the time-series predicted response performances (S904). Para 0208, In a case where a determination result in S904 is false (S904: F), the optimization unit 114 determines adding of a resource group (for example, a predetermined amount of a resource group) to the service 102 [and wherein when the label is 'false,' the label indicates that the predicted specific event was incorrect].
Para 0209, In a case where a determination result in S904 is true (S904: T), the prediction unit 110 predicts time-series metrics in a designated prediction period of each resource of the resource group 103 related with the service 102 by using the prediction model 109 corresponding to the resource (S906). The optimization unit 114 performs a maintenance determination which is a determination of whether or not a capacity of the resources is currently maintained on the basis of at least one of the time-series predicted [when the label is 'true,' the label indicates that the predicted specific event was correct]).

Claim(s) 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bandeira in view of Sun and Kobilarov and further in view of Li et al. (US20200236015A1) (“Li”).

Regarding claim 5 and analogous claim 15, Sun in view of Bandeira and Kobilarov teach the method as recited in claim 1. Sun, Bandeira and Kobilarov are combined in the same rationale as set forth above with respect to claim 1 and analogous claim 11. Sun does not explicitly teach wherein the distribution comprises historical information about action response times for an event prediction class. However Li teaches wherein the distribution comprises historical information about action response times for an event prediction class (Para 0140, The example method of FIG. 9 includes obtaining, from a managed network, a plurality of response times of a network-based service provided by the managed network (900). The response times span a range of values [historical information about action response times]. Para 0141 line 1-4, The example method of FIG. 9 additionally includes training, based on the plurality of response times, a probability distribution to model the managed network (902). Para 0142, The example method of FIG. 9 also includes receiving an additional response time from the managed network (904). The example method of FIG.
9 further includes using the probability distribution to determine, for the additional response time, a percentile based on the additional response time (906). The example method of FIG. 9 also includes, based on the percentile, determining that the additional response time is anomalously high with respect to the plurality of response times of the network-based service (908) [an event prediction class]. The example method of FIG. 9 further includes transmitting, to a client device associated with the managed network, an indication that the additional response time is anomalously high (910). (Examiner Note: The system trains using a first set of response times making the historical information and receives more response times to determine if the response times are anomalously high)). Sun and Li are considered to be analogous to the claim invention because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sun to incorporate the teachings of Li training a model with a first set of response times to provide predictions for additional response times received by the system and send the information to the client. Doing so allows proper detection of anomalies, reduces service interruption, and minimizes human intervention (Li Para 0002 line 9-12, Thus, accurate detection of such anomalies can reduce service interruption or diminution by providing appropriate alerts while reducing costs and/or the required amount of human intervention by reducing false-positives. Para 0003, In order to detect whether a particular sample or set of samples of such a signal represent an anomaly, it can be advantageous to determine how likely the value of each of the samples is relative to an expected distribution of the samples.
Such a distribution can be readily determined from a set of 'normal' samples if the 'normal' samples comport with a Gaussian distribution or other commonly-used distribution. In such examples, the determined distribution can then be used to determine a percentile score, a Z-score, or some other measure of how 'normal' a particular observed sample is. This level of 'normalcy' can then be used, alone or in combination with percentiles or other levels determined for additional samples, to determine whether an anomaly has occurred (e.g., by determining that the particular sample(s) are especially unlikely to have occurred in the absence of an anomaly).).

Claim(s) 8, 9, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Bandeira in view of Sun and Kobilarov and further in view of Johan Engström, Shu-Yuan Liu, Azadeh Dinparastdjadid, and Camelia Simoiu, "Modeling Road User Response Timing in Naturalistic Traffic Conflicts: A surprise-based framework" (2022) (“Engstrom”).

Regarding claim 8 and analogous claim 18, Sun in view of Bandeira and Kobilarov teach the method as recited in claim 1. Sun, Bandeira and Kobilarov are combined in the same rationale as set forth above with respect to claim 1 and analogous claim 11. Sun does not explicitly teach wherein the distribution comprises distribution of operator response times for each action a ∈ A under the trajectory class, and model instance prediction. However Engstrom teaches wherein the distribution comprises distribution of operator response times for each action a ∈ A under the trajectory class, and model instance prediction (Engstrom Page 9, 3.5. Event sampling, An initial data set of rear-end crashes and near crashes was established by filtering on “incident_type = Rear-end, striking” in the SHRP2 Event Data table. This resulted in a set of 119 crashes and 3,669 near crashes. To select a manageable subset of events for the present analysis, the four selection criteria below were initially applied.
[Figure: Engstrom Page 9, event selection criteria table] Page 12, 5.1 A computational implementation, Para 5, The specific generative model used to produce the prior belief is known as Multipath and described in Chai, Sapp, Bansal and Anguelov (2020). After being trained on large quantities of driving data, Multipath generates prior beliefs of the possible future lateral and longitudinal positions of the cutting-in vehicle along a set of predicted trajectories in the form of probability distributions (Gaussian mixtures) at different time steps into the future. In the current simulation, another vehicle (the POV) unexpectedly cuts in front of the SV (the responding vehicle) from the adjacent lane thus generating a traffic conflict [operator response times for each action a ∈ A under the trajectory class, and model instance prediction]). Sun and Engstrom are considered to be analogous to the claim invention because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sun to incorporate the teachings of Engstrom to incorporate response times into the distribution. Doing so would allow the framework to evaluate response timing relative to the subject's prior belief (Engstrom Page 1 Abstract line 1-14, There is currently no established method for evaluating human response timing across a range of naturalistic traffic conflict types. Traditional notions derived from controlled experiments, such as perception-response time, fail to account for the situation-dependency of human responses and offer no clear way to define the stimulus in many common traffic conflict scenarios. As a result, they are not well suited for application in naturalistic settings.
Our main contribution is the development of a novel framework for measuring and modeling response times in naturalistic traffic conflicts applicable to automated driving systems as well as other traffic safety domains. The framework suggests that response timing must be understood relative to the subject’s current (prior) belief and is always embedded in, and dependent on, the dynamically evolving situation. The response process is modeled as a belief update process driven by perceived violations to this prior belief, that is, by surprising stimuli. The framework resolves two key limitations with traditional notions of response time when applied in naturalistic scenarios: (1) The strong situation-dependence of response timing and (2) how to unambiguously define the stimulus).

Regarding claim 9 and analogous claim 19, Sun in view of Bandeira and Kobilarov teach the method as recited in claim 1. Sun, Bandeira and Kobilarov are combined in the same rationale as set forth above with respect to claim 1 and analogous claim 11. Sun and Engstrom are combined in the same rationale as set forth above with respect to claim 8 and analogous claim 18. Sun does not explicitly teach wherein the prediction of interest concerns a specified event class that is an element of a domain of event classes. However Engstrom teaches wherein the prediction of interest concerns a specified event class that is an element of a domain of event classes (Engstrom Page 4 [Figure: belief updating scheme, Figure 2.4] [wherein the prediction of interest concerns a specified event class that is an element] Page 4, A more formal account of the proposed belief updating scheme is illustrated in Figure 2.4, continuing the example from Figure 2.3 (bottom). Here, the following vehicle driver’s prior belief about whether the lead vehicle will continue ahead or brake is represented as probability distribution P(B) [a domain of event classes].
Initially, the belief that the lead vehicle will continue ahead (B1) dominates the prior distribution, which entails the prediction that the observed looming (O) will remain close to zero. When the lead vehicle slows down, the observed looming deviates from the predicted looming and generates a prediction error or surprise, which drives an update of the prior belief. (i.e. B1 and B2 are possible events that belong to the domain B)).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Bandeira in view of Sun and Kobilarov and further in view of Stepp et al. (US11455893B2) (“Stepp”).

Regarding claim 21, Sun in view of Bandeira and Kobilarov teach the method as recited in claim 1. Sun, Bandeira and Kobilarov are combined in the same rationale as set forth above with respect to claim 1 and analogous claim 11. Sun teaches wherein the trajectory class is defined by: generating multiple trajectories by grouping together various sets of most-recently obtained sensor data collections (Sun Col 3 line 61-67 and Col 4 line 1-3, The sensor subsystems 130 or other components of the vehicle 102 can also classify groups of one or more raw sensor measurements from one or more sensors as being measures of another agent. A group of sensor measurements can be represented in any of a variety of ways, depending on the kinds of sensor measurements that are being captured. For example, each group of raw laser sensor measurements can be represented as a three-dimensional point cloud, with each point having an intensity and a position in a particular two-dimensional or three-dimensional coordinate space. Col 4 line 15-22, The data representation system 140, also on-board the vehicle 102, receives the raw sensor data 132 from the sensor system 130 and other data characterizing the environment, e.g., map data that identifies map features in the vicinity of the vehicle, and generates context data 142.
The context data 142 characterizes the current state of the environment surrounding the vehicle 102 as of the current time point [generating multiple trajectories by grouping together various sets of most-recently obtained sensor data collections].); Sun does not explicitly teach clustering the multiple trajectories into groups of similar trajectories; identifying, for each respective group, a representative typical trajectory; and classifying each group of similar trajectories, based on its representative typical trajectory, as a corresponding trajectory class, such that multiple trajectory classes are generated, wherein said trajectory class is included among the multiple trajectory classes. However Stepp teaches clustering the multiple trajectories into groups of similar trajectories; identifying, for each respective group, a representative typical trajectory (Stepp Col 13 line 60-67 and Col 14 line 1-14, Returning to FIG. 5, after the trajectory normalizer 506 has converted the trajectory data 502, 504 to map the trajectories 192 to the reference frame in which the first trajectory 192A is constrained, the trajectory segmenter 508 parses the trajectory data 502, 504 into segment data 510. In some implementations, the trajectory data 502, 504 includes data representing multiple trajectory segments 118 that have different shapes representing different kinds of dynamics or behaviors. To cluster at a single behavior level, the trajectory data 502, 504 are parsed into segment data 510, with each entry of the segment data 510 representing a single trajectory segment 118. 
In a particular implementation, the trajectory segments 118 are separated at change points which are detected using a change point detection process in which change points are detected in each dimension of the trajectory separately, then the change points are combined across dimensions to find points common to all dimensions. In a particular implementation, the change points are detected by sliding two contiguous windows, past (p) and future (f) relative to their shared boundary, along the time series and comparing their respective histograms using Kullback-Leibler divergence [clustering the multiple trajectories into groups of similar trajectories;].) Col 14 line 32-36, In the method 500 of FIG. 5, the segment data 510 is provided to a segment clustering operation 512 to group sets (e.g., pairs) of the trajectory segments into clusters 514, where each cluster 514 is associated with one type of behavior [identifying, for each respective group, a representative typical trajectory;]); and classifying each group of similar trajectories, based on its representative typical trajectory, as a corresponding trajectory class, such that multiple trajectory classes are generated, wherein said trajectory class is included among the multiple trajectory classes (Stepp Col 9 line 11-25, The trained classifier 126A can include or correspond to an artificial neural network, a support vector machine, a decision tree, or a variant or ensemble of any combination thereof. The trained classifier 126A is configured to identify the type of trajectory pattern that is represented by trajectory data input to the trained classifier 126A. In particular implementations, the trained classifier 126A is trained to output a label (e.g., one of the labels from the training data 124) indicating to which cluster of the sets of trajectory clusters 122 a trajectory pair described by the input trajectory data would be assigned.
In some such implementations, the output can further include a projection or prediction of a future position of one of the objects 190 based on historical data (e.g., the sets of trajectory data 114) [and classifying each group of similar trajectories, based on its representative typical trajectory, as a corresponding trajectory class,]. Col 14 line 32-42, In the method 500 of FIG. 5, the segment data 510 is provided to a segment clustering operation 512 to group sets (e.g., pairs) of the trajectory segments into clusters 514, where each cluster 514 is associated with one type of behavior. In a particular implementation, the segments are clustered using a hierarchical clustering technique (e.g., an agglomerative clustering technique or a divisive clustering technique). To perform the clustering operation 512, each set of segments (e.g., each pair of segments including a segment of the first trajectory 192A and a corresponding segment of the second trajectory 192B) is represented by feature data. Col 14 line 52-55, The segment clustering operation 512 assigns a unique identifier to each cluster 514, which can be associated with a user-defined label or can be computer assigned. FIG. 6 is a flow diagram illustrating an example of a method 600 of generating a trained classifier according to a particular implementation. The method 600 is a computer controlled method and can be initiated, performed, or controlled by the system 100 or one or more components thereof, such as by the computer system 102, the processors 104, dedicated hardware or firmware components (e.g., an artificial intelligence co-processor), or a combination thereof [such that multiple trajectory classes are generated, wherein said trajectory class is included among the multiple trajectory classes]). Sun and Stepp are considered to be analogous to the claim invention because they are in the same field of machine learning.
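The segmentation-then-clustering scheme quoted from Stepp above can be sketched as follows. This is a toy illustration on synthetic data, not Stepp's implementation: the change-point test here is a simple heading threshold standing in for Stepp's per-dimension sliding-window Kullback-Leibler divergence comparison, and the tolerance-based grouping stands in for hierarchical clustering.

```python
import numpy as np

# Synthetic 2-D trajectory (hypothetical): an eastbound run followed by a
# northbound run, giving one clear behavior change.
traj = np.array([[float(t), 0.0] for t in range(10)] +
                [[9.0, float(t)] for t in range(1, 11)])

# Step 1 - segment at change points: a simple threshold on the change in
# step heading, a stand-in for the per-dimension change-point detection.
steps = np.diff(traj, axis=0)
headings = np.arctan2(steps[:, 1], steps[:, 0])
change_points = np.where(np.abs(np.diff(headings)) > 0.5)[0] + 1
segments = np.split(traj, change_points + 1)

# Step 2 - cluster segments by mean heading with a naive tolerance rule,
# a stand-in for agglomerative clustering; each cluster then corresponds
# to one type of behavior (one trajectory class).
def mean_heading(seg):
    d = np.diff(seg, axis=0).sum(axis=0)
    return float(np.arctan2(d[1], d[0]))

cluster_reps, labels = [], []
for seg in segments:
    h = mean_heading(seg)
    for i, rep in enumerate(cluster_reps):
        if abs(h - rep) < 0.3:       # close enough: same behavior class
            labels.append(i)
            break
    else:                            # no match: start a new behavior class
        cluster_reps.append(h)
        labels.append(len(cluster_reps) - 1)

print(len(segments), labels)  # 2 [0, 1]
```

The mean heading per cluster plays the role of the representative typical trajectory; assigning each segment a cluster label mirrors the classification into trajectory classes.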
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sun to incorporate the teachings of Stepp to classify multiple trajectories by similar trajectories. Doing so allows a classifier to classify a trajectory based on one type of behavior (Stepp Col 9 line 16-21, In particular implementations, the trained classifier 126A is trained to output a label (e.g., one of the labels from the training data 124) indicating to which cluster of the sets of trajectory clusters 122 a trajectory pair described by the input trajectory data would be assigned. Col 14 line 32-36, In the method 500 of FIG. 5, the segment data 510 is provided to a segment clustering operation 512 to group sets (e.g., pairs) of the trajectory segments into clusters 514, where each cluster 514 is associated with one type of behavior. Col 14 line 52-55, The segment clustering operation 512 assigns a unique identifier to each cluster 514, which can be associated with a user-defined label or can be computer assigned.)

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALFREDO CAMPOS whose telephone number is (571) 272-4504. The examiner can normally be reached 7:00 am - 4:00 pm, M - F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALFREDO CAMPOS/
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129

Prosecution Timeline

Nov 22, 2022
Application Filed
Sep 12, 2025
Non-Final Rejection — §103, §112
Dec 09, 2025
Interview Requested
Dec 17, 2025
Applicant Interview (Telephonic)
Dec 17, 2025
Examiner Interview Summary
Dec 22, 2025
Response Filed
Feb 24, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561407
ONE-PASS APPROACH TO AUTOMATED TIMESERIES FORECASTING
2y 5m to grant Granted Feb 24, 2026
Patent 12561559
Neural Network Training Method and Apparatus, Electronic Device, Medium and Program Product
2y 5m to grant Granted Feb 24, 2026
Patent 12554973
HIERARCHICAL DATA LABELING FOR MACHINE LEARNING USING SEMI-SUPERVISED MULTI-LEVEL LABELING FRAMEWORK
2y 5m to grant Granted Feb 17, 2026
Patent 12536260
SYSTEM, APPARATUS, AND METHOD FOR AUTOMATICALLY GENERATING NEGATIVE KEYSTROKE EXAMPLES AND TRAINING USER IDENTIFICATION MODELS BASED ON KEYSTROKE DYNAMICS
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+33.3%)
3y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
