DETAILED ACTION
This Office action is in response to the amendment filed on 10/7/2025.
Claims 1, 7–9, 11, 15–17 and 19 are amended.
Claims 1–20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1–5, 8–17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Farabet et al (US 20190303759, hereinafter Farabet), in view of Guney et al (US 20220027193, hereinafter Guney), and further in view of Boutin et al (US 20160098292, hereinafter Boutin).
As per claim 1, Farabet discloses: A computer-implemented system, comprising: one or more processing units; and one or more non-transitory computer-readable media storing instructions, when executed by the one or more processing units, cause the one or more processing units to perform operations comprising:
receiving a configuration including a simulated request for executing a task associated with at least one of a vehicle software build or a vehicle simulation of a vehicle; (Farabet figure 9 and [0113]: “at block B902, includes receiving simulation data representative of a simulated environment from a simulation host device. For example, the vehicle simulator component(s) 406, 420, and/or 422 may receive, from the simulator component(s) 402, simulation data representative of the simulated environment 410. In some examples, the simulation data received may be the simulation data corresponding to the sensors of the virtual object hosted by the vehicle simulator component(s).”; [0114]: “at block B904, includes generating virtual sensor data for each of a dynamically configurable number of virtual sensors. For example, the vehicle simulator component(s) 406, 420, and/or 422 may generate virtual sensor data using the simulation data for each of the virtual sensors of the vehicle. The virtual sensor data may be representative of the simulated environment 410 as perceived by at least one virtual sensor of a dynamically configurable number of virtual sensors of a virtual object within the simulated environment 410 (e.g., sensor data of a field of view of a virtual camera(s), sensor data of an orientation of the virtual vehicle using virtual IMU sensors, etc.).”.)
executing, a simulation of operations of a scheduler and task execution, (Farabet figure 9 and [0116]: “at block B908, includes computing, by one or more machine learning models, at least one output. For example, one or more DNNs of the software stack(s) 116 may uses the encoded sensor data to generate one or more outputs (e.g., objects detections, controls, actuations, path plans, guidance, etc.).”.)
wherein the executing comprises: and executing, based on the determined schedule, the task using a task run model; (Farabet [0121]: “at block B1008, includes applying the virtual sensor data to a trained machine learning model”; [0122]: “at block B1010, includes computing an output by the trained machine learning model. For example, the trained DNN may compute one or more outputs using the virtual sensor data”; [0123]: “at block B1020, includes controlling a virtual object within a simulated environment based at least in part on the output. For example, the virtual object (e.g., virtual vehicle) may be controlled within the simulated environment based at least in part on the output. In other examples, the outputs may be used for control. For example, the outputs may be object detection, lane detection, drivable free-space detection, safety procedure determination, etc.”).
calculating a metric for the scheduler based on an output of the simulation; (Farabet [0123]: “the outputs may be tested using one or more KPI's to determine the accuracy and effectiveness of the trained DNNs in any of a number of scenarios and environments. As such, where the trained DNNs suffer, fine-tuning may be executed to improve, validate, and verify the DNNs prior to deployment of the DNNs in real-world, physical vehicles (e.g., the vehicle 102)”.)
Farabet did not explicitly disclose:
wherein the simulation is executed, using a queue state model and a resource availability model that simulates synthetic hardware resource;
determining, by the scheduler, a schedule for executing the task based on the configuration and at least one of a driving scenario or a vehicle compute framework associated with the task;
wherein the task run model simulates execution of the task by running the task on the synthetic hardware resource;
and validating, based on one or more predefined assertion rules, at least one of a task start time, task completion time, or a task execution order generated by the scheduler.
However, Guney teaches:
determining, by the scheduler, a schedule for executing the task based on the configuration and at least one of a driving scenario or a vehicle compute framework associated with the task; (Guney [0004]: “at each of a plurality of time steps: receiving data that characterizes an environment in a vicinity of a vehicle at a current time step, the environment comprising a plurality of agents; receiving data that identifies, as high-priority agents, a proper subset of the plurality of agents for which respective data characterizing the agents must be generated at the current time step; identifying computing resources that are available for generating the respective data characterizing the high-priority agents at the current time step; processing the data that characterizes the environment using a complexity scoring model to determine one or more respective complexity scores for each of the high-priority agents, each respective complexity score characterizing an estimated amount of computing resources that is required for generation of the data characterizing the high-priority agent using a prediction model; and determining a schedule for the current time step that allocates the generation of the data characterizing the high-priority agents across the available computing resources based on the complexity scores”.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Guney into that of Farabet in order to determine, by the scheduler, a schedule for executing the task based on the configuration and at least one of a driving scenario or a vehicle compute framework associated with the task. Guney [0015] provides the motivation for such a combination: in order for "a vehicle to generate planning decisions which cause the vehicle to travel along a safe and comfortable trajectory, the planning system must be provided with timely and accurate prediction or perception data for the agents in the vicinity of the vehicle". The instant claim is therefore rejected under 35 U.S.C. 103.
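For illustration only, the complexity-score-based allocation described in the Guney passage quoted above can be sketched as follows. The function names, the greedy largest-task-first policy, and the sample data are hypothetical assumptions for the sketch and are not drawn from Guney's disclosure:

```python
# Purely illustrative sketch (hypothetical names and policy): allocate
# high-priority agents across available compute resources based on
# per-agent complexity scores, in the manner Guney [0004] describes.

def determine_schedule(complexity_scores, resource_capacities):
    """Greedily assign each agent's prediction task to the resource with
    the most remaining capacity, handling the costliest tasks first."""
    remaining = dict(resource_capacities)  # resource -> free capacity
    schedule = {}                          # agent -> assigned resource
    for agent, cost in sorted(complexity_scores.items(),
                              key=lambda kv: kv[1], reverse=True):
        # Pick the resource with the most remaining capacity.
        resource = max(remaining, key=remaining.get)
        schedule[agent] = resource
        remaining[resource] -= cost
    return schedule

schedule = determine_schedule(
    {"agent_a": 5.0, "agent_b": 2.0, "agent_c": 4.0},
    {"gpu0": 8.0, "gpu1": 8.0},
)
```

The greedy policy shown is only one of many possible allocation strategies; Guney's claimed approach is defined by the use of complexity scores, not by any particular assignment heuristic.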
Boutin teaches:
wherein the simulation is executed, using a queue state model and a resource availability model that simulates synthetic hardware resource; wherein the task run model simulates execution of the task by running the task on the synthetic hardware resource; and validating, based on one or more predefined assertion rules, at least one of a task start time, task completion time, or a task execution order generated by the scheduler. (Boutin [0024]: "In order to identify a suitable server for a given task, the job management component uses expected server performance information received from multiple servers. For instance, the server performance information might include expected performance parameters for tasks of particular categories if assigned to the server. As an example, the server performance information might include expected wait times before the tasks of various categories are anticipated to begin execution given the current server state. The job management component then identifies a particular task category for a given task, determines which of the servers can perform the task by a suitable estimated completion time, and then assigns the task based on the estimated completion time. The job management component also uses cluster-level information in order to determine which server to assign a task to. The job management component then submits a request to perform the task to the selected server."; [0059]: "The job management component 630 estimates a task completion time that the task would be completed by if perform by a particular server (act 1101). The job management component 630 then determines that the estimated task completion time associated with a particular server is acceptable (act 1102), and as a result, selects the particular server (act 1103). In some cases, the task completion time is estimated for each of multiple servers. In that case, the estimated completion time that is determined to be acceptable might be the earliest completion time estimated, and thus the particular server assigned to the task might be the server that is estimated to complete the task earliest. However, as will be described hereinafter, that may not always be the case as the decision may be much more complex and involve a number of often competing considerations".)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Boutin into that of Farabet and Guney in order to have the simulation executed using a queue state model and a resource availability model that simulates synthetic hardware resource; to have the task run model simulate execution of the task by running the task on the synthetic hardware resource; and to validate, based on one or more predefined assertion rules, at least one of a task start time, task completion time, or a task execution order generated by the scheduler. Boutin has shown that the claimed limitations are merely commonly known performance modelling and task scheduling methods; applicant has thus merely claimed a combination of known parts in the field to achieve predictable results. The claim is therefore rejected under 35 U.S.C. 103.
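For illustration only, the completion-time-based server selection in the Boutin passages quoted above can be sketched as follows. All names and the simple wait-plus-runtime estimate are hypothetical assumptions for the sketch, not Boutin's actual implementation:

```python
# Purely illustrative sketch (hypothetical names): estimate a completion
# time per server and select the server with the earliest estimate, in
# the manner of Boutin's job management component ([0024], [0059]).

def select_server(task_runtime, server_wait_times):
    """Estimate completion time per server as expected wait time plus
    task runtime, then pick the server with the earliest estimate."""
    estimates = {server: wait + task_runtime
                 for server, wait in server_wait_times.items()}
    best = min(estimates, key=estimates.get)
    return best, estimates[best]

server, eta = select_server(10.0, {"s1": 7.0, "s2": 3.0, "s3": 12.0})
```

As Boutin [0059] cautions, earliest estimated completion is not always the deciding factor; a real job management component may weigh additional, competing cluster-level considerations.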
As per claim 2, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 1, wherein the determining the schedule comprises: estimating a runtime for the task based on the driving scenario. (Farabet [0091])
As per claim 3, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 2, wherein the driving scenario includes information associated with at least one of a road condition, a city, a weather condition, or a road asset. (Farabet [0091])
As per claim 4, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 1, wherein the determining the schedule comprises: estimating a runtime for the task based on the vehicle compute framework. (Farabet [0091])
As per claim 5, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 4, wherein the vehicle compute framework is one of a perception compute framework, a prediction compute framework (Farabet [0091]), a planning compute framework, or a driving scenario replay framework.
As per claim 8, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 1, wherein the configuration further includes an indication of at least one of a compute resource capacity, a storage resource capacity, or a network resource capacity for a hardware platform associated with the resource availability model. (Guney [0056])
As per claim 9, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 1, wherein the configuration further includes an indication of at least one of a compute resource occupancy, a storage resource occupancy, or a network resource occupancy for a hardware platform associated with the resource availability model. (Guney [0056])
As per claim 10, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 1, wherein the configuration further includes an indication of at least one of a priority (Farabet [0044]), a runtime, a task completion goal, a file uploading time duration, or a file downloading time duration associated with the task.
As per claim 11, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 1, wherein the task run model simulates task execution using a timer that maps an actual time duration to a shorter simulated time duration. (Boutin [0084])
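For illustration only, the claimed timer that maps an actual time duration to a shorter simulated time duration can be sketched as follows; the fixed speed-up factor and the names are hypothetical assumptions for the sketch:

```python
# Purely illustrative sketch: a task run model's timer that maps an
# actual task duration to a shorter simulated duration by a fixed,
# hypothetical speed-up factor.

SPEEDUP = 60.0  # one real minute elapses as one simulated second

def simulated_duration(actual_seconds, speedup=SPEEDUP):
    """Map an actual time duration to its shorter simulated duration."""
    return actual_seconds / speedup

# Under this factor, a 2-hour task (7200 s) elapses in 120 simulated
# seconds, letting the scheduler simulation run far faster than real time.
elapsed = simulated_duration(7200.0)
```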
As per claim 12, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 1, wherein the calculating the metric for the scheduler is based on a comparison between a completion time of the task and a completion goal for the task. (Boutin [0059])
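For illustration only, a scheduler metric comparing task completion times against completion goals, as recited in claim 12, can be sketched as follows; the names and the mean-lateness formulation are hypothetical assumptions for the sketch:

```python
# Purely illustrative sketch: score a scheduler by comparing each task's
# completion time against its completion goal (mean lateness; negative
# values mean tasks finished ahead of their goals).

def lateness_metric(completion_times, completion_goals):
    """Mean lateness in seconds across all tasks."""
    return sum(completion_times[t] - completion_goals[t]
               for t in completion_times) / len(completion_times)

mean_lateness = lateness_metric(
    {"task_a": 100.0, "task_b": 250.0},   # observed completion times
    {"task_a": 120.0, "task_b": 200.0},   # completion goals
)
```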
As per claim 13, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 1, wherein the calculating the metric for the scheduler is based on an ordering of tasks scheduled by the scheduler. (Farabet [0032])
As per claim 14, the combination of Farabet, Guney and Boutin further teaches:
The computer-implemented system of claim 1, wherein the calculating the metric for the scheduler is based on priorities of tasks executed over a certain time duration. (Farabet [0032])
As per claim 15, it is the method variant of claim 1 and is therefore rejected under the same rationale.
As per claim 16, it is the method variant of claim 3 and is therefore rejected under the same rationale.
As per claim 17, it is the method variant of claim 5 and is therefore rejected under the same rationale.
As per claim 19, it is the non-transitory, computer-readable media variant of claim 1 and is therefore rejected under the same rationale.
Claims 6, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Farabet, Guney and Boutin, and further in view of Adams et al (US 20230281039, hereinafter Adams).
As per claim 6, the combination of Farabet, Guney and Boutin did not teach:
The computer-implemented system of claim 1, wherein: the determining the schedule is further based on whether the task is associated with a first task category or a second task category, the vehicle simulation is in the first task category, and the vehicle software build is in the second task category.
However, Adams teaches:
The computer-implemented system of claim 1, wherein: the determining the schedule is further based on whether the task is associated with a first task category or a second task category, the vehicle simulation is in the first task category, and the vehicle software build is in the second task category. (Adams [0005])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Adams into that of Farabet, Guney and Boutin in order to have the schedule further determined based on whether the task is associated with a first task category or a second task category, where the vehicle simulation is in the first task category and the vehicle software build is in the second task category. Adams [0005] has shown that the claimed limitation comprises merely commonly known and used steps to determine appropriate resources to execute a specific task; applicant has thus merely claimed a combination of known parts in the field to achieve predictable results. The claim is therefore rejected under 35 U.S.C. 103.
As per claim 18, it is the method variant of claim 6 and is therefore rejected under the same rationale.
As per claim 20, it is the non-transitory, computer-readable media variant of claim 6 and is therefore rejected under the same rationale.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Farabet, Guney and Boutin, and further in view of Jones et al (US 20160266930, hereinafter Jones).
As per claim 7, the combination of Farabet, Guney and Boutin did not teach:
wherein the configuration further includes an indication of at least one of a queue size or a number of pending tasks associated with a queue state model.
However, Jones teaches:
wherein the configuration further includes an indication of at least one of a queue size or a number of pending tasks associated with a queue state model. (Jones [0003])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Jones into that of Farabet, Guney and Boutin in order to include an indication of at least one of a queue size or a number of pending tasks associated with a queue state model. Jones [0003] has shown that the claimed limitation comprises merely commonly known and used steps to determine appropriate resources to execute a specific task; applicant has thus merely claimed a combination of known parts in the field to achieve predictable results. The claim is therefore rejected under 35 U.S.C. 103.
Response to Arguments
Applicant’s arguments with respect to claims 1–20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Brech et al (US 20120180055) teaches “comparing cost-saving methods of scheduling a task to the operating parameters of completing a task--e.g., a maximum amount of time allotted to complete a task. If the task can be scheduled to reduce operating costs (e.g., rescheduled to a time when power is cheaper) and still be performed within the operating parameters, then that cost-saving method is used to create a workload plan to implement the task. In another embodiment, several cost-saving methods are compared to determine the most profitable.”;
Gao et al (USPAT 10871988) teaches “workload scheduler devices that determine one of a plurality of task categories for a received task. A stored expected runtime for each of a plurality of CPUs to execute one standard computation unit (SCU) in the determined one of the plurality of task category is obtained. One of the plurality of CPUs is selected based on the stored expected runtime. The task is dispatched to the selected one of the plurality of CPUs for execution. Accordingly, with this technology, tasks associated with workloads can be more effectively dispatched and more effectively processed by a CPU pool”.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES M SWIFT whose telephone number is (571)270-7756. The examiner can normally be reached Monday - Friday: 9:30 AM - 7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES M SWIFT/Primary Examiner, Art Unit 2196