Prosecution Insights
Last updated: April 19, 2026
Application No. 18/496,459

Local Area Network System with Machine Learning-Based Task Offload Feature

Non-Final OA (§101, §103)

Filed: Oct 27, 2023
Examiner: ANYA, CHARLES E
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Roku Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (727 granted / 891 resolved; +26.6% vs TC avg), above average
Interview Lift: +33.5% across resolved cases with an interview (a strong lift)
Avg Prosecution: 3y 2m typical timeline; 41 applications currently pending
Total Applications: 932 across all art units (career history)

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 891 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Claims 1-20 are pending in this application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is directed to non-statutory subject matter. Claim 20 is directed to a "non-transitory computer-readable medium". The "non-transitory computer-readable medium" as disclosed in paragraph 0038 of the specification does not exclude non-statutory embodiments. For instance, the "non-transitory computer-readable medium" does not exclude a carrier wave, a transmission medium, and the like, and is therefore directed to non-statutory subject matter. Appropriate correction is required; for instance, paragraph 0038 needs to be amended to describe the claimed "non-transitory computer-readable medium" as excluding a carrier wave, a transmission medium, and the like.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 6, 8-10, 17, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., and further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al.

As to claim 1, Murphy teaches a method for use in connection with a local area network (LAN) system comprising a communication network and a group of multiple devices connected to the communication network, wherein the group of multiple devices includes a first device (Clients 102) and a separate set of devices (figure 1), the method comprising: the first device (Clients 102) determining that a machine learning (ML)-based task is to be performed (send requests) ("…The clients 102 may send requests to the deep learning server 104 to monitor certain ones of the sensors 110 and provide the clients 102 with event notifications when those sensors 110 detect an event. The clients 102 may also send requests to the deep learning server 104 to apply a specific one of the machine learning models 108 to the sensor data from a specific one of the sensors 110, and return the results to the clients 102…" paragraph 0016); the first device (Clients 102) sending a request to a second device (Deep Learning Server 104) (paragraph 0016, quoted above); and (ii) wherein the second device is configured to perform the ML-based task in accordance with the ML-based task request (send requests), thereby generating ML-based task output (return the results to the clients 102), and to transmit the generated ML-based task output to the first device, and the first device receiving the generated output from the second device and using the received output to facilitate performing one or more operations (return the results to the clients 102) ("…Sensors 110 provide sensor data to deep learning server 104. The sensor data may provide an explicit indication of an event occurring (e.g., a door sensor providing an indication that a door has been opened), or the sensor data may be data that can be provided to a machine learning model that is trained to make an inference regarding the data (e.g., a video stream that is analyzed to perform a face detection). The term "machine learning model" as used herein generally refers to a trained machine learning model that has previously undergone a training process and is configured to make inferences from received data. Each of the model servers 106 includes at least one machine learning model 108. The clients 102 may send requests to the deep learning server 104 to monitor certain ones of the sensors 110 and provide the clients 102 with event notifications when those sensors 110 detect an event. The clients 102 may also send requests to the deep learning server 104 to apply a specific one of the machine learning models 108 to the sensor data from a specific one of the sensors 110, and return the results to the clients 102…" paragraph 0016).

Murphy is silent with reference to the first device broadcasting to the separate set of devices a ML-based task request for the ML-based task, (i) wherein the separate set of devices are configured to perform an arbitration process to select a second device from among the separate set of devices based at least on the second device having favorable computing resource availability as compared to respective computing resource availability of any other devices of the separate set of devices, and (ii) wherein the second device is configured to perform the ML-based task in accordance with the ML-based task request, thereby generating ML-based task output, and to transmit the generated ML-based task output to the first device.

Akkapeddi teaches the first device broadcasting (distributed) to the separate set of devices (multiple edge devices) a ML-based task request for the ML-based task ("…The task may be distributed among the edge devices based on the predicted resource availability scores. For example, the task may be reassigned from the original edge device to the one of the other edge devices that achieved the highest predicted resource availability score. The task may also be split and distributed to multiple edge devices. The multiple edge devices may include the device to which the task was originally assigned. The reassignment and distribution may be performed by the central server. In some embodiments, the reassignment and/or distribution may be performed by one or more edge devices. For example, the device to which the task was originally assigned may itself reassign the task and may transmit to another device the request to perform the task…" Col. 3 Ln. 17-30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy with the teaching of Akkapeddi because the teaching of Akkapeddi would improve the system of Murphy by providing methods for maximizing resource utilization in a digital communication system (Akkapeddi Abstract).

Sen teaches wherein the separate set of devices are configured to perform an arbitration process (task optimization module) to select a second device from among the separate set of devices based at least on the second device having favorable computing resource availability as compared to respective computing resource availability of any other devices of the separate set of devices ("…In some embodiments, the task optimization module may make a determination between using the on-premises systems 300 or the off-premises systems 400 based on an estimated hardware requirement necessary to the complete the computer implemented task. This estimated hardware requirement may be based on user input of required resources for an experiment combined with a resource multiplication factor. Some tasks are specified by the user to be more GPU intensive while others are more CPU intensive. GPU infrastructure is comparatively expensive and more likely to have limited capacity in the on-premises systems 300. Thus, in many circumstances the task optimization module would favor sending a high GPU intensive task to the off-premises systems 400 because the on-premises system will not be capable of completing the task in an efficient manner. CPU infrastructure is comparatively inexpensive and more likely to have sufficient capacity in the on-premises systems 400. Thus, in many circumstances the task optimization module would favor sending a high CPU intensive task to the on-premises systems 300 because the off-premises systems 400 does not offer a significant efficiency boost…In some embodiments, the task optimization module may make a determination between using the on-premises systems 300 or the off-premises systems 400 based on a latency requirement of the computer implemented task. Extensive travel distance for data results in delays between when an action is requested and when it is executed (i.e., latency). If data is stored locally, additional latency delays occur if that data needs to be uploaded to off-premises systems 400. Similarly, if data is stored remotely, additional latency delays occur if that data needs to be downloaded to on-premises systems 400. If large amounts of data are to be processed in the computer implemented task, the task optimization module will tend to favor the systems 300, 400 that are more proximal to the data to reduce inefficiencies due to latency. If smaller amounts of data are to be processed in the computer implemented task, the task optimization module will tend to favor the on-premises systems 300 since any inefficiencies due to latency will be less significant. In practice, for example, a user may indicate in the job specification if a job is Service Level Agreement (SLA) critical or not. Using this information, task optimization module may schedule SLA critical jobs immediately on off-premises systems 400 if required resources are not available on on-premises systems 300. For non-SLA critical jobs, task optimization module can schedule such jobs once resources are available on-premises…In some embodiments, the task optimization module may make a determination between using the on-premises systems 300 or the off-premises systems 400 based on a hardware capacity of the on-premises computer systems 300. Sufficient hardware capacity may be determined, for example, by the system 100 checking current resource availability in on-premises clusters of the on-premises systems 300 by querying clusters resource utilization metrics. Using the current resource utilization metrics in on-premises systems 300, the system 100 may determine available capacity in on-premises systems 300 and verify if that available capacity meets the resource specification from user. Because on-premises computer systems 300 are not scalable on demand, any computer implemented task to be executed on the on-premises computer systems 300 are limited to using the hardware that is currently available. Thus, when hardware capacity (e.g., storage, GPU capacity, CPU capacity) of the on-premises computer systems 300 is low (either due to low amounts of physical hardware or high amounts of hardware that is already being used), the task optimization module will tend to favor off-premises computer systems 400 because the on-premises system will not be capable of completing the task in an efficient manner. Additionally, when hardware capacity (e.g., storage, GPU capacity, CPU capacity) of the on-premises computer systems 300 is higher (either due to high amounts of physical hardware or low amounts of hardware that is already being used), the task optimization module will tend to favor on-premises computer systems 300 because the off-premises systems 400 does not offer a significant efficiency boost…In some embodiments, the task optimization module may make a determination between using the on-premises systems 300 or the off-premises systems 400 based on an estimated financial cost of using the off-premises computer systems 300 for the computer implemented task. Multiple factors go into the determining the financial cost of using off-premises systems 400. For example, larger jobs tend to be more expensive to run than smaller jobs. Additionally, heavy traffic within the off-premises systems 400 tend to raise the cost of accessing those systems 400. Additionally, contractual arrangements may stipulate that use of the off-premises systems 400 above a certain threshold will be charged at a higher rate than usage below that threshold. The task optimization module will use these cost determining factors to estimate the financial cost of a particular computer implemented task (e.g., with regards to the user specified hardware needs for CPU, memory, GPU, and data storage) on the off-premises systems 400, and the higher those estimated costs are, the more heavily the task optimization module will favor on-premises…In some embodiments, the task optimization module will evaluate the above-described factors for determining between using the on-premises systems 300 or the off-premises systems 400 based on weighted averages and threshold requirements. For example, the task optimization module may weigh efficient operation of a task higher than the financial cost of running that task (or vice versa). Additionally, the relationship between hardware requirements of the task with respect to capacity of the on-premises systems 300 may end up being a threshold requirement regardless of the other factors. For example, if the on-premises systems 300 does not have sufficient hardware capacity to run the task, the task optimization module may choose the off-premises systems 400 regardless of what the other factors show…As another example, the task optimization module may determine to use one system over another if it determines that any system (or any part of any system) is "unhealthy." System health, in some embodiments, includes latency, disk storage, load, expected load. For example, task optimization module may consult a data store listing the health of each system (e.g., resource availability, ping/latency, processor load, or the like), or may determine the health of a system directly (e.g., by requesting such data from a system or using a "ping" to determine latency)…" paragraphs 0077-0082). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy and Akkapeddi with the teaching of Sen because the teaching of Sen would improve the system of Murphy and Akkapeddi by providing optimization of computer resource utilization.

As to claim 2, Murphy teaches the method of claim 1, wherein the first device determining that the ML-based task is to be performed comprises the first device receiving an instruction to perform the one or more operations, and the first device determining that the ML-based task needs to be performed to facilitate performing the one or more operations (send requests).
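To make the arbitration recited in claim 1 concrete, the following is a minimal Python sketch: each device in the separate set reports its own computing resource availability, and every device applies the same deterministic selection rule, so the set converges on one second device without a central coordinator. The message fields, weights, and device names are assumptions for illustration only; they come from neither the application nor the cited references.

```python
# Minimal sketch of a decentralized arbitration like the one recited in
# claim 1. All field and device names here are hypothetical.
from dataclasses import dataclass

@dataclass
class ResourceReport:
    device_id: str
    cpu_free: float   # fraction of CPU currently idle, 0.0-1.0
    mem_free_mb: int  # free memory in megabytes

def select_offload_target(reports: list[ResourceReport]) -> str:
    """Pick the device with the most favorable resource availability.

    Every device broadcasts its own ResourceReport and then runs this
    same deterministic rule over the collected reports, so all devices
    agree on the winner without a central arbiter.
    """
    # Rank primarily by free CPU, with free memory as the tie-breaker.
    best = max(reports, key=lambda r: (r.cpu_free, r.mem_free_mb))
    return best.device_id

reports = [
    ResourceReport("set-top-box", cpu_free=0.70, mem_free_mb=512),
    ResourceReport("smart-tv",    cpu_free=0.85, mem_free_mb=1024),
    ResourceReport("soundbar",    cpu_free=0.40, mem_free_mb=256),
]
assert select_offload_target(reports) == "smart-tv"
```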
As to claim 6, Murphy as modified by Akkapeddi teaches the method of claim 1; however, it is silent with reference to wherein the ML-based task request comprises an indication of estimated computing resource requirements for the ML-based task. Sen teaches wherein the ML-based task request comprises an indication of estimated computing resource requirements for the ML-based task ("…Yet another aspect of the present disclosure is directed to a system. The system includes a memory and at least one processor. The at least one processor is configured to execute instructions to: receive, from a user interface, a request to complete a computer implemented task; determine whether to send the request to a first computer system or a second, cloud-based computer system, the determination based on (i) an estimated hardware requirement necessary to the complete the computer implemented task, (ii) a latency requirement of the computer implemented task, (iii) a hardware capacity of the first computer system, and (iv) an estimated financial cost of using the second, cloud-based computer system for the computer implemented task; and send the request to complete the computer implemented task to either the first computer system or the second, cloud-based computer system based on the determination; wherein the computer implemented task is a Machine Learning (ML) task; wherein the estimated hardware requirement includes: central processing unit (CPU) requirements, graphical processing unit (GPU) requirements, memory requirements, storage requirements, and information download requirements; wherein the request to complete a computer implemented task from a user interface is initiated by a user; wherein the processor is further configured to, prior to receiving the request to complete a computer implemented task: receive a request from the user interface for logon access to the system, and authenticate the user; wherein a first token is required to access the first computer system and a second token is required to access the second, cloud-based computer system; wherein the processor is further configured to: automatically obtain and use the first token if the processor determines to send the request to the first computer system, and automatically obtain and use the second token if the processor determines to send the request to the second, cloud-based computer system…" paragraph 0008). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy and Akkapeddi with the teaching of Sen because the teaching of Sen would improve the system of Murphy and Akkapeddi by providing optimization of computer resource utilization.

As to claim 8, Sen teaches the method of claim 1, wherein the separate set of devices performing the arbitration process to select the second device from among the separate set of devices comprises: each of the devices in the separate set of devices determining respective computing resource availability and broadcasting the determined respective computing resource availability to the other devices in the separate set of devices (Sen paragraph 0008, quoted above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy and Akkapeddi with the teaching of Sen because the teaching of Sen would improve the system of Murphy and Akkapeddi by providing optimization of computer resource utilization.

As to claim 9, Sen teaches the method of claim 1, wherein the second device having favorable computing resource availability as compared to respective computing resource availability of any other devices of the separate set of devices comprises the second device having favorable processing computing resource availability as compared to respective processing computing resource availability of any other devices of the separate set of devices (Sen paragraph 0008, quoted above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy and Akkapeddi with the teaching of Sen because the teaching of Sen would improve the system of Murphy and Akkapeddi by providing optimization of computer resource utilization.

As to claim 10, Sen teaches the method of claim 1, wherein the second device having favorable computing resource availability as compared to respective computing resource availability of any other devices of the separate set of devices comprises the second device having favorable memory computing resource availability as compared to respective memory computing resource availability of any other devices of the separate set of devices (Sen paragraph 0008, quoted above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy and Akkapeddi with the teaching of Sen because the teaching of Sen would improve the system of Murphy and Akkapeddi by providing optimization of computer resource utilization.

As to claims 17 and 20, see the rejection of claim 1 above, except for a non-transitory computer-readable medium. Murphy teaches a non-transitory computer-readable medium (non-transitory computer-readable storage medium, paragraph 0090).

As to claim 18, Sen teaches the first device of claim 17, wherein the ML-based task request comprises an indication of estimated computing resource requirements for the ML-based task (Sen paragraph 0008, quoted above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy and Akkapeddi with the teaching of Sen because the teaching of Sen would improve the system of Murphy and Akkapeddi by providing optimization of computer resource utilization.
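Claims 6 and 18 add an indication of estimated computing resource requirements to the task request itself. The sketch below is a hedged illustration of what such a request message might carry; every field name is invented for the sketch and is taken neither from the claims nor from Sen.

```python
# Hypothetical ML-task request carrying estimated resource requirements,
# in the spirit of claims 6 and 18. Field names are illustrative only.
import json

task_request = {
    "task": "speech_recognition",
    "model": "asr-small",
    "estimated_requirements": {   # the "indication" the claims recite
        "cpu_cores": 2,
        "gpu_required": False,
        "memory_mb": 256,
    },
}

# The first device would broadcast something like this on the LAN;
# JSON serialization is shown only to make the message concrete.
payload = json.dumps(task_request).encode("utf-8")
print(len(payload), "bytes")
```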
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., and further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al. as applied to claim 1 above, and further in view of WO Pub. No. 2022/154829 A1 to LI et al.

As to claim 3, Murphy as modified by Akkapeddi and Sen teaches the method of claim 1; however, it is silent with reference to wherein the ML-based task is a task that involves using audio-based data together with a speech recognition model to generate output. LI teaches wherein the ML-based task is a task (neural network task) that involves using audio-based data (audio input) together with a speech recognition model (speech recognition) to generate output ("…As another example, the input to a neural network can be audio input, including streamed audio, pre-recorded audio, and audio as part of a video or other source or media. A neural network task in the audio context can include speech recognition, including isolating speech from other identified sources of audio and/or enhancing characteristics of identified speech to be easier to hear. A neural network can be trained to predict an accurate translation of input speech to a target language, for example in real-time as part of a translation tool…" paragraph 0130). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of LI because the teaching of LI would improve the system of Murphy, Akkapeddi and Sen by providing techniques for speech recognition functionalities.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., and further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al. as applied to claim 1 above, and further in view of U.S. Pub. No. 2021/0035255 A1 to Nurvitadhi et al.

As to claim 4, Murphy as modified by Akkapeddi and Sen teaches the method of claim 1; however, it is silent with reference to wherein the ML-based task is a task that involves using image-based data together with an image recognition model. Nurvitadhi teaches wherein the ML-based task is a task that involves using image-based data together with an image recognition model ("…During operation, the media processor 1302 and vision processor 1304 can work in concert to accelerate computer vision operations. The media processor 1302 can enable low latency decode of multiple high-resolution (e.g., 4K, 8K) video streams. The decoded video streams can be written to a buffer in the on-chip-memory 1305. The vision processor 1304 can then parse the decoded video and perform preliminary processing operations on the frames of the decoded video in preparation of processing the frames using a trained image recognition model. For example, the vision processor 1304 can accelerate convolution operations for a CNN that is used to perform image recognition on the high-resolution video data, while back end model computations are performed by the GPGPU 1306…" paragraph 0186). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of Nurvitadhi because the teaching of Nurvitadhi would improve the system of Murphy, Akkapeddi and Sen by providing techniques for image recognition functionalities.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., and further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al. as applied to claim 1 above, and further in view of U.S. Pub. No. 2021/0327018 A1 to Carranza et al.

As to claim 5, Murphy as modified by Akkapeddi and Sen teaches the method of claim 1; however, it is silent with reference to wherein the ML-based task is a subtask of another ML-based task. Carranza teaches wherein the ML-based task is a subtask of another ML-based task ("…The flowchart then proceeds to block 616 to partition the portion of the pipeline being offloaded into one or more partial pipeline(s) based on the peer node availability. For example, the peer node availability may be used to select which peer nodes to offload the pipeline to, and determine which tasks from the pipeline to offload to each of the selected peer nodes. The portion of the pipeline being offloaded can then be partitioned into one or more partial CV pipelines to be offloaded to the subset of peer node(s) that were chosen. For example, each partial CV pipeline may contain a subset of tasks from the portion of the CV pipeline being offloaded for one of the peer nodes…" paragraph 0107). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of Carranza because the teaching of Carranza would improve the system of Murphy, Akkapeddi and Sen by providing techniques for offloading a chain of related tasks to allow for compact task execution.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., and further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al. as applied to claim 1 above, and further in view of U.S. Pub. No. 2019/0121673 A1 to Gold et al.

As to claim 7, Murphy as modified by Akkapeddi and Sen teaches the method of claim 6; however, it is silent with reference to wherein the ML-based task request further comprises an indication of a completion deadline for the ML-based task. Gold teaches wherein the ML-based task request further comprises an indication of a completion deadline for the ML-based task (estimated time for completion) ("…The example method depicted in FIG. 12B also includes determining (1258) an estimated time for completion for a particular artificial intelligence or machine learning job. Determining (1258) an estimated time for completion for a particular artificial intelligence or machine learning job may be carried out, for example, by estimating an amount of time required to complete a particular artificial intelligence or machine learning job in view of the amount of resources that may be made available for use by the particular artificial intelligence or machine learning job. In such an example, users in a multi-tenant environment may even be provided with the estimated time for completion for a particular artificial intelligence or machine learning job, so that a user may determine whether to actually submit the particular artificial intelligence or machine learning job. Likewise, the estimated time for completion for a particular artificial intelligence or machine learning job may be given to a scheduler or other module of computer program instructions that can gather such information from a plurality of artificial intelligence and machine learning infrastructures (1100) (e.g., in a clustered environment) in order to identify which particular artificial intelligence and machine learning infrastructure (1100) the particular artificial intelligence or machine learning job should be submitted to…" paragraph 0252). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of Gold because the teaching of Gold would improve the system of Murphy, Akkapeddi and Sen by providing a technique for optimizing completion of tasks.
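Claim 7 adds a completion deadline to the task request. One way a candidate device could use such a deadline when deciding whether to accept the task is sketched below; the simple additive timing model and all names are assumptions for illustration, taken neither from Gold nor from the application.

```python
# Sketch: deadline-aware acceptance check for an offloaded ML task
# (claim 7). The additive timing model here is an assumption.
def can_meet_deadline(deadline_ms: float,
                      est_compute_ms: float,
                      queue_delay_ms: float,
                      network_rtt_ms: float) -> bool:
    """Return True if the expected completion time fits the deadline."""
    expected_ms = queue_delay_ms + est_compute_ms + network_rtt_ms
    return expected_ms <= deadline_ms

# A device with 40 ms of queued work, 80 ms of inference, and a 10 ms LAN
# round trip can accept a 150 ms deadline but should decline 100 ms.
assert can_meet_deadline(150, est_compute_ms=80, queue_delay_ms=40, network_rtt_ms=10)
assert not can_meet_deadline(100, est_compute_ms=80, queue_delay_ms=40, network_rtt_ms=10)
```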
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al., and further in view of U.S. Pub. No. 2021/0333759 A1 to Vasavada et al.

As to claim 11, Murphy as modified by Akkapeddi and Sen teaches the method of claim 1; however, it is silent with reference to wherein the second device having favorable computing resource availability as compared to respective computing resource availability of any other devices of the separate set of devices comprises the second device having favorable power computing resource availability as compared to respective power computing resource availability of any other devices of the separate set of devices. Vasavada teaches wherein the second device having favorable computing resource availability as compared to respective computing resource availability of any other devices of the separate set of devices comprises the second device having favorable power computing resource availability as compared to respective power computing resource availability of any other devices of the separate set of devices ("…In some examples, a head-mounted display (HMD) may have sufficient processing capabilities (e.g., central processing unit (CPU), memory, bandwidth, battery power, etc.) to offload computing tasks from the wristband system (e.g., a watch body, a watch band) to the HMD. Methods of the present disclosure may include determining a computing task of the wristband system that is suitable for processing on available computing resources of the HMD. By way of example, the computing task to be offloaded may be determined based on computing requirements, power consumption, battery charge level, latency requirements, or a combination thereof. The tasks offloaded to the HMD may include processing images captured by image sensors of the wristband system, a location determining task, a neural network training task, etc. The HMD may process the computing task(s) and return the results to the wristband system. In some examples, offloading computing tasks from the wristband system to the HMD may reduce heat generation, reduce power consumption and/or decrease computing task execution latency in the wristband system…" paragraph 0035). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of Vasavada because the teaching of Vasavada would improve the system of Murphy, Akkapeddi and Sen by offloading computing tasks based on power consumption to allow for optimal computer processing.

Claims 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., and further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al. as applied to claims 1 and 17 above, and further in view of U.S. Pat. No. 11,221,885 B1 issued to Ross et al.

As to claim 12, Murphy as modified by Akkapeddi and Sen teaches the method of claim 1; however, it is silent with reference to wherein the second device having favorable computing resource availability as compared to respective computing resource availability of any other devices of the separate set of devices comprises a ML module of the second device having favorable computing resource availability as compared to respective computing resource availability of ML modules of any other devices of the separate set of devices. Ross teaches wherein the second device having favorable computing resource availability as compared to respective computing resource availability of any other devices of the separate set of devices (Special Purpose Machine Learning Model Processors (117)) comprises a ML module (allocate to the machine learning model) of the second device having favorable computing resource availability as compared to respective computing resource availability of ML modules of any other devices of the separate set of devices (Allocation Engine (109)) ("…An allocation engine (109) identifies the special purpose machine learning model processors (117) and available resources in a datacenter that the processing system (100) can allocate to the machine learning model. The processing system (100) then allocates the special purpose processors (117) and other resources to the computational graph representation of the machine learning model for execution. The allocated special purpose machine learning model processors (117) then execute the operations of the computational dataflow graph representation to complete the machine learning task of the machine learning model…FIG. 5 is a flowchart of an example process 500 for allocating different types of machine learning models for a machine learning task. For convenience, the process 500 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification. For example, an example processing system, e.g., the processing system 100 of FIG. 1, appropriately programmed, can perform the process 500…An example processing system generates a computational dataflow graph for each machine learning model from numerous machine learning models of various types that perform a specific machine learning model task (502). For a given machine learning task, there may be several different types of machine learning models that complete the task. Some machine learning models may be more computationally precise or more computationally intensive and therefore require more resources during execution. The machine learning models assigned to a specific machine learning model task can be rebalanced based on usage, timing, the number of resources a model requires, and the precision necessary to perform a machine learning task…The processing system schedules and compiles the computational dataflow graphs representing the machine learning models into executable binaries as described above (504). The system then determines the amount of resources that are required to execute each executable binary (506). The example processing system determines the user requirements for completing the machine learning task including precision requirements and timing for task completion (508). The processing system determines the number and type of executable binaries, representing different types of machine learning models for the machine learning task to meet the user requirements for completing the task. Additionally or alternatively, the example processing system can maximize performance and efficiency of the machine learning model task by choosing the optimal combination of machine learning models that will run the machine learning task. This can be done, for example, by determining the machine learning model task requirements and obtaining the amount of machine learning model allocation resources available on a special purpose machine learning model processor or a datacenter (510). The example processing system then allocates the resources required to execute the determined number and type of executable binaries representing different types of machine learning models for the machine learning task (512)…" Col. 5 Ln. 36-46, Col. 8 Ln. 7-53). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of Ross because the teaching of Ross would improve the system of Murphy, Akkapeddi and Sen by allocating different types of machine learning models for a machine learning task (Ross Col. 8 Ln. 7-53).

As to claim 19, see the rejection of claim 12 above.
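Claims 11, 12, and 19 fold power availability and the availability of a dedicated ML module into the comparison between candidate devices. A composite score is one natural reading; the sketch below uses invented weights and field names purely to illustrate how those factors could be combined, and is not taken from the application or the cited art.

```python
# Sketch: composite availability score covering power resources (claim 11)
# and a per-device ML accelerator (claims 12 and 19). Weights are invented.
def availability_score(cpu_free: float, npu_free: float,
                       battery_frac: float, on_mains: bool) -> float:
    """Higher is more favorable; all inputs are fractions in [0, 1]."""
    power = 1.0 if on_mains else battery_frac  # mains power beats battery
    return 0.4 * npu_free + 0.3 * cpu_free + 0.3 * power

# A mains-powered TV with a lightly loaded NPU outranks a phone on battery.
tv = availability_score(cpu_free=0.5, npu_free=0.9, battery_frac=1.0, on_mains=True)
phone = availability_score(cpu_free=0.8, npu_free=0.6, battery_frac=0.35, on_mains=False)
assert tv > phone
```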
Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., and further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al. as applied to claim 1 above, and further in view of U.S. Pub. No. 2023/0144238 A1 to Kim et al.

As to claim 13, Murphy as modified by Akkapeddi and Sen teaches the method of claim 1; however, it is silent with reference to wherein the second device is selected based further on the second device having a predefined state. Kim teaches wherein the second device is selected based further on the second device having a predefined state (Resource Management Unit 200) ("…A machine learning resource 30 may be understood as a cloud service-based server farm composed of a plurality of physical servers or virtual servers. The machine learning resource 30 may perform a machine learning job under the control of the resource management unit 200…The resource management unit 200 monitors an idle resource of the machine learning resource 30, and processes a request for performing a machine learning job input through the user interface provision unit 300 using the monitoring results. In addition, the resource management unit 200 registers the machine learning job in a job queue, which was requested for execution, then requests and receives an expected execution time of the machine learning job from the execution time expectation unit 100, and, finally, determines when each machine learning job will be performed in the machine learning resource 30 using the expected execution time and the results of monitoring the idle resource of the machine learning resource 30…" paragraphs 0034/0036). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of Kim because the teaching of Kim would improve the system of Murphy, Akkapeddi and Sen by providing load balancing techniques that allow for task execution based on unused or idle computing resources.

As to claim 14, Murphy as modified by Akkapeddi and Sen teaches the method of claim 13; however, it is silent with reference to wherein the predefined state is an idle state. Kim teaches wherein the predefined state is an idle state (Resource Management Unit 200) (Kim paragraphs 0034/0036, quoted above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of Kim because the teaching of Kim would improve the system of Murphy, Akkapeddi and Sen by providing load balancing techniques that allow for task execution based on unused or idle computing resources.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., and further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al. as applied to claim 1 above, and further in view of U.S. Pub. No. 2022/0383116 A1 to Karjee et al.

As to claim 15, Murphy as modified by Akkapeddi and Sen teaches the method of claim 1; however, it is silent with reference to wherein the second device is a television or a set-top box. Karjee teaches wherein the second device is a television or a set-top box (smart television (TV)) ("…As shown in FIG. 3, at operation 301, the method 300 comprises assigning at least one DNN task by an IoT device to a first edge device. In an embodiment, the IoT device has a first computing capability, and the plurality of edge devices have a second computing capability higher than the first computing capability. For example, the IoT device may be a wearable device (e.g., a smart watch), Raspberry Pi™, etc. and the edge devices may be a smart phone, a personal computer (PC), laptop, smart television (TV), etc…" paragraph 0055). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of Karjee because the teaching of Karjee would improve the system of Murphy, Akkapeddi and Sen by providing a technique for offloading computer tasks to a smart television for processing.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over WO Pub. No. 2020/185233 A1 to Murphy et al. in view of U.S. Pat. No. 11,777,870 B1 issued to Akkapeddi et al., and further in view of U.S. Pub. No. 2023/0185621 A1 to Sen et al. as applied to claim 1 above, and further in view of U.S. Pub. No. 2021/0185363 A1 to Paiement et al.

As to claim 16, Murphy as modified by Akkapeddi and Sen teaches the method of claim 10; however, it is silent with reference to wherein the first device is a camera or a light switch. Paiement teaches the method of claim 10, wherein the first device is a camera or a light switch (Users/Cameras 202a, 204a) ("…The system 200a can include a plurality of users/cameras 202a, 204a. The cameras can be part of a mobile device such as a mobile phone, tablet computer, or drone. Each user/camera 202a, 204a can capture a video content stream (or a group of images) and provide each video content stream to an aggregation component 206a, which has the function of aggregating the video content provided by each user/camera 202a, 204a. Further, a feedback component 208a that also receives video content from the user/cameras 202a, 204a can function as providing feedback to the aggregation component 206a to process or adjust the aggregation of the video content according to the feedback. Such feedback can be from user-generated input or discerned from machine learning techniques. The aggregated video content can be provided by the aggregation component 206a to the machine learning (ML)/base action and content selection component 220a…"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Murphy, Akkapeddi and Sen with the teaching of Paiement because the teaching of Paiement would improve the system of Murphy, Akkapeddi and Sen by providing a camera for offloading computer tasks for optimal processing.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. WO Pub. No. 2016/144220 A1 to Huang et al., directed to a method of estimating computational network resources required to perform a computational task. U.S. Pub. No. 2017/0279734 A1 to Costa et al., directed to a system for allocating computing tasks to computer resources in distributed computing environments. U.S. Pub. No. 2022/0413941 A1 to Ramtekkar et al., directed to an electronic device that includes a processor and memory storing executable instructions that, when executed, cause the processor to determine availability of memory resources and processing resources of multiple computing devices. U.S. Pub. No. 2022/0308917 A1 to Sivathanu et al., directed to a method for transparently and preemptively migrating deep learning training (DLT) jobs and inferences from one group of processing resources in the cloud to another. U.S. Pat. No. 11,570,257 B1 issued to Tanach et al., directed to a system and method for communicating artificial intelligence (AI) tasks between AI resources.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES E ANYA, whose telephone number is (571) 272-3757. The examiner can normally be reached Mon-Fri, 9-6pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KEVIN YOUNG, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES E ANYA/
Primary Examiner, Art Unit 2194

Prosecution Timeline

Oct 27, 2023 — Application Filed
Feb 27, 2026 — Non-Final Rejection, §101 and §103 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12591471
KNOWLEDGE GRAPH REPRESENTATION OF CHANGES BETWEEN DIFFERENT VERSIONS OF APPLICATION PROGRAMMING INTERFACES
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12591455
PARAMETER-BASED ADAPTIVE SCHEDULING OF JOBS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12585510
METHOD AND SYSTEM FOR AUTOMATED EVENT MANAGEMENT
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579014
METHOD AND A SYSTEM FOR PROCESSING USER EVENTS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12572393
CONTAINER CROSS-CLUSTER CAPACITY SCALING
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+33.5%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 891 resolved cases by this examiner. Grant probability derived from career allow rate.
