Prosecution Insights
Last updated: April 19, 2026
Application No. 17/705,873

MULTI-LEVEL COORDINATED INTERNET OF THINGS ARTIFICIAL INTELLIGENCE

Status: Non-Final OA (§103)
Filed: Mar 28, 2022
Examiner: MILLER, ALEXANDRIA JOSEPHINE
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)
Grant Probability: 18% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 5m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allowance Rate: 18% (grants only 18% of cases; 5 granted / 27 resolved; -36.5% vs Tech Center average)
Interview Lift: +71.4% (strong lift; allowance with vs without interview, among resolved cases with an interview)
Typical Timeline: 4y 5m average prosecution; 40 applications currently pending
Career History: 67 total applications across all art units
Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 27 resolved cases.

Office Action (§103)

DETAILED ACTION

Claims 1, 4-5, and 8-20 are presented for examination. This office action is in response to the submission filed 19-NOVEMBER-2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 28-MARCH-2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

The amendment filed 19-NOVEMBER-2025 in response to the previous office action mailed 15-SEPTEMBER-2025 has been entered. Claims 1, 4-5, and 8-20 remain pending in the application. With regard to the previous office action's rejection under § 101, the amendments to the claims have overcome the original rejection directed to the claims being software per se. With regard to the rejection under § 112, the amendments have overcome the previous § 112(b) rejection. With regard to the rejections under § 103, the amendments to the claims necessitated a new consideration of the art. The examiner agrees that the amendments overcome the original rejection of the independent claims over Lovegrove in view of Jo. A new § 103 rejection over the prior art has been provided over Lovegrove in view of Jo, further in view of previously presented Beaufays.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-5, and 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lovegrove (Pub. No. US 20220131925 A1, filed October 22, 2020, hereinafter Lovegrove) in view of Jo et al. (Pat. No. US 12175299 B2, filed April 6, 2021, hereinafter Jo), further in view of Beaufays et al. (Pub. No. US 20220293093 A1, filed March 10, 2021, hereinafter Beaufays).

Regarding claim 1: Claim 1 recites: A method comprising: receiving a request to perform a task; performing, using a first machine learning level, a first plurality of machine learning operations on Internet of Things (IoT) input data in an IoT ecosystem received from two or more IoT devices, wherein the first machine learning level includes one or more first level neural networks and the IoT input data is captured from a plurality of IoT devices, and wherein each individual IoT device of the plurality of IoT devices corresponds with a single first level neural network of the one or more first level neural networks; receiving, from the first machine learning level, one or more first machine learning outputs for each IoT device of the two or more IoT devices; executing, using a second machine learning level, a second plurality of machine learning operations on the one or more first machine learning outputs wherein the second machine learning level includes one or more second level neural networks and each second level neural network corresponds with a single IoT task of a single IoT device; obtaining, from the second machine learning level, one or more second machine learning outputs; running, using a third machine learning level, a third plurality of machine learning operations on the one or more second machine learning outputs, wherein the third learning level is based on output from two or more of the second plurality of machine learning operations; identifying, from the third machine learning level, an IoT output; and generating, based on the identifying, the IoT output for a first IoT device of the two or more IoT devices, wherein the output is configured to complete the tasks and the generating is based on the first machine learning level, the second machine learning level, and the third machine learning level being chained together.

Lovegrove discloses performing, using a first machine learning level, a first plurality of machine learning operations on Internet of Things (IoT) input data in an IoT ecosystem received from two or more IoT devices: Lovegrove teaches a machine learning model receiving sensor data as input (Paragraph 24), where the sensor data may be from an IoT device (Paragraph 1). The sensor data would therefore be IoT input data for the machine learning model to perform machine learning operations upon. Furthermore, Lovegrove teaches that additional data from additional sensors may be gathered (Paragraph 34), which under the same logic would be IoT input data from two or more IoT devices as they are separate from the first sensors.
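For orientation only, the three chained machine learning levels recited in claim 1 can be pictured roughly as follows. This is an editorial sketch, not the applicant's claimed implementation or any cited reference's code; every name in it is hypothetical, and `tiny_net` merely stands in for a trained neural network.

```python
# Hypothetical sketch of the claimed three-level chained ML pipeline.
# Each first-level network corresponds to one IoT device; each
# second-level network corresponds to one IoT task of one device;
# the third level operates on two or more second-level outputs.
from typing import Callable, Dict, List

Vector = List[float]

def tiny_net(weight: float) -> Callable[[Vector], Vector]:
    """Stand-in for a neural network: scales each input feature."""
    return lambda xs: [weight * x for x in xs]

# Level 1: one network per IoT device (keyed by device id).
first_level: Dict[str, Callable[[Vector], Vector]] = {
    "thermostat": tiny_net(0.5),
    "camera": tiny_net(2.0),
}

# Level 2: one network per (device, task) pair.
second_level: Dict[str, Callable[[Vector], Vector]] = {
    "thermostat/set_temp": tiny_net(1.5),
    "camera/detect_motion": tiny_net(0.25),
}

def third_level(outputs: List[Vector]) -> Vector:
    """Level 3 combines two or more second-level outputs."""
    return [sum(col) / len(col) for col in zip(*outputs)]

def run_pipeline(iot_input: Dict[str, Vector]) -> Vector:
    # Chain: device data -> level 1 -> level 2 -> level 3 -> IoT output.
    l1 = {dev: first_level[dev](data) for dev, data in iot_input.items()}
    l2 = [
        second_level["thermostat/set_temp"](l1["thermostat"]),
        second_level["camera/detect_motion"](l1["camera"]),
    ]
    return third_level(l2)

iot_output = run_pipeline({"thermostat": [20.0, 22.0], "camera": [1.0, 0.0]})
print(iot_output)  # [7.75, 8.25]
```

The only point of the sketch is the topology the claim recites: per-device first-level networks, per-task second-level networks, and a third level whose output is generated from the levels being chained together.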
Lovegrove discloses receiving, from the first machine learning level, one or more first machine learning outputs for each IoT device of the two or more IoT devices: Lovegrove teaches that the machine learning model outputs classifications for the input data that place it in one of two subsets (Paragraph 24). Furthermore, the additional sensor data may also be processed by this machine learning model (Paragraph 37), which would apply it to two or more IoT devices.

Lovegrove discloses executing, using a second machine learning level, a second plurality of machine learning operations on the one or more first machine learning outputs wherein the second machine learning level includes one or more second level neural networks and each second level neural network corresponds with a single IoT task of a single IoT device; obtaining, from the second machine learning level, one or more second machine learning outputs: Lovegrove teaches that a separate machine learning model generates, from the first subset output by the original machine learning model (Paragraph 30), an output that is first control instructions (Paragraph 31). The first subset would be the one or more first machine learning outputs, and the first control instructions would be the second machine learning outputs. Furthermore, Lovegrove teaches that the first control instructions generated by its second neural network may be used to perform a control operation in a target system (Paragraph 30). A control operation would be analogous to an IoT task wherein the target system is an IoT device.

Jo, in the same field of endeavor of reinforcement learning, discloses receiving a request to perform a task: Jo teaches receiving a request for an allocation of a resource, which would be a task (Column 2, lines 39-40). Lovegrove and Jo are analogous art to the present application because they are in the same field of endeavor of reinforcement learning.
Jo discloses running, using a third machine learning level, a third plurality of machine learning operations on the one or more second machine learning outputs wherein the third learning level is based on output from two or more of the second plurality of machine learning operations; and identifying, from the third machine learning level, an IoT output: Jo, in the same field of endeavor of reinforcement learning, teaches generating, from an inference request for an IoT device, an inference result using a neural network (Column 16, lines 40-52). The neural network would be the third machine learning level that identifies an IoT output, as the inference result is transmitted back to the IoT device (Column 16, lines 40-52). Lovegrove has previously taught using the output of one machine learning model as the input for another (Lovegrove, Paragraphs 30-31). As such, Lovegrove in view of Jo would disclose a method in which the machine learning model of Jo could be used as a continuation of the two machine learning levels of Lovegrove, wherein the machine learning model of Jo is based on the output of two or more of the second plurality of machine learning operations.

Lovegrove and Jo together disclose generating, based on the identifying, the IoT output for a first IoT device of the two or more IoT devices, wherein the output is configured to complete the tasks and the generating is based on the first machine learning level, the second machine learning level and the third machine learning level being chained together: Lovegrove teaches the chaining together of machine learning levels, as it teaches the use of the output of one machine learning model as the input of a second machine learning model (Lovegrove's first machine learning model produces sets of sensor data which act as input to a second machine learning model that produces control signal instructions) (Paragraphs 30-31). Furthermore, the machine learning models of Lovegrove may be used to identify a task to be performed on the sensor data (Paragraph 22), which would be output configured to complete the tasks. Jo further teaches a machine learning model that identifies an IoT output and generates the output (Column 16, lines 40-52), wherein the machine learning model could be combined with Lovegrove's teaching of multiple linked levels of machine learning models. Therefore, in order to form the first, second, and third machine learning levels that are chained together, the two chained-together levels of Lovegrove could be used. Then, the additional machine learning model of Jo could be used as a third machine learning level, because Lovegrove has already taught the method of connecting multiple machine learning models in this manner. Therefore the first and second machine learning levels of Lovegrove and the third level of Jo would be chained together in order to generate an IoT output.

Beaufays, in the same field of endeavor of reinforcement learning, discloses wherein the first machine learning level includes one or more first level neural networks and the IoT input data is captured from a plurality of IoT devices, and wherein each individual IoT device of the plurality of IoT devices corresponds with a single first level neural network of the one or more first level neural networks: Beaufays teaches client devices with corresponding machine learning models (Paragraph 3), wherein the machine learning model processes data captured by the client device (Paragraph 28). Therefore, the input data is captured from a plurality of devices, wherein each individual device corresponds with a single machine learning model. Furthermore, Lovegrove provides recurrent neural networks (Lovegrove, Paragraph 24) as the machine learning models of Beaufays and IoT devices that may be used in the methods of Beaufays.
Beaufays and the present application are analogous art because they are in the same field of endeavor of reinforcement learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove, the teachings of Jo, and the teachings of Beaufays. This would have provided the advantage of improving the utilization of the processing device (Jo, Column 6, lines 30-35) as well as the advantage of improving performance of the machine learning model and avoiding catastrophic forgetting among models (Beaufays, Paragraph 2).

Regarding claim 4, which depends upon claim 3: Claim 4 recites: The method of claim 3, wherein the first machine learning operation is an embedding operation of the IoT input data. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 3 upon which claim 4 depends. Furthermore, Lovegrove discloses the limitations of claim 4: Lovegrove teaches that, from the original sensor data, a machine learning model may classify the data into one of two subsets (Paragraph 24) that denote a normal or irregular operating mode (Paragraph 25). This classification would be an embedding of the IoT input data as it stores additional information about the data.

Regarding claim 5, which depends upon claim 3: Claim 5 recites: The method of claim 3, wherein the first level neural networks are independent of any IoT task to be performed by the plurality of IoT devices. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 3 upon which claim 5 depends. Furthermore, Lovegrove discloses the limitation of claim 5: Lovegrove teaches that the first machine learning model only classifies the sensor data and is not connected to the first control signal of the second machine learning model, which controls a specific task to be performed (Paragraph 24). Therefore the first level neural networks are independent of any IoT task to be performed, although the classification contains information about the normality of previous operations.

Regarding claim 8, which depends upon claim 6: Claim 8 recites: The method of claim 6, wherein the second level neural networks are based on the IoT input data and based on IoT task data. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 6 upon which claim 8 depends. Furthermore, Lovegrove discloses the limitation of claim 8: Lovegrove teaches that the first subset that acts as input for the second level neural networks is composed of sensor input data (which may be from an IoT device) that displays normal behavior as opposed to irregular behavior during tasks (Paragraph 25). The classified sensor data that the second neural networks run upon would therefore be both IoT input data and IoT task data as it describes an IoT task.

Regarding claim 9, which depends upon claim 8: Claim 9 recites: The method of claim 8, wherein the second machine learning operation is an embedding operation of the IoT input data and the IoT task data. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 8 upon which claim 9 depends. Furthermore, Beaufays discloses the limitation of claim 9: Beaufays teaches generating a speaker embedding from audio data (Paragraph 50). Here the specific utterance would be the particular task data and the audio data would be the input data, wherein this data is embedded with knowledge of the speaker. Lovegrove in view of Jo has previously taught the use of IoT devices, which may be used alongside the method of Beaufays. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove in view of Jo and the teachings of Beaufays.
This would have provided the advantage of improving performance of the machine learning model and avoiding catastrophic forgetting among models (Beaufays, Paragraph 2).

Regarding claim 10, which depends upon claim 1: Claim 10 recites: The method of claim 1, wherein the third machine learning level includes one or more third level neural networks. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 1 upon which claim 10 depends. Furthermore, Beaufays discloses the limitation of claim 10: Beaufays teaches machine learning models that are neural networks (Paragraph 3), wherein its global model is a third machine learning level as it is a model that receives input from multiple client models (Paragraph 3). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove in view of Jo and the teachings of Beaufays. This would have provided the advantage of improving performance of the machine learning model and avoiding catastrophic forgetting among models (Beaufays, Paragraph 2).

Regarding claim 11, which depends upon claim 10: Claim 11 recites: The method of claim 10, wherein each individual third level neural network receives input from multiple second level neural networks of the second machine learning level. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 10 upon which claim 11 depends. Furthermore, Beaufays discloses the limitation of claim 11: Beaufays teaches that the global machine learning model (or models) can receive client gradients from multiple client device models (Paragraph 3). These client gradients are input from multiple neural networks that are received by the global model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove in view of Jo and the teachings of Beaufays. This would have provided the advantage of improving performance of the machine learning model and avoiding catastrophic forgetting among models (Beaufays, Paragraph 2).

Regarding claim 12, which depends upon claim 1: Claim 12 recites: The method of claim 1, the method further comprises: monitoring, before the performing, for an IoT request to perform an IoT task; detecting, based on the monitoring and before the performing, the IoT request; generating, based on the IoT output, an IoT response; and responding, based on the IoT response, to the IoT request. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 1 upon which claim 12 depends. Furthermore, Jo discloses the limitation of claim 12: Jo teaches receiving an inference request from a user terminal, which may be an IoT device (Column 16, lines 40-52). Receiving the request indicates both monitoring, before the performing, for an IoT request to perform an IoT task and detecting, based on the monitoring and before the performing, the IoT request. Furthermore, a response is generated to transmit back to the IoT device (Column 16, lines 40-52). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove in view of Jo further in view of Beaufays. This would have provided the advantage of improving the utilization of the processing device (Jo, Column 6, lines 30-35).
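The monitor/detect/respond flow recited in claim 12 can be illustrated with a minimal queue-based sketch. This is an editorial illustration only: all names are hypothetical, and `run_pipeline` is a stand-in for whatever the claimed machine learning levels actually compute.

```python
# Hypothetical sketch of claim 12's flow: monitor for an IoT request,
# detect it, generate an IoT response from the pipeline's IoT output,
# and respond to the request.
from queue import Queue, Empty

def handle_requests(requests: Queue, run_pipeline) -> list:
    responses = []
    while True:
        try:
            request = requests.get_nowait()  # monitoring / detecting
        except Empty:
            break  # no pending IoT requests
        iot_output = run_pipeline(request["input"])  # the claimed ML levels
        # Generate an IoT response based on the IoT output, then respond.
        responses.append({"task": request["task"], "response": iot_output})
    return responses

q = Queue()
q.put({"task": "set_temp", "input": [21.0]})
results = handle_requests(q, run_pipeline=lambda xs: [x * 2 for x in xs])
print(results)
```

A real system would monitor continuously rather than draining a queue once, but the detect-then-respond ordering the claim recites is the same.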
Regarding claim 13, which depends upon claim 12: Claim 13 recites: The method of claim 12, wherein the IoT ecosystem contains a plurality of IoT devices, and the IoT input data is related to a subset of the plurality of IoT devices. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 12 upon which claim 13 depends. Furthermore, Jo discloses the limitation of claim 13: Jo teaches that requests may come from a plurality of user terminals, which may be IoT devices (Column 16, lines 40-52). Therefore, the request would be input data related to the subset of IoT devices. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove in view of Jo further in view of Beaufays. This would have provided the advantage of improving the utilization of the processing device (Jo, Column 6, lines 30-35).

Regarding claim 14, which depends upon claim 13: Claim 14 recites: The method of claim 13, wherein the first machine learning level includes a plurality of first level neural networks, and the first plurality of machine learning operations is performed by a subset of the first level neural networks that correspond to the subset of the plurality of IoT devices. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 13 upon which claim 14 depends. Furthermore, Beaufays discloses the limitation of claim 14: Beaufays, in the same field of endeavor of reinforcement learning, teaches client devices with corresponding machine learning models (Paragraph 3), wherein the machine learning model processes data captured by the client device (Paragraph 28). Therefore, the input data is captured from a plurality of devices, wherein each individual device corresponds with a single machine learning model. Furthermore, Lovegrove teaches that its machine learning model may be a recurrent neural network (Paragraph 24).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove in view of Jo and the teachings of Beaufays. This would have provided the advantage of improving performance of the machine learning model and avoiding catastrophic forgetting among models (Beaufays, Paragraph 2).

Regarding claim 15, which depends upon claim 12: Claim 15 recites: The method of claim 12, wherein the IoT task includes IoT task data, and the second machine learning level includes one or more second level neural networks configured to operate on the IoT input data and the IoT task data. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 12 upon which claim 15 depends. Furthermore, Lovegrove discloses the limitation of claim 15: Lovegrove teaches that the first control instructions generated by its second neural network may be used to perform a control operation in a target system (Paragraph 30). A control operation would be analogous to an IoT task wherein the target system is an IoT device. Furthermore, Lovegrove teaches that the first subset that acts as input for the second level neural networks is composed of sensor input data (which may be from an IoT device) that displays normal behavior as opposed to irregular behavior during tasks (Paragraph 25). The classified sensor data that the second neural networks run upon would therefore be both IoT input data and IoT task data as it describes an IoT task.

Regarding claim 16, which depends upon claim 15: Claim 16 recites: The method of claim 15, wherein the third machine learning level includes one or more third level neural networks configured to operate on the IoT input data and the IoT task data. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 15 upon which claim 16 depends. Furthermore, Lovegrove discloses the limitation of claim 16: Lovegrove teaches that the first subset that acts as input for neural networks is composed of sensor input data (which may be from an IoT device) that displays normal behavior as opposed to irregular behavior during tasks (Paragraph 25). The classified sensor data that the neural networks run upon would therefore be both IoT input data and IoT task data as it describes an IoT task. This, in combination with Jo, which teaches the third machine learning models, would teach the above limitation.

Regarding claim 17, which depends upon claim 1: Claim 17 recites: The method of claim 1, wherein the method further comprises: updating, using a training algorithm, at least one second level neural network of the second machine learning level; and updating, using a second training algorithm, at least one third level neural network of the third machine learning level. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 1 upon which claim 17 depends. Furthermore, Beaufays discloses updating, using a training algorithm, at least one second level neural network of the second machine learning level: Beaufays teaches that client gradients are derived from loss functions used to train the client machine learning models (Paragraph 21), which are analogous to the second level machine learning models. The loss function would be the training algorithm that is updated during the training of the model. Furthermore, Lovegrove in view of Jo has previously taught the use of neural networks. Furthermore, Beaufays discloses updating, using a second training algorithm, at least one third level neural network of the third machine learning level: Beaufays teaches that the global model uses the client gradient to update (Paragraph 3). The client gradient would be the second training algorithm.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove in view of Jo and the teachings of Beaufays. This would have provided the advantage of improving performance of the machine learning model and avoiding catastrophic forgetting among models (Beaufays, Paragraph 2).

Regarding claim 18, which depends upon claim 17: Claim 18 recites: The method of claim 17, wherein the training algorithm and the second training algorithm share a loss function. Lovegrove in view of Jo further in view of Beaufays teaches the method of claim 17 upon which claim 18 depends. Furthermore, Beaufays discloses the limitation of claim 18: Beaufays teaches that the client gradient (the second training algorithm) represents a value of the loss function (the training algorithm) (Paragraph 21). This would be equivalent to sharing the loss function between the two algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove in view of Jo and the teachings of Beaufays. This would have provided the advantage of improving performance of the machine learning model and avoiding catastrophic forgetting among models (Beaufays, Paragraph 2).

Claim 19 recites a system that parallels the method of claim 1. Therefore, the analysis discussed above with respect to claim 1 also applies to claim 19. Accordingly, claim 19 is rejected based on substantially the same rationale as set forth above with respect to claim 1. Claim 20 recites a product that parallels the method of claim 1. Therefore, the analysis discussed above with respect to claim 1 also applies to claim 20. Accordingly, claim 20 is rejected based on substantially the same rationale as set forth above with respect to claim 1.
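The examiner's reading of claims 17-18 (a client model and a global model each updated by a training algorithm, with the client gradient, derived from a shared loss function, driving the global update) resembles a federated-style gradient step. The following is a purely illustrative numeric sketch by the editor, with hypothetical names; it is not taken from Beaufays or the application.

```python
# Hypothetical sketch of claims 17-18: a second-level (client) model and a
# third-level (global) model are each updated, and both updates derive
# from the same shared loss function (squared error here).
def loss(pred: float, target: float) -> float:
    return (pred - target) ** 2

def grad(w: float, x: float, target: float) -> float:
    # d/dw of (w*x - target)^2 = 2 * (w*x - target) * x
    return 2 * (w * x - target) * x

LR = 0.1
client_w, global_w = 0.5, 0.5
x, target = 1.0, 2.0

# Training algorithm 1: update the second-level model from the shared loss.
client_grad = grad(client_w, x, target)
client_w -= LR * client_grad

# Training algorithm 2: update the third-level model using the client's
# gradient, which was derived from the same loss function.
global_w -= LR * client_grad

print(client_w, global_w)
```

The design point mirrored here is that the global (third-level) update consumes the client (second-level) gradient rather than raw data, which is how the cited Beaufays mapping describes the relationship between the two training algorithms.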
Response to Arguments Applicant’s arguments filed 19-NOVEMBER-2025 have been fully considered, but the examiner believes that not all are fully persuasive. Regarding the applicant’s remarks on the non-final office action’s 103 rejection of the claims, the applicant argues that Lovegrove in view of Jo do not teach the amended limitations of these claims. As such, the applicant argues that all claims dependent on the above would additionally not be obvious under 103. The examiner agrees that the amendments to claim 1 have overcome the original rejection under Lovegrove in view of Jo. However, the examiner believes that Lovegrove in view of Jo further in view of Beaufays does teach the amended limitations and respectfully requests applicant’s consideration of the following: Lovegrove discloses executing, using a second machine learning level, a second plurality of machine learning operations on the one or more first machine learning outputs wherein the second machine learning level includes one or more second level neural networks and each second level neural network corresponds with a single IoT task of a single IoT device; obtaining, from the second machine learning level, one or more second machine learning outputs: Lovegrove teaches that separate machine learning model generates from the first subset output by the original machine learning model (Paragraph 30) an output that is first control instructions (Paragraph 31). The first subset would be the one or more first machine learning outputs, and the first control instructions would be the second machine learning outputs. Furthermore, Lovegrove teaches that the first control instructions generated by its second neural network may be used to perform a control operation in a target system (Paragraph 30). A control operation would be analogous to an IoT task wherein the target system is an IoT device. 
Jo, in the same field of endeavor of reinforcement learning, discloses receiving a request to perform a task: Jo teaches receiving a request for an allocation of a resource, which would be a task (Column 2, lines 39-40). Lovegrove and Jo are analogous art to the present application because they are in the same field of endeavor of reinforcement learning.

Lovegrove and Jo together disclose generating, based on the identifying, the IoT output for a first IoT device of the two or more IoT devices, wherein the output is configured to complete the tasks and the generating is based on the first machine learning level, the second machine learning level, and the third machine learning level being chained together: Lovegrove teaches the chaining together of machine learning levels, as it teaches the use of the output of one machine learning model as the input of a second machine learning model (Lovegrove’s first machine learning model produces sets of sensor data which serve as input to a second machine learning model that produces control signal instructions) (Paragraphs 30-31). Furthermore, the machine learning models of Lovegrove may be used to identify a task to be performed on the sensor data (Paragraph 22), which would be output configured to complete the tasks. Jo further teaches a machine learning model that identifies an IoT output and generates the output (Column 16, lines 40-52), wherein this machine learning model could be combined with Lovegrove’s teaching of multiple linked levels of machine learning models. Therefore, in order to form the first, second, and third machine learning levels that are chained together, the two chained-together levels of Lovegrove could be used. Then, the additional machine learning model of Jo could be used as a third machine learning level, because Lovegrove has already taught the method of connecting multiple machine learning models in this manner.
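The three-level chaining rationale above, where each level's output becomes the next level's input, can be sketched as a simple pipeline. Every function here is an illustrative stand-in for a neural network, not an implementation from Lovegrove or Jo:

```python
# Illustrative sketch (hypothetical names): three machine learning
# "levels" chained so that each level's output is the next level's
# input, mirroring the mapping of Lovegrove's two linked models plus
# Jo's model appended as a third level.

def level_one(sensor_readings):
    """First level: reduce raw IoT sensor data to a feature subset."""
    return [r for r in sensor_readings if r > 0.5]  # stand-in for a net

def level_two(feature_subset):
    """Second level: turn the feature subset into control instructions."""
    return {"actuate": len(feature_subset) >= 2}

def level_three(control_instructions):
    """Third level: produce the final IoT output for the target device."""
    return "start" if control_instructions["actuate"] else "idle"

def chained_pipeline(sensor_readings):
    # chaining: the output of each level is the input of the next
    return level_three(level_two(level_one(sensor_readings)))

print(chained_pipeline([0.9, 0.2, 0.7]))  # -> start
```

Nothing in the chaining step depends on how many levels exist, which is the point of the combination argument: a third level can be appended in exactly the way the first two are already connected.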
Therefore, the first and second machine learning levels of Lovegrove and the third level of Jo would be chained together in order to generate an IoT output.

Beaufays, in the same field of endeavor of reinforcement learning, discloses wherein the first machine learning level includes one or more first level neural networks and the IoT input data is captured from a plurality of IoT devices, and wherein each individual IoT device of the plurality of IoT devices corresponds with a single first level neural network of the one or more first level neural networks: Beaufays teaches client devices with corresponding machine learning models (Paragraph 3), wherein each machine learning model processes data captured by its client device (Paragraph 28). Therefore, the input data is captured from a plurality of devices, wherein each individual device corresponds with a single machine learning model. Furthermore, Lovegrove provides recurrent neural networks (Lovegrove, Paragraph 24) that may serve as the machine learning models of Beaufays, as well as IoT devices that may be used in the methods of Beaufays. Beaufays and the present application are analogous art because they are in the same field of endeavor of reinforcement learning.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Lovegrove and the teachings of Jo. This would have provided the advantage of improving the utilization of the processing device (Jo, Column 6, lines 30-35), as well as the advantage of improving performance of the machine learning model and avoiding catastrophic forgetting among models (Beaufays, Paragraph 2).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRIA JOSEPHINE MILLER, whose telephone number is (703) 756-5684.
The examiner can normally be reached Monday-Thursday, 7:30 am - 5:00 pm, and every other Friday, 7:30 am - 4:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.J.M./
Examiner, Art Unit 2142

/HAIMEI JIANG/
Primary Examiner, Art Unit 2142

Prosecution Timeline

Mar 28, 2022
Application Filed
Apr 22, 2025
Non-Final Rejection — §103
Jun 18, 2025
Interview Requested
Jul 10, 2025
Applicant Interview (Telephonic)
Jul 10, 2025
Examiner Interview Summary
Jul 18, 2025
Response Filed
Sep 10, 2025
Final Rejection — §103
Oct 14, 2025
Interview Requested
Oct 22, 2025
Applicant Interview (Telephonic)
Oct 22, 2025
Examiner Interview Summary
Nov 04, 2025
Response after Non-Final Action
Dec 05, 2025
Request for Continued Examination
Dec 18, 2025
Response after Non-Final Action
Feb 11, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566943
METHOD AND APPARATUS WITH NEURAL NETWORK QUANTIZATION
2y 5m to grant Granted Mar 03, 2026
Patent 12481890
SYSTEMS AND METHODS FOR APPLYING SEMI-DISCRETE CALCULUS TO META MACHINE LEARNING
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
18%
Grant Probability
90%
With Interview (+71.4%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.
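The headline grant probability appears to follow directly from the career allow rate stated above (5 grants out of 27 resolved cases); a quick arithmetic check, with the figures taken from this report:

```python
# Career allow rate from the figures shown above: 5 granted of 27 resolved.
granted, resolved = 5, 27
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # -> 18.5%, displayed as the ~18% grant probability
```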
