Prosecution Insights
Last updated: April 19, 2026
Application No. 17/790,418

Distributed Machine-Learned Models Across Networks of Interactive Objects

Final Rejection §103
Filed: Jun 30, 2022
Examiner: NAULT, VICTOR ADELARD
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 8 granted / 13 resolved; +6.5% vs TC avg)
Interview Lift: +83.3% (strong lift; based on resolved cases with interview)
Avg Prosecution: 3y 11m typical timeline; 30 currently pending
Career History: 43 total applications across all art units

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)
Based on career data from 13 resolved cases; deltas are relative to the Tech Center average estimate.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This Office Action is responsive to Applicants' Amendment filed on 08/28/2025, in which claims 1, 3, 6, 11, 13, 16, and 18 are amended. Claim 17 is cancelled. No new claims are added. Claims 1-16 and 18-20 are currently pending.

Response to Arguments

With regards to the rejections of claims 1-20 under 35 U.S.C. 101 as directed to abstract ideas, Applicant’s arguments that the claims as amended overcome the rejections are persuasive. The transmission of data recited in the claims serves to provide the technical improvement of enabling execution of machine-learned models on edge devices without the high latency or privacy/security risks of communication with a centralized server.

With regards to the rejections of claims 1, 5-11, and 15 under 35 U.S.C. 103 as being unpatentable over Teerapittayanon et al. “Distributed Deep Neural Networks over the Cloud, the Edge and End Devices” (Teerapittayanon), in view of Poupyrev et al. (International Patent Application Publication No. WO 2018/236702) (Poupyrev), Examiner first notes that the claims as amended overcome the rejections, necessitating rejections under a new combination of references, as described below. Applicant additionally argues on page 14 of the Remarks that “However, the DDNN framework of Teerapittayanonin describes a static partitioning of a deep neural network and aggregation of outputs, it does not disclose or suggest ‘determining, by the computing system, a subset of respective portions of a machine-learned model for execution by such interactive object during at least a portion of the activity,’ as expressly recited in Claim 1”. Examiner however notes that Teerapittayanon does teach determining, by the computing system, a subset of respective portions of a machine-learned model for execution ((Teerapittayanon Pg. 3) “We can extend this model to include a single end device, as shown in (b), by performing a portion of the DNN inference computation on the device rather than sending the raw input to the cloud”) by such interactive object ((Teerapittayanon Pg. 1) “the number of end devices, including Internet of Things (IoT) devices, has increased dramatically. These devices are appealing targets for machine learning applications as they are often directly connected to sensors (e.g., cameras, microphones, gyroscopes)”, an end device with a sensor such as a camera is interactive) during at least a portion of the activity (Teerapittayanon Pg. 7, Fig. 5 shows interactive objects with cameras are used to monitor activities such as driving cars). No limitations within claim 1 as currently amended clarify that determining, by the computing system, a subset of respective portions of a machine-learned model cannot be a static partitioning of a deep neural network.

With regards to the rejections of claims 16-20 under 35 U.S.C. 103 as being unpatentable over Poupyrev, in view of Teerapittayanon, further in view of Zhao et al. “DeepThings: Distributed Adaptive Deep Learning Inference on Resource-Constrained IoT Edge Clusters” (Zhao), Applicant’s arguments that the claims as amended overcome the rejection have been considered, but are not found persuasive. Applicant states on page 16 of the Remarks that “The Examiner assets that Teerapittayanon allegedly teaches ‘configuration data,’ citing the disclosure of ‘preconfigured exit thresholds.’ See Teerapittayanon at p. 5.
However, Teerapittayanon describes a confidence-based exit mechanism” and “Teerapittayanonin's ‘preconfigured exit thresholds’ are decision criteria, not ‘configuration data,’ as described by the instant claims. This is especially true because the ‘preconfigured exit thresholds’ do not identify or allocate specific layers of the neural network for execution at an interactive object”. Examiner respectfully disagrees. No limitation within claim 16, nor any description within the specification, provides features of at least the “first configuration data” that the preconfigured exit thresholds of Teerapittayanon would not encompass. The preconfigured exit thresholds of Teerapittayanon do allocate layers of a neural network for execution at specific devices, as stated in Teerapittayanon: (Teerapittayanon Pg. 3) “If at an early exit point a sample is deemed confident based on the entropy of the computed probability vector for target classes, then it is classified and no further computation is performed by the higher NN layers. In DDNN, exit points are placed at physical boundaries (e.g., between the last NN layer on an end device and the first NN layer in the next higher layer of the distributed computing hierarchy such as the edge or the cloud)”. The preconfigured exit thresholds determine how the layers are split between different devices, and if layers are executed at all for classification of a given sample. Examiner interprets the preconfigured exit thresholds to be both decision criteria and configuration data for the “DDNN” (distributed deep neural network) taught by Teerapittayanon.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-9, 11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Teerapittayanon et al. “Distributed Deep Neural Networks over the Cloud, the Edge and End Devices”, hereinafter Teerapittayanon, in view of Cashmore (U.S. Patent Application Publication No. 2023/0263419), hereinafter Cashmore.

Regarding claim 1, Teerapittayanon teaches A computer-implemented method, comprising: identifying, by at least one computing device of a computing system, a set of interactive objects (Teerapittayanon Pg. 7, Fig. 4 shows computing devices, namely the local aggregator and the cloud aggregator, identifying edge devices to implement a classification neural network model, (Teerapittayanon Pg. 1) “At the same time, the number of end devices, including Internet of Things (IoT) devices, has increased dramatically.
These devices are appealing targets for machine learning applications as they are often directly connected to sensors (e.g., cameras, microphones, gyroscopes”), end/edge devices include interactive objects such as microphones) for monitoring an activity, (Teerapittayanon Pg. 7, Fig. 5 shows interactive objects with cameras are used to monitor activities such as driving cars) wherein the set of interactive objects are communicatively coupled over one or more networks, (Teerapittayanon Pg. 3) “It is widely expected that most of data generated by the massive number of IoT devices must be processed locally at the devices or at the edge, for otherwise the total amount of sensor data for a centralized cloud would overwhelm the communication network bandwidth”) wherein each interactive object comprises at least one respective sensor configured to generate sensor data associated with such interactive object; ((Teerapittayanon Pg. 1) “At the same time, the number of end devices, including Internet of Things (IoT) devices, has increased dramatically. These devices are appealing targets for machine learning applications as they are often directly connected to sensors (e.g., cameras, microphones, gyroscopes”) determining, by the computing system, a subset of respective portions of a machine-learned model for execution by such interactive object during at least a portion of the activity; ((Teerapittayanon Pg. 3) “We can extend this model to include a single end device, as shown in (b), by performing a portion of the DNN inference computation on the device rather than sending the raw input to the cloud. Using an exit point after device inference, we may classify those samples which the local network is confident about, without sending any information to the cloud. For more difficult cases, the intermediate DNN output (up to the local exit) is sent to the cloud, where further inference is performed using additional NN layers and a final classification decision is made”, part of the machine-learned model, the local network, is executed at the end device) generating, by the computing system, configuration data indicative of the subset of respective portions of the machine-learned model for execution by such interactive object during at least the portion of the activity; ((Teerapittayanon Pg. 5) “Inference in DDNN is performed in several stages using multiple preconfigured exit thresholds T (one element T at each exit point) as a measure of confidence in the prediction of the sample…We use a normalized entropy threshold as the confidence criteria (instead of unnormalized entropy as used in [3]) that determines whether to classify (exit) a sample at a particular exit point”, preconfigured exit thresholds that determine whether classification should finish at an edge device or continue to a different device or the cloud corresponds to configuration data indicating the portion of a machine learning model for execution at an interactive object) communicating, by the computing system, the configuration data to the subset of respective portions of the machine-learned model; ((Teerapittayanon Pg. 4) “While DDNN inference is distributed over the distributed computing hierarchy, the DDNN system can be trained on a single powerful server or in the cloud”, training, i.e. creating the machine learning model, on a server or in the cloud and then distributing the model over a hierarchy for inference, i.e.
use, corresponds to communicating by the computing system, to edge devices, configuration data indicating the portion of the machine learning model for execution on the edge devices) Cashmore teaches the following further limitation more explicitly than Teerapittayanon does: and processing, by the subset of respective portions of the machine-learned model, ((Cashmore [0128]) “The processor 201 of the wearable article 20 comprises an application processor and an AI hardware accelerator. The AI hardware accelerator may perform at least a component the machine-learning inference and updating operations”, a component of machine-learning inference operations corresponds to a subset of portions of a machine-learned model) the sensor data to generate data indicative of at least one inference associated with the activity ((Cashmore [0020]) “The first data may comprise activity data sensed by the at least one sensor of the wearable article. The generated inference may comprise an activity classification”, (Cashmore [0029]-[0032]) “the instructions, when executed by the processor, cause the processor to perform operations, the operations comprising:…(b) obtaining first data from at least one sensor of the wearable article;…(c) employing the current version of the machine-learned model to generate an inference using the first data;”) At the time of filing, one of ordinary skill in the art would have motivation to combine Teerapittayanon and Cashmore by taking the method of taking portions of a machine learning model, creating configuration data for them, and communicating the data to a set of interactive objects with sensors that monitor an activity, taught by Teerapittayanon, and having the portions of the machine learning model be configured to generate data indicating an inference related to the activity based on sensor data, taught by Cashmore. Deploying a machine-learned model, i.e. a trained machine learning model, to create inferences on an activity is a well-known application within the art that yields the predictable benefit of forming conclusions on a monitored activity in a cheaper manner than a human could provide. Such a combination would be obvious. Regarding claim 5, Teerapittayanon and Cashmore jointly teach The method of claim 1, wherein: Teerapittayanon further teaches: the configuration data for a first interactive object identifies an output of a second interactive object ((Teerapittayanon Pg. 5) “We now provide an example of the inference procedure for a DDNN which has multiple end devices and three exit points (configuration (e) in Figure 2): (1) Each end device first sends summary information to local aggregator”, a local aggregator is an interactive object, summary information from end devices is the output of at least one second interactive object) including one or more feature representations ((Teerapittayanon Pg. 3) “Note that the intermediate output can be designed to be much smaller than the sensor input (e.g., a raw image from a video camera), and therefore drastically reduce the network communication required”, intermediate output of a machine learning model corresponds to one or more feature representations) to be used as an input to the respective portion of the machine-learned model at the first interactive object (Teerapittayanon Pg. 5, Fig. 
2 shows at (e) that the local aggregator uses the output of the device neural network layers as input) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Teerapittayanon and Cashmore for the parent claim of claim 5, claim 1. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.

Regarding claim 6, Teerapittayanon and Cashmore jointly teach The method of claim 1, wherein: Teerapittayanon further teaches: the interactive object is configured, in response to the configuration data indicative of the subset of respective portions of the machine-learned model, to obtain the respective portion of the machine-learned model from at least one computing device remote from the interactive object ((Teerapittayanon Pg. 4) “While DDNN inference is distributed over the distributed computing hierarchy, the DDNN system can be trained on a single powerful server or in the cloud”, training, i.e. creating the machine learning model, on a server or in the cloud and then distributing the model over a hierarchy for inference, i.e. use, corresponds to obtaining the portion of the machine learning model for inference at the edge device (interactive object) from a remote computing device (the server or the cloud)) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Teerapittayanon and Cashmore for the parent claim of claim 6, claim 1. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.

Regarding claim 7, Teerapittayanon and Cashmore jointly teach The method of claim 1, wherein: Teerapittayanon further teaches: the configuration data for at least one interactive object ((Teerapittayanon Pg. 1) “the number of end devices, including Internet of Things (IoT) devices, has increased dramatically. These devices are appealing targets for machine learning applications as they are often directly connected to sensors (e.g., cameras, microphones, gyroscopes)”, devices such as microphones are interactive objects) includes the respective portion of the machine-learned model ((Teerapittayanon Pg. 2) “By training DDNN end-to-end, the network optimally configures lower NN layers to support local inference at end devices, and higher NN layers in the cloud to improve overall classification accuracy of the system”, lower neural network layers are a portion of a machine learning model) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Teerapittayanon and Cashmore for the parent claim of claim 7, claim 1. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.

Regarding claim 8, Teerapittayanon and Cashmore jointly teach The method of claim 1, wherein: Cashmore further teaches: the at least one respective sensor of at least one interactive object includes an inertial measurement unit ((Cashmore [0115]) “The electronics arrangement for the wearable article 20 comprises…one or more sensors…In the example of FIG. 2, the sensors 211 comprise a motion sensor 213 which may be an inertial measurement unit”, a wearable article is an interactive object) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Teerapittayanon and Cashmore for the parent claim of claim 8, claim 1.
No new embodiments are introduced, so the reason to combine is the same as for the parent claim. Regarding claim 9, Teerapittayanon and Cashmore jointly teach The method of claim 1, wherein: Cashmore further teaches: the set of interactive objects include at least one wearable device and at least one non-wearable device ((Cashmore [0131-0133]) “Any electronic device capable of communicating with a server and/or a wearable device over a wired or wireless communication network may function as a base station in accordance with the present invention. The base station may be a wireless device or a wired device. The wireless/wired device may be a mobile phone, tablet computer, gaming system, MP3 player, point-of-sale device, or wearable device such as a smart watch”, mobile phones are non-wearable devices and interactive objects) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Teerapittayanon and Cashmore for the parent claim of claim 9, claim 1. No new embodiments are introduced, so the reason to combine is the same as for the parent claim. Regarding claim 11, Claim 11 recites a system comprising a processor and a non-transitory computer-readable medium for performing the function of the method of claim 1. Specifically, claim 11 recites: one or more processors; and one or more non-transitory computer-readable media that collectively store instructions that when executed by the one or more processors cause the one or more processors to perform operations, the operations comprising: [the method of claim 1]. Cashmore recites: (Cashmore Abstract) “The electronics arrangement comprising a processor (201); and a memory (203), the at least one memory (203) storing instructions, the instructions, when executed by the processor (201), cause the processor (201) to perform operation”. All other limitations in claim 11 are substantially the same as those in claim 1, therefore the same rationale for rejection applies. Regarding claim 15, Claim 15 recites a system for performing the function of the method of claim 5. All other limitations in claim 15 are substantially the same as those in claim 5, therefore the same rationale for rejection applies. Claims 2-4 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Teerapittayanon, in view of Cashmore, further in view of Zhao et al. “DeepThings: Distributed Adaptive Deep Learning Inference on Resource-Constrained IoT Edge Clusters”, hereinafter Zhao. Regarding claim 2, Teerapittayanon and Cashmore jointly teach The method of claim 1, further comprising: Zhao teaches the following further limitations that neither Teerapittayanon, nor Cashmore teach: monitoring, by the at least one computing device, a respective resource state associated with each interactive object ((Zhao Pg. 3) “the Runtime System will register itself with the gateway device, which centrally monitors and coordinates work distribution and stealing. If its task queue runs empty, an IoT edge node will poll the gateway for devices with active work items and start stealing tasks by directly communicating with other DeepThings runtimes in a peer-to-peer fashion”, a gateway device monitoring work distribution based on available resources of nodes corresponds to monitoring, by a computing device, resource states for each object) of the set of interactive objects during the activity; ((Zhao Pg. 7) “In our experiments, we deploy YOLOv2 on top of DeepThings on a set of IoT devices in a WLAN. 
Our network setup consists of up to six edge and a single gateway device. We use Raspberry Pi 3 Model B (RPi3) as both edge and gateway platforms”, a Raspberry Pi, which is a single-board computer, is an interactive object, and an experiment is an activity)) and re-distributing execution of portions of the machine-learned model ((Zhao Pgs. 2-3) “DeepThings incorporates the following main innovations and contributions. 1) We propose a Fused Tile Partitioning (FTP) method for dividing convolutional layers into independently distributable tasks…2) We develop a distributed work stealing runtime system for IoT clusters to adaptively distribute FTP partitions in dynamic application scenarios”, dividing convolutional layers into tasks that are adaptively distributed to edge devices corresponds to re-distributing execution of portions of a machine learning model) to individual interactive objects of the set of interactive objects during the activity ((Zhao Pg. 7) “In our experiments, we deploy YOLOv2 on top of DeepThings on a set of IoT devices in a WLAN. Our network setup consists of up to six edge and a single gateway device. We use Raspberry Pi 3 Model B (RPi3) as both edge and gateway platforms”, a Raspberry Pi, which is a single-board computer, is an interactive object, and an experiment is an activity) based at least in part on the respective resource state associated with each interactive object ((Zhao Pg. 3) “If its task queue runs empty, an IoT edge node will poll the gateway for devices with active work items and start stealing tasks by directly communicating with other DeepThings runtimes in a peer-to-peer fashion”, re-allocating tasks based on devices finishing their task queue and thus having available resources corresponds to re-distributing portions of a machine learning model based on respective resource states of objects) At the time of filing, one of ordinary skill in the art would have motivation to combine Teerapittayanon, Cashmore, and Zhao by taking the method of taking a portion of a machine learning model, creating configuration data for it, communicating the data to a set of interactive objects with sensors that monitor an activity, and having the machine learning model be configured to generate data indicating an inference with portions of a machine-learned model based on sensor data, taught by Teerapittayanon and Cashmore, and monitoring the resource states of the interactive objects in order to re-distribute portions of a machine learning model based on the resource states, taught by Zhao. Zhao teaches that doing so provides the predictable benefit of: (Zhao Pg. 3) “dynamically balanc[ing] the workload among edge clusters to enable efficient locally distributed CNN inference under time-varying processing needs”. Such a combination would be obvious. Regarding claim 3, Teerapittayanon, Cashmore, and Zhao jointly teach The method of claim 2, Zhao further teaches: wherein determining the subset of respective portions of the machine-learned model comprises: determining a first respective portion of the machine-learned model for execution by a first interactive object and a second respective portion of the machine-learned model for execution by a second interactive object during a first time period of the activity; ((Zhao Pg. 
3) “Based on resource constraints of edge devices, a proper offloading point between gateway/edge nodes and partitioning parameters are generated in a one-time offline process…For inference, a DeepThings runtime is instantiated in each IoT device to manage task computation, distribution, and data communication. Its Data Frame Partitioner will partition any incoming data frames from local data sources into distributable and lightweight inference tasks according to the precomputed FTP parameters. The Runtime System in turn loads the pretrained weights and invokes an externally linked CNN inference engine to process the partitioned inference tasks”, partitioning tasks based on precomputed parameters, which are based on resource constraints of edge devices, and wherein the tasks are part of inference of a machine learning model, corresponds to determining portions of a machine learning model for execution at a first and a second object during a first time period) generating first configuration data indicative of the first respective portion of the machine-learned model for execution by the first interactive object first and second configuration data indicative of the second respective portion of the machine-learned model for execution by the second interactive object during the first time period of the activity; ((Zhao Pg. 2) “we propose DeepThings, a novel framework for locally distributed and adaptive CNN inference in resource-constrained IoT devices. DeepThings incorporates the following main innovations and contributions. 1) We propose a Fused Tile Partitioning (FTP) method for dividing convolutional layers into independently distributable tasks”, creating tasks, which are portions of the layers of a machine learning model, for IoT devices corresponds to generating configuration data for first and second portions of a machine learning model for execution by objects) and communicating to the first interactive object the first configuration data indicative of the first respective portion of the machine-learned model for execution by the first interactive object and communicating to the second interactive object the second configuration data indicative of the second respective portion of the machine-learned model for execution by the second interactive object ((Zhao Pg. 3) “For inference, a DeepThings runtime is instantiated in each IoT device to manage task computation, distribution, and data communication. Its Data Frame Partitioner will partition any incoming data frames from local data sources into distributable and lightweight inference tasks according to the precomputed FTP parameters”, a DeepThings runtime to distribute machine learning inference tasks to IoT devices and communicate associated data corresponds to communicating to objects configuration data corresponding to portions of a machine learning model for execution for first and second objects) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Teerapittayanon, Cashmore, and Zhao for the parent claim of claim 3, claim 2. No new embodiments are introduced, so the reason to combine is the same as for the parent claim. 
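For orientation only: the claim 1 and claim 3 mappings above turn on splitting a model's layers into per-device portions and sending each interactive object configuration data describing its portion. The sketch below is a minimal Python illustration of that idea; the assign_portions helper, the device names, and the layer labels are hypothetical and are not taken from the application, Teerapittayanon, Cashmore, or Zhao.

    from typing import Dict, List

    def assign_portions(layers: List[str], devices: List[str]) -> Dict[str, dict]:
        """Split an ordered list of layer names into contiguous portions, one per device,
        and build a per-device configuration record naming its portion and its upstream feed."""
        per_device = max(1, len(layers) // len(devices))
        config = {}
        for i, device in enumerate(devices):
            start = i * per_device
            end = len(layers) if i == len(devices) - 1 else start + per_device
            config[device] = {
                "layers": layers[start:end],                # the "respective portion" for this object
                "upstream": devices[i - 1] if i else None,  # whose output feeds this portion
            }
        return config

    # Example: two wearables and a gateway sharing a five-layer network.
    print(assign_portions(["conv1", "conv2", "conv3", "fc1", "fc2"],
                          ["wearable-a", "wearable-b", "gateway"]))

Communicating each record to its device, and regenerating the records when the split changes, corresponds to the "generating" and "communicating" steps that the rejection maps onto the cited references.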
Regarding claim 4, Teerapittayanon, Cashmore, and Zhao jointly teach The method of claim 3, Zhao further teaches: wherein re-distributing execution of portions of the machine-learned model to individual interactive objects of the set of interactive objects during the activity comprises: determining that the first respective portion of the machine-learned model is to be executed by the second interactive object during a second time period of the activity; ((Zhao Pg. 3) “If its task queue runs empty, an IoT edge node will poll the gateway for devices with active work items and start stealing tasks by directly communicating with other DeepThings runtimes in a peer-to-peer fashion”, (Zhao Pg. 2) “We propose a Fused Tile Partitioning (FTP) method for dividing convolutional layers into independently distributable tasks”, re-allocating tasks from one edge node to another, wherein the tasks are portions of a machine learning model to execute, based on devices finishing their task queue corresponds to determining a first portion of a machine learning model is to be executed by a different object during a second time period) generating configuration data indicative of the first respective portion of the machine-learned model for execution by the second interactive object during the second time period of the activity; ((Zhao Pg. 5) “To serve incoming stealing requests from other devices, a Stealing Server Thread is continuously running as part of the Work Stealing Service in each edge device. Once a steal request has been received, the Request Handler inside the stealing server will get a task from the local task queue and reply with the corresponding partition input data to the stealer. Additionally, a Partition Result Collection Thread will collect all the stolen and local partition results from the result queue and send them to the gateway”, replying with partition input data in response to a steal request of a machine learning task from one edge device to another corresponds to generating configuration data indicating a portion of a machine learning model to execute at a second object during a second time period) and communicating the configuration data indicative of the first respective portion of the machine-learned model for execution by the second interactive object during the second time period of the activity ((Zhao Pg. 3) “the Runtime System will register itself with the gateway device, which centrally monitors and coordinates work distribution and stealing. If its task queue runs empty, an IoT edge node will poll the gateway for devices with active work items and start stealing tasks by directly communicating with other DeepThings runtimes in a peer-to-peer fashion”, a gateway device coordinating work distribution of tasks, which are portions of a machine learning model to execute, between edge nodes based on their task queue at different times corresponds to communicating configuration data indicating a portion of a machine learning model to execute during a second time period) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Teerapittayanon, Cashmore, and Zhao for the parent claim of claim 4, claim 3. No new embodiments are introduced, so the reason to combine is the same as for the parent claim. Regarding claims 12-14, Claims 12-14 recite a system for performing the function of the method of claims 2-4, respectively. 
All other limitations in claims 12-14 are substantially the same as those in claims 2-4, respectively, therefore the same rationale for rejection applies. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Teerapittayanon, in view of Cashmore, further in view of Poupyrev et al. (International Patent Application Publication No. WO 2018/236702), hereinafter Poupyrev. Regarding claim 10, Teerapittayanon and Cashmore jointly teach The method of claim 1, wherein: Poupyrev teaches the following further limitation that neither Teerapittayanon nor Cashmore teaches: the one or more networks include at least one mesh network that permits direct communication between the interactive objects of the set of interactive objects ((Poupyrev [0041]-[0042]) “Wearable motion sensor 102 enables collected motion data to be used by a variety of other computing devices 106 via a network 108. Computing devices 106 are illustrated with various non-limiting example devices: server 106-1, smart phone 106-2, laptop 106-3, computing spectacles 106-4, television 106-5, camera 106-6, tablet 106-7, desktop 106-8, and smart watch 106-9…Network 108 includes one or more of many types of wireless or partly wireless communication networks, such as…a mesh network…Wearable motion sensor 102 can interact with computing devices 106 by transmitting motion data through network 108”, wearable motion sensors and devices such as smart phones, computing spectacles, and smart watches are interactive devices) At the time of filing, one of ordinary skill in the art would have motivation to combine Teerapittayanon, Cashmore, and Poupyrev by taking the method of taking a portion of a machine learning model, creating configuration data for it, communicating the data to a set of interactive objects with sensors that monitor an activity, and having the machine learning model be configured to generate data indicating an inference with portions of a machine-learned model based on sensor data, taught by Teerapittayanon and Cashmore, and using a mesh network for communication between the set of interactive objects, taught by Poupyrev. Substituting a mesh network in for the wireless or cellular network used by Teerapittayanon and Cashmore ((Cashmore [0112]) “The wearable articles 20 communicate with the server 40 over a wireless network such as a cellular network”) predictably creates a method that performs the same purpose of enabling data transmission between the interactive objects. Such a combination would be obvious. Claims 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Poupyrev, in view of Teerapittayanon, further in view of Zhao. Regarding claim 16, Poupyrev teaches An interactive object, comprising: ((Poupyrev [0053]) “FIG. 4 depicts a block diagram of an example wearable motion sensor 102 according to example embodiments of the present disclosure”, a wearable motion sensor is an interactive object) one or more sensors configured to generate sensor data associated with a user of the interactive object; ((Poupyrev [0018]) “multiple wearable sensors can measure and record data associated with movement of a user as the user moves through a space”) and one or more processors communicatively coupled to the one or more sensors ((Poupyrev [0053]) “The wearable motion sensor 102 can include one or more processors 122 and one or more memory devices 124. 
The motion sensor 102 can further include one or more sensors 126 configured to acquire motion data to store in the one or more memory devices 124”) the one or more processors configured to: obtain first configuration data indicative of a [first portion of a] machine-learned model configured to generate data indicative of at least one inference associated with an activity monitored by a set of interactive objects including the interactive object, ((Poupyrev [0025]-[0026]) “In some embodiments, the motion patterns can be identified using one or more models developed for particular activities using machine learning (e.g., deep learning). Each model can analyze the motion data (e.g., motion primitives) and classify a particular motion as a signature motion pattern…the model(s) can be accessed from a library of models. Each model in the library of models can be associated with a particular activity (e.g., sport, exercise movement, gait analysis, etc.). Each model can also be associated with a particular type of motion sensor and/or motion sensor placement. For instance, a first model in the library of models can be associated with motion sensors located in both of a user's shoes. A second model in the library of models can be associated with motion sensors located in both of a user's shoes and an additional motion sensor located on apparel worn by the user (e.g., a sensor located on a user's shirt near the user's shoulder)”, classifying a particular motion as a signature motion pattern is generating data indicative of an inference, a particular activity is an activity, the activity being associated with motion data collected by motion sensors of the interactive objects corresponds to the activity being monitored by a set of interactive objects, Poupyrev does not teach portions of a machine learning model) the set of interactive objects being communicatively coupled over one or more networks ((Poupyrev [0041]-[0042]) “Wearable motion sensor 102 enables collected motion data to be used by a variety of other computing devices 106 via a network 108. Computing devices 106 are illustrated with various non-limiting example devices: server 106-1, smart phone 106-2, laptop 106-3, computing spectacles 106-4, television 106-5, camera 106-6, tablet 106-7, desktop 106-8, and smart watch 106-9…Network 108 includes one or more of many types of wireless or partly wireless communication networks…Wearable motion sensor 102 can interact with computing devices 106 by transmitting motion data through network 108”, wearable motion sensors and devices such as smart phones, computing spectacles, and smart watches are interactive devices) Teerapittayanon teaches the following further limitations that Poupyrev does not teach: and each interactive object storing at least a portion of the machine-learned model ((Teerapittayanon Pg. 3) “We can extend this model to include a single end device, as shown in (b), by performing a portion of the DNN inference computation on the device rather than sending the raw input to the cloud. Using an exit point after device inference, we may classify those samples which the local network is confident about, without sending any information to the cloud. 
For more difficult cases, the intermediate DNN output (up to the local exit) is sent to the cloud, where further inference is performed using additional NN layers and a final classification decision is made”, part of the machine-learned model, the local network, is stored at the end device) during at least a portion of a time period associated with the activity; (Teerapittayanon Pg. 7, Fig. 5 shows interactive objects monitoring an activity during a time period) configure, in response to the first configuration data, the interactive object to generate a first set of feature representations based at least in part on the first portion of the machine-learned model and sensor data associated with the one or more sensors of the interactive object; ((Teerapittayanon Pg. 1) “The small model at an end device can quickly perform initial feature extraction, and also classification if the model is confident. Otherwise, the end device can fall back to the large NN model in the cloud, which performs further processing and final classification…Additionally, since a summary based on extracted features from the end device model are sent instead of raw sensor data, the system could provide better privacy protection”, an end device performing feature extraction with a small model corresponds to generating a set of feature representations based on a portion of a machine learning model, extracting features from raw sensor data corresponds to generating feature representations based on one or more sensors associated with an interactive object) wherein the first configuration data is associated with one or more first layers of at least one neural network of the machine-learned model; ((Teerapittayanon Pg. 5) “Inference in DDNN is performed in several stages using multiple preconfigured exit thresholds T (one element T at each exit point) as a measure of confidence in the prediction of the sample…We use a normalized entropy threshold as the confidence criteria (instead of unnormalized entropy as used in [3]) that determines whether to classify (exit) a sample at a particular exit point”, preconfigured exit thresholds that determine whether classification should finish at an end device or continue corresponds to configuration data, Teerapittayanon Pg. 5, Fig. 2 shows at (f) that the first few layers of a neural network are implemented on an end device with a local exit) wherein the second configuration data is associated with one or more second layers of the at least one neural network of the machine-learned model; ((Teerapittayanon Pg. 5) “Inference in DDNN is performed in several stages using multiple preconfigured exit thresholds T (one element T at each exit point) as a measure of confidence in the prediction of the sample…We use a normalized entropy threshold as the confidence criteria (instead of unnormalized entropy as used in [3]) that determines whether to classify (exit) a sample at a particular exit point”, preconfigured exit thresholds that determine whether classification should finish at an end device or continue corresponds to configuration data, Teerapittayanon Pg. 5, Fig.
2 shows at (f) that the first few layers of a neural network are implemented on multiple end devices with a local exit, second configuration data is not ) At the time of filing, one of ordinary skill in the art would have motivation to combine Poupyrev and Teerapittayanon by taking the interactive object, with sensors to generate data on a user and a processor, the processor obtaining configuration data of a machine learning model that creates inferences on an activity monitored by a set of interactive objects, and the interactive objects communicating with one another over a network, taught by Poupyrev, and having the interactive objects store portions of a machine learning model and in response to configuration data, which are associated with layers of a neural network, generating a set of feature representations based on sensor data from the interactive object and the stored portion of the machine learning model, taught by Teerapittayanon. Teerapittayanon teaches that doing so provides the predictable benefit of: (Teerapittayanon Pg. 1) “Hierarchically distributed computing structures consisting of the cloud, the edge and devices (see, e.g., [1], [2]) have inherent advantages, such as supporting coordinated central and local decisions, and providing system scalability, for large-scale intelligent tasks based on geographically distributed IoT devices”, with the stored portion of the machine learning model on the interactive object being part of a hierarchically distributed computing structure implemented on devices. Such a combination would be obvious. Zhao teaches the following further limitations that neither Poupyrev, nor Teerapittayanon teaches: obtain, by the interactive object subsequent to generating the first set of feature representations, second configuration data indicative of a second portion of the machine-learned model; ((Zhao Pgs. 2-3) “DeepThings incorporates the following main innovations and contributions. 1) We propose a Fused Tile Partitioning (FTP) method for dividing convolutional layers into independently distributable tasks…2) We develop a distributed work stealing runtime system for IoT clusters to adaptively distribute FTP partitions in dynamic application scenarios”, (Zhao Pg. 3) “If its task queue runs empty, an IoT edge node will poll the gateway for devices with active work items and start stealing tasks by directly communicating with other DeepThings runtimes in a peer-to-peer fashion”, dividing convolutional layers into tasks that are adaptively distributed to edge devices, including edge devices that have finished previous tasks obtaining other tasks, corresponds to an interactive object obtaining configuration data indicating a second portion of a machine learning model subsequent to generating a set of feature representations based on a first portion of a machine learning model) and configure, in response to the second configuration data, the interactive object to generate a second set of feature representations based at least in part on the second portion of the machine-learned model [and sensor data associated with the one or more sensors of the interactive object] ((Zhao Pg. 3) “If its task queue runs empty, an IoT edge node will poll the gateway for devices with active work items and start stealing tasks by directly communicating with other DeepThings runtimes in a peer-to-peer fashion”, (Zhao Pg. 
4) “we propose an Fused Tile Partitioning (FTP) method to parallelize the convolutional operation and reduce both the memory footprint and communication overhead for early stage convolutional layers. In FTP, the original CNN is divided into tiled stacks of convolution and pooling operations. The feature maps of each layer are divided into small tiles in a grid fashion, where corresponding feature map tiles and operations across layers are vertically fused together to constitute an execution partition and stack”, an edge device creating feature maps of a divided part of a machine learning model corresponds to an interactive object generating a second set of feature representations based on a second portion of a machine learning model, Teerapittayanon and Zhao teach sensor data associated with sensors of interactive objects) At the time of filing, one of ordinary skill in the art would have motivation to combine Poupyrev, Teerapittayanon, and Zhao by taking the interactive object, with sensors to generate data on a user and a processor, the processor obtaining configuration data of a machine learning model that creates inferences on an activity monitored by a set of interactive objects, the interactive objects communicating with one another over a network, having the interactive objects store portions of a machine learning model and in response to configuration data, and generating a set of feature representations based on sensor data from the interactive object and the stored portion of the machine learning model, taught jointly by Poupyrev and Teerapittayanon, and having the interactive object subsequently receive configuration data of a different portion of the machine learning model, which it then generates feature representations using, taught by Zhao. Zhao teaches that this imparts the predictable benefit of: (Zhao Pg. 3) “dynamically balanc[ing] the workload among edge clusters to enable efficient locally distributed CNN inference under time-varying processing needs”. Such a combination would be obvious. Regarding claim 18, Poupyrev, Teerapittayanon, and Zhao jointly teach The interactive object of claim 16, wherein the one or more processors are configured to: Teerapittayanon further teaches: generate the first set of feature representations using the one or more first layers of the at least one neural network of the machine-learned model; ((Teerapittayanon Pg. 1) “The small model at an end device can quickly perform initial feature extraction, and also classification if the model is confident”, Teerapittayanon Pg. 5, Fig. 2 shows at (f) that the first few layers of a neural network are on an end device, that would perform the initial feature extraction) and generate the second set of feature representations using the one or more second layers of the at least one neural network of the machine-learned model ((Teerapittayanon Pg. 1) “The small model at an end device can quickly perform initial feature extraction, and also classification if the model is confident”, Teerapittayanon Pg. 5, Fig. 2 shows at (f) that other sets of initial layers of a neural network are on multiple end devices, which would also perform initial feature extraction) At the time of filing, one of ordinary skill in the art would have motivation to combine the method jointly taught by Poupyrev, Teerapittayanon, and Zhao for the parent claim of claim 18, claim 16. No new embodiments are introduced, so the reason to combine is the same as for the parent claim. 
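For orientation only: the claim 16 and claim 18 mappings rest on Teerapittayanon's early-exit scheme, in which a device runs its locally stored layers and exits if a normalized entropy threshold is met, otherwise forwarding the intermediate features rather than the raw sensor data. The sketch below is a minimal Python rendering of that mechanism under stated assumptions; device_layers, its forward and local_exit methods, and the return format are hypothetical, not the reference's code.

    import math

    def normalized_entropy(probs):
        """Entropy of a probability vector scaled to [0, 1]; lower means more confident."""
        total = -sum(p * math.log(p) for p in probs if p > 0.0)
        return total / math.log(len(probs))

    def classify_or_forward(sample, device_layers, exit_threshold):
        """Run the device-resident layers; exit locally if confident, otherwise forward
        the intermediate features (not the raw sensor data) to the next tier."""
        features = device_layers.forward(sample)        # lower NN layers kept on the end device
        probs = device_layers.local_exit(features)      # probability vector at the local exit point
        if normalized_entropy(probs) < exit_threshold:  # confident enough to exit at the device
            label = max(range(len(probs)), key=probs.__getitem__)
            return {"exit": "device", "label": label}
        return {"exit": "forward", "features": features}

The preconfigured threshold plays the dual role the Examiner describes: it is a per-sample decision criterion, and together with the placement of exit points it fixes which layers run on which device.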
Regarding claim 19, Poupyrev, Teerapittayanon, and Zhao jointly teach The interactive object of claim 16, wherein: Teerapittayanon further teaches: the machine-learned model includes at least one neural network including a first set of layers, a second set of layers, a third set of layers, and a fourth set of layers; (Teerapittayanon Pg. 5, Fig. 2 shows at (f) that there are at least four sets of layers, since as stated in the caption, each horizonal line represents a layer) the first set of feature representations is generated using the first set of layers based on an output of the second set of layers, ((Teerapittayanon Pg. 3) “For more difficult cases, the intermediate DNN output (up to the local exit) is sent to the cloud, where further inference is performed using additional NN layers and a final classification decision is made”, feature representations include intermediate DNN outputs, Teerapittayanon Pg. 5, Fig. 2 shows at (f) that there is a set of layers at the “edge” portion, which takes intermediate input from the end devices, and which is made up of edge devices, which are interactive objects) the second set of layers being implemented at a second interactive object of the set of interactive objects; (Teerapittayanon Pg. 5, Fig. 2 shows at (f) that layers that provide intermediate output are located at the end devices, which are interactive objects) and the second set of feature representations is generated using the third set of layers based on an output of the fourth set of layers, ((Teerapittayanon Pg. 3) “For more difficult cases, the intermediate DNN output (up to the local exit) is sent to the cloud, where further inference is performed using additional NN layers and a final classification decision is made”, feature representations include intermediate DNN outputs, Teerapittayanon Pg. 5, Fig. 2 shows at (f) that there are multiple sets of layers at the “edge” portion, which takes intermediate input from the end devices, and which is made up of edge devices, which are interactive objects) the fourth set of layers being implemented at a third interactive object of the set of interactive objects (Teerapittayanon Pg. 5, Fig. 2 shows at (f) that layers that provide intermediate output, which are the inputs for both sets of layers within the edge portion, are located at the end devices, which are interactive objects) At the time of
Read full office action
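For orientation only: the claim 2 and claim 4 rejections in the office action above attribute resource-state monitoring and re-distribution to Zhao's work-stealing runtime, where idle nodes pull partitioned inference tasks from busier nodes. The sketch below is a minimal Python illustration of that pattern; the object names, task labels, and rebalance helper are hypothetical, not DeepThings code.

    from collections import deque

    def rebalance(task_queues: dict) -> list:
        """One gateway-style rebalancing pass: objects with empty queues "steal" partitioned
        inference tasks from the busiest object, and each move is recorded."""
        moves = []
        while True:
            idle = [obj for obj, queue in task_queues.items() if not queue]
            busiest = max(task_queues, key=lambda obj: len(task_queues[obj]))
            if not idle or len(task_queues[busiest]) <= 1:
                break
            task = task_queues[busiest].popleft()      # take one task from the busiest object
            task_queues[idle[0]].append(task)          # hand it to an idle object
            moves.append((task, busiest, idle[0]))     # record the re-distribution
        return moves

    # Example: object-a has three partitioned tasks queued, object-b is idle.
    queues = {"object-a": deque(["tile-1", "tile-2", "tile-3"]), "object-b": deque()}
    print(rebalance(queues))   # [('tile-1', 'object-a', 'object-b')]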

Prosecution Timeline

Jun 30, 2022
Application Filed
May 28, 2025
Non-Final Rejection — §103
Aug 20, 2025
Examiner Interview Summary
Aug 20, 2025
Applicant Interview (Telephonic)
Aug 28, 2025
Response Filed
Nov 14, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579429
DEEP LEARNING BASED EMAIL CLASSIFICATION
2y 5m to grant • Granted Mar 17, 2026
Patent 12566953
AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES
2y 5m to grant • Granted Mar 03, 2026
Patent 12561563
AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES
2y 5m to grant • Granted Feb 24, 2026
Patent 12468939
OBJECT DISCOVERY USING AN AUTOENCODER
2y 5m to grant • Granted Nov 11, 2025
Patent 12446600
TWO-STAGE SAMPLING FOR ACCELERATED DEFORMULATION GENERATION
2y 5m to grant • Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 99% (+83.3%)
Median Time to Grant: 3y 11m
PTA Risk: Moderate
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
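The projection figures reduce to simple ratios. The sketch below shows the arithmetic in Python; the 0.88 / 0.48 interview split is a hypothetical pair chosen only to reproduce the stated +83.3% lift, not data reported on this page.

    def allow_rate(granted: int, resolved: int) -> float:
        """Share of resolved applications that ended in a grant."""
        return granted / resolved

    def interview_lift(rate_with: float, rate_without: float) -> float:
        """Relative increase in allow rate when an examiner interview was held."""
        return (rate_with - rate_without) / rate_without

    print(f"{allow_rate(8, 13):.0%}")              # 62% career allow rate (8 granted / 13 resolved)
    print(f"{interview_lift(0.88, 0.48):+.1%}")    # +83.3% with the hypothetical split above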
