DETAILED ACTION
This action is in response to the submission filed 1/20/2026 in the application filed 05/31/2022. Claims 1-2, 4, 6-12, 14, 16, and 18-25 are pending and have been examined.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/20/2026 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
Page 17, line 14: “portions of the tree structure store in infrastructure” should be “portions of the tree structure stored in infrastructure”
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 4, 6-8, 11-12, 14, 16, 18, 20-21, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Mayyuri (ZONE-BASED FEDERATED LEARNING, filed 6/25/2021, US 2022/0417108 A1) in view of Zhang et al. (BLOCKCHAIN-BASED SECURE FEDERATED LEARNING, published 2/10/2022, US 2022/0044162 A1), hereafter referred to as Zhang.
Regarding claim 1, Mayyuri discloses [a] method comprising:
representing, by a controller for a federated learning system, computing infrastructure for the federated learning system as a tree-structured database in which nodes correspond to physical or organizational infrastructure components including at least a root node, one or more location nodes representing geographic locations, and one or more infrastructure nodes representing computing devices or organizations:
[media_image1.png, 617 × 917, greyscale]
(Mayyuri, Figure 6). A tree-structured database including a root node.
“FIG. 6 is a diagram illustrating an example 600 of different zones in a federated learning system, in accordance with aspects of the present disclosure. In the example 600 of FIG. 6, each UE 620 may be an example of a device participating in federated learning. Such devices may be referred to as participating devices (computing devices / physical infrastructure components). Additionally, each UE 620 may be an example of a UE 120 as described with reference to FIGS. 1 and 2. In some implementations, as shown in the example 600 of FIG. 6, each UE 620 may be placed in a group 610, 612, 614 based on one or more common attributes or settings. Each group 610, 612, 614 may correspond to a particular zone. For example, as shown in FIG. 6, a first group 610 corresponds to a first zone, a second group 612 corresponds to a second zone, and a third group 614 corresponds to a third zone. In some examples, a UE 620 may be placed in more than one group 610, 612, 614 (not shown in FIG. 6). Additionally, or alternatively, two or more zones may overlap (not shown in FIG. 6). As described, the attributes and settings may include, but are not limited to, a geographic location, a default language, or a user interface theme. As an example, each group 610, 612, 614 may be based on a UE's geographic location. In this example, the UEs 620 in a first group 610 have a common geographic location, the UEs 620 in a second group 612 have a common geographic location, and the UEs 620 in a third group 614 have a common geographic location.” (Mayyuri, [0085])
“A UE may also be referred to as an access terminal, a terminal, a mobile station, a subscriber unit, a station, and/or the like” (Mayyuri, [0041]); “Some UEs may be considered a customer premises equipment (CPE)” (Mayyuri, [0042]). A piece of equipment associated with a subscription service is associated with the organization managing that service.
“An inter-process communication component 818, such as a bus or a controller/processor, may facilitate communication between the different components” (Mayyuri, [0094]). For further limitations and claims, it should be understood that controllers are managing the system, even if this isn’t explicitly referenced further in the office action.
forming, by the controller, associations between datasets available to the federated learning system and nodes in the tree-structured database by storing, in a metadata database, realm values for the datasets that reference the nodes in the tree-structured database, wherein the realm values define access boundaries restricting where the datasets may be used for model training:
“In some implementations, as shown in the example 600 of FIG. 6, each UE 620 may be placed in a group 610, 612, 614 (realm) based on one or more common attributes or settings. Each group 610, 612, 614 may correspond to a particular zone” (Mayyuri, [0085]). Each piece of user equipment is associated with a zone (value). In Figure 6 above, the first group of equipment is associated with zone 1, the second group with zone 2, and the third with zone 3.
“Each UE 620 may use the respective zone model 604, 606 to perform one or more tasks for an application, such as vocabulary prediction, based on locally collected data (datasets). Additionally, each UE 620 may continue to train the respective zone model 604, 606 with locally collected data.” (Mayyuri, [0089])
“Each UE 620 may use the respective zone model 604, 606 to perform one or more tasks for an application, such as vocabulary prediction, based on locally collected data. Additionally, each UE 620 may continue to train the respective zone model 604, 606 with locally collected data. That is, each UE 620 may train the respective zone model 604, 606 while also using the respective zone model 604, 606 for a corresponding task, such as inference or prediction” (Mayyuri, [0089]). Each piece of user equipment is restricted to training within its zone, corresponding with its associated zone model.
receiving, at the controller, one or more instructions to perform model training in the federated learning system with datasets specified using location paths or infrastructure identifiers referencing nodes in the tree-structured database according to the associations, wherein the one or more instructions include a groupby instruction specifying how model training results from the datasets should be aggregated based on the location paths; and configuring, by the controller and in response to the one or more instructions, the federated learning system to perform the model training, including orchestrating distribution of model training tasks across the computing infrastructure by selecting intermediate aggregator nodes based on the groupby instruction to aggregate training results from trainer nodes within specified location paths, based on the location paths or infrastructure identifiers:
“After receiving the global model 602, at time t3, each UE 620 (trainer nodes) in the first group 610 and the second group 612 individually trains the global model 602 based on local data (dataset). At time t4, the respective UEs 620 in the first group 610 and the second group 612 transmit weight updates (model training results) to the first zone server 654 and the second zone server 656, respectively.” (Mayyuri, [0087])
“Furthermore, as shown in FIG. 7, at time t7, each zone server 654, 656 transmits the respective weight updates to the global server 652. The weight updates transmitted at time t7 may be raw weight updates or aggregated weight updates, such as an averaged weight updates. Raw weight updates may be an example of weight updates received at a respective zone server 654, 656 (intermediate aggregator nodes) prior to aggregation by the respective zone server 654, 656” (Mayyuri, [0089])
“Another aspect of the present disclosure is directed to an apparatus for training models at a UE. The apparatus includes a processor; a memory coupled with the processor; and instructions stored in the memory and operable, when executed by the processor, to cause the apparatus to receive, at the first UE associated with a first group of UEs, a first model from a first network device associated with a first zone model of a number of zone models. In some examples, the first group of UEs is associated with a first zone model, and a different group of UEs are associated with each of the number of zone models. Execution of the instructions further cause the apparatus to identify, at the first UE, a network device for training the first model based on one or both of a current connectivity state of the UE or a current resource use of the UE. Execution of the instructions still further cause the apparatus to transmit, to the first network device, model weight updates based on the training of the first model. Execution of the instructions also cause the apparatus to receive, from the first network device, the first zone model based on the transmitted model weights updates.” (Mayyuri, [0013])
Mayyuri relates to hierarchical federated learning grouped by geography and is analogous to the claimed invention.
While Mayyuri fails to disclose the further limitations of the claim, Zhang discloses [a] method comprising:
forming, by the controller, associations between datasets available to the federated learning system and nodes in the tree-structured database by storing, in a metadata database, realm values for the datasets that reference the nodes in the tree-structured database, wherein the realm values define access boundaries restricting where the datasets may be used for model training:
“Federated learning is a distributed machine-learning approach in which a machine-learning model implemented on a central server (“global machine-learning model”) is trained based on one or more decentralized datasets. The decentralized datasets (“local datasets”) may include private and/or sensitive data, such as patient health data and telecommunication data, and each decentralized dataset may be owned and/or managed by different, individual systems that are referred to as “clients” in the present disclosure. Transmitting the global machine-learning model rather than the local datasets may preserve bandwidth and/or data privacy because the training data included in the local datasets are not transmitted.” (Zhang, [0013]). Private dataset training is tied to the realm of the corresponding client.
“The blockchain 130 (database) may include metadata that describes the global machine-learning model of the central server 120, the clients 110, and/or the local datasets of the clients 110. For example, the metadata published to the blockchain 130 may include one or more metadata fields, such as a training task identifier, a training round identifier, a client identifier (realm value) … In some embodiments, the metadata may be published as individual blocks (nodes) on the blockchain 130 such that each individual block of the blockchain 130 includes metadata relating to a respective client and/or local dataset for a given training round” (Zhang, [0033])
configuring, by the controller and in response to the one or more instructions, the federated learning system to perform the model training, including orchestrating distribution of model training tasks across the computing infrastructure by selecting intermediate aggregator nodes based on the groupby instruction to aggregate training results from trainer nodes within specified location paths, based on the location paths or infrastructure identifiers:
“individual systems (infrastructure) that are referred to as “clients” in the present disclosure” (Zhang, [0013])
“In response to one or more of the clients 204 (infrastructure) determining that its respective local dataset is relevant to performing the training tasks, the clients 204 (infrastructure) including relevant local datasets may request to participate in training the global-machine learning model at operations 214” (Zhang, [0042])
“Based on the metadata of the local model updates, one or more local model updates may be transferred from the clients 204 to the central server 202. At operations 222, the central server 202 may read the blockchain 206 and identify which of the clients 204 (infrastructure) published metadata relating to their respective local model updates to the blockchain 206. In some embodiments, the central server 202 may read the metadata of the local model updates corresponding to the clients 204 to determine whether a threshold number of local model updates are available. Responsive to determining that the threshold number of local model updates are available, the central server 202 may send requests to each of the clients 204 that published metadata indicating their local model updates are ready to transfer their local model updates to the central server 202 at operations 224” (Zhang, [0044])
“At operations 228, the central server 202 may aggregate the local model updates obtained from the clients 204.” (Zhang, [0045])
Zhang relates to hierarchical federated learning with data grouped by infrastructure / organization and is analogous to the claimed invention. Mayyuri teaches a method of performing hierarchical federated learning by grouping data aggregation by location.
The claimed invention improves upon this method by also grouping data aggregation by infrastructure. Zhang teaches a method of performing hierarchical federated learning by grouping data aggregation by infrastructure, applicable to Mayyuri. A person of ordinary skill in the art would have recognized that grouping local dataset processing by infrastructure / organization would lead to the predictable result of private data that an infrastructural node can’t securely share being incorporated into the model training, and would improve the known device by increasing the amount of data available to train the model without risking security breaches due to the usage of private / sensitive data.
Additionally, the claimed invention improves upon Mayyuri’s method by storing data realm identification information in a metadata database. Zhang teaches a method of storing data realm identification information in a metadata database. A person of ordinary skill in the art would have recognized that storing realm information of datasets would allow for quick retrieval and reference of such information for aggregation grouping and organization, rather than repeatedly determining the realm of each dataset whenever needed, improving the known device by caching data that requires repeated and frequent usage (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
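For clarity of record, the technique mapped above may be illustrated with the following minimal sketch of a tree-structured infrastructure database paired with a metadata database of realm values that bound where each dataset may be used for training. All names in the sketch (Node, may_train, "us-west", "hospital-a", and the dataset identifiers) are hypothetical and appear in neither Mayyuri nor Zhang; the sketch reflects only the examiner's reading of the combined teachings.

```python
# Minimal illustrative sketch: a tree of infrastructure nodes plus a metadata
# database mapping each dataset to a realm value (a node reference) that
# restricts where the dataset may be used for model training.
# All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                       # e.g. "root", "us-west", "hospital-a"
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

    def path_of(self, target, prefix=""):
        """Return the location path (e.g. "/root/us-west") of a named node."""
        path = f"{prefix}/{self.name}"
        if self.name == target:
            return path
        for c in self.children:
            found = c.path_of(target, path)
            if found is not None:
                return found
        return None

# Metadata database: dataset identifier -> realm value (a node reference).
realms = {"ds-1": "us-west", "ds-2": "hospital-a"}

root = Node("root")
west = root.add(Node("us-west"))
west.add(Node("hospital-a"))

def may_train(dataset_id, trainer_node, tree):
    """A dataset may be used only at or below the node named by its realm."""
    realm_path = tree.path_of(realms[dataset_id])
    trainer_path = tree.path_of(trainer_node)
    return trainer_path is not None and (
        trainer_path == realm_path or trainer_path.startswith(realm_path + "/")
    )

print(may_train("ds-1", "hospital-a", root))  # True: hospital-a lies under us-west
print(may_train("ds-2", "us-west", root))     # False: us-west is above hospital-a
```

The path-prefix comparison captures the access boundary: a realm value naming an interior node admits the whole subtree beneath it, while a realm value naming a leaf admits only that node.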
Regarding claim 2, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. Mayyuri further discloses a method, wherein one or more of the nodes in the tree-structured database represents different geographic locations in which the computing infrastructure for the federated learning system is located:
“Each group of participating devices (computing infrastructure) may be associated with a machine learning model of a zone. For example, participating devices may be grouped based on geographic location. In such an example, participating devices in Los Angeles may be grouped together and associated with a first zone and participating devices in New York may be grouped together and associated with a second zone. The machine learning model of a zone may be referred to as a zone model.” (Mayyuri, [0033])
[media_image1.png, 617 × 917, greyscale]
(Mayyuri, Figure 6). Each group of devices, along with the paired zone device, can correspond to a unique location.
Regarding claim 4, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. Mayyuri further discloses a method, wherein a particular instruction in the one or more instructions to perform the model training indicates that model training results from two or more of the datasets should be grouped: “After receiving the global model 602, at time t3, each UE 620 in the first group 610 and the second group 612 individually trains the global model 602 based on local data (dataset). At time t4, the respective UEs 620 in the first group 610 and the second group 612 transmit weight updates (model training results) to the first zone server 654 and the second zone server 656, respectively.” (Mayyuri, [0087]).
Regarding claim 6, the rejection of claim 4 in view of Mayyuri and Zhang is incorporated. Mayyuri further discloses a method, wherein the particular instruction specifies a location path in the tree-structured database: “In this example, the UEs 620 in a first group 610 have a common geographic location, the UEs 620 in a second group 612 have a common geographic location, and the UEs 620 in a third group 614 have a common geographic location” (Mayyuri, [0085]). When groups correspond to geographic locations, each tree branch corresponding to a zone is one particular location path. Instructions involving operations in a particular branch thus inherently specify one location path in the tree.
Regarding claim 7, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. Mayyuri discloses a method, further comprising: initiating, by the controller, the model training in the federated learning system using the datasets specified by the one or more instructions: “As described, the FL manager 802 may initiate a model trainer for a given model and determines a location of the data in the processed data storage 808 or raw data storage 812” (Mayyuri, [0099])
Regarding claim 8, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. Mayyuri further discloses a method, wherein representing the computing infrastructure for the federated learning system as a tree-structured database comprises: representing at least a portion of the computing infrastructure associated with a particular organization as a singular node in the tree-structured database:
[media_image2.png, 738 × 388, greyscale]
(Mayyuri, Figure 6). A single group node representing three pieces of user equipment.
“A UE (computing infrastructure) may also be referred to as an access terminal, a terminal, a mobile station, a subscriber unit, a station, and/or the like” (Mayyuri, [0041]); “Some UEs may be considered a customer premises equipment (CPE)” (Mayyuri, [0042]). A piece of equipment associated with a subscription service is associated with the organization managing that service.
Regarding claim 21, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. Zhang further discloses a method, wherein the realm values comprise either an infrastructure identifier corresponding to a specific infrastructure node or a location path corresponding to a branch in the tree-structured database, and wherein a dataset having a realm value specified as an infrastructure identifier is restricted to processing within computing infrastructure of a particular organization represented by the infrastructure identifier:
“individual systems (infrastructure) that are referred to as “clients” in the present disclosure” (Zhang, [0013])
“The decentralized datasets (“local datasets”) may include private and/or sensitive data, such as patient health data and telecommunication data, and each decentralized dataset may be owned and/or managed by different, individual systems (infrastructure / organization[s]) that are referred to as “clients” in the present disclosure. Transmitting the global machine-learning model rather than the local datasets may preserve bandwidth and/or data privacy because the training data included in the local datasets are not transmitted.” (Zhang, [0013]).
“The blockchain 130 (database) may include metadata that describes the global machine-learning model of the central server 120, the clients 110, and/or the local datasets of the clients 110. For example, the metadata published to the blockchain 130 may include one or more metadata fields, such as a training task identifier, a training round identifier, a client identifier (realm value / infrastructure identifier) … In some embodiments, the metadata may be published as individual blocks (nodes) on the blockchain 130 such that each individual block of the blockchain 130 includes metadata relating to a respective client and/or local dataset for a given training round” (Zhang, [0033])
Zhang relates to hierarchical federated learning with data grouped by infrastructure / organization and is analogous to the claimed invention. Mayyuri teaches a method of performing hierarchical federated learning by grouping data aggregation by location. The claimed invention improves upon this method by also grouping data aggregation by infrastructure. Zhang teaches a method of performing hierarchical federated learning by grouping data aggregation by infrastructure, applicable to Mayyuri. A person of ordinary skill in the art would have recognized that grouping local dataset processing by infrastructure / organization would lead to the predictable result of private data that an infrastructural node can’t securely share being incorporated into the model training, and would improve the known device by increasing the amount of data available to train the model without risking security breaches due to the usage of private / sensitive data.
Regarding claim 25, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. Mayyuri discloses a method, further comprising: configuring, by the controller, a global aggregator node to aggregate training results from the intermediate aggregator nodes, wherein the global aggregator node forms an aggregated machine learning model based on intermediate models received from the intermediate aggregator nodes via aggregation channels that are not restricted by geographic location: “at time t7, each zone server 654, 656 (intermediate aggregator nodes) transmits the respective weight updates to the global server 652 (global aggregator node) … At time t8, the global server 652 may update the global model 602 (aggregated machine learning model) based on the received weight updates. In some implementations, the global server 652 may average the received weight updates and update the global model 602 based on the average of the received weight updates. In such implementations, the global server 652 may generate a global averaged model based on the update to the global model 602.” (Mayyuri, [0089]). The global server receives updates from all zone locations, and thus is not restricted by geographic location.
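The two-level aggregation flow mapped from Mayyuri's Figure 7 (trainer updates averaged per zone by intermediate aggregators, then averaged again by the global aggregator) may be sketched as follows. The zone names and weight values are hypothetical and serve only to illustrate the examiner's reading of the flow.

```python
# Illustrative sketch of two-level weight aggregation: zone servers
# (intermediate aggregator nodes) average trainer updates within their zone,
# then the global server (global aggregator node) averages the zone results
# without regard to geographic location. All values are hypothetical.

def average(updates):
    """Element-wise mean of a list of weight-update vectors."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

# Trainer weight updates grouped by zone, per the groupby instruction.
zone_updates = {
    "zone-1": [[1.0, 2.0], [3.0, 4.0]],
    "zone-2": [[5.0, 6.0]],
}

# Intermediate aggregation: one averaged update per zone.
zone_models = {zone: average(ups) for zone, ups in zone_updates.items()}

# Global aggregation over the intermediate results, unrestricted by location.
global_model = average(list(zone_models.values()))

print(zone_models)   # {'zone-1': [2.0, 3.0], 'zone-2': [5.0, 6.0]}
print(global_model)  # [3.5, 4.5]
```

Note that the global average weights each zone equally rather than each trainer equally, consistent with Mayyuri's disclosure that zone servers may forward either raw or aggregated weight updates.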
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mayyuri (ZONE-BASED FEDERATED LEARNING, filed 6/25/2021, US 2022/0417108 A1) in view of Zhang et al. (BLOCKCHAIN-BASED SECURE FEDERATED LEARNING, published 2/10/2022, US 2022/0044162 A1), hereafter referred to as Zhang, and further in view of Vegge (PERSONAL MOBILE INTERNET, published 9/18/2003, US 2003/0174683 A1).
Regarding claim 9, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. While Mayyuri and Zhang fail to disclose the further limitations of the claim, Vegge teaches a method, wherein the datasets available in the federated learning system include at least one public dataset whose association is with a root node of the tree-structured database:
“it is an object of the invention to provide a flexible and affordable information system for providing location dependent information and services according to predefined needs of a mobile user and the needs of the information and service providers” (Vegge, [0025])
[media_image3.png, 583 × 821, greyscale]
(Vegge, Figure 2).
“The servers in a system according to the invention are connected in an hierarchical ordered tree network (tree-structured database). The system may comprise servers at the root, the branch nodes and at the leaves of the tree. Accordingly, in a system of networked servers according to the invention one may find one root server (root node), a plurality of node servers, wherein each node server has one parent server and one or more child server(s), and a plurality of leaf servers that have no child servers. An example of such an arrangement is depicted in FIG. 2.” (Vegge, [0052])
“A world-wide PMI network means that the user is able to use his personal portal no matter where in the world he is!” (Vegge, [0248]); “The roll out of a world-wide PMI network can be done in few steps:” (Vegge, [0249]); “1. Set up a root server (root node) covering the whole world” (Vegge, [0250]). The root data can be accessed from any location, making it globally public, as opposed to other nodes that require access from a specific location.
Vegge relates to distributed servers for data organization and access and is analogous to the claimed invention. The existing combination teaches a method of performing federated learning with geographically organized nodes. The claimed invention improves upon this method by incorporating publicly available datasets in the root of a data access hierarchy. Vegge teaches a method of constructing a hierarchical distributed server system with globally accessible data at its root, applicable to the existing combination. A person of ordinary skill in the art would have recognized that incorporating a data hierarchy with globally accessible data in its root into the existing combination’s federated learning system would lead to the predictable result of certain datasets being available to all nodes in the federated learning system, and would improve the known device by increasing the amount of training data available for local models (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
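The rationale above, under which a dataset associated with the root node is accessible to every node in the hierarchy, may be sketched as follows. The location paths shown are hypothetical and do not appear in Vegge or the other cited references.

```python
# Illustrative sketch: a public dataset whose realm is the root of the
# hierarchy is usable at every node, while a realm deeper in the tree admits
# only its own subtree. All paths are hypothetical.

# Hypothetical location paths for each node in the tree.
paths = ["/root", "/root/zone-1", "/root/zone-1/ue-a", "/root/zone-2"]

def usable_at(realm_path):
    """Nodes at or below the realm node may use the dataset."""
    return [p for p in paths if p == realm_path or p.startswith(realm_path + "/")]

print(usable_at("/root"))         # every node: the dataset is globally public
print(usable_at("/root/zone-2"))  # only /root/zone-2
```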
Claims 10 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Mayyuri (ZONE-BASED FEDERATED LEARNING, filed 6/25/2021, US 2022/0417108 A1) in view of Zhang et al. (BLOCKCHAIN-BASED SECURE FEDERATED LEARNING, published 2/10/2022, US 2022/0044162 A1), hereafter referred to as Zhang, and further in view of Roese et al. (LOCATION BASED DATA, published 11/20/2003, US 2003/0217151 A1), hereafter referred to as Roese.
Regarding claim 10, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. While Mayyuri and Zhang fail to disclose the further limitations of the claim, Roese discloses a method, comprising: assigning, by the controller, unique identifiers to the datasets available in the federated learning system:
“In a general aspect, the invention features a system that associates physical locations with network-linked devices in a network to which such devices are connected” (Roese, [0008])
“The information included in the location database (metadata database) can vary. For example, Table 1 is a table containing the type of information that can be included in the location database. As illustrated in Table 1, each row represents an association between a connection point and its corresponding location in one or more formats. The "Connection Point ID" column contains the unique identifier associated with a particular connection point” (Roese, [0066])
[media_image4.png, 292 × 792, greyscale]
(Roese, Table 1)
In combination with Mayyuri and Zhang, where each device of the federated learning system has its own local dataset, Roese’s method identifies local datasets through their association with device connection points.
Roese relates to geographic access restrictions in distributed systems and is analogous to the claimed invention. The existing combination teaches a federated learning system that operates across geographically distributed data. The claimed invention improves upon this method by using unique identifiers for datasets. Roese teaches a method of assigning unique identifiers to devices across a distributed system, applicable to the existing combination. A person of ordinary skill in the art would have recognized that assigning unique identifiers to datasets in the federated learning system of the existing combination would lead to the predictable result of a unified method for organizing and querying distributed datasets across multiple devices, and would improve the known device by establishing consistent and fast indexing in the metadata database (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
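The rationale above, under which unique dataset identifiers indexed in a metadata database permit fast lookup rather than repeated re-derivation, may be sketched as follows. The identifier scheme and field names are hypothetical and are not drawn from Roese.

```python
# Illustrative sketch: assigning unique identifiers to datasets and indexing
# their metadata (device and realm) for single-lookup retrieval. All names
# and the "ds-N" identifier scheme are hypothetical.
import itertools

_counter = itertools.count(1)

def register(metadata_db, device, location_path):
    """Assign a unique dataset identifier and store its metadata."""
    dataset_id = f"ds-{next(_counter)}"
    metadata_db[dataset_id] = {"device": device, "realm": location_path}
    return dataset_id

db = {}
a = register(db, "ue-620-a", "/root/zone-1")
b = register(db, "ue-620-b", "/root/zone-2")

# A single indexed lookup replaces re-deriving the realm each training round.
print(db[a]["realm"])  # /root/zone-1
```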
Regarding claim 11, Mayyuri discloses [a]n apparatus, comprising:
one or more network interfaces: “In some aspects, the BSs may be interconnected to one another and/or to one or more other BSs or network nodes (not shown) in the wireless network 100 through various types of backhaul interfaces such as a direct physical connection, a virtual network, and/or the like using any suitable transport network.” (Mayyuri, [0037])
a processor coupled to the one or more network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor: “Instructions (processes) executed at the CPU 302 (processor) may be loaded from a program memory associated with the CPU 302 or may be loaded from a memory block 318.” (Mayyuri, [0057])
the processor configured to:
represent computing infrastructure for the federated learning system as a tree-structured database in which nodes correspond to physical or organizational infrastructure components including at least a root node, one or more location nodes representing geographic locations, and one or more infrastructure nodes representing computing devices or organizations:
[media_image1.png, 617 × 917, greyscale]
(Mayyuri, Figure 6). A tree-structured database including a root node.
“FIG. 6 is a diagram illustrating an example 600 of different zones in a federated learning system, in accordance with aspects of the present disclosure. In the example 600 of FIG. 6, each UE 620 may be an example of a device participating in federated learning. Such devices may be referred to as participating devices (computing devices / physical infrastructure components). Additionally, each UE 620 may be an example of a UE 120 as described with reference to FIGS. 1 and 2. In some implementations, as shown in the example 600 of FIG. 6, each UE 620 may be placed in a group 610, 612, 614 based on one or more common attributes or settings. Each group 610, 612, 614 may correspond to a particular zone. For example, as shown in FIG. 6, a first group 610 corresponds to a first zone, a second group 612 corresponds to a second zone, and a third group 614 corresponds to a third zone. In some examples, a UE 620 may be placed in more than one group 610, 612, 614 (not shown in FIG. 6). Additionally, or alternatively, two or more zones may overlap (not shown in FIG. 6). As described, the attributes and settings may include, but are not limited to, a geographic location, a default language, or a user interface theme. As an example, each group 610, 612, 614 may be based on a UE's geographic location. In this example, the UEs 620 in a first group 610 have a common geographic location, the UEs 620 in a second group 612 have a common geographic location, and the UEs 620 in a third group 614 have a common geographic location.” (Mayyuri, [0085])
“A UE may also be referred to as an access terminal, a terminal, a mobile station, a subscriber unit, a station, and/or the like” (Mayyuri, [0041]); “Some UEs may be considered a customer premises equipment (CPE)” (Mayyuri, [0042]). A piece of equipment associated with a subscription service is associated with the organization managing that service.
“An inter-process communication component 818, such as a bus or a controller/processor, may facilitate communication between the different components” (Mayyuri, [0094]). For the remaining limitations and claims, it should be understood that controllers manage the system, even where this is not explicitly restated in this Office action.
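For illustration only (not part of the prior-art record), the tree-structured database recited in the limitation above — a root node, location nodes, and infrastructure nodes — can be sketched as follows; all node names are hypothetical:

```python
# Illustrative sketch: a tree-structured database of infrastructure
# components with a root node, location (zone) nodes, and
# infrastructure (device/organization) nodes, mirroring Mayyuri's Fig. 6.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                      # "root", "location", or "infrastructure"
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

    def path(self, target, prefix=""):
        # Resolve a node name to its slash-delimited location path.
        here = f"{prefix}/{self.name}" if prefix else self.name
        if self.name == target:
            return here
        for c in self.children:
            found = c.path(target, here)
            if found:
                return found
        return None

# A root with three zone (location) nodes, each holding UEs as
# infrastructure nodes.
root = Node("root", "root")
for z in ("zone1", "zone2", "zone3"):
    zone = root.add(Node(z, "location"))
    zone.add(Node(f"ue-{z}-a", "infrastructure"))
    zone.add(Node(f"ue-{z}-b", "infrastructure"))

print(root.path("ue-zone2-a"))  # root/zone2/ue-zone2-a
```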
form associations between datasets available to the federated learning system and nodes in the tree-structured database by storing, in a metadata database, realm values for the datasets that reference the nodes in the tree-structured database, wherein the realm values define access boundaries restricting where the datasets may be used for model training:
“In some implementations, as shown in the example 600 of FIG. 6, each UE 620 may be placed in a group 610, 612, 614 (realm) based on one or more common attributes or settings. Each group 610, 612, 614 may correspond to a particular zone” (Mayyuri, [0085]). Each piece of user equipment is associated with a zone (value). In Figure 6 above, the first group of equipment is associated with zone 1, the second group with zone 2, and the third with zone 3.
“Each UE 620 may use the respective zone model 604, 606 to perform one or more tasks for an application, such as vocabulary prediction, based on locally collected data (datasets). Additionally, each UE 620 may continue to train the respective zone model 604, 606 with locally collected data.” (Mayyuri, [0089])
“Each UE 620 may use the respective zone model 604, 606 to perform one or more tasks for an application, such as vocabulary prediction, based on locally collected data. Additionally, each UE 620 may continue to train the respective zone model 604, 606 with locally collected data. That is, each UE 620 may train the respective zone model 604, 606 while also using the respective zone model 604, 606 for a corresponding task, such as inference or prediction” (Mayyuri, [0089]). Each piece of user equipment is restricted to training within its zone, corresponding with its associated zone model.
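For illustration only, the realm-value association described above can be sketched as a small metadata store whose realm values reference node paths in the infrastructure tree; all dataset identifiers and addresses are hypothetical:

```python
# Illustrative sketch: a metadata database associating each dataset with a
# realm value referencing a node path in the infrastructure tree. The realm
# defines the access boundary restricting where the dataset may be used
# for model training.
metadata_db = {
    "ds-001": {"address": "ue-zone1-a:/data/local", "realm": "root/zone1"},
    "ds-002": {"address": "ue-zone2-a:/data/local", "realm": "root/zone2"},
}

def may_train(dataset_id, node_path):
    # A dataset may only be used for training at nodes inside its realm.
    realm = metadata_db[dataset_id]["realm"]
    return node_path == realm or node_path.startswith(realm + "/")

print(may_train("ds-001", "root/zone1/ue-zone1-a"))  # True
print(may_train("ds-001", "root/zone2/ue-zone2-a"))  # False
```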
receive one or more instructions to perform model training in the federated learning system with datasets specified using location paths or infrastructure identifiers referencing nodes in the tree-structured database according to the associations, wherein the one or more instructions include a groupby instruction specifying how model training results from the datasets should be aggregated based on the location paths; and configure, in response to the one or more instructions, the federated learning system to perform the model training, including orchestrating distribution of model training tasks across the computing infrastructure, based on the location paths or infrastructure identifiers, by selecting intermediate aggregator nodes based on the groupby instruction to aggregate training results from trainer nodes within specified location paths:
“After receiving the global model 602, at time t3, each UE 620 (trainer nodes) in the first group 610 and the second group 612 individually trains the global model 602 based on local data (dataset). At time t4, the respective UEs 620 in the first group 610 and the second group 612 transmit weight updates (model training results) to the first zone server 654 and the second zone server 656, respectively.” (Mayyuri, [0087])
“Furthermore, as shown in FIG. 7, at time t7, each zone server 654, 656 transmits the respective weight updates to the global server 652. The weight updates transmitted at time t7 may be raw weight updates or aggregated weight updates, such as an averaged weight updates. Raw weight updates may be an example of weight updates received at a respective zone server 654, 656 (intermediate aggregator nodes) prior to aggregation by the respective zone server 654, 656” (Mayyuri, [0089])
“Another aspect of the present disclosure is directed to an apparatus for training models at a UE. The apparatus includes a processor; a memory coupled with the processor; and instructions stored in the memory and operable, when executed by the processor, to cause the apparatus to receive, at the first UE associated with a first group of UEs, a first model from a first network device associated with a first zone model of a number of zone models. In some examples, the first group of UEs is associated with a first zone model, and a different group of UEs are associated with each of the number of zone models. Execution of the instructions further cause the apparatus to identify, at the first UE, a network device for training the first model based on one or both of a current connectivity state of the UE or a current resource use of the UE. Execution of the instructions still further cause the apparatus to transmit, to the first network device, model weight updates based on the training of the first model. Execution of the instructions also cause the apparatus to receive, from the first network device, the first zone model based on the transmitted model weights updates.” (Mayyuri, [0013])
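For illustration only, the groupby-style aggregation mapped above — trainer nodes reporting weight updates to per-zone intermediate aggregators, as in Mayyuri's zone servers — can be sketched as follows (hypothetical names and weights):

```python
# Illustrative sketch: a groupby instruction over location paths assigns
# trainer nodes to intermediate aggregators (zone servers); each
# aggregator averages its trainers' weight updates before forwarding
# them toward the global server.
from collections import defaultdict
from statistics import mean

# Hypothetical weight updates keyed by each trainer's location path.
trainers = {
    "root/zone1/ue-a": [0.25, 0.5],
    "root/zone1/ue-b": [0.75, 0.5],
    "root/zone2/ue-c": [1.0, 1.0],
}

def group_by_location(updates, depth=2):
    # "groupby" on the first `depth` path components selects one
    # intermediate aggregator per location path prefix.
    groups = defaultdict(list)
    for path, weights in updates.items():
        groups["/".join(path.split("/")[:depth])].append(weights)
    # Each aggregator averages its trainers' weight updates element-wise.
    return {zone: [mean(w) for w in zip(*ws)] for zone, ws in groups.items()}

print(group_by_location(trainers))
# {'root/zone1': [0.5, 0.5], 'root/zone2': [1.0, 1.0]}
```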
Mayyuri relates to hierarchical federated learning grouped by geography and is analogous to the claimed invention.
While Mayyuri fails to disclose the further limitations of the claim, Zhang discloses [a]n apparatus, comprising:
form associations between datasets available to the federated learning system and nodes in the tree-structured database by storing, in a metadata database, realm values for the datasets that reference the nodes in the tree-structured database, wherein the realm values define access boundaries restricting where the datasets may be used for model training:
“Federated learning is a distributed machine-learning approach in which a machine-learning model implemented on a central server (“global machine-learning model”) is trained based on one or more decentralized datasets. The decentralized datasets (“local datasets”) may include private and/or sensitive data, such as patient health data and telecommunication data, and each decentralized dataset may be owned and/or managed by different, individual systems that are referred to as “clients” in the present disclosure. Transmitting the global machine-learning model rather than the local datasets may preserve bandwidth and/or data privacy because the training data included in the local datasets are not transmitted.” (Zhang, [0013]). Private dataset training is tied to the realm of the corresponding client.
“The blockchain 130 (database) may include metadata that describes the global machine-learning model of the central server 120, the clients 110, and/or the local datasets of the clients 110. For example, the metadata published to the blockchain 130 may include one or more metadata fields, such as a training task identifier, a training round identifier, a client identifier (realm value) … In some embodiments, the metadata may be published as individual blocks (nodes) on the blockchain 130 such that each individual block of the blockchain 130 includes metadata relating to a respective client and/or local dataset for a given training round” (Zhang, [0033])
configure, in response to the one or more instructions, the federated learning system to perform the model training, including orchestrating distribution of model training tasks across the computing infrastructure, based on the location paths or infrastructure identifiers, by selecting intermediate aggregator nodes based on the groupby instruction to aggregate training results from trainer nodes within specified location paths:
“individual systems (infrastructure) that are referred to as “clients” in the present disclosure” (Zhang, [0013])
“In response to one or more of the clients 204 (infrastructure) determining that its respective local dataset is relevant to performing the training tasks, the clients 204 (infrastructure) including relevant local datasets may request to participate in training the global-machine learning model at operations 214” (Zhang, [0042])
“Based on the metadata of the local model updates, one or more local model updates may be transferred from the clients 204 to the central server 202. At operations 222, the central server 202 may read the blockchain 206 and identify which of the clients 204 (infrastructure) published metadata relating to their respective local model updates to the blockchain 206. In some embodiments, the central server 202 may read the metadata of the local model updates corresponding to the clients 204 to determine whether a threshold number of local model updates are available. Responsive to determining that the threshold number of local model updates are available, the central server 202 may send requests to each of the clients 204 that published metadata indicating their local model updates are ready to transfer their local model updates to the central server 202 at operations 224” (Zhang, [0044])
“At operations 228, the central server 202 may aggregate the local model updates obtained from the clients 204.” (Zhang, [0045])
Zhang relates to hierarchical federated learning with data grouped by infrastructure / organization and is analogous to the claimed invention. Mayyuri teaches a method of performing hierarchical federated learning by grouping data aggregation by location.
The claimed invention improves upon this method by also grouping data aggregation by infrastructure. Zhang teaches a method of performing hierarchical federated learning by grouping data aggregation by infrastructure, applicable to Mayyuri. A person of ordinary skill in the art would have recognized that grouping local dataset processing by infrastructure / organization would lead to the predictable result of incorporating into model training private data that an infrastructure node cannot securely share, and would improve the known device by increasing the amount of data available to train the model without risking security breaches due to the use of private / sensitive data.
Additionally, the claimed invention improves upon Mayyuri’s method by storing data realm identification information in a metadata database. Zhang teaches a method of storing data realm identification information in a metadata database. A person of ordinary skill in the art would have recognized that storing realm information of datasets would allow for quick retrieval and reference of such information for aggregation grouping and organization, rather than repeatedly determining the realm of each dataset whenever needed, improving the known device by caching data that requires repeated and frequent usage (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
The analysis of claims 11-12, 14, 16, and 18-19 mirrors that of claims 1-2, 4, 6, and 8-9, with the exception that claims 11-12, 14, 16, and 18-19 are directed to generic computer hardware which executes the methods of claims 1-2, 4, 6, and 8-9. This generic hardware is taught by Mayyuri, as discussed regarding claim 11. Thus, claims 11-12, 14, 16, and 18-19 are rejected under the same rationales used for claims 1-2, 4, 6, and 8-9, respectively.
Regarding claim 20, Mayyuri discloses [a] tangible, non-transitory, computer-readable medium
storing program instructions that cause a controller for a federated learning system to
execute a process: “In another aspect of the present disclosure, a non-transitory computer-readable medium with non-transitory program code recorded thereon for managing model updates at a first network device is disclosed. The program code is executed by a processor and includes program code to receive, at the first network device associated with a first zone model of a number of zone models, a global model from a second network device associated with the global model.” (Mayyuri, [0008])
Said process comprising:
representing, by a controller for a federated learning system, computing infrastructure for the federated learning system as a tree-structured database in which nodes correspond to physical or organizational infrastructure components including at least a root node, one or more location nodes representing geographic locations, and one or more infrastructure nodes representing computing devices or organizations:
[media_image1.png] (Mayyuri, Figure 6). A tree-structured database including a root node.
“FIG. 6 is a diagram illustrating an example 600 of different zones in a federated learning system, in accordance with aspects of the present disclosure. In the example 600 of FIG. 6, each UE 620 may be an example of a device participating in federated learning. Such devices may be referred to as participating devices (computing devices / physical infrastructure components). Additionally, each UE 620 may be an example of a UE 120 as described with reference to FIGS. 1 and 2. In some implementations, as shown in the example 600 of FIG. 6, each UE 620 may be placed in a group 610, 612, 614 based on one or more common attributes or settings. Each group 610, 612, 614 may correspond to a particular zone. For example, as shown in FIG. 6, a first group 610 corresponds to a first zone, a second group 612 corresponds to a second zone, and a third group 614 corresponds to a third zone. In some examples, a UE 620 may be placed in more than one group 610, 612, 614 (not shown in FIG. 6). Additionally, or alternatively, two or more zones may overlap (not shown in FIG. 6). As described, the attributes and settings may include, but are not limited to, a geographic location, a default language, or a user interface theme. As an example, each group 610, 612, 614 may be based on a UE's geographic location. In this example, the UEs 620 in a first group 610 have a common geographic location, the UEs 620 in a second group 612 have a common geographic location, and the UEs 620 in a third group 614 have a common geographic location.” (Mayyuri, [0085])
“A UE may also be referred to as an access terminal, a terminal, a mobile station, a subscriber unit, a station, and/or the like” (Mayyuri, [0041]); “Some UEs may be considered a customer premises equipment (CPE)” (Mayyuri, [0042]). A piece of equipment associated with a subscription service is associated with the organization managing that service.
“An inter-process communication component 818, such as a bus or a controller/processor, may facilitate communication between the different components” (Mayyuri, [0094]). For the remaining limitations and claims, it should be understood that controllers manage the system, even where this is not explicitly restated in this Office action.
forming, by the controller, associations between datasets available to the federated learning system and nodes in the tree-structured database by storing, in a metadata database, realm values for the datasets that reference the nodes in the tree-structured database, wherein the realm values define access boundaries restricting where the datasets may be used for model training:
“In some implementations, as shown in the example 600 of FIG. 6, each UE 620 may be placed in a group 610, 612, 614 (realm) based on one or more common attributes or settings. Each group 610, 612, 614 may correspond to a particular zone” (Mayyuri, [0085]). Each piece of user equipment is associated with a zone (value). In Figure 6 above, the first group of equipment is associated with zone 1, the second group with zone 2, and the third with zone 3.
“Each UE 620 may use the respective zone model 604, 606 to perform one or more tasks for an application, such as vocabulary prediction, based on locally collected data (datasets). Additionally, each UE 620 may continue to train the respective zone model 604, 606 with locally collected data.” (Mayyuri, [0089])
“Each UE 620 may use the respective zone model 604, 606 to perform one or more tasks for an application, such as vocabulary prediction, based on locally collected data. Additionally, each UE 620 may continue to train the respective zone model 604, 606 with locally collected data. That is, each UE 620 may train the respective zone model 604, 606 while also using the respective zone model 604, 606 for a corresponding task, such as inference or prediction” (Mayyuri, [0089]). Each piece of user equipment is restricted to training within its zone, corresponding with its associated zone model.
receiving, at the controller, one or more instructions to perform model training in the federated learning system with datasets specified using location paths or infrastructure identifiers referencing nodes in the tree-structured database according to the associations, wherein the one or more instructions include a groupby instruction specifying how model training results from the datasets should be aggregated based on the location paths; and configuring, by the controller and in response to the one or more instructions, the federated learning system to perform the model training, including orchestrating distribution of model training tasks across the computing infrastructure, based on the location paths or infrastructure identifiers, by selecting intermediate aggregator nodes based on the groupby instruction to aggregate training results from trainer nodes within specified location paths:
“After receiving the global model 602, at time t3, each UE 620 (trainer nodes) in the first group 610 and the second group 612 individually trains the global model 602 based on local data (dataset). At time t4, the respective UEs 620 in the first group 610 and the second group 612 transmit weight updates (model training results) to the first zone server 654 and the second zone server 656, respectively.” (Mayyuri, [0087])
“Furthermore, as shown in FIG. 7, at time t7, each zone server 654, 656 transmits the respective weight updates to the global server 652. The weight updates transmitted at time t7 may be raw weight updates or aggregated weight updates, such as an averaged weight updates. Raw weight updates may be an example of weight updates received at a respective zone server 654, 656 (intermediate aggregator nodes) prior to aggregation by the respective zone server 654, 656” (Mayyuri, [0089])
“Another aspect of the present disclosure is directed to an apparatus for training models at a UE. The apparatus includes a processor; a memory coupled with the processor; and instructions stored in the memory and operable, when executed by the processor, to cause the apparatus to receive, at the first UE associated with a first group of UEs, a first model from a first network device associated with a first zone model of a number of zone models. In some examples, the first group of UEs is associated with a first zone model, and a different group of UEs are associated with each of the number of zone models. Execution of the instructions further cause the apparatus to identify, at the first UE, a network device for training the first model based on one or both of a current connectivity state of the UE or a current resource use of the UE. Execution of the instructions still further cause the apparatus to transmit, to the first network device, model weight updates based on the training of the first model. Execution of the instructions also cause the apparatus to receive, from the first network device, the first zone model based on the transmitted model weights updates.” (Mayyuri, [0013])
Mayyuri relates to hierarchical federated learning grouped by geography and is analogous to the claimed invention.
While Mayyuri fails to disclose the further limitations of the claim, Zhang discloses instructions, comprising:
forming, by the controller, associations between datasets available to the federated learning system and nodes in the tree-structured database by storing, in a metadata database, realm values for the datasets that reference the nodes in the tree-structured database, wherein the realm values define access boundaries restricting where the datasets may be used for model training:
“Federated learning is a distributed machine-learning approach in which a machine-learning model implemented on a central server (“global machine-learning model”) is trained based on one or more decentralized datasets. The decentralized datasets (“local datasets”) may include private and/or sensitive data, such as patient health data and telecommunication data, and each decentralized dataset may be owned and/or managed by different, individual systems that are referred to as “clients” in the present disclosure. Transmitting the global machine-learning model rather than the local datasets may preserve bandwidth and/or data privacy because the training data included in the local datasets are not transmitted.” (Zhang, [0013]). Private dataset training is tied to the realm of the corresponding client.
“The blockchain 130 (database) may include metadata that describes the global machine-learning model of the central server 120, the clients 110, and/or the local datasets of the clients 110. For example, the metadata published to the blockchain 130 may include one or more metadata fields, such as a training task identifier, a training round identifier, a client identifier (realm value) … In some embodiments, the metadata may be published as individual blocks (nodes) on the blockchain 130 such that each individual block of the blockchain 130 includes metadata relating to a respective client and/or local dataset for a given training round” (Zhang, [0033])
configuring, by the controller and in response to the one or more instructions, the federated learning system to perform the model training, including orchestrating distribution of model training tasks across the computing infrastructure, based on the location paths or infrastructure identifiers, by selecting intermediate aggregator nodes based on the groupby instruction to aggregate training results from trainer nodes within specified location paths:
“individual systems (infrastructure) that are referred to as “clients” in the present disclosure” (Zhang, [0013])
“In response to one or more of the clients 204 (infrastructure) determining that its respective local dataset is relevant to performing the training tasks, the clients 204 (infrastructure) including relevant local datasets may request to participate in training the global-machine learning model at operations 214” (Zhang, [0042])
“Based on the metadata of the local model updates, one or more local model updates may be transferred from the clients 204 to the central server 202. At operations 222, the central server 202 may read the blockchain 206 and identify which of the clients 204 (infrastructure) published metadata relating to their respective local model updates to the blockchain 206. In some embodiments, the central server 202 may read the metadata of the local model updates corresponding to the clients 204 to determine whether a threshold number of local model updates are available. Responsive to determining that the threshold number of local model updates are available, the central server 202 may send requests to each of the clients 204 that published metadata indicating their local model updates are ready to transfer their local model updates to the central server 202 at operations 224” (Zhang, [0044])
“At operations 228, the central server 202 may aggregate the local model updates obtained from the clients 204.” (Zhang, [0045])
Zhang relates to hierarchical federated learning with data grouped by infrastructure / organization and is analogous to the claimed invention. Mayyuri teaches a method of performing hierarchical federated learning by grouping data aggregation by location.
The claimed invention improves upon this method by also grouping data aggregation by infrastructure. Zhang teaches a method of performing hierarchical federated learning by grouping data aggregation by infrastructure, applicable to Mayyuri. A person of ordinary skill in the art would have recognized that grouping local dataset processing by infrastructure / organization would lead to the predictable result of incorporating into model training private data that an infrastructure node cannot securely share, and would improve the known device by increasing the amount of data available to train the model without risking security breaches due to the use of private / sensitive data.
Additionally, the claimed invention improves upon Mayyuri’s method by storing data realm identification information in a metadata database. Zhang teaches a method of storing data realm identification information in a metadata database. A person of ordinary skill in the art would have recognized that storing realm information of datasets would allow for quick retrieval and reference of such information for aggregation grouping and organization, rather than repeatedly determining the realm of each dataset whenever needed, improving the known device by caching data that requires repeated and frequent usage (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
Regarding claim 22, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. Zhang further discloses a method, wherein the metadata database stores, for each dataset, a unique identifier, an address of the dataset, and the realm value, and wherein the datasets remain stored in computing infrastructure of their respective organizations while only the metadata is stored in the metadata database: “The blockchain 130 (database) may include metadata that describes the global machine-learning model of the central server 120, the clients 110, and/or the local datasets of the clients 110. For example, the metadata published to the blockchain 130 may include one or more metadata fields, such as a training task identifier, a training round identifier, a client identifier (realm value) … In some embodiments, the metadata may be published as individual blocks (nodes) on the blockchain 130 such that each individual block of the blockchain 130 includes metadata relating to a respective client and/or local dataset for a given training round” (Zhang, [0033])
Zhang relates to hierarchical federated learning with data grouped by infrastructure / organization and is analogous to the claimed invention. Mayyuri teaches a method of performing hierarchical federated learning by grouping data aggregation by location. The claimed invention improves upon Mayyuri’s method by storing data realm identification information in a metadata database. Zhang teaches a method of storing data realm identification information in a metadata database. A person of ordinary skill in the art would have recognized that storing realm information of datasets would allow for quick retrieval and reference of such information for aggregation grouping and organization, rather than repeatedly determining the realm of each dataset whenever needed, improving the known device by caching data that requires repeated and frequent usage (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
While Mayyuri and Zhang fail to disclose the further limitations of the claim, Roese discloses a method wherein the metadata database stores, for each dataset, a unique identifier, an address of the dataset, and the realm value, and wherein the datasets remain stored in computing infrastructure of their respective organizations while only the metadata is stored in the metadata database:
“In a general aspect, the invention features a system that associates physical locations (realm value[s]) with network-linked devices in a network to which such devices are connected” (Roese, [0008]); “This aspect can include one or more of the following features:” (Roese, [0012]); “A physical location of a device accessing the data can be determined, and the limiting of the access is then according to the determined physical location” (Roese, [0013]).
“In one example, this functionality includes a location database (metadata database) to store location information, protocol to communicate location information to other devices, and rules to enforce location-based policies (e.g., to enable policing based on location information).” (Roese, [0028])
“The information included in the location database (metadata database) can vary. For example, Table 1 is a table containing the type of information that can be included in the location database. As illustrated in Table 1, each row represents an association between a connection point and its corresponding location in one or more formats. The "Connection Point ID" column contains the unique identifier associated with a particular connection point. The connection point ID can be any ID that uniquely identifies a connection point. As described in more detail below and illustrated in Table 1, in one example the combination of a device Media Access Control (MAC) address (e.g., 0000ld00000l) and a port MAC address within the device (e.g., 00001d000101) determines the connection point ID. The locations (address[es] / realm value[s]) contained in Table 1 are included in two format types for each connection point ID. The first type is an American National Standards Institute (ANSI) Location Identification Number (LIN) and the second type is a coordinate of latitude and longitude” (Roese, [0066])
[media_image4.png] (Roese, Table 1)
In combination with Mayyuri and Zhang, where each device of the federated learning system has its own local dataset, Roese’s method identifies local datasets through their association with device connection points.
Roese relates to geographic access restrictions in distributed systems and is analogous to the claimed invention. The existing combination teaches a federated learning system that operates across geographically distributed data. The claimed invention improves upon this method by caching unique identifiers and addresses for datasets. Roese teaches a method of caching unique identifiers and addresses for devices across distributed servers, applicable to the existing combination. A person of ordinary skill in the art would have recognized that assigning unique identifiers and addresses to datasets in the federated learning system of the existing combination would lead to the predictable result of a unified method for organizing and querying distributed datasets across multiple devices, and would improve the known device by establishing consistent and fast indexing in the metadata database (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
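For illustration only, a Roese-style location database keyed by connection point ID (the combination of device MAC and port MAC, per Table 1) can be sketched as follows; all MAC addresses, LINs, and coordinates below are hypothetical:

```python
# Illustrative sketch: a location database in the manner of Roese's
# Table 1, mapping each connection point ID (device MAC + port MAC) to
# its location in two formats, an ANSI LIN and a latitude/longitude
# coordinate. All concrete values are hypothetical.
location_db = {
    ("00001d000001", "00001d000101"): {
        "ansi_lin": "212-555-0100",
        "lat_long": (42.36, -71.06),
    },
    ("00001d000002", "00001d000102"): {
        "ansi_lin": "212-555-0101",
        "lat_long": (40.71, -74.01),
    },
}

def lookup(device_mac, port_mac):
    # The combination of device MAC and port MAC determines the
    # connection point ID used as the unique key.
    return location_db.get((device_mac, port_mac))

rec = lookup("00001d000001", "00001d000101")
print(rec["lat_long"])  # (42.36, -71.06)
```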
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Mayyuri (ZONE-BASED FEDERATED LEARNING, filed 6/25/2021, US 2022/0417108 A1) in view of Zhang et al. (BLOCKCHAIN-BASED SECURE FEDERATED LEARNING, published 2/10/2022, US 2022/0044162 A1), hereafter referred to as Zhang, and further in view of Luo et al. (HFEL: Joint Edge Association and Resource Allocation for Cost-Efficient Hierarchical Federated Edge Learning, published 2020, IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 19, NO. 10, OCTOBER 2020), hereafter referred to as Luo, and De Brouwer et al. (SYSTEM AND METHOD WITH FEDERATED LEARNING MODEL FOR MEDICAL RESEARCH APPLICATIONS, published 2020, US 2020/0293887 A1), hereafter referred to as De Brouwer.
Regarding claim 23, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. While Mayyuri and Zhang fail to disclose the further limitations of the claim, Luo discloses a method, wherein selecting intermediate aggregator nodes comprises selecting the intermediate aggregator nodes based on geographic proximity to the trainer nodes within the specified location paths, and wherein at least one intermediate aggregator node is provisioned in a cloud environment located in a same geographic region as its assigned trainer nodes:
(Luo, page 6537, Fig. 1)
“we propose a novel Hierarchical Federated Edge Learning (HFEL) framework, in which edge servers (intermediate aggregators) usually fixedly deployed with base stations as intermediaries between mobile devices and the cloud, can perform edge aggregations of local models which are transmitted from devices (trainer nodes) in proximity” (Luo, page 6535, right column, paragraph 3)
“Greedy edge association: each device (trainer node) can select the connected edge server sequentially based on the geographical distance to each edge server (intermediate aggregator) in an ascending order.” (Luo, page 6543, left column, paragraph 2)
Luo relates to hierarchical federated learning with geographic organization and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to assign aggregation zones by geographic proximity, as disclosed by Luo. Luo’s hierarchical organization of federated learning achieves both lower latency and greater energy efficiency than contemporary methods. See Luo, page 6536, right column, paragraph 2.
While Luo fails to disclose the further limitations of the claim, De Brouwer discloses a method, wherein at least one intermediate aggregator node is provisioned in a cloud environment: “The disclosed system and method are in the field of machine learning. To be more specific, in the field of federated machine learning utilizing computation capability of edge devices and a federated learning (“FL”) aggregator, which is typically cloud-based, relative to the edge devices. In this context, edge devices typically are mobile devices, but also can include nodes that aggregate data from multiple users.” (De Brouwer, [0009]).
De Brouwer relates to cloud-based aggregation in federated learning and is analogous to the claimed invention. The existing combination teaches a hierarchical organization scheme for aggregation of model results. The claimed invention improves upon this method by provisioning aggregators in a cloud-computing environment. De Brouwer teaches a method of provisioning FL aggregators in a cloud-computing environment, applicable to the existing combination. A person of ordinary skill in the art would have recognized that running the aggregators of the existing combination with cloud computing would lead to the predictable result of dynamic and flexible resource allocation for devices requesting functions from the aggregators, and would improve the known device by simplifying aggregator usage by edge devices and neighboring aggregators (MPEP 2143 I. (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results).
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Mayyuri (ZONE-BASED FEDERATED LEARNING, filed 6/25/2021, US 2022/0417108 A1) in view of Zhang et al. (BLOCKCHAIN-BASED SECURE FEDERATED LEARNING, published 2/10/2022, US 20220044162 A1), hereafter referred to as Zhang, and further in view of Colgrove et al. (MANAGING CONNECTIVITY TO SYNCHRONOUSLY REPLICATED STORAGE SYSTEMS, published 2020, US 10,680,932 B1), hereafter referred to as Colgrove.
Regarding claim 24, the rejection of claim 1 in view of Mayyuri and Zhang is incorporated. Mayyuri further discloses a method, wherein the groupby instruction specifies a plurality of location paths: “participating devices may be grouped based on geographic location. In such an example, participating devices in Los Angeles may be grouped together and associated with a first zone and participating devices in New York may be grouped together and associated with a second zone. The machine learning model of a zone may be referred to as a zone model. By grouping participating devices based on inherent similarities” (Mayyuri, [0033]). “Furthermore, as shown in FIG. 7, at time t7, each zone server 654, 656 transmits the respective weight updates to the global server 652. The weight updates transmitted at time t7 may be raw weight updates or aggregated weight updates, such as an averaged weight updates. Raw weight updates may be an example of weight updates received at a respective zone server 654, 656 prior to aggregation by the respective zone server 654, 656” (Mayyuri, [0089]). When grouping by geography, each zone server represents a different location path in the model.
While Mayyuri and Zhang fail to disclose the further limitations of the claim, Colgrove discloses a method, wherein the controller automatically selects, for each dataset, a realm expansion that matches one of the plurality of location paths specified in the groupby instruction when a dataset has multiple possible realm expansions due to an organization being registered at more than one location in the tree-structured database:
“FIG. 3A sets forth a diagram of a storage system 306 that is coupled for data communications with a cloud services provider 302” (Colgrove, column 24, paragraph 2)
“In an embodiment in which the cloud services provider 302 is embodied as a private cloud, the cloud services provider 302 may be dedicated to providing services to a single organization rather than providing services to multiple organizations. In an embodiment where the cloud services provider 302 is embodied as a public cloud, the cloud services provider 302 may provide services to multiple organizations” (Colgrove, column 25, paragraph 3)
“The storage system 306 depicted in FIG. 3B also includes software resources 314 that, when executed by processing resources 312 within the storage system 306, may perform various tasks. The software resources 314 may include, for example, one or more modules of computer program instructions that when executed by processing resources 312 within the storage system 306 are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems … Such data protection techniques can include, for example, … data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems” (Colgrove, column 29, paragraph 3)
“The example method depicted in FIG. 4 also includes identifying (408), from amongst the plurality of data communications paths (422, 426, 430) (multiple possible realm expansions) between the host (432) and the plurality of storage systems (414, 424, 428) across which a dataset (412) is synchronously replicated, one or more optimal paths (realm expansion that matches one of the plurality of location paths)” (Colgrove, column 33, paragraph 4)
“Readers will appreciate that there may be performance advantages associated with the host (432) issuing I/O operations to one storage system versus another storage system, as the storage systems (414, 424, 428) may be located some distance from each other” (Colgrove, column 34, paragraph 1)
Colgrove relates to realm expansions for distributed datasets and is analogous to the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the existing combination to identify an optimal path based on host location, as disclosed by Colgrove. Doing so would improve the performance of the system by minimizing latency between host and server for different hosts. See Colgrove, column 34, paragraph 1.
Response to Arguments
The following responses address the arguments made in the remarks filed 01/20/2026.
Objections
In light of the instant amendments, previous objections to the specification have been withdrawn. However, upon further consideration, new objections have been determined.
101 Rejections
On page 9 of the instant remarks, the Applicant argues that the amended claims are allowable under 35 U.S.C. 101:
“Claims 1-20 stand rejected under 35 U.S.C. § 101 as allegedly being directed to non-statutory subject matter. As shown above, Applicant herein amends claims 1, 11, and 20 and Applicant respectfully submits that these amendments render the § 101 rejection of this claim moot.”
The Applicant’s arguments above with respect to 35 U.S.C. 101 have been fully considered and are persuasive. The amended independent claims recite specific instructions for defining realm values and model result grouping in a tree-structured database with location paths, limitations which are not well-understood, routine, or conventional. The unconventional additional elements of the independent claims amount to significantly more than the recited judicial exceptions, thus the claims are found eligible under 35 U.S.C. 101, and previous rejections have been withdrawn accordingly.
103 Rejections
The Applicant’s arguments with respect to rejections under 35 U.S.C. 103 of the claimed invention have been considered but are moot because the new grounds of rejection do not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Kenkre (Automatically identifying critical resources of an organization, published 2014, US 20130326531 A1) teaches the creation of a geographic hierarchy tree, where nodes can represent the resources of a particular organization.
Chen et al. (Mobility And Zone Management In Zone-based Federated Learning, filed 11/02/2021, US 11778484 B2) teaches a method of performing federated learning across grouped geographic zones.
Wang (MULTIPLE TREE HIERARCHICAL PORTABLE COMMUNICATION SYSTEM AND METHOD, published 1996, US5539922A) discloses representing a cell phone network with a geography tree.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron P Gormley whose telephone number is (571)272-1372. The examiner can normally be reached Monday - Friday 12:00 PM - 8:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AG/Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148