Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 5/4/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because the broadest reasonable interpretation of the “computer-readable storage medium” encompasses both transitory/signal media and non-transitory media. The claimed medium could therefore be interpreted as a signal and is not limited to statutory non-transitory media. Paragraph 107 on page 27 of the specification lists examples of a computer-readable storage medium but does not limit the scope to only non-transitory embodiments. Additionally, while the preamble mentions “processors,” the processors are not positively recited as part of the claimed medium and do not save the claim under 101. It is suggested that claim 17 be amended to recite a “non-transitory computer-readable storage medium” to overcome this rejection.
Claims 18-20 do not resolve this issue and are similarly rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (Federated Graph Neural Networks: Overview, Techniques and Challenges), in view of Jiang (Spatial-Temporal Graph Data Mining for IoT-Enabled Air Mobility Prediction).
Regarding claim 1, Liu discloses “A federated learning method, the federated learning method being performed by a federated learning system, the federated learning system including at least two first member devices and a second member device, each first member device having spatial-temporal data, and the federated learning method comprising:” (See [Section 2.1, paragraph 2], [Section 3.1, GNN Aggregation by the FL Server, paragraph 1]; A federated learning method is being performed by a federated learning system that includes multiple clients (first member devices) and a server (second member device))
“by each first member device, performing mining of graph nodes and mining of a relationship among graph nodes… to generate graph-structured data including graph nodes as vertices and graph node relationships as edges” (See [Section 3.1, GNN Aggregation by the FL Server, paragraph 2], [Section 3.1, Updating Clients' Local Graphs, paragraph 2], [Section 1, paragraph 1]; Liu discloses using FedGNN as a potential graph node mining model for the purpose of obtaining graph node features from first member devices as well as obtaining graph node relationship features between the graph nodes from the first member devices. Liu also describes how the graph nodes act as vertices and graph node relationships as edges by using an example about molecule data that shows atoms (similar to vertices) as nodes and the bonds connecting atoms (similar to node relationships) as edges.)
“training the local graph neural network model by using the graph-structured data, to obtain update amount information for the local graph neural network model;” (See [Section 4.1, paragraphs 1, 3]; First member devices train the local graph neural network model and upload the parameters to the second member device for Federated Learning aggregation to obtain update information when it is distributed by the second member device.)
“sending the update amount information to the second member device;” (See [Section 2.1, paragraph 3]; An aggregation operation updates the second member device's model using the updates from the first member devices.)
“receiving model update information from the second member device;” (See [Section 3.1, paragraph 2]; The local graphs of the first member devices are updated by broadcasting the updates from the second member device to all first member devices.)
“updating the local graph neural network model based on the model update information” (See [Section 3.1, Updating Clients' Local Graphs, paragraphs 1, 2]; The local graphs of the first member devices are updated by using the second member device's graph model topology, which is used to update edges in the graphs of the first member devices.)
Liu fails to explicitly disclose, “performing mining of graph nodes and mining of a relationship among graph nodes on local spatial-temporal data”.
Jiang teaches “performing mining of graph nodes and mining of a relationship among graph nodes on local spatial-temporal data” (See [Section 1, Paragraph 4]; Jiang discloses conducting graph node mining on a data set that has spatial-temporal correlations).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention having Liu and Jiang before them to modify Liu to mine graph nodes and a relationship among graph nodes on a data set with spatial-temporal correlations. One would be motivated to require the federated learning model to use spatial-temporal data to analyze the patterns and relationships between the graph nodes and edges in the context of spatial-temporal data, see e.g., [Jiang, Section 1, Paragraphs 3, 4], where Jiang explains that understanding the spatial-temporal correlations between graph nodes and edges is beneficial for understanding air traffic patterns.
Regarding claim 2, Liu discloses “performing mining of graph nodes and mining of a relationship among graph nodes… to generate the graph-structured data including graph nodes as vertices and graph node relationships as edges includes:” (See [Section 3.1, GNN Aggregation by the FL Server, paragraph 2], [Section 3.1, Updating Clients' Local Graphs, paragraph 2], [Section 1, paragraph 1]; Liu discloses using FedGNN as a potential graph node mining model for the purpose of obtaining graph node features from first member devices as well as obtaining graph node relationship features between the graph nodes from the first member devices. Liu also describes how the graph nodes act as vertices and graph node relationships as edges by using an example about molecule data that shows atoms (similar to vertices) as nodes and the bonds connecting atoms (similar to node relationships) as edges.)
“performing mining of graph nodes on the local spatial-temporal data by using a graph node mining model, to obtain graph node features corresponding to mined graph nodes;” (See [Section 3.1, GNN Aggregation by the FL Server, paragraph 2]; FedGNN is used as a potential graph node mining model for the purpose of obtaining graph node features from first member devices.)
“performing mining of a relationship among graph nodes on the obtained graph node features by using a relationship mining model, to obtain graph node relationship features used to represent relationships among the graph nodes;” (See [Section 3.1, Updating Clients' Local Graphs, paragraphs 1, 2]; Liu discloses using FedGNN from the potential graph node mining model as a potential relationship mining model for the purpose of obtaining graph node relationship features between the graph nodes from the first member devices.)
“generating the graph-structured data based on the obtained graph node features and the obtained graph node relationship features.” (See [Section 3.1, Updating Clients' Local Graphs, paragraph 3]; Liu discloses generating an updated version of a graph based on the obtained graph node and relationship features from graph node mining and relationship mining.)
Liu fails to explicitly disclose, “performing mining of graph nodes and mining of a relationship among graph nodes on local spatial-temporal data”.
Jiang teaches “performing mining of graph nodes and mining of a relationship among graph nodes on local spatial-temporal data” (See [Section 1, paragraph 4]; Jiang discloses conducting graph node mining on a data set that has spatial-temporal correlations).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention having Liu and Jiang before them to modify Liu to mine graph nodes and a relationship among graph nodes on a data set with spatial-temporal correlations. One would be motivated to require the federated learning model to use spatial-temporal data to analyze the patterns and relationships between the graph nodes and edges in the context of spatial-temporal data, see e.g., [Jiang, Section 1, Paragraphs 3, 4], where Jiang explains that understanding the spatial-temporal correlations between graph nodes and edges is beneficial for understanding air traffic patterns.
Regarding claim 3, Liu discloses “at least one relationship mining algorithm of a Pearson correlation coefficient (PCC) algorithm, a k-nearest neighbors (K-NN) algorithm, a distance algorithm, or a phase locking value (PLV) algorithm is configured in the relationship mining model.” (See [Section 4.1, paragraph 2]; A distance algorithm is used by denoting graph edge weights (graph node relationships) as the Euclidean distance between objects.)
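For illustration only (this sketch is not part of the cited disclosure, and the function name and sample data are hypothetical), distance-based relationship mining as described above, where edge weights are the Euclidean distances between node feature vectors, may be sketched as:

```python
import math

# Illustrative sketch: derive graph edge weights (graph node
# relationships) as the Euclidean distance between the feature
# vectors of each pair of graph nodes.
def euclidean_edge_weights(node_features):
    """Return {(i, j): distance} for every unordered pair of nodes."""
    weights = {}
    ids = list(node_features)
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            i, j = ids[a], ids[b]
            weights[(i, j)] = math.dist(node_features[i], node_features[j])
    return weights

# Two hypothetical nodes with 2-D feature vectors.
features = {"n1": (0.0, 0.0), "n2": (3.0, 4.0)}
print(euclidean_edge_weights(features))  # {('n1', 'n2'): 5.0}
```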
Regarding claim 6, Liu discloses “The federated learning method according to claim 1, comprising: by the second member device,” (See [Section 2.1, paragraph 2]; A federated learning method is being performed by a federated learning system that includes multiple clients (first member devices) and a server (second member device))
“receiving the update amount information from each first member device;” (See [Section 3.1, GNN Aggregation by the FL Server, paragraph 2]; The server (second member device) receives parameter information from clients (first member devices))
“obtaining the combined update amount information based on the update amount information received from each first member device;” (See [Section 3.1, GNN Aggregation by the FL Server, paragraph 2]; The server (second member device) updates the parameter information that was received from clients (first member devices))
“separately sending corresponding model update information to each first member device based on the combined update amount information.” (See [Section 3.1, page 3, Updating Clients' Local Graphs, paragraph 2]; The server (second member device) sends update information to each client (first member devices)).
Regarding claim 7, Liu discloses “obtaining the combined update amount information based on the update amount information includes:” (See [Section 3.1, GNN Aggregation by the FL Server, paragraph 2]; The server (second member device) updates the parameter information that was received from clients (first member devices))
“performing average calculation on the received update amount information, to obtain averaged combined update amount information.” (See [Section 3.1, page 3, GNN Aggregation by the FL Server, paragraph 1]; Liu discloses performing FedAvg (Federated Averaging) when obtaining the updated amount information to calculate the averaged combined (aggregated) update information.)
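For illustration only (this sketch is not part of the cited disclosure, and the function name and sample updates are hypothetical), the averaging step of FedAvg, in which the server averages the parameter updates received from the clients element-wise, may be sketched as:

```python
# Illustrative sketch of federated averaging (FedAvg): the server
# (second member device) averages the update amount information
# received from each client (first member device), parameter by
# parameter, to obtain the combined update amount information.
def fed_avg(client_updates):
    """Element-wise average of a list of parameter-update dicts."""
    n = len(client_updates)
    return {
        key: sum(update[key] for update in client_updates) / n
        for key in client_updates[0]
    }

# Two hypothetical client updates with parameters "w" and "b".
clients = [{"w": 1.0, "b": 0.5}, {"w": 3.0, "b": 1.5}]
print(fed_avg(clients))  # {'w': 2.0, 'b': 1.0}
```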
Regarding claim 8, Liu discloses “separately sending the corresponding model update information to each first member device including separately sending the corresponding model update information that includes the combined update amount information,” (See [Section 3.1, GNN Aggregation by the FL Server, paragraphs 1, 2]; Model update information is sent to clients (first member devices) that have combined (aggregated) update amount information from the server (second member device).)
“wherein the updating the local graph neural network model based on the model update information includes updating the local graph neural network model based on the combined update amount information included in the model update information.” (See [Section 3.1, GNN Aggregation by the FL Server, paragraphs 1, 2]; Liu discloses updating the local graph neural network model of a client based on the combined (aggregated) update amount information.)
Regarding claim 9, Liu discloses “model structure types of the local graph neural network models of the first member devices are same;” (See [Section 3.1, Updating Clients' Local Graphs, paragraph 6]; Liu discloses a method that uses the same model structure for all the clients (first member devices) of a model.)
“the separately sending the corresponding model update information to each first member device based on the combined update amount information for each first member device to update the local graph neural network model based on the corresponding model update information includes:” (See [Section 3.1, Updating Clients' Local Graphs, paragraph 3]; Each client (first member device) is sent an updated graph neural network model to update the client's local graph network based on the received updated graph network.)
“updating a local graph neural network model of the second member device based on the combined update amount information, a model structure type of the graph neural network model of the second member device being same as the model structure types of the local graph neural network models of the first member devices;” (See [Section 3.1, GNN Aggregation by the FL Server, paragraph 2]; Liu discloses updating a local graph neural network of the server (second member device) by collecting model parameters as node features from the clients (first member devices) and updating using the same graph structure that the clients use.)
“separately sending the updated graph neural network model to each first member device for each first member device to update the local graph neural network model based on the received graph neural network model.” (See [Section 3.1, Updating Clients' Local Graphs, paragraph 3]; Each client (first member device) is sent an updated graph neural network model to update the client's local graph network based on the received updated graph network.)
Regarding claims 10 and 17, these claims are similar in scope to claim 1 and are rejected under the same rationale.
Regarding claims 11 and 18, these claims are similar in scope to claim 2 and are rejected under the same rationale.
Regarding claims 12 and 19, these claims are similar in scope to claim 3 and are rejected under the same rationale.
Regarding claims 14 and 20, these claims are similar in scope to claim 6 and are rejected under the same rationale.
Regarding claim 15, this claim is similar in scope to claim 7 and is rejected under the same rationale.
Regarding claim 16, this claim is similar in scope to claim 8 and is rejected under the same rationale.
Claim Rejections - 35 USC § 103
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (Federated Graph Neural Networks: Overview, Techniques and Challenges), in view of Jiang (Spatial-Temporal Graph Data Mining for IoT-Enabled Air Mobility Prediction), and in further view of Appel (US 20170185910 A1).
Regarding claim 4, Liu discloses “the performing mining of a relationship among graph nodes on the obtained graph node features by using the relationship mining model, to obtain the graph node relationship features used to represent the relationships among the graph nodes includes:” (See [Section 3.1, Updating Clients' Local Graphs, paragraphs 1, 2]; FedGNN is used to obtain graph node relationship features between the graph nodes from the first member devices.)
“separately performing mining of a relationship among graph nodes on the obtained graph node features” (See [Section 3.1, Updating Clients' Local Graphs, paragraphs 1, 2]; FedGNN is used to mine graph node relationship features among graph nodes from the first member devices)
“obtained groups of graph node relationship features” (See [Section 3.1, page 3, GNN Aggregation by the FL Server, paragraph 2]; the server (second member device) obtains model parameters from clients (first member devices) as node features)
Liu fails to explicitly disclose, “a plurality of relationship mining algorithms are configured in the relationship mining model”
“by using the plurality of relationship mining algorithms configured in the relationship mining model, to obtain a group of graph node relationship features mined by using each relationship mining algorithm;”
“comparing groups to determine a target group of graph node relationship features;”
“outputting the determined group of graph node relationship features as the graph node relationship features between the graph nodes.”
Appel teaches “a plurality of relationship mining algorithms are configured in the relationship mining model” (See [0041]; several types of graph mining algorithms are used in the given model to perform mining on the network).
“by using the plurality of relationship mining algorithms configured in the relationship mining model, to obtain a group of graph node relationship features mined by using each relationship mining algorithm;” (See [0041]; the data mining module uses different mining algorithms and uses each one to mine on the graph nodes)
“comparing groups to determine a target group of graph node relationship features;” (See [0041]; the mining algorithms can be ranked based on their results to determine a target group)
“outputting the determined group of graph node relationship features as the graph node relationship features between the graph nodes.” (See [0067]; Appel discloses an example of a determined group of graph node relationship features being outputted using a graph visualization.)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention having Liu, Jiang, and Appel before them to modify Liu to configure a plurality of relationship mining algorithms in the relationship mining model, compare the groups of graph node relationship features mined by each algorithm to determine a target group, and output the determined group of graph node relationship features. One would be motivated to do so in order to run several mining algorithms, compare their results to determine which algorithm performs best during the graph mining operations, and thereby obtain the best available group of graph node relationship features.
Regarding claim 13, this claim is similar in scope to claim 4.
Claim Rejections - 35 USC § 103
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Liu (Federated Graph Neural Networks: Overview, Techniques and Challenges), in view of Jiang (Spatial-Temporal Graph Data Mining for IoT-Enabled Air Mobility Prediction), in further view of Zhu (Federated Learning of Molecular Properties with Graph Neural Networks in a Heterogeneous Setting).
Regarding claim 5, Liu fails to explicitly disclose, “model structure types of the local graph neural network models of the first member devices are different.”
Zhu teaches “model structure types of the local graph neural network models of the first member devices are different.” (See [Section 3.1, paragraph 2]; Zhu discloses local graph neural networks that use either MPNN or SchNet, two different local graph neural network model structure types, as the local models’ structures).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention having Liu, Jiang, and Zhu before them to modify Liu to use different model structure types for the first member devices. One would be motivated to do so in order to run a different model depending on a certain first member device, as some first member devices may require a specific model while others may need a different one, thereby improving compatibility.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID KIM whose telephone number is (571)272-4331. The examiner can normally be reached 7:30 AM - 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.K./Examiner, Art Unit 2141
/MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141