Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 and 10-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1,
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 1 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“generating …inference outputs based on inferences locally performed on local input data samples”
“using the inference outputs to provide a standardized inference output corresponding to the local input data samples at the at least one node computer for assessing performance of the federation model locally deployed on the at least one node computer”
“wherein the assessing performance further comprises determining whether an inference output from the federation model locally deployed deviates in a predetermined manner from the standardized inference output”
“adjusting model parameters associated with the federation model locally deployed to mitigate deviation of the federation model locally deployed”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgment with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)) and insignificant extra-solution activity (See MPEP 2106.05(g)).
The limitations:
“in at least one node computer among a plurality of node computers of the computer system”
“via a federation model locally deployed in the at least one node computer”
“by the federation model”
“at least in a preliminary operating phase of the computer system, using the generated inference outputs from the locally performed inferences on the local input data samples, and the at least the subset of the inference outputs corresponding to the local input data samples from the at least the subset of other deployed federation models, to train a metamodel”
“in the computer system”
“via the control server”
“in response to determining that the inference output from the federation model locally deployed deviates from the standardized inference output …locally training the federation model locally deployed using the standardized inference output, wherein the locally training further comprises at least one of: using the standardized inference output as a training label for an input sample in a training stage for the federation model locally deployed”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
The limitations:
“based on the generated inference outputs from the locally performed inferences on at least a portion of the local input data samples, requesting, via a control server in the computer system, at least a subset of the inference outputs corresponding to the local input data samples from at least a subset of other deployed federation models at other node computers among the plurality of node computers”
“in response to determining that the inference output from the deployed federation model deviates from the standardized inference output, alerting the at least one node computer”
“in response to determining that the performance of the federation model locally deployed deviates in another predetermined manner from the performance of the metamodel, replacing the federation model locally deployed with the metamodel”
As drafted, are additional elements that amount to no more than insignificant extra-solution activity. See MPEP 2106.05(g).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply” and “insignificant extra-solution activity”. Specifically, the requesting, alerting, and replacing limitations recite the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Mere instructions to apply an exception and insignificant extra-solution activity cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 2,
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 2 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“using the inference outputs from the at least the subset of the respective federation models to provide the standardized inference output corresponding to the local input data samples corresponding to each node computer for assessing performance of each respective federation model deployed on each node computer”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgment with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)) and insignificant extra-solution activity (See MPEP 2106.05(g)).
The limitations:
“in each node computer among the plurality of node computers of the computer system, deploying a respective federation model for inference on the local input data samples corresponding to a node computer among the plurality of node computers to obtain inference outputs for the local input data samples”
“in the computer system”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
The limitations:
“providing the inference outputs for use as the inference results at the node computer”
“for at least the portion of the local input data samples at each node computer, obtaining the inference outputs from at least a subset of respective federation models based on the respective federation model in each node computer”
As drafted, are additional elements that amount to no more than insignificant extra-solution activity. See MPEP 2106.05(g).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply” and “insignificant extra-solution activity”. Specifically, the providing and obtaining limitations recite the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Mere instructions to apply an exception and insignificant extra-solution activity cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 3,
Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 3 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 2.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are additional details that do not apply the exception in a meaningful way (See MPEP 2106.05(e)).
The limitations:
“wherein said each node computer comprises respective edge devices in a data communications network”
As drafted, are additional elements that do not apply an exception for the abstract ideas in a meaningful way. See MPEP 2106.05(e).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements do not apply the exception in a meaningful way. The claim is not patent eligible.
Regarding Claim 4,
Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 4 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“producing the standardized inference output corresponding to a respective input data sample as a function of the inference outputs from each respective federation model for the respective input data sample”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgment with the assistance of pen and paper.
Step 2A Prong Two Analysis: See corresponding analysis of claim 2.
Step 2B Analysis: See corresponding analysis of claim 2.
Regarding Claim 5,
Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 5 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 4.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are additional details that do not apply the exception in a meaningful way (See MPEP 2106.05(e)).
The limitations:
“wherein the standardized inference output comprises one of a majority vote and an average derived from the inference outputs from each respective federation model”
As drafted, are additional elements that do not apply an exception for the abstract ideas in a meaningful way. See MPEP 2106.05(e).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements do not apply the exception in a meaningful way. The claim is not patent eligible.
Regarding Claim 6,
Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 6 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 4.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are additional details that do not apply the exception in a meaningful way (See MPEP 2106.05(e)).
The limitations:
“wherein the inference outputs of each respective federation model indicate a confidence value associated with a respective inference output, and wherein producing the standardized inference output from the inference outputs based on each respective federation model is dependent on the confidence value associated with the inference outputs”
As drafted, are additional elements that do not apply an exception for the abstract ideas in a meaningful way. See MPEP 2106.05(e).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements do not apply the exception in a meaningful way. The claim is not patent eligible.
Regarding Claim 7,
Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 7 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 2.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)).
The limitations:
“in response to training the metamodel, obtaining the inference outputs for the input data samples from at least the metamodel to provide the standardized inference output corresponding to the respective input data sample”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 8,
Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 8 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“defining the respective input data sample and corresponding inference output for the respective input data sample from each node computer”
“using the inference data to request the corresponding inference output for the respective input data sample from the subset of the respective federation models”
“using the inference outputs from the subset of the respective federation models to provide the standardized inference output corresponding to the respective input data sample at each node computer”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgment with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)) and insignificant extra-solution activity (See MPEP 2106.05(g)).
The limitations:
“wherein the computer system comprises the control server for communication with the plurality of node computers via a data communications network”
“at the control server”
“on the plurality of node computers”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
The limitations:
“at each node computer, sending to the control server inference data”
As drafted, are additional elements that amount to no more than insignificant extra-solution activity. See MPEP 2106.05(g).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply” and “insignificant extra-solution activity”. Specifically, the sending limitation recites the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Mere instructions to apply an exception and insignificant extra-solution activity cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 10,
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 10 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“processing a raw input data sample to produce the inference data defining the raw input data sample such that the raw input data sample is hidden in the inference data”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgment with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)).
The limitations:
“at each node computer”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 11,
Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 11 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“using the inference outputs from the at least the stored subset of the other federation models to produce the standardized inference output corresponding to each input data sample associated with the local input data samples”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgment with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are insignificant extra-solution activity (See MPEP 2106.05(g)).
The limitations:
“in the at least one node computer of the system: storing the at least the subset of the other federation models”
“obtaining the inference outputs from the at least the stored subset of the other federation models for the local input data samples in the at least one node computer”
As drafted, are additional elements that amount to no more than insignificant extra-solution activity. See MPEP 2106.05(g).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “insignificant extra-solution activity”. Specifically, the storing limitation recites the well-understood, routine, and conventional activity of storing and retrieving information in memory. MPEP 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Additionally, the obtaining limitation recites the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Insignificant extra-solution activity cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 12,
Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 12 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 11.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are additional details that do not apply the exception in a meaningful way (See MPEP 2106.05(e)).
The limitations:
“wherein the standardized inference output comprises one of a majority vote and an average derived from the inference outputs”
As drafted, are additional elements that do not apply an exception for the abstract ideas in a meaningful way. See MPEP 2106.05(e).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements do not apply the exception in a meaningful way. The claim is not patent eligible.
Regarding Claim 13,
Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 13 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“comparing the standardized inference output with an inference output from the inference outputs of the federation model locally deployed for inference at the at least one node computer”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgment with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)).
The limitations:
“in the at least one node computer”
“in response to determining that the inference output of the federation model locally deployed deviates in a predetermined manner from the standardized inference output, training the federation model locally deployed using the inference outputs from the at least the stored subset of the other federation models”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 14,
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 14 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 1.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)) and insignificant extra-solution activity (See MPEP 2106.05(g)).
The limitations:
“at least in the preliminary operating phase of the computer system, using the inference outputs from the other stored models for each data sample to train the metamodel included in the federation of models”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
The limitations:
“in the at least one node computer: storing the at least the subset of the other federation models”
“obtaining the inference outputs from the at least the stored subset of the other federation models for the local input data samples at the at least one node computer”
“in response to training the metamodel, obtaining the inference outputs for each local input data sample from at least the metamodel to provide the standardized inference output”
As drafted, are additional elements that amount to no more than insignificant extra-solution activity. See MPEP 2106.05(g).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply” and “insignificant extra-solution activity”. Specifically, the storing and obtaining from storage limitations recite the well-understood, routine, and conventional activity of storing and retrieving information in memory. MPEP 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Additionally, the obtaining from the model limitation recites the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Mere instructions to apply an exception and insignificant extra-solution activity cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 15,
Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 15 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“comparing performance of the federation model locally deployed for inference on received input data samples with performance of the metamodel for the received input data samples”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgment with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)).
The limitations:
“in the at least one node computer”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 16,
Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 16 is directed to a method for federated learning among a federation of machine learning models in a computer system, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“using the inference outputs from the respective federation model and the inference outputs from the at least the stored subset of the other federation models to produce the standardized inference output corresponding to each local input data sample”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgement, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgement with assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)) and insignificant extra-solution activity (See MPEP 2106.05(g)).
The limitations:
“in each node computer associated with the plurality of node computers of the computer system: deploying a respective federation model for inference on the local input data samples at a node computer associated with the plurality of node computers to obtain inference outputs for each local input data sample corresponding to the local input data samples”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
The limitations:
“providing the inference outputs for use as inference results at the node computer”
“storing the at least the subset of the other federation models”
“obtaining the inference outputs from the at least the stored subset of the other federation models for the local input data samples at the node computer”
As drafted, are additional elements that amount to no more than insignificant extra-solution activity. See MPEP 2106.05(g).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply” and “insignificant extra-solution activity”. Specifically, the storing and obtaining from storage limitations recite the well-understood, routine, and conventional activity of storing and retrieving information in memory. MPEP 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Additionally, the providing limitation recites the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Mere instructions to apply an exception and insignificant extra-solution activity cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 17,
Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 17 is directed to a computer system for federated learning among a federation of machine learning models, which is directed to a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“generating …inference outputs based on inferences locally performed on local input data samples”
“for at least the portion of the local input data samples, using the inference outputs from the federation model locally deployed and the subset of the other federation models to provide a standardized inference output corresponding to a local input data sample at the at least one node computer and for assessing performance of the federation model locally deployed at the at least one node computer”
“wherein the assessing performance further comprises determining whether an inference output from the federation model locally deployed deviates in a predetermined manner from the standardized inference output”
“adjusting model parameters associated with the federation model locally deployed to mitigate deviation of the federation model locally deployed”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgement, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgement with assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)) and insignificant extra-solution activity (See MPEP 2106.05(g)).
The limitations:
“A computer system for federated learning among a federation of machine learning models”
“at least one node computer, among a plurality of node computers of the computer system”
“via a federation model locally deployed in the at least one node computer”
“by the federation model”
“a control server, in the computer system”
“at least in a preliminary operating phase of the computer system, using the generated inference outputs from the locally performed inferences on the local input data samples, and the at least the subset of the inference outputs corresponding to the local input data samples from the at least the subset of other deployed federation models, to train a metamodel”
“in response to determining that the inference output from the federation model locally deployed deviates from the standardized inference output … locally training the federation model locally deployed using the standardized inference output, wherein the locally training further comprises at least one of: using the standardized inference output as a training label for an input sample in a training stage for the federation model locally deployed”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
The limitations:
“requesting, based on the generated inference outputs from the locally performed inferences on at least a portion of the local input data samples, at least a subset of the inference outputs corresponding to the local input data samples from at least a subset of other deployed federation models at other node computers among the plurality of node computers”
“in response to determining that the inference output from the federation model locally deployed deviates from the standardized inference output, alerting the at least one node computer”
“in response to determining that the performance of the federation model locally deployed deviates in another predetermined manner from the performance of the metamodel, replacing the federation model locally deployed with the metamodel”
As drafted, are additional elements that amount to no more than insignificant extra-solution activity. See MPEP 2106.05(g).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply” and “insignificant extra-solution activity”. Specifically, the requesting, alerting, and replacing limitations recite the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Mere instructions to apply an exception and insignificant extra-solution activity cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 18,
Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 18 is directed to a computer system for federated learning among a federation of machine learning models, which is directed to a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein the control server uses the inference data to request the inference output corresponding to the input data sample from the at least the subset of the other federation models at other node computers, and uses the inference outputs from the at least the subset of the other federation models to provide the standardized inference output corresponding to the input data sample at each node computer”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgement, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgement with assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)) and insignificant extra-solution activity (See MPEP 2106.05(g)).
The limitations:
“the plurality of node computers, with each node computer among the plurality of node computers deploying a respective federation model for inference on the local input data samples corresponding to a node computer to obtain an inference output for each local input data sample, and to provide the inference outputs for use as inference results at the node computer”
“the control server communicating with the plurality of node computers via a data communications network”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
The limitations:
“and with each node computer sending to the control server inference data and defining an input data sample and inference output for the inference data sample at the node computer”
As drafted, are additional elements that amount to no more than insignificant extra-solution activity. See MPEP 2106.05(g).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply” and “insignificant extra-solution activity”. Specifically, the sending limitation recites the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Mere instructions to apply an exception and insignificant extra-solution activity cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 19,
Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 19 is directed to a computer system for federated learning among a federation of machine learning models, which is directed to a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“using the inference outputs from the at least the subset of the other federation models to produce the standardized inference output corresponding to each local input data sample”
As drafted, under their broadest reasonable interpretations, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgement, opinion). The above limitations in the context of this claim correspond to mental processes, e.g., evaluation and judgement with assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are insignificant extra-solution activity (See MPEP 2106.05(g)).
The limitations:
“for the at least one node computer: storing the at least the subset of the other federation models”
“obtaining the inference outputs from the at least the stored subset of the other federation models for the local input data samples at the at least one node computer”
As drafted, are additional elements that amount to no more than insignificant extra-solution activity. See MPEP 2106.05(g).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “insignificant extra-solution activity”. Specifically, the storing and obtaining from storage limitations recite the well-understood, routine, and conventional activity of storing and retrieving information in memory. MPEP 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Insignificant extra-solution activity cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 20,
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 20 is directed to a computer system for federated learning among a federation of machine learning models, which is directed to a machine, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 17.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (See MPEP 2106.05(f)).
The limitations:
“in response to training the metamodel, obtaining for a local input data sample at the at least one node computer, an inference output from at least the metamodel to provide the standardized output corresponding to the local input data sample”
As drafted, are additional elements that amount to no more than mere instructions to apply an exception for the abstract ideas. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-8, and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ben-Itzhak et al. (U.S. Patent Publication No. 2022/0101189) (“Ben-Itzhak”) in view of Knuesel et al. (U.S. Patent Publication No. 2022/0405651) (“Knuesel”) in further view of Marshall (U.S. Patent Publication No. 2022/0399946) (“Marshall”) and Chen et al. (U.S. Patent Publication No. 2022/0101045) (“Chen”).
Regarding claim 1, Ben-Itzhak teaches a method for federated learning among a federation of machine learning models in a computer system, the method comprising: in at least one node computer among a plurality of node computers of the computer system, generating, via a federation model locally deployed in the at least one node computer, inference outputs based on inferences locally performed on local input data samples by the federation model (Ben-Itzhak [0015] “To provide context, FIG. 1 depicts a system environment 100 comprising a plurality of computing nodes 102(1)-(n) that are configured to implement conventional federated learning.”; [0022] “Turning now to flowchart 300 of FIG. 3, during the query processing/inference phase of federated learning, a given node 102(k) can receive a query data instance q, which is an unlabeled data instance (i.e., a data instance with a feature set x but without a label y) for which a prediction is requested/desired (block 302). In response, node 102(k) can provide query data instance q as input to its copy 106(k) of the trained version of global ML model M (block 304) and copy 106(k) of M can generate a prediction p for q (block 304). 
Finally, at block 308, node 102(k) can output p as the final prediction result for query data instance q and the flowchart can end” Ben-Itzhak provides a plurality of nodes in a computing system with respective federated learning models that locally generate inference/prediction outputs based on inferences locally performed on local input data samples by the federation model.); based on the generated inference outputs from the locally performed inferences on at least a portion of the local input data samples, requesting, via a control server in the computer system, at least a subset of the inference outputs corresponding to the local input data samples from at least a subset of other deployed federation models at other node computers among the plurality of node computers (Ben-Itzhak [0026] “Then, during a query processing/inference phase of federated inference, query server 404 can receive a query data instance for which a prediction is requested or desired and can transmit the query data instance to some or all of nodes 402(1)-(n). In response, each receiving node can provide the query data instance as input to the trained version of its local ML model and thereby generate a prediction (referred to herein as a “per-node prediction”) for the query data instance. Each receiving node can then submit its per-node prediction to query server 404 in an encrypted format, such that the per-node prediction (and in some cases, the identity of the node) cannot be learned by query server 404.” Ben-Itzhak provides query server 404 corresponding to a control server which requests at least a subset of the inference outputs corresponding to the local input data samples from at least a subset of other deployed federation models at other node computers among the plurality of node computers.)
…in the computer system, using, via the control server, the inference outputs to provide a standardized inference output corresponding to the local input data samples at the at least one node computer for assessing performance of the federation model locally deployed on the at least one node computer (Ben-Itzhak [0027] “Upon receiving the per-node predictions, query server 404 can aggregate them using an ensemble technique such as majority vote and generate, based on the resulting aggregation, a federated prediction for the query data instance.”; [0036] “In one set of embodiments, the aggregation performed at block 612 can comprise tallying a vote count for each distinct per-node prediction received from nodes 402(1)-(n) indicating the number of times that per-node prediction was submitted by a node at block 608. Query server 404 can then select, as the federated prediction, the distinct per-node prediction that received the highest number of votes (or in other words, was submitted by the most nodes).” Ben-Itzhak provides producing an inference output from a plurality of federation models based on a majority vote, which corresponds to providing a standardized inference output corresponding to the local input data samples at the at least one node computer for assessing performance of the federation model deployed on the at least one node computer.), wherein the assessing performance further comprises determining whether an inference output from the federation model locally deployed deviates in a predetermined manner from the standardized inference output (Ben-Itzhak [0042] “Starting with block 702, upon generating federated predictions for a batch b of query data instances and outputting those federated predictions as the final prediction results for the query data instances per blocks 612 and 614 of FIG. 6, query server 404 can, at some later point in time, receive the correct predictions for the query data instances in batch b, as well as the per-node/per-subset predictions for each query data instance submitted by the nodes or node subsets.” Ben-Itzhak provides subsequently comparing the node predictions, including the majority vote, to a correct prediction, corresponding to determining whether an inference output from the deployed federation model deviates in a predetermined manner from the standardized inference output.); in response to determining that the inference output from the federation model locally deployed deviates from the standardized inference output, alerting the at least one node computer (Ben-Itzhak [0043] “In response, for each query data instance q in batch b, query server 404 can use q, the correct prediction for q, and the per-node/per-subset predictions for q as training data to train a reinforcement learning-based ML model R, where the training enables R to take as input query data instances that are similar to q and predict which nodes/node subsets will generate correct predictions for those query data instances (block 704). Query server 404 can then return to block 702 in order to train R using the next batch of query data instances.”; [0045] “At block 804, the trained version of R can output (in accordance with its training shown in FIG. 7) a group of nodes that the model believes will generate correct per-node predictions for q. Query server 404 can then transmit q to the group of nodes output by R at block 806 (block 806) and the remainder of flowchart 800 (i.e., blocks 808-818) can proceed in a similar manner as blocks of 604-614 of FIG. 6.” Ben-Itzhak provides transmitting to the nodes which provide correct predictions, corresponding to in response to determining that the inference output from the deployed federation model deviates from the standardized inference output, alerting the at least one node computer.)
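For context only, the vote-tallying aggregation quoted from Ben-Itzhak [0036] amounts to the following sketch; the function and node names are hypothetical illustrations, not drawn from the reference:

```python
from collections import Counter

def federated_prediction(per_node_predictions):
    """Aggregate per-node predictions by majority vote and flag nodes whose
    local inference deviates from the standardized (federated) output.
    Illustrative sketch only; names are hypothetical."""
    # Tally a vote count for each distinct per-node prediction
    votes = Counter(per_node_predictions.values())
    # The prediction with the highest vote count becomes the federated
    # (standardized) prediction
    standardized, _ = votes.most_common(1)[0]
    # Nodes whose local output deviates from the standardized output
    deviating = [node for node, pred in per_node_predictions.items()
                 if pred != standardized]
    return standardized, deviating
```

For example, `federated_prediction({"n1": "stop", "n2": "stop", "n3": "go"})` would return `("stop", ["n3"])`, identifying node `n3` as deviating from the majority vote.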
Ben-Itzhak fails to teach …at least in a preliminary operating phase of the computer system, using the generated inference outputs from the locally performed inferences on the local input data samples, and the at least the subset of the inference outputs corresponding to the local input data samples from the at least the subset of other deployed federation models, to train a metamodel … and locally training the federation model locally deployed using the standardized inference output, wherein the locally training further comprises at least one of: using the standardized inference output as a training label for an input sample in a training stage for the federation model locally deployed, and adjusting model parameters associated with the federation model locally deployed to mitigate deviation of the federation model locally deployed …and in response to determining that the performance of the federation model locally deployed deviates in another predetermined manner from the performance of the metamodel, replacing the federation model locally deployed with the metamodel.
However, Knuesel teaches locally training the federation model locally deployed using the standardized inference output, wherein the locally training further comprises at least one of: using the standardized inference output as a training label for an input sample in a training stage for the federation model locally deployed (Knuesel [0175] “Combining may also be done by majority voting among at least the multiple model results.”; [0183] “Federation client 400 comprises a training unit 430 configure to train the local machine learnable model on the multiple local mapped training records. The training algorithm depends on the type of model. For example, for a regression model a corresponding regression algorithm may be used. For a neural network model, a corresponding backtracking algorithm may be used.”; [0184] “The locally trained machine learnable model is trained on a first federated machine learning client from multiple training records local to the first federated machine learning client. The end result of the training is a locally trained parameter set 441. It can be stored in models 330, for later use in inference. The locally trained parameter set 441 may also be sent to the federation server 370. Federation server 370 may store locally trained parameter set 441 and later distribute it to other federation clients. Note that the data on which parameter set 441 is trained are not sent to server 370. 
The one or more machine learnable models received by server 370 may have been trained on further federated machine learning clients from further multiple training records local to the further federated machine learning clients.”; [0187] “After training is complete, the trained model 441 may be updated when new data is obtained, e.g., when database 410 is extended.” Knuesel provides local training of federated learning models using majority voting including the use of additional or new data for local training corresponding to using the standardized inference output as a training label for an input sample in a training stage for the deployed federation model.), and adjusting model parameters associated with the federation model locally deployed to mitigate deviation of the federation model locally deployed (Knuesel [0315] “For example, applying a model, e.g., a neural network, to data of the training data, and/or adjusting the stored parameters to train the model may be done using an electronic computing device, e.g., a computer. The models may have multiple parameters, e.g., at least 2, 5, 10, 15, 20 or 40, 1000, 10000, or more, etc.” Knuesel provides adjusting model parameters associated with the deployed federation model.);
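The local-training limitation mapped to Knuesel above can be pictured, under stated assumptions, as an ordinary gradient step that uses the standardized inference output as the training label. The one-parameter linear model below is hypothetical and only illustrates the mechanics of adjusting model parameters to mitigate deviation:

```python
def local_training_step(w, x, standardized_label, lr=0.1):
    """One SGD step on squared error, nudging a one-parameter linear model
    toward the standardized inference output used as its training label.
    Hypothetical sketch; not taken from Knuesel."""
    pred = w * x
    # Gradient of (w*x - y)**2 with respect to w
    grad = 2.0 * (pred - standardized_label) * x
    # Adjust the model parameter to mitigate deviation from the label
    return w - lr * grad
```

Repeating such steps drives the local prediction toward the standardized output, which is the sense in which the deviation is "mitigated."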
Ben-Itzhak and Knuesel are both considered to be analogous to the claimed invention because they are in the same field of artificial intelligence and more specifically federated learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak with the above teachings of Knuesel. Doing so would allow for constructing a federated model of improved model performance, that is subsequently distributed back to all parties (Knuesel [0127] “This has the advantage that the aggregated model is trained over the combination of all data of all participating clients, but without having to collect all data at a central location. Only statistical information concerning the learned model updates are shared across the clients companies for constructing a federated model of improved model performance, that is subsequently distributed back to all parties.”).
Further, Marshall teaches …at least in a preliminary operating phase of the computer system, using the generated inference outputs from the locally performed inferences on the local input data samples, and the at least the subset of the inference outputs corresponding to the local input data samples from the at least the subset of other deployed federation models, to train a metamodel (Marshall [0038] “In some implementations, with user permission, federated learning may be utilized to update one or more trained models, e.g., where individual server devices may each perform local model training, and the updates to the models may be aggregated to update one or more central versions of the model.”; [0025] “Described implementations use cascaded models, e.g., where the outputs of one set of physics-specific models is used to train the meta-model, and/or where the output of the meta-model determines use of a specialized model that uses a partitioned data set to obtain a result.” Marshall provides a federated learning method comprising respective federation models, where each respective model output is used to train a metamodel in the federation of machine learning models.)
Ben-Itzhak, Knuesel and Marshall are all considered to be analogous to the claimed invention because they are in the same field of artificial intelligence and more specifically federated learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel with the above teachings of Marshall. Doing so would allow for a selection of the most accurate model out of multiple available models (Marshall [0076] “In block 408, the meta-model is executed to select the most accurate individual physics-specific model of the multiple available specific models that the meta-model has been trained to select from.”).
Further, Chen teaches …and in response to determining that the performance of the federation model locally deployed deviates in another predetermined manner from the performance of the metamodel, replacing the federation model locally deployed with the metamodel (Chen [0059] “In this configuration, the federated learning module 450 includes a current onboard model 452, and an onboard training module 460 (e.g., a federated learning training module).”; [0065] “At optional block 608, a current version of the traffic light recognition model is replaced with an updated version of the traffic light recognition model when a performance of the updated version of the traffic light recognition model is greater than a predetermined value. For example, the auto-evaluation module 318 of FIG. 3 automatically evaluates or measures the goodness of the traffic light detection model 314 of FIG. 3 after training using the training data from the traffic light (TL) auto-labeling module 312 of FIG. 3.” Chen provides replacing a locally deployed traffic light recognition model (corresponding to the federation model locally deployed) with an updated version of that model (corresponding to the metamodel) when the performance of the updated version exceeds a predetermined value, which corresponds to replacing the federation model locally deployed with the metamodel in response to determining that the performance of the federation model locally deployed deviates in another predetermined manner from the performance of the metamodel.).
Ben-Itzhak, Knuesel, Marshall and Chen are all considered to be analogous to the claimed invention because they are in the same field of artificial intelligence, and more specifically federated learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall with the above teachings of Chen. Doing so would overcome the need to store the data in the cloud when performing machine learning (Chen [0022] “This federated learning overcomes the need to store the data in the cloud when performing machine learning.”).
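For illustration only, the conditional replacement Chen describes in [0065] (replacing the current onboard model with the updated version when its measured performance exceeds a predetermined value) may be sketched as follows; the function name, model objects, performance measure, and threshold below are hypothetical placeholders, not drawn from the reference:

```python
def maybe_replace_model(current_model, updated_model, performance, threshold):
    """Keep the updated model only if its measured performance exceeds a
    predetermined value, per the auto-evaluation step in Chen [0065].
    `performance` is a callable returning a scalar score for a model;
    all names here are illustrative placeholders."""
    if performance(updated_model) > threshold:
        return updated_model  # deploy the updated version of the model
    return current_model      # otherwise retain the current onboard model
```

Under this sketch, the locally deployed model corresponds to `current_model` and the metamodel to `updated_model`, with the threshold comparison standing in for the "another predetermined manner" deviation determination.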
Regarding claim 2, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 1 as discussed above in the rejection of claim 1, further comprising: in each node computer among the plurality of node computers of the computer system, deploying a respective federation model for inference on the local input data samples corresponding to a node computer among the plurality of node computers to obtain inference outputs for the local input data samples (Ben-Itzhak [0033] “FIG. 6 depicts a flowchart 600 that may be executed by query server 404 and nodes 402(1)-(n) of FIG. 4 for carrying out the query processing/inference phase of federated inference with respect to a query data instance q according to certain embodiments.”; [0034] “Starting with block 602, query server 404 can receive query data instance q and can transmit q to each node 402(i). In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608).” Ben-Itzhak provides a plurality of node computers in a system, where each node deploys its own local copy of a federation model to produce their own prediction/inference on local input data, as shown in block 606 of Figure 6.), and providing the inference outputs for use as the inference results at the node computer (Ben-Itzhak [0034] “In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608). 
The specific type of encryption used at block 608 can vary depending on the implementation.” Ben-Itzhak provides each node submitting their respective prediction/inference to a server, as shown in block 608 of Figure 6, which corresponds to providing the inference outputs for use as inference results at the at least one node computer.); in the computer system, for at least the portion of the local input data samples at each node computer, obtaining the inference outputs from at least a subset of respective federation models based on the respective federation model in each node computer (Ben-Itzhak [0035] “At blocks 610 and 612, query server 404 can receive the per-node predictions submitted by nodes 402(1)-(n) and can employ MPC protocol 408 to aggregate the per-node predictions and generate a federated prediction based on that aggregation” Ben-Itzhak provides the server receiving the per node predictions/inferences corresponding to the computer system obtaining the inference outputs from at least a subset of other federation models.); and in the computer system, using the inference outputs from the at least the subset of the respective federation models to provide the standardized inference output corresponding to the local input data samples corresponding to each node computer for assessing performance of each respective federation model deployed on each node computer (Ben-Itzhak [0027] “Upon receiving the per-node predictions, query server 404 can aggregate them using an ensemble technique such as majority vote and generate, based on the resulting aggregation, a federated prediction for the query data instance.”; [0036] “Query server 404 can then select, as the federated prediction, the distinct per-node prediction that received the highest number of votes (or in other words, was submitted by the most nodes). 
For example, if nodes 402(1) and 402(2) submitted per-node prediction “A” (resulting in two votes for “A”), node 402(3) submitted per-node prediction “B” (resulting in one vote for “B”), and node 402(4) submitted per-node prediction “C” (resulting in one vote for “C”), query server 404 would select “A” as the federated prediction at block 612 because “A” has the highest vote count.”; [0037] “Query server 404 can then select, as the federated prediction, the distinct per-node prediction with the highest average confidence level, or provide an aggregated confidence distribution vector that indicates the average confidence level for each possible prediction.” Ben-Itzhak provides server 404 using majority vote to produce a standardized inference output corresponding to the local input data samples at the at least one node computer and selecting the predictions with the highest confidence level corresponding to assessing performance of the federation model deployed on the at least one node computer.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 1.
Regarding claim 3, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 2 as discussed above in the rejection of claim 2, wherein said each node computer comprises respective edge devices in a data communications network (Ben-Itzhak [0029] “For example, although query server 404 of system environment 400 is depicted as a singular server/computer system, in some embodiments query server 404 may be implemented as a cluster of servers/computer systems for enhanced performance, reliability, and/or other reasons. One of ordinary skill in the art will recognize other variations, modifications, and alternatives.”; [0047] “Further, one or more embodiments can relate to a device or an apparatus for performing the foregoing operations…The various embodiments described herein can be practiced with other computer system configurations including handheld devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.” Ben-Itzhak provides edge devices in a data communications network.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 2.
Regarding claim 4, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 2 as discussed above in the rejection of claim 2, further comprising: producing the standardized inference output corresponding to a respective input data sample as a function of the inference outputs from each respective federation model for the respective input data sample (Ben-Itzhak [0035] “At blocks 610 and 612, query server 404 can receive the per-node predictions submitted by nodes 402(1)-(n) and can employ MPC protocol 408 to aggregate the per-node predictions and generate a federated prediction based on that aggregation.” Ben-Itzhak provides each respective node submitting their outputs to a server for aggregation of the outputs to produce a standardized output, corresponding to producing the standardized inference output corresponding to a respective input data sample as a function of the inference outputs from each respective federation model for the respective input data sample.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 2.
Regarding claim 5, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 4 as discussed above in the rejection of claim 4, wherein the standardized inference output comprises one of a majority vote and an average derived from the inference outputs from each respective federation model (Ben-Itzhak [0027] “Upon receiving the per-node predictions, query server 404 can aggregate them using an ensemble technique such as majority vote”; [0036] “In one set of embodiments, the aggregation performed at block 612 can comprise tallying a vote count for each distinct per-node prediction received from nodes 402(1)-(n) indicating the number of times that per-node prediction was submitted by a node at block 608. Query server 404 can then select, as the federated prediction, the distinct per-node prediction that received the highest number of votes (or in other words, was submitted by the most nodes).” Ben-Itzhak provides a majority vote for the inference/prediction outputs derived from each respective node for producing the standardized inference output.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 4.
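For illustration only, the majority-vote aggregation quoted from Ben-Itzhak [0027] and [0036] (tallying a vote count for each distinct per-node prediction and selecting the one submitted by the most nodes) may be sketched as follows; the function name is a hypothetical placeholder, not drawn from the reference:

```python
from collections import Counter

def federated_prediction(per_node_predictions):
    """Tally a vote count for each distinct per-node prediction and select,
    as the federated prediction, the prediction submitted by the most nodes,
    per Ben-Itzhak [0036]."""
    votes = Counter(per_node_predictions)
    prediction, _count = votes.most_common(1)[0]
    return prediction

# Using the example in Ben-Itzhak [0036]: nodes 402(1)-402(4) submit
# "A", "A", "B", "C"; "A" has the highest vote count.
federated_prediction(["A", "A", "B", "C"])  # -> "A"
```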
Regarding claim 6, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 4 as discussed above in the rejection of claim 4, wherein the inference outputs of each respective federation model indicate a confidence value associated with a respective inference output, and wherein producing the standardized inference output from the inference outputs based on each respective federation model is dependent on the confidence value associated with the inference outputs (Ben-Itzhak [0027] “Upon receiving the per-node predictions, query server 404 can aggregate them using an ensemble technique such as majority vote and generate, based on the resulting aggregation, a federated prediction for the query data instance.”; [0037] “In another set of embodiments, if each per-node prediction includes an associated confidence level indicating a degree of confidence that the submitting node has in that per-node prediction, the aggregation performed at block 612 can comprise computing an average confidence level for each distinct per-node prediction.” Ben-Itzhak provides a majority vote and confidence level at each respective node where the average is taken into account for the aggregated output, corresponding to the standardized inference output.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 4.
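For illustration only, the confidence-based aggregation quoted from Ben-Itzhak [0037] (computing an average confidence level for each distinct per-node prediction and selecting the prediction with the highest average) may be sketched as follows; the function name and input format are hypothetical placeholders, not drawn from the reference:

```python
from collections import defaultdict

def federated_prediction_by_confidence(per_node_predictions):
    """Compute an average confidence level for each distinct per-node
    prediction and select the prediction with the highest average, per
    Ben-Itzhak [0037]. Input: iterable of (prediction, confidence) pairs."""
    grouped = defaultdict(list)
    for prediction, confidence in per_node_predictions:
        grouped[prediction].append(confidence)
    averages = {p: sum(c) / len(c) for p, c in grouped.items()}
    return max(averages, key=averages.get)

# Two nodes predict "A" (average confidence 0.7); one predicts "B"
# with confidence 0.9, so "B" is selected.
federated_prediction_by_confidence([("A", 0.6), ("A", 0.8), ("B", 0.9)])  # -> "B"
```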
Regarding claim 7, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 2, as discussed above in the rejection of claim 2, further comprising in response to training the metamodel, obtaining the inference outputs for the input data samples from at least the metamodel to provide the standardized inference output corresponding to the respective input data sample (Marshall [0064] “In block 312, a meta-model is trained using the aggregated training set from block 302 that includes the feature vector, and the error values generated from applying the feature vector to each of the individual physics-specific models. Thus, the meta-model can be trained based on the output of the physics-specific models. In some implementations, the meta-model feature vector may include the union of the feature vectors of the individual physics-specific models shown in FIG. 2.”; [0067] “A deployable meta-model 314 is produced by the training of block 312, that will select an accurate physics-specific model for a particular RF signal and its propagation environment.” Marshall provides a trained metamodel which selects an accurate model for a particular input in a federated learning system, corresponding to in response to training the metamodel, obtaining the inference outputs for the input data samples from at least the metamodel to provide the standardized inference output corresponding to the respective input data sample.).
Ben-Itzhak, Knuesel, Marshall and Chen are all considered to be analogous to the claimed invention because they are in the same field of artificial intelligence, and more specifically federated learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen with the above teachings of Marshall. Doing so would allow for a selection of the most accurate model out of multiple available models (Marshall [0076] “In block 408, the meta-model is executed to select the most accurate individual physics-specific model of the multiple available specific models that the meta-model has been trained to select from.”).
Regarding claim 8, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 4 as discussed above in the rejection of claim 4, wherein the computer system comprises the control server for communication with the plurality of node computers via a data communications network (Ben-Itzhak [0034] “Starting with block 602, query server 404 can receive query data instance q and can transmit q to each node 402(i). In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608).” Ben-Itzhak provides a query server 404, as shown in Figure 6, which corresponds to a control server for communication with the plurality of node computers via a data communications network.), and wherein the method further comprises: at each node computer, sending to the control server inference data, defining the respective input data sample and corresponding inference output for the respective input data sample from each node computer (Ben-Itzhak [0034] “Starting with block 602, query server 404 can receive query data instance q and can transmit q to each node 402(i). 
In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608).”; Ben-Itzhak provides each node sending input data and corresponding inference/prediction output to the query server.); at the control server, using the inference data to request the corresponding inference output for the respective input data sample from the subset of the respective federation models on the plurality of node computers (Ben-Itzhak [0026] “Then, during a query processing/inference phase of federated inference, query server 404 can receive a query data instance for which a prediction is requested or desired and can transmit the query data instance to some or all of nodes 402(1)-(n).” Ben-Itzhak provides the server requesting the corresponding inference output for the respective input data sample from the subset of the respective federation models on the plurality of node computers.); and at the control server, using the inference outputs from the subset of the respective federation models to provide the standardized inference output corresponding to the respective input data sample at each node computer (Ben-Itzhak [0027] “Upon receiving the per-node predictions, query server 404 can aggregate them using an ensemble technique such as majority vote and generate, based on the resulting aggregation, a federated prediction for the query data instance.”; [0036] “Query server 404 can then select, as the federated prediction, the distinct per-node prediction that received the highest number of votes (or in other words, was submitted by the most nodes). 
For example, if nodes 402(1) and 402(2) submitted per-node prediction “A” (resulting in two votes for “A”), node 402(3) submitted per-node prediction “B” (resulting in one vote for “B”), and node 402(4) submitted per-node prediction “C” (resulting in one vote for “C”), query server 404 would select “A” as the federated prediction at block 612 because “A” has the highest vote count.” Ben-Itzhak provides a majority vote corresponding to using the inference outputs from the subset of the respective federation models to provide the standardized inference output corresponding to the respective input data sample at each node computer.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 4.
Regarding claim 10, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 8 as discussed above in the rejection of claim 8, further comprising: at each node computer, processing a raw input data sample to produce the inference data defining the raw input data sample such that the raw input data sample is hidden in the inference data (Ben-Itzhak [0034] “Starting with block 602, query server 404 can receive query data instance q and can transmit q to each node 402(i). In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608). The specific type of encryption used at block 608 can vary depending on the implementation.” Ben-Itzhak provides each node computer processing input/raw input and producing inference data defining that data which is then encrypted, corresponding to the raw input data sample being hidden in the inference data.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 8.
Regarding claim 11, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 1 as discussed above in the rejection of claim 1, further comprising, in the at least one node computer of the system: storing the at least the subset of the other federation models (Ben-Itzhak [0015] “As shown, each node 102(i) for i=1, n includes a local training dataset 104(i) that resides on a storage component of node 102(i) and is private to (or in other words, is only known by) that node.”; [0039] “In certain embodiments, rather than having each individual node 402(i) train its own local ML model M.sub.i as part of the training phase of federated inference, nodes 402(1)-(n) can be split into a number of node subsets and each node subset can train a subset-specific ML model in a distributed fashion (e.g., using the training approach shown in FIG. 2). In these embodiments, each subset-specific ML model will be shared by (i.e., global to) the nodes within its corresponding node subset but will be inaccessible by nodes not within that node subset.” Ben-Itzhak provides subsets of nodes sharing a subset specific federation model which are stored locally to each node, corresponding to in the at least one node computer of the system: storing the at least the subset of the other federation models); obtaining the inference outputs from the at least the stored subset of the other federation models for the local input data samples in the at least one node computer (Ben-Itzhak [0040] “Then, during the query processing/inference phase of federated inference, some or all of the node subsets can generate per-subset predictions for a query data instance using their subset-specific ML models and submit the per-subset predictions to query server 404.” Ben-Itzhak provides the node subsets producing inference/prediction outputs from the stored models from their local input data samples.); and using the inference outputs from the at least the stored subset 
of the other federation models to produce the standardized inference output corresponding to each input data sample associated with the local input data samples (Ben-Itzhak [0027] “Upon receiving the per-node predictions, query server 404 can aggregate them using an ensemble technique such as majority vote and generate, based on the resulting aggregation, a federated prediction for the query data instance.”; [0040] “Then, during the query processing/inference phase of federated inference, some or all of the node subsets can generate per-subset predictions for a query data instance using their subset-specific ML models and submit the per-subset predictions to query server 404. Query server 404 can thereafter generate a federated prediction for the query data instance based on an aggregation of the per-subset (rather than per-node) predictions in a manner similar to block 612 of flowchart 600.” Ben-Itzhak provides aggregating the outputs via a majority vote to produce a federated prediction, corresponding to the standardized inference output corresponding to each input data sample associated with the local input data samples.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 1.
Regarding claim 12, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 11, wherein the standardized inference output comprises one of a majority vote and an average derived from the inference outputs (Ben-Itzhak [0027] “Upon receiving the per-node predictions, query server 404 can aggregate them using an ensemble technique such as majority vote”; [0036] “In one set of embodiments, the aggregation performed at block 612 can comprise tallying a vote count for each distinct per-node prediction received from nodes 402(1)-(n) indicating the number of times that per-node prediction was submitted by a node at block 608. Query server 404 can then select, as the federated prediction, the distinct per-node prediction that received the highest number of votes (or in other words, was submitted by the most nodes).” Ben-Itzhak provides a majority vote for the inference/prediction outputs derived from each respective node for producing the standardized inference output.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 11.
Regarding claim 13, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 11 as discussed above in the rejection of claim 11, further comprising, in the at least one node computer: comparing the standardized inference output with an inference output from the inference outputs of the federation model locally deployed for inference at the at least one node computer (Ben-Itzhak [0027] “Upon receiving the per-node predictions, query server 404 can aggregate them using an ensemble technique such as majority vote and generate, based on the resulting aggregation, a federated prediction for the query data instance.”; [0036] “Query server 404 can then select, as the federated prediction, the distinct per-node prediction that received the highest number of votes (or in other words, was submitted by the most nodes). For example, if nodes 402(1) and 402(2) submitted per-node prediction “A” (resulting in two votes for “A”), node 402(3) submitted per-node prediction “B” (resulting in one vote for “B”), and node 402(4) submitted per-node prediction “C” (resulting in one vote for “C”), query server 404 would select “A” as the federated prediction at block 612 because “A” has the highest vote count.”; [0037] “Query server 404 can then select, as the federated prediction, the distinct per-node prediction with the highest average confidence level, or provide an aggregated confidence distribution vector that indicates the average confidence level for each possible prediction.” Ben-Itzhak provides a server 404 selecting the best fit prediction based on a majority vote and confidence level corresponding to comparing the standardized inference output with an inference output from the inference outputs of the deployed federation model for inference at the at least one node computer); and in response to determining that the inference output of the federation model locally deployed deviates in a predetermined manner from the standardized 
inference output, training the federation model locally deployed using the inference outputs from the at least the stored subset of the other federation models (Ben-Itzhak [0037] “In another set of embodiments, if each per-node prediction includes an associated confidence level indicating a degree of confidence that the submitting node has in that per-node prediction, the aggregation performed at block 612 can comprise computing an average confidence level for each distinct per-node prediction. Query server 404 can then select, as the federated prediction, the distinct per-node prediction with the highest average confidence level, or provide an aggregated confidence distribution vector that indicates the average confidence level for each possible prediction.”; [0039] “In this scenario, nodes 402(1) and 402(2) may collectively train a first subset-specific ML model M.sub.S1 using the distributed approach of FIG. 2, nodes 402(3), 402(4), and 402(5) may collectively train second subset-specific ML model M.sub.S2 using the distributed approach of FIG. 2, and node 402(6) may train a third subset-specific ML model M.sub.S3 (which involves simply training local ML model 406(6)).” Ben-Itzhak provides a confidence level corresponding to determining that the inference output of the deployed federation model deviates in a predetermined manner from the standardized inference output and training deployed federation models using the inference outputs from the at least the stored subset of the other federation models).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 11.
Regarding claim 14, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 1 as discussed above in the rejection of claim 1, where Ben-Itzhak teaches further comprising, in the at least one node computer: storing the at least the subset of the other federation models (Ben-Itzhak [0015] “As shown, each node 102(i) for i=1, n includes a local training dataset 104(i) that resides on a storage component of node 102(i) and is private to (or in other words, is only known by) that node.”; [0039] “In certain embodiments, rather than having each individual node 402(i) train its own local ML model M.sub.i as part of the training phase of federated inference, nodes 402(1)-(n) can be split into a number of node subsets and each node subset can train a subset-specific ML model in a distributed fashion (e.g., using the training approach shown in FIG. 2). In these embodiments, each subset-specific ML model will be shared by (i.e., global to) the nodes within its corresponding node subset but will be inaccessible by nodes not within that node subset.” Ben-Itzhak provides subsets of nodes sharing a subset specific federation model, corresponding to in the at least one node computer of the system: storing the at least the subset of the other federation models); obtaining the inference outputs from the at least the stored subset of the other federation models for the local input data samples at the at least one node computer (Ben-Itzhak [0040] “Then, during the query processing/inference phase of federated inference, some or all of the node subsets can generate per-subset predictions for a query data instance using their subset-specific ML models and submit the per-subset predictions to query server 404.” Ben-Itzhak provides the node subsets producing inference/prediction outputs from the stored models from their local input data samples.), but fails to teach at least in a preliminary operating phase of the computer system, using the 
inference outputs from the other stored models for each data sample to train a metamodel included in the federation of models; and in response to training the metamodel, obtaining the inference outputs for each local input data sample from at least the metamodel to provide the standardized inference output.
However, Marshall teaches at least in the preliminary operating phase of the computer system, using the inference outputs from the other stored models for each data sample to train the metamodel included in the federation of models (Marshall [0038] “In some implementations, with user permission, federated learning may be utilized to update one or more trained models, e.g., where individual server devices may each perform local model training, and the updates to the models may be aggregated to update one or more central versions of the model.”; [0025] “Described implementations use cascaded models, e.g., where the outputs of one set of physics-specific models is used to train the meta-model, and/or where the output of the meta-model determines use of a specialized model that uses a partitioned data set to obtain a result.” Marshall provides a federated learning method comprising respective federation models, where each respective model output is used to train a metamodel in the federation of machine learning models.); and in response to training the metamodel, obtaining the inference outputs for each local input data sample from at least the metamodel to provide the standardized inference output (Marshall [0064] “In block 312, a meta-model is trained using the aggregated training set from block 302 that includes the feature vector, and the error values generated from applying the feature vector to each of the individual physics-specific models. Thus, the meta-model can be trained based on the output of the physics-specific models. In some implementations, the meta-model feature vector may include the union of the feature vectors of the individual physics-specific models shown in FIG. 
2.”; [0067] “A deployable meta-model 314 is produced by the training of block 312, that will select an accurate physics-specific model for a particular RF signal and its propagation environment.” Marshall provides a trained metamodel which selects an accurate model for a particular input in a federated learning system, corresponding to in response to training the metamodel, obtaining the inference outputs for the input data samples from at least the metamodel to provide the standardized inference output corresponding to the respective input data sample.).
Ben-Itzhak, Knuesel, Marshall, and Chen are all considered to be analogous to the claimed invention because they are in the same field of artificial intelligence, and more specifically federated learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen with the above teachings of Marshall. Doing so would allow for selection of the most accurate model out of multiple available models (Marshall [0076] “In block 408, the meta-model is executed to select the most accurate individual physics-specific model of the multiple available specific models that the meta-model has been trained to select from.”).
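For illustration only, the cascaded-model mechanism Marshall describes (a meta-model trained on the error values of the individual physics-specific models, which then selects the most accurate model for a given input) may be sketched as follows. All names are hypothetical and this is a simplified sketch, not the reference's actual implementation:

```python
# Illustrative sketch (hypothetical names): a "meta-model" that learns,
# from observed per-model error values, which candidate model to select
# for each input, loosely following Marshall's cascaded-model description.

def train_meta_model(training_set):
    """training_set: list of (feature_vector, {model_name: error_value}).

    Returns a trained "meta-model": here, simply a lookup mapping each
    feature vector to the label of the model with the smallest error.
    """
    selections = {}
    for features, errors in training_set:
        best = min(errors, key=errors.get)  # model with lowest error value
        selections[tuple(features)] = best
    return selections

def select_model(meta_model, features):
    """Execute the meta-model to select the most accurate model."""
    return meta_model[tuple(features)]

# Toy training data: two feature vectors with per-model error values.
training_set = [
    ([1.0, 2.0], {"free_space": 0.10, "urban": 0.90}),
    ([3.0, 4.0], {"free_space": 0.80, "urban": 0.05}),
]
meta = train_meta_model(training_set)
assert select_model(meta, [1.0, 2.0]) == "free_space"
assert select_model(meta, [3.0, 4.0]) == "urban"
```

In this toy form the "training" is a direct lookup; a real meta-model would generalize to unseen feature vectors, but the selection role is the same.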
Regarding claim 15, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 14 as discussed above in the rejection of claim 14, further comprising, in the at least one node computer: comparing performance of the federation model locally deployed for inference on received input data samples with performance of the metamodel for the received input data samples (Marshall [0038] “In some implementations, with user permission, federated learning may be utilized to update one or more trained models, e.g., where individual server devices may each perform local model training, and the updates to the models may be aggregated to update one or more central versions of the model.”; [0064] “In block 312, a meta-model is trained using the aggregated training set from block 302 that includes the feature vector, and the error values generated from applying the feature vector to each of the individual physics-specific models. Thus, the meta-model can be trained based on the output of the physics-specific models. In some implementations, the meta-model feature vector may include the union of the feature vectors of the individual physics-specific models shown in FIG. 2.”; [0065] “For example, in some implementations, if a label to the optimal model is provided in the error or selection vector 310, the training of the meta-model can be performed by minimizing the number of incorrect physics-specific model selections. For example, the label indicates the correct physics-specific model to select for each feature vector.” Marshall provides training of a metamodel using error values from the physics-specific models/federation models and minimizing incorrect model selection from the metamodel, corresponding to comparing performance of the deployed federation model for inference on received input data samples with performance of the metamodel for the received input data samples).
Ben-Itzhak, Knuesel, Marshall, and Chen are all considered to be analogous to the claimed invention because they are in the same field of artificial intelligence, and more specifically federated learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen with the above teachings of Marshall. Doing so would allow for selection of the most accurate model out of multiple available models (Marshall [0076] “In block 408, the meta-model is executed to select the most accurate individual physics-specific model of the multiple available specific models that the meta-model has been trained to select from.”).
Regarding claim 16, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the method as claimed in claim 11 as discussed above in the rejection of claim 11, further comprising, in each node computer associated with the plurality of node computers of the computer system: deploying a respective federation model for inference on the local input data samples at a node computer associated with the plurality of node computers to obtain inference outputs for each local input data sample corresponding to the local input data samples (Ben-Itzhak [0034] “Starting with block 602, query server 404 can receive query data instance q and can transmit q to each node 402(i). In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608).”), and providing the inference outputs for use as inference results at the node computer (Ben-Itzhak [0034] “In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608). 
The specific type of encryption used at block 608 can vary depending on the implementation.” Ben-Itzhak provides each node submitting their respective prediction/inference to a server, as shown in block 608 of Figure 6, which corresponds to providing the inference outputs for use as inference results at the at least one node computer.); storing the at least the subset of the other federation models (Ben-Itzhak [0015] “As shown, each node 102(i) for i=1, n includes a local training dataset 104(i) that resides on a storage component of node 102(i) and is private to (or in other words, is only known by) that node.”; [0039] “In certain embodiments, rather than having each individual node 402(i) train its own local ML model M.sub.i as part of the training phase of federated inference, nodes 402(1)-(n) can be split into a number of node subsets and each node subset can train a subset-specific ML model in a distributed fashion (e.g., using the training approach shown in FIG. 2). In these embodiments, each subset-specific ML model will be shared by (i.e., global to) the nodes within its corresponding node subset but will be inaccessible by nodes not within that node subset.” Ben-Itzhak provides subsets of nodes sharing a subset specific federation model, corresponding to in the at least one node computer of the system: storing the at least the subset of the other federation models); obtaining the inference outputs from the at least the stored subset of the other federation models for the local input data samples at the node computer (Ben-Itzhak [0040] “Then, during the query processing/inference phase of federated inference, some or all of the node subsets can generate per-subset predictions for a query data instance using their subset-specific ML models and submit the per-subset predictions to query server 404.” Ben-Itzhak provides the node subsets producing inference/prediction outputs from the stored models from their local input data samples.); and using the inference 
outputs from the respective federation model and the inference outputs from the at least the stored subset of the other federation models to produce the standardized inference output corresponding to each local input data sample (Ben-Itzhak [0040] “Then, during the query processing/inference phase of federated inference, some or all of the node subsets can generate per-subset predictions for a query data instance using their subset-specific ML models and submit the per-subset predictions to query server 404. Query server 404 can thereafter generate a federated prediction for the query data instance based on an aggregation of the per-subset (rather than per-node) predictions in a manner similar to block 612 of flowchart 600.” Ben-Itzhak provides using the outputs and aggregating them to produce a federated prediction corresponding to a standardized inference output corresponding to each input data sample associated with the local input data samples.).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 11.
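For illustration only, the inference-phase aggregation Ben-Itzhak describes (each node generating a per-node prediction and the query server aggregating those predictions into a single federated prediction) may be sketched as follows. All names are hypothetical and this is a simplified sketch (encryption omitted), not the reference's actual implementation:

```python
# Illustrative sketch (hypothetical names): a query server aggregating
# per-node predictions into one "federated" prediction, loosely following
# Ben-Itzhak's description of the query processing/inference phase.
from collections import Counter

def node_predict(local_model, query):
    """Each node applies its locally trained model to the query instance."""
    return local_model(query)

def federated_prediction(node_models, query):
    """Aggregate per-node predictions; majority vote is one simple choice."""
    predictions = [node_predict(m, query) for m in node_models]
    return Counter(predictions).most_common(1)[0][0]

# Three nodes whose local models disagree on a query data instance:
nodes = [lambda q: "cat", lambda q: "cat", lambda q: "dog"]
assert federated_prediction(nodes, query=[0.5, 0.5]) == "cat"
```

Majority voting stands in here for whatever aggregation the reference's query server performs; the point is only that the standardized output is produced from the collected per-node outputs.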
Regarding claim 17, it recites the computer system embodiment of claim 1, with limitations similar to those of claim 1, and is rejected using the same reasoning found above in the rejection of claim 1.
Regarding claim 18, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the computer system as claimed in claim 17 as discussed above in the rejection of claim 17 comprising: the plurality of node computers, with each node computer among the plurality of node computers deploying a respective federation model for inference on the local input data samples corresponding to a node computer to obtain an inference output for each local input data sample (Ben-Itzhak [0033] “FIG. 6 depicts a flowchart 600 that may be executed by query server 404 and nodes 402(1)-(n) of FIG. 4 for carrying out the query processing/inference phase of federated inference with respect to a query data instance q according to certain embodiments.”; [0034] “Starting with block 602, query server 404 can receive query data instance q and can transmit q to each node 402(i). In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608).” Ben-Itzhak provides a plurality of node computers in a system, where each node deploys its own local copy of a federation model to produce their own prediction/inference on local input data, as shown in block 606 of Figure 6.), and to provide the inference outputs for use as inference results at the node computer (Ben-Itzhak [0034] “In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608). 
The specific type of encryption used at block 608 can vary depending on the implementation.” Ben-Itzhak provides each node submitting their respective prediction/inference to a server, as shown in block 608 of Figure 6, which corresponds to providing the inference outputs for use as inference results at the at least one node computer.); the control server communicating with the plurality of node computers via a data communications network (Ben-Itzhak [0034] “Starting with block 602, query server 404 can receive query data instance q and can transmit q to each node 402(i). In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608).” Ben-Itzhak provides a query server 404, as shown in Figure 6, which corresponds to a control server for communication with the plurality of node computers via a data communications network.); and with each node computer sending to the control server inference data and defining an input data sample and inference output for the inference data sample at the node computer (Ben-Itzhak [0034] “Starting with block 602, query server 404 can receive query data instance q and can transmit q to each node 402(i). 
In response, each node 402(i) can provide query data instance q as input to the trained version of its local ML model 406(i) (block 604), generate, via model 406(i), a per-node prediction for q (block 606), and submit the per-node prediction in an encrypted format to query server 404 (such that the per-node prediction cannot be learned by query server 404) (block 608).”; Ben-Itzhak provides each node sending input data and corresponding inference/prediction output to the query server.), wherein the control server uses the inference data to request the inference output corresponding to the input data sample from the at least the subset of the other federation models at other node computers (Ben-Itzhak [0026] “Then, during a query processing/inference phase of federated inference, query server 404 can receive a query data instance for which a prediction is requested or desired and can transmit the query data instance to some or all of nodes 402(1)-(n).” Ben-Itzhak provides the server requesting the corresponding inference output for the respective input data sample from the subset of the respective federation models on the plurality of node computers.), and uses the inference outputs from the at least the subset of the other federation models to provide the standardized inference output corresponding to the input data sample at each node computer (Ben-Itzhak [0038] “Finally, at block 614, query server 404 can output the federated prediction as the final prediction result for query data instance q and flowchart 600 can end.” Ben-Itzhak provides a final prediction corresponding to a standardized inference output corresponding to the respective input data sample at each node computer and using the outputs from the respective federation models.).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 17.
Regarding claim 19, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the computer system as claimed in claim 17 as discussed above in the rejection of claim 17, further comprising, for the at least one node computer: storing the at least the subset of the other federation models (Ben-Itzhak [0015] “As shown, each node 102(i) for i=1, n includes a local training dataset 104(i) that resides on a storage component of node 102(i) and is private to (or in other words, is only known by) that node.”; [0039] “In certain embodiments, rather than having each individual node 402(i) train its own local ML model M.sub.i as part of the training phase of federated inference, nodes 402(1)-(n) can be split into a number of node subsets and each node subset can train a subset-specific ML model in a distributed fashion (e.g., using the training approach shown in FIG. 2). In these embodiments, each subset-specific ML model will be shared by (i.e., global to) the nodes within its corresponding node subset but will be inaccessible by nodes not within that node subset.” Ben-Itzhak provides subsets of nodes sharing a subset specific federation model, corresponding to in the at least one node computer of the system: storing the at least the subset of the other federation models); obtaining the inference outputs from the at least the stored subset of the other federation models for the local input data samples at the at least one node computer (Ben-Itzhak [0040] “Then, during the query processing/inference phase of federated inference, some or all of the node subsets can generate per-subset predictions for a query data instance using their subset-specific ML models and submit the per-subset predictions to query server 404.” Ben-Itzhak provides the node subsets producing inference/prediction outputs from the stored models from their local input data samples.); and using the inference outputs from the at least the subset of the other federation models to produce the 
standardized inference output corresponding to each local input data sample (Ben-Itzhak [0040] “Then, during the query processing/inference phase of federated inference, some or all of the node subsets can generate per-subset predictions for a query data instance using their subset-specific ML models and submit the per-subset predictions to query server 404. Query server 404 can thereafter generate a federated prediction for the query data instance based on an aggregation of the per-subset (rather than per-node) predictions in a manner similar to block 612 of flowchart 600.” Ben-Itzhak provides using the outputs and aggregating them to produce a federated prediction corresponding to a standardized inference output corresponding to each input data sample associated with the local input data samples.).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel in further view of Marshall and Chen for the same reasons disclosed above in the rejection of claim 17.
Regarding claim 20, Ben-Itzhak in view of Knuesel in further view of Marshall and Chen teaches the computer system as claimed in claim 17 as discussed above in the rejection of claim 17, further comprising: in response to training the metamodel, obtaining for a local input data sample at the at least one node computer, an inference output from at least the metamodel to provide the standardized output corresponding to the local input data sample (Marshall [0064] “In block 312, a meta-model is trained using the aggregated training set from block 302 that includes the feature vector, and the error values generated from applying the feature vector to each of the individual physics-specific models. Thus, the meta-model can be trained based on the output of the physics-specific models. In some implementations, the meta-model feature vector may include the union of the feature vectors of the individual physics-specific models shown in FIG. 2.”; [0067] “A deployable meta-model 314 is produced by the training of block 312, that will select an accurate physics-specific model for a particular RF signal and its propagation environment.” Marshall provides a trained metamodel which selects an accurate model for a particular input in a federated learning system, corresponding to in response to training the metamodel, obtaining the inference outputs for the input data samples from at least the metamodel to provide the standardized inference output corresponding to the respective input data sample.).
Ben-Itzhak, Knuesel, Marshall, and Chen are all considered to be analogous to the claimed invention because they are in the same field of artificial intelligence, and more specifically federated learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ben-Itzhak in view of Knuesel with the above teachings of Marshall and Chen. Doing so would allow for selection of the most accurate model out of multiple available models (Marshall [0076] “In block 408, the meta-model is executed to select the most accurate individual physics-specific model of the multiple available specific models that the meta-model has been trained to select from.”).
Response to Arguments
Regarding the rejection applied under 35 U.S.C. 112, Applicant’s amendments overcome the rejection.
Regarding the rejection applied under 35 U.S.C. 101, Applicant first asserts that generating “inference outputs” involves complexity beyond the human mind, including complex multidimensional matrix calculations, and therefore asserts that the “generating …inference outputs based on inferences locally performed on local input data samples” limitation is not mentally performable, even using pen and paper (“Remarks”, Page 16).
However, generating inference outputs is a mentally performable process. For example, an “inference” is generally defined as a conclusion based on evidence and reasoning. Therefore, even without the assistance of pen and paper, generating an inference can be performed mentally. Further, the claim does not recite any “complex multidimensional matrix calculations”. Even assuming that the claim did recite “matrix calculations” explicitly, the limitation would then be an abstract idea corresponding to a mathematical calculation. Therefore, the claims recite at least an abstract idea.
Applicant further asserts that training/improving any neural network, and specifically any training involving model parameters, would inevitably render improvements to AI/model inferences and inference outputs (“Remarks”, Page 17). Applicant further asserts that the claimed invention is directed to an improvement in the technical field of training machine learning models, as described in paragraph [0047] of the specification (“Remarks”, Page 18).
However, as discussed in the 35 U.S.C. 101 rejection of claim 1 above, “adjusting model parameters…” is an abstract idea (i.e., a mental process). Moreover, even assuming that the claims did recite an improvement, it would be an improvement in the abstract idea of “adjusting model parameters…” itself; the claim even recites “adjusting model parameters… to mitigate deviation of the federation model locally deployed”. The MPEP notes that it is important to keep in mind that an improvement in the abstract idea itself is not an improvement in technology. MPEP 2106.05(a)(II).
Applicant further asserts that the claimed invention is similar to Example 39, and therefore does not contain any abstract ideas (“Remarks”, Pages 18-21). However, as discussed above, the claims contain at least the abstract ideas of generating inference outputs and adjusting model parameters. Therefore, the present claims are not similar to the claims of Example 39.
Applicant further asserts that the limitations analyzed under the Step 2A Prong Two Analysis, which the previous Office action characterized as mere instructions to apply an exception for the abstract ideas (MPEP 2106.05(f)), are not insignificant extra-solution activity (MPEP 2106.05(g)) (“Remarks”, Pages 26-28). However, as discussed in the 35 U.S.C. 101 rejection above and in the previous Office action, those limitations are characterized as mere instructions to apply an exception for the abstract ideas, consistent with MPEP 2106.05(f), and not as insignificant extra-solution activity.
Regarding the rejection applied under 35 U.S.C. 103, Applicant’s arguments with respect to the “replacing” limitation have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant further asserts that the applied reference Knuesel fails to teach “based on the generated inference outputs from the locally performed inferences at least a portion of the local input data samples, requesting, via a control server in the computer system, at least a subset of the inference outputs corresponding to the local input data samples from at least a subset of other deployed federation models at other node computers among the plurality of node computers” (“Remarks”, Pages 31-33). However, as discussed above in the 35 U.S.C. 103 rejection of claim 1 above, Ben-Itzhak teaches the limitation. Therefore, the claims remain rejected under 35 U.S.C. 101 and 35 U.S.C. 103.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KURT NICHOLAS PRESSLY whose telephone number is (703)756-4639. The examiner can normally be reached M-F 8-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KURT NICHOLAS PRESSLY/Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125